Leo Jia writes: 

Underspecification.  Observed effects can have many possible causes.  So when you have, say, 50 models that analyze the causes from various perspectives, and all worked well in tests, but on a specific real-world case they present different results, you are faced with uncertainty as to which one to take.  This also relates to our mind and life experiences, doesn't it?  In life, one has learned how to do all this and all that, but when faced with a situation, one struggles over which approach to take.
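A toy sketch of the underspecification point above, with made-up numbers: two models that both fit the same observed data can still disagree sharply on a new case, leaving you unsure which to trust.

```python
import numpy as np

# Four noisy observations of a roughly linear effect (invented data).
x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.array([0.1, 1.9, 4.2, 5.8])

# Model A: a straight line.  Model B: a cubic that interpolates the points.
coefs_a = np.polyfit(x_train, y_train, 1)
coefs_b = np.polyfit(x_train, y_train, 3)

# Both fit the data they were tested on well...
fit_a = np.max(np.abs(np.polyval(coefs_a, x_train) - y_train))  # small residuals
fit_b = np.max(np.abs(np.polyval(coefs_b, x_train) - y_train))  # essentially exact

# ...but on a new, out-of-sample case they diverge wildly.
disagreement = abs(np.polyval(coefs_a, 10.0) - np.polyval(coefs_b, 10.0))
```

Nothing in the training data alone tells you whether Model A or Model B is the right one to carry into the new situation, which is the dilemma described above.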

Dendi Suhubdy writes: 

It’s not fundamentally flawed at all. I’ve been working with a Turing Award winner in deep learning since 2016 and have built multiple startups in the field.

I can say it relies on the backpropagation algorithm, which means the function (linear or non-linear; non-linear for deep neural networks) must be differentiable so we can compute what we call the backward pass. Inference (the forward pass) is simply matrix multiplication, (weights * prev_input) + bias, applied layer by layer.
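A minimal sketch of the forward pass described above, with made-up shapes and random weights; the layer sizes and the ReLU non-linearity are my own assumptions, not anything from the comment.

```python
import numpy as np

def relu(x):
    # A common non-linearity; any differentiable-almost-everywhere
    # activation would do for the backward pass.
    return np.maximum(0.0, x)

rng = np.random.default_rng(42)
x = rng.normal(size=(4,))        # input vector (hypothetical size)
W1 = rng.normal(size=(8, 4))     # first-layer weights
b1 = np.zeros(8)                 # first-layer bias
W2 = rng.normal(size=(2, 8))     # second-layer weights
b2 = np.zeros(2)                 # second-layer bias

# Each layer is exactly (weights @ prev_input) + bias, then a non-linearity.
h = relu(W1 @ x + b1)            # hidden layer
y = W2 @ h + b2                  # output layer
```

Training adds the backward pass: differentiating a loss on `y` with respect to `W1`, `b1`, `W2`, `b2`, which is why every piece of this chain must be differentiable.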

Now, to achieve a better understanding of how our world works, we need to learn from few shots, or even zero shots; that is, we need to be able to learn quickly from a smaller sample size. This problem is hard, and I believe we (the deep learning people) are working on it.

Larry W writes: 

I have spent a lot of money developing AI trading strategies, and so far none of them are better than, or even close to, man-made strategies.

