A Bloomberg article today reported that human-run hedge funds trounced quants in 2020 - a turnaround from the experience of recent years. Renaissance Technologies - founded by Jim Simons - saw its Institutional Diversified Alpha & Global Equities funds fall by 32% and 31% <continued>
The fundamental problem with computer/AI-driven trading strategies is - and always has been - that the models will never be capable of evaluating novel situations which have not happened before. This was the undoing of LTCM, as well as of portfolio insurance, which caused the 1987 stock market crash.
A global pandemic was not in the past datasets, so the bots don't know what to do. AI is only good where you have sizeable and complete datasets that can 'train' the AI to recognize statistical patterns too complex for humans to extract, and use them to make accurate predictions.
This works phenomenally well in situations that regularly recur within the bounds of a confined range of possible outcomes, but it is prone to disastrous error where discontinuities occur and outcomes suddenly bear no resemblance to anything in the past datasets.
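To make that concrete, here's a deliberately toy sketch (every number is invented, and it stands in for no particular fund or strategy): a simple return model fit on a calm regime looks accurate in-sample, then keeps predicting business as usual the moment the data jumps to a crash regime it has never seen.

```python
# Toy illustration of a pattern-fitting model breaking at a discontinuity.
# All numbers are made up; this is a sketch, not any real trading model.
import numpy as np

rng = np.random.default_rng(0)

# "Normal" regime: small daily returns around a slight positive drift.
history = 0.0005 + 0.01 * rng.standard_normal(1000)

# Fit the simplest possible pattern: tomorrow's return ~ today's return.
x, y = history[:-1], history[1:]
slope, intercept = np.polyfit(x, y, 1)

in_sample_error = np.mean(np.abs((slope * x + intercept) - y))

# Discontinuity: a crash regime the model has never seen (-5% days).
crash = -0.05 + 0.02 * rng.standard_normal(20)
crash_pred = slope * crash[:-1] + intercept       # still predicts ~0%
crash_error = np.mean(np.abs(crash_pred - crash[1:]))

print(f"in-sample error:    {in_sample_error:.4f}")   # small
print(f"crash-regime error: {crash_error:.4f}")       # several times larger
```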
How do you teach an AI to determine if there is a cat in a photo? You 'train' it by showing it millions of photos labelled by humans, telling the AI whether each one in fact contains a cat or not. The AI picks up statistical regularities & uses them to make inferences on new photos.
It might get so good it works 99.999% of the time. But what happens if a cat trips over a bucket of purple paint and then walks into frame? The AI might never have seen a purple cat before. If colour is a part of its predictive model (likely), it might not recognize it as a cat.
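Here's a toy version of the purple-cat problem, with two synthetic "features" standing in for pixels (this is my illustration, assuming scikit-learn is available; it is not how any production vision model actually works): if no training cat was ever purple, a classifier that leans on colour can misfire on one no matter how good its measured accuracy is.

```python
# Toy sketch: a classifier trained on data where cats are never purple.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Pretend each photo is summarised by two features:
#   hue      -- 0 ~ grey/brown (typical cat colours), 1 ~ vivid purple
#   whiskers -- a noisy "whisker-like texture" score
is_cat = rng.integers(0, 2, n)
hue = np.where(is_cat == 1,
               rng.normal(0.1, 0.05, n),   # training cats are never purple
               rng.normal(0.6, 0.2, n))
whiskers = np.where(is_cat == 1,
                    rng.normal(0.8, 0.1, n),
                    rng.normal(0.3, 0.2, n))
X = np.column_stack([hue, whiskers])

clf = LogisticRegression(max_iter=1000).fit(X, is_cat)
print("training accuracy:", clf.score(X, is_cat))   # very high

# A purple cat: whiskers say "cat", hue says "never seen on a cat before".
purple_cat = np.array([[0.95, 0.8]])
print("is the purple cat a cat?", clf.predict(purple_cat))  # may well say no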
This is why there are fundamental limitations to what AI is capable of. It can never deal with entirely novel situations. When it comes to markets/economics, we have nowhere near enough data. For example, we've had only 1-2 housing crashes in the US. You'd need millions for AI to predict them.
Furthermore, the problem with AI-driven trading bots is that they actually change the nature of the market itself. The past datasets they rely on become unrepresentative of the current market over time. This is what led to the portfolio insurance debacle. Portfolio insurance changed the market.
Where a lot of leverage is involved, in my view it is only a matter of time before any quant/AI model blows up. This is also why, incidentally, AI-based fintech lending algos are very dangerous. Their own lending/credit creation will change the risks they are trying to model.
This has a lot in common with how credit ratings on MBSs contributed to the housing bubble & subsequent bust/GFC. The ratings were based on past datasets, which said the risk of a crash was negligible. But this assessment led to rapid credit creation, which changed the risk function.
AI-based fintech lending will cause another economic blow-up at some point. The models will use past data that predicts low defaults to extend too much credit, and the default experience will suddenly and discontinuously change when credit stops growing, blowing up the models.
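The feedback loop in the last two tweets can be spelled out with a stylised simulation (every parameter below is invented, purely to show the mechanism): a lender sizes new credit off trailing default rates, credit growth itself suppresses observed defaults while it lasts, and the model is blindsided the moment growth stops.

```python
# Stylised sketch of the credit feedback loop; all parameters are invented.
import numpy as np

years = 15
credit = 100.0                 # outstanding credit
trailing_defaults = [0.02]     # the model's dataset: observed default rates

for year in range(1, years + 1):
    predicted_risk = np.mean(trailing_defaults)        # "past data says risk is low"
    growth = max(0.0, 0.15 - 2.0 * predicted_risk)     # low risk -> aggressive lending
    if year > 10:
        growth = 0.0                                    # credit stops growing

    # While credit is expanding, weak borrowers can refinance, so observed
    # defaults run below the underlying risk; when it stops, they can't.
    true_default_rate = 0.06 - 0.4 * growth if growth > 0 else 0.10

    credit *= (1 + growth)
    trailing_defaults.append(true_default_rate)

    print(f"year {year:2d}: predicted {predicted_risk:.3f}, "
          f"actual {true_default_rate:.3f}, credit {credit:.0f}")
```

Run it and the predicted and actual default rates track each other for a decade, reinforcing the lending, then diverge discontinuously in year 11 - exactly the shape of break the past dataset cannot contain.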