The Most Powerful Artificial Intelligence Knows Nothing About Investing. That’s Perfectly Okay.


Indeed, that’s the point.

There’s no denying that 2020 was an exceptionally trying year. Few know this as well as active managers, who continued to struggle to provide the promised returns — perhaps none more than celebrated quantitative investment managers like Renaissance Technologies and Two Sigma.

Clients have expressed growing disappointment with quants, as manifested in their steadily increasing redemptions.

Quant managers certainly recognize the peril they face. Some, like Ted Aronson of AJO Partners, have confessed that their investment models no longer work and have given up the ghost. In an interview with MarketWatch, Aronson pulled no punches in explaining his decision to shutter his firm: “Our return sucks over the past few years,” he said. “Our shit is so bad it’s unbelievable compared to our peers.”

Others, perhaps lacking the stomach for such honest self-assessment, have chosen instead to selectively close only their woefully underperforming actively managed strategies (AQR Capital Management) or to remedy their ills by “tweaking” their current models (Bridgewater Associates). Both moves offer the illusion of change but are more likely stalling tactics built on the hope that either client inertia or luck will allow these firms to extend their businesses.

However, a genuine remedy for their current and, most certainly, future ills exists: artificial intelligence.

I’m not talking about machine learning (ML) techniques that quants and other managers have integrated into their investment processes over the past few years. Traditional ML techniques “represent a significant expansion of the quantitative investor’s toolkit, but they’re not qualitatively distinct from traditional statistical methods,” according to a white paper by Acadian Asset Management. 

Used to augment traditional human quant models and methods, AI is seen as, at best, a handmaiden to human intelligence — helpful, perhaps, but bound by its constraints.

However, the power of advanced AI — such as deep learning (DL) and deep reinforcement learning (DRL) — is rooted in its ability to find patterns in data directly and make predictions independent of human intelligence or expertise. While investment managers readily concede that these algorithms will solve incredibly complex problems in medicine, autonomous driving, engineering, robotics, and other verticals, they staunchly deny that DL and DRL will solve investment problems and build autonomous investment strategies.

That denial will be their downfall.


Their denial is clearly based on a single fundamental and universally held belief: that investing is essentially and necessarily a human activity. Coldwater Economics’ Michael Taylor speaks for all of asset management when he writes:

My starting point is that, one way or another, investing is and will remain a fundamentally human activity. . . . Even when computer-driven trading represents the majority of stock-market activity, I’m prepared to take it as axiomatic that investment will remain a fundamentally human activity.

This human-centric view of investing is such an integral part of the status quo that even those in the vanguard of ML quants like Jeff Shen, co-head of BlackRock’s systematic active equity, are reluctant to envision a future in which humans are not central to the investment process. “Fund management is an extraordinarily high-cognitive task,” Shen told The New York Times last year. “We’re far away from turning on a computer and letting it run on its own.”

Such a view of investing easily accommodates the use of traditional ML because, as Anne Tucker, faculty director of the Legal Analytics and Innovation Initiative at the Georgia State University College of Law, points out, this ML merely leverages “components of human judgment at scale. It’s not a replacement; it’s a tool for increasing the scale and the speed.” 

However, this view cannot accept AI that makes nonhuman investing possible: AI that learns autonomously, makes all the critical investment decisions, and limits the role of humans to that of developers, not portfolio managers.

What is so challenging to incumbent investment managers is that this new wave of AI requires neither programming by humans to replicate the decision-making of human experts nor deep domain knowledge of the disciplines in which it operates. Instead, using deep neural networks, data, and computing power, this AI autonomously identifies in the data itself nonlinear statistical relationships that human-based and traditional ML methods cannot detect.
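To make this concrete, here is a minimal sketch of the underlying mechanism: a small neural network, given nothing but raw (x, y) samples, learns a nonlinear relationship it was never told about. The data, architecture, and hyperparameters are all invented for illustration; this is a toy demonstration of the idea, not an investment model.

```python
# A tiny neural network learns a nonlinear relationship from raw data
# alone -- no domain knowledge, no hand-engineered features.
# Everything here (data, sizes, learning rate) is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic samples; the network is never told the true function.
x = rng.uniform(-3, 3, size=(512, 1))
y = np.sin(2 * x) + 0.1 * rng.standard_normal((512, 1))

# One hidden layer of 32 tanh units, trained by gradient descent.
W1 = 0.5 * rng.standard_normal((1, 32)); b1 = np.zeros((1, 32))
W2 = 0.5 * rng.standard_normal((32, 1)); b2 = np.zeros((1, 1))
lr = 0.5

for step in range(5000):
    h = np.tanh(x @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - y

    g = 2 * err / len(x)              # backprop of mean squared error
    gW2, gb2 = h.T @ g, g.sum(axis=0, keepdims=True)
    gh = (g @ W2.T) * (1 - h ** 2)
    gW1, gb1 = x.T @ gh, gh.sum(axis=0, keepdims=True)

    W1 -= lr * gW1; b1 -= lr * gb1    # gradient descent update
    W2 -= lr * gW2; b2 -= lr * gb2

print("final mean squared error:", float((err ** 2).mean()))
```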

For example, DL models used in cancer diagnosis and prognosis know nothing about medicine. Yet by focusing entirely on the data, they can achieve “unprecedented accuracy, which is even higher than that of general statistical applications in oncology,” according to a review published in the journal Cancer Letters.

It is the same with DL and DRL investing: These models know nothing about the investment canon, the CFA curriculum, value, or momentum. They are not programmed to mimic the decision-making of the greatest human investors. Instead, these algorithms hunt through the data, identify patterns and relationships linking the data to the prediction target, and then use this knowledge to make investment predictions.

An instructive example of such powerful self-learning algorithms is DeepMind’s AlphaGo Zero, which was initially developed to play Go, the extremely complex Chinese board game that has more possible board positions than there are atoms in the known, observable universe.

Unlike IBM’s Deep Blue — a human-designed, human-engineered, hard-coded computer program built in the 1990s to play the simpler game of chess — AlphaGo Zero started tabula rasa, without human data or engineering and with no domain knowledge beyond the rules of the game. A DeepMind blog post from 2017 concluded that AlphaGo Zero used “a novel form of reinforcement learning, in which AlphaGo Zero becomes its own teacher. The system starts off with a neural network that knows nothing about the game of Go. It then plays games against itself, by combining this neural network with a powerful search algorithm. As it plays, the neural network is tuned and updated to predict moves, as well as the eventual winner of the games.” 

Over the course of millions of games of self-play, the blog explained, “the system progressively learned the game of Go from scratch, accumulating thousands of years of human knowledge during a period of just a few days. AlphaGo Zero also discovered new knowledge, developing unconventional strategies and creative new moves.” 
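The loop described in those two passages is simple enough to caricature. The sketch below shrinks it to tic-tac-toe: a lookup table stands in for the deep network, one-step lookahead stands in for Monte Carlo tree search, and the value estimates are tuned toward each game’s eventual winner — the same self-teaching pattern DeepMind describes. The code is a schematic invented for this article, not DeepMind’s.

```python
# Self-play reinforcement learning in miniature: tic-tac-toe, with a
# value table in place of AlphaGo Zero's deep network and one-step
# lookahead in place of its Monte Carlo tree search. Illustrative only.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

value = defaultdict(float)   # the "network": position -> value for X
alpha = 0.1                  # learning rate

def choose(board, player, explore=0.1):
    moves = [i for i, s in enumerate(board) if s == "."]
    if random.random() < explore:          # occasional exploration
        return random.choice(moves)
    def score(m):                          # one-step "search"
        return value[board[:m] + player + board[m + 1:]]
    return max(moves, key=score) if player == "X" else min(moves, key=score)

for game in range(20_000):                 # the system plays itself
    board, player, history = "." * 9, "X", []
    while winner(board) is None and "." in board:
        m = choose(board, player)
        board = board[:m] + player + board[m + 1:]
        history.append(board)
        player = "O" if player == "X" else "X"
    z = {"X": 1.0, "O": -1.0, None: 0.0}[winner(board)]
    for pos in history:                    # tune values toward the outcome
        value[pos] += alpha * (z - value[pos])

print("positions evaluated:", len(value))
```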

While AlphaGo Zero was specifically designed (but not programmed) to play Go, DeepMind reported that a later version of the program, AlphaZero, achieved similarly striking success with chess and shogi: “AlphaZero quickly learns each game to become the strongest player in history for each, despite starting its training from random play, with no in-built domain knowledge but the basic rules of the game.” 

The unprecedented success of these and other experiments led the DeepMind team to draw the general conclusion that reinforcement learning can be used to achieve superhuman results in other domains:

Our results comprehensively demonstrate that a pure reinforcement learning approach is fully feasible, even in the most challenging of domains: It is possible to train to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules.

In the face of this peer-reviewed and highly cited research (and other similarly robust research), investment managers continue to staunchly claim that “humans will always . . . be a necessary component in the investment process.”

This is the exact point in the argument where good quants should provide an abundance of empirical evidence in support of their claim that nonhuman investing is impossible.

Yet none is offered. Instead, they rest their case on suppositions, appeals to tradition, and strawman arguments. 

“Machine learning won’t crack the stock market.”

Critics using this meme generally imply that DL and DRL will not achieve the same 99 percent predictive accuracy these systems achieve in computer vision, speech recognition, and even cancer detection.

Such a level of prediction accuracy in investing would immediately conjure up memories of Bernie Madoff. What these critics fail to point out is that in investing, we are not trying to “crack the code.” Rather, as William Cohan writes in Fast Company, “An exceptional trader would be thrilled with a 51 percent success rate — similar to the house edge at a Las Vegas blackjack table.”
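A rough calculation shows why so thin an edge is worth having. Assuming independent trades with equal-sized, symmetric wins and losses (a deliberate simplification invented here for illustration), a 51 percent hit rate compounds into near-certain profitability at scale:

```python
# Back-of-the-envelope: the probability that a trader with a 51% hit
# rate finishes ahead after n independent, equal-sized trades, using
# the normal approximation to the binomial. Assumptions are simplified.
from math import erf, sqrt

p, n = 0.51, 10_000
mu, sigma = n * p, sqrt(n * p * (1 - p))
z = (n / 2 - mu) / sigma                   # threshold: more wins than losses
p_profitable = 0.5 * (1 - erf(z / sqrt(2)))
print(f"P(profitable after {n} trades) = {p_profitable:.3f}")  # about 0.977
```

Under those assumptions, the 51 percent trader finishes ahead roughly 98 percent of the time over 10,000 trades, which is why Cohan’s exceptional trader would be thrilled.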

“Financial markets are not stationary. They change all the time, driven by political, social, economic, or natural events.”

Denialists claim that, unlike MRIs and board games, markets are simply too complex and random — and this complexity and randomness overwhelm the ability of DL and DRL to consistently find actionable insights in the data.

Putting aside the fact that there is no reason to believe human intelligence-based models could account for this complexity and randomness any better than AI, it is important to acknowledge that complexity and randomness are human concepts, derived from human observations of price activity. Phenomena humans perceive as complex and random might not appear that way to DL and DRL systems. As Tom Simonite writes in Wired, “Artificial intelligence is alien intelligence, perceiving and processing the world in ways fundamentally different from the way we do.”

Jeff Glickman, co-founder of AI-based investment manager J4 Capital, makes this argument in Cohan’s Fast Company article:

Glickman uses the word random advisedly, as if the chaos of the universe is just an illusion, concealing a fundamental, if inscrutable, higher order. “It’s kind of an intellectual cop-out,” he says. “It’s when something becomes so complex that the human mind is overwhelmed by the information content, and the human mind can’t possibly ever understand it.”

But that doesn’t mean that some other intelligence won’t. “Despite the fact that you or I might perceive it as being random, there’s nothing random about it,” he continues. “There’s just an overwhelming amount of complexity that’s beyond comprehension for humans, but within the ability of a massive supercomputer to comprehend.”

“And while AI is great at recognizing patterns in data and identifying similar situations observed in the past, it is usually at a loss on how best to act in new and previously unseen situations, such as the Covid-19 outbreak.”

It is true that DL and DRL are trained on historical data and that circumstances not seen in the training data could prove troublesome.

Even a technology-heavy investment manager like Renaissance Technologies admits this. Bloomberg reports that Renaissance told clients in a September letter, “It is not surprising that our funds, which depend on models that are trained on historical data, should perform abnormally (either for the better or for the worse) in a year that is anything but normal by historical standards.”

But such an admission does not disqualify DL and DRL as possible investment systems.

If it did, it would also disqualify all human intelligence-based investment methods, because they too rely on historical inputs (e.g., data, human experience) and are equally at a loss on how best to act in unprecedented circumstances. (“Quants rely on data from time periods that have no reflection of today’s environment. When you have volatility in markets, it makes it extremely difficult for them to catch anything because they get whipsawed back and forth,” Adam Taback, chief investment officer of Wells Fargo Private Wealth Management, told Bloomberg in November.)

“[W]hile the breadth of data that can be used in finance is quite large, the time series are often very short and usually limited to a few decades. A limited number of time series observations means that any model using the data is also constrained to be proportionally small.”

There is no question that DL and DRL are capable of ingesting massive amounts of data, and because of their large capacity, more data generally results in higher prediction accuracy. 

Defenders of the status quo often point to the large data sets required to train computer vision models or autonomous vehicles, note that financial market data is far smaller, and conclude that this limitation disqualifies advanced AI from use.

Financial data might be limited, but the criticism fails to consider that, unlike traditional quant models, DL and DRL models are capable of ingesting nonfinancial data, including nontraditional data (e.g., geospatial data). And the type, kind, and volume of this data is growing daily, giving engineers a broader palette from which to paint. As my friend Chris Schelling points out:

Today, as a society, we create roughly 2.5 exabytes of new data each day. For perspective, an exabyte is one quintillion bytes, which is a one with 18 zeroes after it. Put another way, one exabyte is one billion gigabytes, a number more familiar to most of us. My current iPhone has a storage capacity of roughly 125 gigabytes, so I would need 8 million iPhone 7s to store just one exabyte of data, and 20 million of them to warehouse the new data created every day. Data is now ubiquitous.
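The quote’s arithmetic checks out. A quick sanity check, using Schelling’s round figure of 125 gigabytes per phone:

```python
# Verifying the figures in the quote above (decimal units).
EXABYTE_GB = 10**18 // 10**9        # 1 exabyte = 1 billion gigabytes
IPHONE_GB = 125                     # Schelling's round figure per phone

phones_per_exabyte = EXABYTE_GB // IPHONE_GB        # 8,000,000
phones_per_day = int(2.5 * phones_per_exabyte)      # 20,000,000
print(phones_per_exabyte, phones_per_day)
```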

Perhaps more importantly, those who disqualify DL and DRL from investing because of the perceived limitations of financial data offer no empirical evidence to support their claim. They cannot point to the live track record of a single DL- or DRL-based investment strategy that has failed because of the sparseness of input data (and not other factors).

“When it comes to applying AI [to investing], it is compulsory for us to understand exactly how each algorithm works.”

Finally, we reach the black box objection, the trump card that deniers play if all other objections fail: the last stand of the status quo.

The thing about this objection is that, unlike the others, it is true. By its very nature, advanced AI is a black box. While we can observe how AlphaGo Zero plays, even its designers cannot explain why it makes a specific move at a specific time. Similarly, a manager using DL or DRL can give a general overview of the approach (e.g., “we use a recurrent neural network”) and can share the model’s inputs and outputs, but cannot explain why the model makes a specific investment decision.

There are obvious counterarguments to this requirement of explainability. For example, the “blackness” of an investment model is the product of a specific historical epoch; option pricing models, technical analysis, program trading, optimization programs, and statistical arbitrage programs were the black boxes of their day. Others point out that we hold AI to a higher standard of interpretability than we do human intelligence because it is not possible to explain the “why” of human decision-making. As Vijay Pande, general partner at Andreessen Horowitz and former director of the biophysics program at Stanford University, wrote in The New York Times, “Human intelligence itself is — and always has been — a black box.”


The investment industry has made a clear choice. 

It has chosen “why” over “what,” explainability over accuracy. 

But this choice was inescapable — because it preserves investing as a human activity.

This choice both accommodates the use of AI techniques that are “not qualitatively distinct from traditional statistical methods” and disqualifies the use of AI systems like DL and DRL that learn, make decisions, and take actions autonomously.

Above all, it ensures “the quantitative investment process will remain recognizable in the foreseeable future,” in Acadian’s words, thereby preserving the status quo. And the status quo is not working for many active managers, especially quants.

This choice should alarm asset allocators for the simple fact that it all but guarantees the perpetuation of the cycle of manager underperformance.

It is critical that allocators realize this future is the result of managers’ explicit decision to subjugate artificial intelligence to human intelligence and renounce advanced AI, a choice grounded in hubris and self-interest and supported by strawman arguments, misunderstandings of advanced AI, and scant empirical evidence.

AI investing can help break this cycle of poor performance, but it requires that allocators choose “what” over “why” and accept, as Selmer Bringsjord told Vice, that “we are heading into a black future, full of black boxes.”


Angelo Calvello, PhD, is co-founder of Rosetta Analytics.
