Another Reason Why Investors Won’t Embrace AI

I’ve staked my career on the view that advanced AI methods will radically transform our investment decision-making processes and our business models. With few exceptions, asset owners, asset managers, and consultants do not share my view.

Some, like Jeff Gundlach, CIO of DoubleLine Capital, take a more dogmatic view: “I don’t believe in machines taking over finance at all.”

Many others are more circumspect. They might claim to share my view, but a deeper investigation reveals they see AI as a fad or a hedge rather than as a catalyst for an inevitably different future.

It’s disconcerting that these usually critical thinkers fail to see that AI’s transformative power allows for better predictions than traditional approaches. What is disconcerting is not the disparity between our worldviews but the internal inconsistency of their own views.

These same individuals accept that in the past few years advanced AI technologies such as deep learning and deep reinforcement learning have solved complex problems better than humans (e.g., DeepMind’s AlphaGo and Mount Sinai’s Deep Patient) without explicit human programming.

They fully understand the commercial benefits of AI and can often explain how traditional, non-tech companies like Walmart, UPS, and Toyota have achieved those benefits by incorporating AI into their core businesses. More radically, they see how an old-world company like GE has used AI and big data to transform itself from a heavy industry and consumer product company into the world’s largest digital industrial company with the explicit objective of “using AI to solve the world’s most pressing problems across all our industrial business lines.”

Their internal inconsistency does not reflect a deficiency in their cognitive powers. Rather, it’s the result of a specific behavioral bias: algorithm aversion.

Yes, “algorithm aversion” is a real phenomenon supported by a substantial body of academic research. Academics Berkeley Dietvorst, Joseph Simmons, and Cade Massey summarized it in a 2014 paper: “research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster.” In fact, “people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster.”

As AHL CIO Nick Granger explained it in a Bloomberg article: “It [algorithm aversion] shows people trust humans to do jobs even when, according to the evidence, computers are more effective.”

This behavioral bias explains how investors can hold the contradictory position that while AI can help other businesses make better predictions and decisions, it is of less value in investing.

This behavioral bias also explains why, in spite of algorithms’ demonstrated forecasting power, we cling to the belief that this power can be improved with human input.

For example, hedge fund CQS’s Michael Hintze claims that “models are a great place to begin, but not necessarily a good place to finish. It is a team effort and you need the analysts, traders and portfolio managers with the skills, experience, and judgment to use and understand sophisticated financial models.” 

Interestingly, Hintze’s comment reveals not only the problem of, but also a potential cure for, algo aversion: Give individuals a sense of control over the algos, and they are more likely to accept their forecasts.

Academic literature supports this compromise solution, but it seems more appropriate to cite an industry practitioner, Mark Rzepczynski, for an explanation of this point:

“If some modest amount of control is given to the decision maker, he will choose the algo over his own decision making. Allow the individual to adjust the model output by 10 percent, and [he is] happy. Allow the individual to reject 10 percent of the output from the model, and [he is] happy. Aversion falls when the human has some control over the model. You could say that the way to combat one behavior bias — algorithm aversion — can be through another bias, the illusion of control.”
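To see how this compromise might work mechanically, here is a minimal sketch of a bounded human override on an algorithmic forecast. The code is purely illustrative (the function name and figures are my own hypothetical example, not drawn from any firm cited here): the human can nudge the model’s output, but only within a 10 percent band, so most of the model’s predictive signal survives.

```python
def blend_forecast(algo_forecast: float, human_forecast: float,
                   max_adjustment: float = 0.10) -> float:
    """Apply a human override to an algorithmic forecast, capped at
    +/- max_adjustment (e.g., 10%) of the algorithm's own output."""
    cap = abs(algo_forecast) * max_adjustment
    # Limit how far the human can pull the final number away from the model.
    adjustment = max(-cap, min(cap, human_forecast - algo_forecast))
    return algo_forecast + adjustment


# Example: the model forecasts a 6% return; the manager insists on 9%.
# The blended forecast can move only to 6.6% (6% plus a capped 10% nudge).
print(blend_forecast(0.06, 0.09))  # 0.066
```

The point is that even a tightly capped override is enough, in Rzepczynski’s telling, to give the decision maker the sense of control that defuses the aversion.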

The weakness of this accommodation is that though it might cause some investors to adopt AI, it dilutes the predictive power of the model, making it likely that those investors will be disappointed with the human/AI results.

I remain convinced that AI will transform asset management. However, I’ve come to see that the biggest challenge we face may not be developing powerful predictive AI-based investment models, but simply convincing investors not to trust their own judgment. More broadly, the winners and losers will be decided not by the current market position of a firm or even the size of its checkbook, but by its ability to overcome its anthropocentric prejudice and trust AI as it would trust a human being.

Angelo Calvello, PhD, is the Managing Member of Impact Investment Partners and co-founder of Rosetta Analytics, Inc. This article originally appeared in the November issue of Institutional Investor.
