Beware the known unknowns when finance meets AI

29 September 2021 Technology & Digitalization

The Collingridge Dilemma sounds like the title of a Sherlock Holmes mystery. It is, in fact, one of the best explanations for how difficult it can be to control risky technologies. In essence, it concerns the imbalance between imperfect information and entrenched power.

“When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time-consuming,” the academic David Collingridge wrote in his book The Social Control of Technology.

How can we act when dealing with known unknowns? That is the dilemma facing regulators today as they try to assess the impact of artificial intelligence in the finance industry. As two recent reports from the Bank for International Settlements and the OECD make clear, we have now reached a critical juncture. While the benefits of AI are obvious, in terms of increased efficiency and improved service, the risks are often obscure.

In an earlier paper published while at the Massachusetts Institute of Technology, Gary Gensler warned that the broad adoption of deep learning AI models might even increase the fragility of the financial system. It is a good job Gensler now chairs the US Securities and Exchange Commission and is in a position to respond to the concerns he previously raised.

There is no shortage of views about the principles that should govern AI. According to AlgorithmWatch, a Berlin-based non-profit, at least 173 sets of AI principles have been published around the world. It is hard to disagree with the worthy intentions contained in these guidelines, promising fairness, accountability and transparency. But the challenge is to translate lofty principles into everyday practice given the complexity, ubiquity and opacity of so many uses of AI.

Automated decision-making systems are approving mortgages and consumer loans and allocating credit scores. Natural language processing systems are conducting sentiment analysis on corporate earnings statements and writing personalised investment advice for retail investors. Insurance companies are using image recognition systems to assess the cost of car repairs.
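To make one of these applications concrete, below is a minimal sketch of sentiment analysis on an invented earnings-statement excerpt, using the open-source Hugging Face transformers library. The excerpt and the default model are illustrative assumptions; a production system would be fine-tuned on financial language.

```python
# Minimal sketch: scoring the tone of an earnings-statement excerpt.
# Assumes the open-source `transformers` library; the text is invented.
from transformers import pipeline

# Loads a default pretrained sentiment model (an assumption; a real firm
# would fine-tune on financial language such as annotated filings).
classifier = pipeline("sentiment-analysis")

excerpt = (
    "Revenue grew 12 per cent year on year, although margins contracted "
    "amid rising input costs and softer guidance for the next quarter."
)

result = classifier(excerpt)[0]
print(f"{result['label']} (score={result['score']:.2f})")
```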

Although the use of AI in these cases might affect the rights, and the wealth, of individuals and clients, it does not pose a systemic risk. Many of these concerns are covered by forthcoming legislation, including the EU's proposed AI Act. These legislative initiatives sensibly put the onus on any organisation deploying an AI system to use appropriate and bias-free data, to ensure that its outputs are aligned with its goals, to explain how it operates and to help determine accountability if things go wrong.

The murkier question concerns the use of AI-powered trading systems, which might destabilise financial markets. There are risks of herding, gaming or collusive behaviour if systems are all trained on the same data using the same kinds of algorithms, says Sarah Gadd, head of data and AI solutions at Credit Suisse.

“You have to monitor these things incredibly closely — or do not use them,” she says. “You have to have the right kill switch to turn things off in milliseconds and have people you can fall back on. You cannot replace human intelligence with machine intelligence.” 
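What such a kill switch might look like in practice is sketched below: a toy trading loop guarded by a circuit breaker that halts the strategy the moment preset position or loss bounds are breached. Every name, threshold and the strategy stub is hypothetical; this illustrates the general pattern Gadd describes, not any bank's actual controls.

```python
import threading
import time

# Hypothetical guardrails: the limits below are invented for illustration.
MAX_POSITION = 1_000_000   # notional position limit
MAX_LOSS = 50_000          # daily loss limit

kill_switch = threading.Event()

def guardrails(position: float, pnl: float) -> None:
    """Trip the kill switch if the strategy breaches preset bounds."""
    if abs(position) > MAX_POSITION or pnl < -MAX_LOSS:
        kill_switch.set()

def trading_loop(strategy) -> None:
    position, pnl = 0.0, 0.0
    while not kill_switch.is_set():
        order = strategy(position)    # the model proposes an order
        position += order             # toy fill: assume full execution
        pnl -= abs(order) * 0.0001    # toy transaction-cost model
        guardrails(position, pnl)
        time.sleep(0.001)             # re-check the bounds every millisecond
    print("Kill switch tripped: strategy halted, humans take over.")

# A runaway toy strategy that keeps buying; it trips the position limit.
trading_loop(lambda position: 10_000)
```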

But others note that flash crashes occurred long before AI was ever used in financial markets. The question is whether AI systems make them worse. AI is nothing magical, just a statistical technique, says Ewan Kirk, the founder of Cantab Capital Partners, an investment fund that uses trading algorithms. “The only thing that AIs are great at finding are incredibly subtle effects that involve a huge amount of data and are probably not systemic in nature,” he says. The reason for a kill switch is not that the AI program might take down the financial system, he adds, but that it probably has a bug.

The best way of dealing with the Collingridge dilemma is to increase knowledge about AI in organisations and across society, and to check the power of entrenched interests that may obstruct necessary change. Several regulators are already on the case, hosting AI forums, developing regulatory sandboxes to test and verify algorithms and deploying their own machine learning systems to monitor markets.
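As a flavour of the kind of machine-learning surveillance regulators are experimenting with, here is a hedged sketch that uses scikit-learn's IsolationForest to flag outlying trades in synthetic data. The data and the two features are invented for illustration and do not represent any regulator's actual pipeline.

```python
# Sketch of market surveillance via anomaly detection. The synthetic
# trade data and the feature choices are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic trades: columns are (order size, price deviation from mid).
normal = rng.normal(loc=[100.0, 0.0], scale=[20.0, 0.5], size=(1_000, 2))
outliers = rng.normal(loc=[5_000.0, 3.0], scale=[500.0, 0.5], size=(10, 2))
trades = np.vstack([normal, outliers])

# Isolation forests flag points that are easy to separate from the bulk.
model = IsolationForest(contamination=0.01, random_state=0).fit(trades)
flags = model.predict(trades)  # -1 marks a suspected anomaly

print(f"Flagged {(flags == -1).sum()} of {len(trades)} trades for review.")
```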

But there is an argument that there are also unknown unknowns, as the epigrammatic former US defence secretary Donald Rumsfeld put it, and that in some circumstances we should invoke the precautionary principle. Regulators should be prepared to ban the use of the most exotic, or ill-designed, AI systems until we better understand how they work in the real world.

john.thornhill@ft.com