“Can decisions that materially affect people’s lives be outsourced to a machine?” asked Christopher Woolard, executive director of strategy and competition at the UK Financial Conduct Authority (FCA), at a recent London conference on artificial intelligence (AI) ethics in the financial sector, according to the published text of his speech. The question, he suggested, had a “particular resonance in finance”.
In his speech, Woolard drew upon the findings so far of a joint survey of 200 firms by the FCA, the UK’s financial services regulator, and the Bank of England. Full results are expected in the third quarter of this year.
Woolard said that “the use of AI in the firms we regulate is best described as nascent”, with the technology being used “largely for back-office functions, with customer-facing technology largely in the exploration stage”.
The lessons of the global financial crisis were “playing on industry minds”, creating caution, he said. “Certainly, there is no desire to reverse progress on rebuilding public trust.”
He said there was a balance to be struck on this. “While awareness of regulatory and consumer risk is welcome, we don’t want this to act as a barrier to innovation in the interests of consumers.”
Companies using AI or machine learning need to ensure they have a “solid understanding” both of the technology and the governance around it, Woolard warned.
He offered boards of financial services firms the following advice: “We want to see boards asking themselves, ‘What is the worst thing that can go wrong?’ and providing mitigations against those risks.”
‘Explainable’ algorithmic decisions
The FCA is partnering with The Alan Turing Institute — the UK institute for data science and AI named after a pioneer in this field — in a yearlong project to look at the use of AI across the UK financial services sector. It will also examine the ethical and regulatory questions that arise and advise on solutions to them. Transparency and explainability will be a particular focus of the work.
Woolard said there was a growing consensus around the idea that algorithmic decision-making “needs to be ‘explainable’”.
He gave a practical example: “If a mortgage or life insurance policy is denied to a consumer, we need to be able to point to the reasons why.”
Woolard then asked, “But at what level does that explainability need to be? Explainable to an informed expert, to the CEO of the firm, or to the consumer themselves?”
The challenge, he said, was that explanations do not naturally arise when using complex algorithms.
He explained: “It’s possible to ‘build in’ an explanation by using a more interpretable algorithm in the first place, but this may dull the predictive edge of the technology.”
That means there is a tradeoff to consider between a prediction’s accuracy and the ability to describe it.
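The trade-off Woolard describes can be illustrated with a toy sketch: a simple linear scoring model makes it trivial to “point to the reasons why” an application was denied, because each feature’s contribution to the decision is directly readable. The feature names, weights, and threshold below are illustrative assumptions, not any firm’s real model or FCA guidance.

```python
# Hypothetical sketch: reason codes from an interpretable linear scorer.
# All features, weights and the threshold are invented for illustration.

APPROVAL_THRESHOLD = 0.0

WEIGHTS = {
    "income_to_loan_ratio": 2.0,   # a higher ratio helps the applicant
    "years_employed": 0.3,
    "missed_payments": -1.5,       # each missed payment lowers the score
}

def decide_with_reasons(applicant):
    """Return (approved, reasons).

    Because the model is linear, each feature's contribution is just
    weight * value, so the negative contributions themselves serve as
    human-readable reason codes for a denial (worst first).
    """
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= APPROVAL_THRESHOLD
    reasons = sorted(
        (f for f, c in contributions.items() if c < 0),
        key=lambda f: contributions[f],
    )
    return approved, reasons

applicant = {"income_to_loan_ratio": 0.4, "years_employed": 1, "missed_payments": 2}
approved, reasons = decide_with_reasons(applicant)
# Score = 0.8 + 0.3 - 3.0 = -1.9: denied, with "missed_payments" as the reason.
```

A more complex model (a deep neural network, say) might predict defaults more accurately, but its decisions cannot be decomposed this cleanly, which is the predictive-edge-versus-explainability tension Woolard points to.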
Woolard predicted that a “period of profound evolution” was likely to follow the implementation of Open Banking — a recent innovation that gives FCA-regulated financial services firms secure access to the financial information of individuals and small and medium-size enterprises.
“We all know that with access to the rich datasets facilitated by Open Banking, the potential for AI for the good of consumers is huge,” Woolard told delegates.
But he said there is a caveat: The technology relies on the public’s trust in it and willingness to use it. The public also needs to see the value that its data creates.
He warned firms: “At a basic level, firms using this technology must keep one key question in mind — not just, ‘Is this legal?’ but ‘Is this morally right?’”
Woolard said that, internationally, the FCA is also leading work on AI for the Spain-based International Organization of Securities Commissions (IOSCO), which is the global standard-setter for the securities sector. The work is centred on trust and ethics issues and the future shape of a framework for financial services.
While underlining that the FCA’s rules in the context of AI “are sufficient — for now”, Woolard said the FCA is reviewing — in its Future of Regulation project — its principles for business and looking at how it can become a more “outcomes-based regulator”.
His final call to action to the conference delegates was to work together on AI issues. He quoted Alan Turing: “‘We can only see a short distance ahead, but we can see plenty there that needs to be done.’”
─ Oliver Rowe (Oliver.Rowe@aicpa-cima.com) is an FM magazine senior editor.