As artificial intelligence (AI) and machine learning techniques become the norm rather than the experimental exception in many segments of the financial services sector, the FCA continues to lead the international field in developing strategies for identifying and mitigating attendant risks, and indeed for harnessing the technology for its own benefit.
In comments at The Alan Turing Institute this week, Christopher Woolard, Executive Director of Strategy and Competition at the regulator, implicitly observed that AI and machine learning have yet to take hold in the sector in any meaningful way, with mere digital processes in many cases being mislabelled, or misunderstood, as "intelligent" systems; and of course therein lies one potential risk factor: that products, services or systems are dumber than we give them credit for.
But this is not to detract from the more important comments that Mr Woolard made about setting intentions and delivering on the promises being made around the benevolence of AI and machine learning techniques. He suggested that a focus on two key topics is essential: doing good, and good governance.
Put simply, AI use cases in financial services should always seek to deliver net better outcomes for users, for the financial system and for society; we should not be using AI for its own sake. And decision-making around the design and deployment of AI should be responsible; there are serious ethical factors at play, and the stakes may be higher than we yet appreciate: "the answers we arrive at have the potential to fundamentally alter society and the established order".
For more on this topic, see our Superhuman Resources report: responsible deployment of AI in business (https://www.slaughterandmay.com/media/2536419/ai-white-paper-superhuman-resources.pdf).