Earlier this week, James Proudman, Executive Director of UK Deposit Takers Supervision at the PRA, gave a speech on the governance of artificial intelligence in financial institutions.
The speech set out a number of important considerations for the boards of financial institutions when deciding to utilise artificial intelligence and/or machine learning within their businesses. The three main challenges identified in the speech concerned:
- Data: Deciding what data should be used, how it should be modelled and tested, and how the outcomes can be validated to ensure they are correct
- Human: While these solutions are often automated, they are designed and overseen by individuals, so there must be a continued focus on individual incentives and accountability in systems relying on artificial intelligence and/or machine learning
- Execution: New, specialised skillsets and controls will be required to deliver such systems and to mitigate their inherent risks.
Introducing new technology has never been without its challenges; nor, for that matter, has introducing any new product. Products and solutions which incorporate elements of artificial intelligence and/or machine learning will pose new problems and risks, but the basic methods and skills used to solve and mitigate them are likely to be familiar to every financial institution.
Firstly, boards need to ensure that such new products are appropriately designed and rigorously tested, so that all of the possible outcomes are fully understood, and that those outcomes are monitored for unexpected deviation.
Those involved in the relevant development need the skills and expertise to understand not only what outcome is being produced, but also why, and how to resolve any issues that arise. This needs to be the case up and down the chain of command. As with so much in the financial services space, this is likely to require educating at least some senior managers (i.e. at least those to whom senior management responsibility is allocated).
What is new are the specific skills required, and the possibility that the use of artificial intelligence and/or machine learning amplifies inherent, previously unnoticed biases and errors across vast amounts of data. The latter point only underlines how important it is to understand the relevant product or system at the earliest possible stage.
There is a sense that internal barriers are the main reason there hasn't been an even greater uptake of artificial intelligence and/or machine learning tools within large financial institutions. Governance, and board wariness, may be one such barrier. It is vital to get this right, but deploying established (and robust) governance procedures is a very good first step.
For more thinking on the risks, opportunities and some potential ideas for the responsible deployment of AI, you might want to look at the paper we published jointly with ASI Data Science (now Faculty) on the topic a couple of years ago.