THE LENS
Digital developments in focus
3 minute read

New winds blowing: Bank of England publishes speech on AI and financial stability

Last week the Bank of England (the Bank) published a speech delivered by Sarah Breeden, the Bank’s Deputy Governor for Financial Stability, on the impact of artificial intelligence on financial stability. Particularly interesting, from our perspective, were her comments on: (i) AI governance, and (ii) AI model providers and the regulatory perimeter. 

We last heard from the Bank, PRA and FCA on AI in April 2024 (see our Lens post here), when the existing regulatory framework was deemed well-equipped to handle AI risk, for now. Ms Breeden echoes that sentiment, stating that the Bank does not consider that we have reached the point where it needs to change its tech-agnostic microprudential approach, or where macroprudential policy is needed. But she also stresses that “the power and use of AI is growing fast, and we mustn't be complacent”, and observes that existing regulatory frameworks “were not built to contemplate autonomous, evolving models with potential for decision making capabilities”.

AI governance

The importance of robust AI governance is something we speak to clients about regularly. We were struck by the speech's revelation that only a third of respondents to the Bank and FCA's latest survey on AI and machine learning described themselves as having a complete understanding of the AI implemented in their firms.

As firms consider using AI in higher-impact areas of their businesses, such as credit risk assessment, capital management and algorithmic trading, Ms Breeden says the Bank should expect “a stronger, more rigorous degree of oversight and challenge”, and think about “where we might be content for AI models to make automated decisions and where (and to what degree) there should be a human in the loop.” While nothing concrete is promised, the speech hints that practical guidance may be forthcoming on the ‘reasonable steps’ senior management might be expected to have taken with respect to AI systems in order to comply with regulatory requirements.

Model providers and the regulatory perimeter

The incoming regime for critical third parties (CTPs) to the UK financial sector has been touted as a means of tackling the risk that firms come to rely on common AI service providers. It's a regime that's on our mind, with final rules expected in Q4 2024. So we were interested to see Ms Breeden discuss the limitations of this regime, which was designed to address the risk of failure or operational disruption at a critical node, in the context of AI and macroprudential policy.

Indeed, “AI could lead to a different kind of reliance”: firms are expected to ensure that third-party models meet the same standards for model risk and data risk management as if they had been developed in-house, a challenging ask in the absence of visibility over model design and the capability to interrogate it. To meet this, Ms Breeden suggests that the Bank may need to “think again about the adequacy of the regulatory perimeter and whether some requirements applying directly to model providers themselves might be necessary”. This would be the case particularly if AI starts to be used in a material way for trading or core risk assessment. Whether, and how, any such requirements would interact with those imposed by the UK’s impending AI bill will be an important question.

Final thoughts

There are a number of breadcrumbs in this speech, as Ms Breeden raises important questions for the regulators to answer: what does explainability mean in the context of generative AI, and what controls should firms have? Are existing frameworks on model risk management sufficient to ensure firms understand what their models are doing as they evolve autonomously? Can the Bank do more to ensure that firms are training AI models on high-quality, unbiased input data?

While we do not have the answers, it is clear that a departure from a tech-agnostic approach is, at the least, in contemplation. As the regulators continue to monitor AI adoption in the financial sector, including through initiatives such as the FCA’s new AI Lab (which we wrote about here), the Bank’s approach to AI is a wise one: “to be humble and to be prepared”.




Tags

fig, ai