THE LENS
Digital developments in focus

Tackling AI risk in financial services: existing regulatory framework declared well equipped, for now

On 22 April 2024, the Bank of England, PRA and FCA confirmed their view that the existing regulatory framework is well equipped to capture regulated firms' use of artificial intelligence and machine learning (or ‘AI/ML’). In doing so, the regulators responded to the government's request for an update on how its five AI principles will be applied in the financial services sector (for more on the government's approach, including to generative AI, see our post here).

For now, further changes to the regulatory framework to implement these five AI principles have been deemed unnecessary. Most notably, a mooted dedicated Senior Management Function for AI has been taken off the table. Instead, the Bank of England and PRA, and separately the FCA, detail how the existing regulatory framework maps to each of the five AI principles. In particular, the Senior Managers and Certification Regime, the Operational Resilience and Outsourcing requirements, the incoming Critical Third Parties regime, the FCA's Consumer Duty and the PRA's expectations for banks' management of model risk (SS1/23) are flagged as important tools in the regulatory armoury.

The decision to rely, for now, on the existing regulatory framework (trailed in a feedback statement published in October last year) stems from the belief that many of the risks related to AI are not unique to AI itself, and underscores the regulators' technology-agnostic approach. We should recall that this is an industry familiar with the risks that arise where models are used to make business decisions, manage risk and fulfil reporting obligations, and an industry whose use of robo-advisers and algorithmic trading long pre-dates the current AI/ML conversation.

Importantly, as put by the Bank of England and PRA, “technology-agnostic does not mean technology-blind”. It is observed that wider adoption of AI/ML could pose system-wide financial stability risks (for example, by amplifying procyclical behaviour or increasing cyber-risk) and increase risks to consumers, where some consumers might even be excluded from the market. Further question marks are raised around developments in quantum computing and the impact of the rapid rise of Large Language Models (although it is noted that most LLM use cases identified to date are relatively low risk). 

AI adoption across UK financial markets will now be monitored closely, and future regulatory adaptations actively considered if needed. As part of this diagnostic work, the Bank of England and FCA will run a third edition of their machine learning survey in 2024. Clarificatory guidance on how existing rules apply to AI may also follow. As flagged by respondents to the October 2023 feedback statement, a helpful starting point might be guidance on the interpretation and evaluation of good consumer outcomes in the AI context under the FCA's Consumer Duty.

This approach could be described as ‘wait and see’. As the UK regulatory framework evolves, we will be particularly interested in how AI systems used in both credit scoring and health and life insurance risk assessment and pricing for natural persons will be managed, given that these are designated as high-risk systems under the EU AI Act. But for now, the tools that regulated firms will use to respond to AI will feel reassuringly familiar. 

Tags

fig, ai, fintech