THE LENS
Digital developments in focus

How to regulate the use of AI in financial services – still looking for answers

How to regulate AI is the subject of a wide-ranging debate, both in the UK and internationally. One of the critical issues in this debate is whether AI can be managed through clarifications to the existing regulatory framework, or whether a new approach is needed. Yesterday, the Bank of England, the PRA and the FCA published a joint Discussion Paper (DP) that poses this question in the context of UK financial services.

The aims of the DP are to: 

  • encourage a discussion on the challenges associated with the use and regulation of AI; and 
  • explore how best to address them in a way that is aligned with the regulators’ statutory objectives, provides clarity and makes a practical difference for consumers, firms and markets.

The DP forms part of the regulators’ wider programme of work related to AI in financial services, including that of the AI Public Private Forum, which published a comprehensive final report in February 2022. That report found that a lack of clarity surrounding current rules, regulations and principles – in particular, how these apply to AI and what that means for regulated firms at a practical level – is holding back AI adoption. The DP does not provide that clarity for now. Instead, it refers us back to the usual conundrums: how can regulators support the safe adoption of AI? How can policy mitigate AI risks while facilitating beneficial innovation? Is there a role for global standards? It raises a number of more specific questions structured around the following three themes:

  • Regulators’ objectives and remits: exploring the best approach to defining and/or scoping the characteristics of AI for the purposes of legal requirements and guidance.
  • Benefits and risks of AI: identifying the areas of benefits, risks, and harms in relation to which the financial services regulators should prioritise action.
  • Regulation: exploring whether the current set of legal requirements and guidance is sufficient to address the risks and harms associated with AI and how additional intervention may support safe and responsible adoption of AI.

The responses to the DP will help inform both the regulators’ thinking and any potential future policy proposals, which will sit within the wider domestic and international context of emerging AI regulation. 

So, while there are no definitive answers in the DP, it does map out the areas where additional clarification or guidance may (or may not) be provided in due course. The regulators are particularly interested, for example, in the additional challenges and risks that AI brings to firms’ decision-making and governance processes, and how those may be addressed through the Senior Managers and Certification Regime (SM&CR). They seem keen to resolve the question of whether there should be a dedicated Senior Management Function (SMF) and/or a Prescribed Responsibility for AI under the SM&CR – noting that the split in responsibilities is currently an area of uncertainty and that more guidance would help clarify what might constitute ‘reasonable steps’ for the purposes of sections 66A(5) and/or 66B(5) of FSMA. They appear less inclined to provide a precise definition of AI as the basis for clarification or for additional or alternative regulatory requirements.

The DP sits against the backdrop of the UK government’s July policy paper ‘Establishing a pro-innovation approach to regulating AI’, addressed in a previous post, and international developments from other regulators and authorities, such as the EU’s proposed AI Regulation. We'll be talking more about the UK approach in a series of thought pieces in our Regulating AI hub.

Next steps: responses to the DP are requested by 10th February 2023. 

“We note the importance of building, maintaining and reinforcing the trust of all stakeholders, including consumers, in AI. Engagement between the public and private sectors will facilitate the creation of a regulatory framework that enables innovation and mitigates potential risks.”

Tags

fig, ai, fintech