THE LENS
Digital developments in focus

Using AI in financial services? ICO and FCA collaborate to boost confidence

A recent survey by the Financial Conduct Authority (FCA) and the Bank of England found that 85% of financial services organisations are currently using or planning to use AI, but 33% identify data protection as a constraint, whilst 20% cite FCA regulations as a constraint.1 Against this backdrop, the UK Information Commissioner’s Office (ICO) and the FCA held a joint industry workshop, resulting in them issuing a joint statement on 2 June (the Statement) on how they are supporting financial services companies’ adoption of AI. The roundtable and Statement reflect the ICO and FCA’s desire to balance driving legal compliance with supporting the UK Government’s pro-innovation strategy on AI.

Key takeaways from the Statement are considered below.

Broader sandbox collaboration 

The ICO and FCA will collaborate to ensure participants in their own sandbox schemes receive joined-up regulatory positions. Additionally, a revised version of the Digital Regulation Cooperation Forum (DRCF) cross-regulatory sandbox is being developed now that the pilot scheme has concluded.

Continuing joint work on AI 

The two regulators will continue to work together to support AI uptake both directly and through the DRCF. Their plans include a follow-up workshop with small companies on AI adoption and further projects through the DRCF to develop their respective understanding of how other regulatory regimes may apply to AI. The Statement makes particular reference to agentic AI (which has greater autonomy to take decisions), indicating this form of AI is moving up the regulators’ agenda. 

Clarity on interaction of regulations

The ICO and FCA will continue to collaborate to develop regulatory clarity for financial services firms, including on the interaction between data privacy law and the Consumer Duty. As an example, the Statement highlights the impact of the ICO’s letter last year advising that the UK data privacy and e-marketing rules should not prevent savers being told about more favourable rates.

New statutory code of practice

One of the regulators’ takeaways from the roundtable was that there were no specific regulations standing in the way of innovation, but financial services organisations wanted greater clarity on ‘what good looks like’ in practice, and more opportunities for engagement to build confidence in trying new technologies. In response, the Statement says that a statutory code of practice will be developed for organisations developing or deploying AI and automated decision-making, enabling innovation while safeguarding privacy.

The ICO’s announcement on 5 June of its new AI and biometric strategy has subsequently confirmed that the ICO plans to develop this new code of practice over the next year, alongside making updates to its existing automated decision-making and profiling guidance and producing a horizon scanning report on agentic AI. A further blog from us on the new strategy will follow.

Acknowledgement that AI adoption isn’t risk free 

The Statement acknowledges organisations’ concerns about who is responsible for third party AI tools if things go wrong, especially if the AI is making decisions about their customers. The Statement calls out the Senior Managers and Certification Regime as particularly relevant to this concern. Although not mentioned in the Statement, the potential impact of new rules for critical third parties to the UK financial sector is also often highlighted in this context by the financial regulators (more on that here).  

The Statement highlights that some financial services organisations are successfully navigating this third-party risk “in line with their risk appetite” while others are more concerned. The regulators’ conclusion that “many choices are about firms’ own risk appetites and choices” seems to be an acknowledgement that there are likely to be residual risks around AI adoption that cannot be fully mitigated, so whether or not to adopt will boil down to a question of risk tolerance. Also of interest here is the prior suggestion from the Bank of England that financial sector regulatory requirements might be applied directly to model providers themselves, with potential implications for financial services organisations’ risk appetite.

Commentary 

The ongoing collaboration between the ICO and FCA is to be welcomed, particularly as the regulators are taking concrete steps to provide financial services organisations with regulatory clarity. It will, however, be important for industry to input into the proposed statutory code of practice to ensure that the final version is practical and realistic: asking regulators for more clarity always carries the risk that the resulting guidance sets unhelpful positions in stone. With the ICO and FCA keen to encourage AI adoption, this risk will hopefully be avoided.

It will then be important that the approach to enforcement supports the position taken, as either too heavy or too light touch enforcement risks jeopardising compliant innovation.

1 As cited within the Statement. 

