THE LENS
Digital developments in focus

UK diverges from EU on new plans for AI regulation

18 July 2022 was a busy day for AI in the UK. As well as publishing the Data Protection and Digital Information Bill, which includes AI measures, the UK Government published:

  • a new AI policy paper outlining its proposed approach to regulating AI in the UK, built around six core, cross-sectoral principles that regulators must apply. Alongside the paper, the Government issued a call for views, so you can share your views on putting this suggested approach into practice and help inform its work in this area (including the main white paper on AI regulation, expected later this year); and
  • an AI Action Plan, which shows how the Government is delivering against the National AI Strategy. The plan also sets priorities for the next year.

In this blog, I will look at the AI paper, the UK Government’s current thinking on AI regulation and its call for views.

AI paper: UK’s regulatory approach

In the National AI Strategy, the Government promised to publish a white paper setting out its ‘pro-innovation’ position on regulating AI. The new policy paper sets out its emerging proposals and, together with the accompanying call for views, will be used to develop that white paper.

Instead of giving responsibility for AI governance to a central regulatory body, as the EU is doing through its AI Act, the Government plans to maintain the current, sector-specific approach to regulating AI. However, by introducing six core principles which all regulators (the ICO, the CMA, the Medicines and Healthcare products Regulatory Agency, etc.) must follow, it recognises that AI technologies create certain issues and risks that require a coherent response across sectors. An example would be a perceived lack of explainability when AI is used to make high-impact decisions about people.

The six principles

The core principles build on the OECD Principles on AI and require developers and users to:

  1. Ensure that AI is used safely.
  2. Ensure that AI is technically secure and functions as designed.
  3. Make sure that AI is appropriately transparent and explainable.
  4. Embed considerations of fairness into AI.
  5. Identify a legal person to be responsible for AI.
  6. Clarify routes to redress or contestability.

The principles will be set on a non-statutory basis to begin with, allowing greater flexibility if changes to the approach are needed. 

Regulatory approach

Regulators will be asked to interpret, implement and prioritise the principles in a tailored way that recognises the different uses and risks in their sector. This reflects the Government’s view that we should regulate the use of AI (and any harms/risks this creates) rather than the technology itself, and that context is key.

Regulators will also be encouraged to take a risk-based, proportionate approach to regulation. This includes considering lighter-touch regulatory options where possible, such as publishing guidance and creating sandboxes. So, while the principles give regulators a clear steer, they will not necessarily translate into mandatory obligations.

Finally, regulatory co-operation will be key to this approach. As well as relying on existing arrangements such as the Digital Regulation Cooperation Forum, the Government will continue to look at ways in which regulators can co-operate successfully to ensure coherence between their respective approaches. The Government also says in the paper that it will “seek to ensure that organisations do not have to navigate multiple sets of guidance from multiple regulators all addressing the same principle.” This is good news for organisations already struggling to manage the different pieces of law and guidance in this space.

Next steps

The main white paper is now expected in late 2022. The Government will therefore spend the next few months refining its approach, considering feedback and discussing how best to put the approach into practice and monitor its success. This includes looking at the role of other regulatory tools, such as standards and assurance mechanisms, and could include considering the roles, powers and remits of regulators. While the Government does not see a need for new AI legislation at present, new laws may in fact be needed to ensure the UK’s regulators can implement this proposed framework in a coordinated and coherent manner.

The Government will also consider whether there are any high-risk areas that require an agreed timeline for the relevant regulators to interpret the principles and address those risks.

From a business perspective, your next step may be to respond to the Government’s call for views. It has set out six questions at the back of the paper for interested parties to answer over the next ten weeks (I’ve set the questions out below for ease of reference). This is your chance to help shape AI regulation and the way in which it will be managed in practice.

Call for views: questions in AI paper 

  • What are the most important challenges with our existing approach to regulating AI? Do you have views on the most important gaps, overlaps or contradictions?
  • Do you agree with the context-driven approach delivered through the UK’s established regulators set out in this paper? What do you see as the benefits of this approach? What are the disadvantages?
  • Do you agree that we should establish a set of cross-sectoral principles to guide our overall approach? Do the proposed cross-sectoral principles cover the common issues and risks posed by AI technologies? What, if anything, is missing?
  • Do you have any early views on how we best implement our approach? In your view, what are some of the key practical considerations? What will the regulatory system need to deliver on our approach? How can we best streamline and coordinate guidance on AI from regulators?
  • Do you anticipate any challenges for businesses operating across multiple jurisdictions? Do you have any early views on how our approach could help support cross-border trade and international cooperation in the most effective way?
  • Are you aware of any robust data sources to support monitoring the effectiveness of our approach, both at an individual regulator and system level?

The call for views and evidence will be open for 10 weeks, closing on 26 September 2022, and you can send your views to: evidence@officeforai.gov.uk.

"We propose developing a set of cross-sectoral principles that regulators will develop into sector or domain-specific AI regulation measures." (AI paper 18 July)

Tags

ai, regulating digital