On the day that Elon Musk and over 1,000 AI experts called for a pause in the “out-of-control race” to develop ever more powerful AI, the UK Government published its long-awaited “Pro-innovation approach to AI regulation” white paper (29 March 2023).
The UK’s approach
Unsurprisingly, the paper does not propose an EU-style cross-cutting AI law, but instead follows the sector-specific approach outlined in an interim paper published last July. To try to ensure a consistent approach across the different regulators, the framework will be underpinned by the following five principles, which will apply across all sectors:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
The principles build on the OECD’s AI principles and so will already be familiar to many AI developers. The Government has said they will not be placed on a statutory footing, at least not initially, as “new rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce [their] ability to respond quickly… to technological advances”. However, it may introduce a statutory duty on regulators, requiring them to have “due regard” to the principles after an initial implementation period.
We will be providing a more detailed update on the white paper, but in the meantime some points of interest to note are:
- AI definition: while acknowledging there is no general definition of AI that enjoys widespread consensus, the white paper still offers its own view on a definition. It says AI should be defined by reference to two characteristics, adaptivity and autonomy, as these generate the need for a bespoke regulatory response. They can make it hard to explain, predict or control the outputs of an AI system, and challenging to allocate responsibility for those outputs. The hope is that defining AI in this way and “avoiding blanket new rules for specific technologies” should help future-proof the regime. However, the Government confirms it will keep the definition under review as part of its ongoing monitoring and iteration of the whole framework (see below).
- Regulating the use, not the technology: the framework is context-specific and will regulate based on the outcomes AI is likely to generate in particular applications. For example, an AI-powered chatbot used to triage customer service queries for an online retailer should not be regulated in the same way as a similar application used as part of a medical diagnostic process.
- Regulatory co-ordination and resourcing: given the sector-specific approach of the framework, and the expectation that regulators will provide guidance and tools relating to the principles, there is much focus in the white paper on the need for regulatory co-ordination. Without it, businesses may face an even more confusing web of guidance and rules than they face now. Some regulators already co-ordinate (e.g. through the Digital Regulation Cooperation Forum), but the Government has said it will step in further to help with that co-ordination. For example, the paper discusses the Government supporting regulators and providing guidance to help them implement the principles. It also discusses a suite of centralised functions required to support implementation of the framework, including a central monitoring and evaluation framework, a cross-sectoral risk function/register and a multi-regulator AI sandbox (the latter being a recommendation in Sir Patrick Vallance’s Digital Technologies review). Despite assurances in the white paper, there are, however, still concerns that AI is used in some areas that are not heavily regulated, and that even where regulation is in place, the relevant regulators may not have the resources and expertise needed to manage it. Some regulators have already made efforts to upskill in relation to AI (the ICO, for example, has produced a lot of guidance in this space, and recently updated its main AI guidance), but this is not the case for all regulators.
- The role of standards and AI assurance: the white paper notes the importance of standards and AI assurance in supporting the regulatory framework (something previously highlighted in the UK’s AI Strategy) and promises the launch of a Portfolio of AI assurance techniques in Spring 2023. It also discusses a layered approach to AI technical standards, in which regulators identify relevant technical standards and encourage their adoption. Layer 1 would involve sector-agnostic standards that can be applied across use cases (e.g. risk management), layer 2 would address specific issues (such as bias and transparency), and layer 3 could involve regulators encouraging adoption of sector-specific technical standards.
- Iterative nature of the approach: the Government is deliberately taking an iterative approach to AI regulation and will keep the framework under constant review. This will include, for example, monitoring AI supply chains and whether legal responsibility for AI is effectively and fairly distributed throughout the AI lifecycle. Given the fast-paced development of AI, however, some have criticised the time it will take for this approach to deliver a robust regime.
- Have your say: the Government launched a consultation alongside the white paper, so now is the time to have your say if you want to shape the UK’s approach. The consultation is open until 21 June 2023.
The white paper sets out a list of actions the Government will take following its publication. In the first six months, these include publishing its response to the consultation, issuing the cross-sectoral principles and initial guidance to regulators, and publishing an AI regulation roadmap with plans for establishing the central functions mentioned above.
For more information on the risks and opportunities around AI, explore the different publications and podcasts from our Regulating AI series.