The Westminster government confirmed this week that the UK “is on course for more agile AI regulation.” We’ve known for some time that it is not looking to follow the EU in proposing a dedicated AI Act, but we now have confirmation (in the form of the somewhat delayed AI white paper consultation response) that it is moving forward with its combination of cross-sectoral principles and a context- and sector-specific AI framework. That said, new binding rules may be on the horizon for the most advanced general purpose AI systems.
The consultation attracted over 400 responses from a wide range of participants (businesses, trade unions, academia and others). Unsurprisingly, the largest number came from the AI, digital and tech industry, followed by the arts, but a wide range of other sectors also responded (18 sectors in total). Some key points from the response include:
- Sector approach underpinned by principles: There was strong support for the “pro-innovation regulatory framework for AI” proposed in the government’s March 2023 white paper (see blog). The UK’s existing regulators will take the lead on managing AI risks in their areas, and there will be five cross-sectoral principles for them to “interpret and apply within their remits”. There has been some debate on whether these principles should be placed on a statutory footing, but the government confirmed that implementing them on a non-statutory basis in the first instance allows a necessary degree of flexibility. This will, however, be kept under review.
- Centralised functions: New central functions will be introduced to bring coherence to the regime and address any regulatory gaps. The response confirmed that the government has already started to develop a central function to support effective risk monitoring, regulatory coordination and knowledge exchange. It will also launch a targeted consultation this year on a cross-economy AI risk register, and will consider developing an AI risk management framework, similar to the one developed in the US by NIST.
- Regulatory guidance: Key regulators will provide guidance in this space. Some, including Ofcom and the CMA, have already been asked to publish how they are responding to AI risks and opportunities by 30 April. Others, such as Ofgem and the Civil Aviation Authority, are working on AI strategies which will be published later this year. In addition, the government has published its own guidance to regulators to support them in implementing the principles, and has earmarked £10 million for regulators to build the tools and capabilities they need to respond to AI.
- Binding rules for advanced general-purpose AI: Binding requirements may be introduced in future for the (small number of) developers of the most advanced general purpose AI systems, to ensure they are accountable for making these technologies sufficiently safe. The response states that these systems are the least well covered by existing regulation while presenting some of the greatest potential risks. It also recognises that highly capable general-purpose AI systems challenge the UK’s context- and sector-based approach to regulation. While the government will not rush to regulate, it seems unlikely that voluntary measures alone will remain appropriate in the future. The AI Safety Institute’s work on understanding the risks around this technology will help inform the regulatory response, and the government will publish an update on its work in this space by the end of 2024.
- Cooperation: The government is continuing to support the Digital Regulation Cooperation Forum (DRCF), and regulatory coordination more widely. For example, it is setting up a steering committee with government and regulator representatives to support coordination across AI governance. On the day the response was published, the DRCF also published more details on its AI and Digital Hub (a multi-agency advice service with the ICO, CMA, FCA and Ofcom – see blog) which will launch in pilot form this spring.
- IP Code: Attempts by the UK IPO to agree a voluntary industry code of practice to address some of the copyright issues around GenAI (see blog) have failed. DSIT and DCMS will therefore now take the lead on this issue, engaging with relevant stakeholders (the AI and rights holder sectors) to work on a solution.
- Broad focus: International cooperation, the use of the government’s procurement power to drive good behaviours, and standards and AI assurance all continue to play an important role in the government’s approach.
The white paper response is a pretty detailed document. As well as confirming the direction of travel, discussing some key AI issues, and highlighting areas where other regulatory reforms (such as the DPDI Bill and the Online Safety Act) impact AI, it sets out a detailed roadmap of the government’s next steps. With a UK general election on the horizon, it remains to be seen how much progress is actually made on AI regulation before the country goes to the polls.