The last few months have been a busy time for AI regulation - from the US Executive Order, to the UK’s AI Safety Summit (see blog) and a private members’ bill. Most recently, headlines have suggested that plans to agree the EU AI Act before Christmas could be thwarted by a lack of consensus on regulating foundation models.
To help keep track of these various developments, I’ve pulled together some high-level thoughts on where we currently stand in the UK and EU.
EU AI Act:
There is a push to get the EU AI Act agreed at the next political trilogue on 6th December. The EU was quick off the blocks on AI regulation, publishing its proposal for an AI Regulation back in 2021, but could it now be falling behind on the global stage given progress in other jurisdictions such as the US and China? There is certainly political desire to regulate - AI risks remain high on the political agenda. However, there are concerns that if the law is not agreed before Christmas, momentum may be lost. Next year’s European Parliament elections also mean timing will start to get tight if trilogue negotiations run into next year.
However, some big issues remain far from agreed. Headlines suggest, for example, that the rules on how to regulate generative AI/foundation models are proving particularly difficult and could scupper agreement of the AI Act more generally. Reports suggest that France, backed by Germany and Italy, is worried that the European Parliament’s proposals in this area (see blog) would stifle innovation. These countries are therefore advocating mandatory self-regulation (in the form of a code of conduct). However, members of the European Parliament have said they cannot accept this approach. We understand that the Spanish presidency (on behalf of EU countries) has just proposed a new compromise position to try to reach agreement, which will be discussed this Friday. All eyes are therefore on 6th December…
UK:
In the UK, we have been waiting for the Government to start ticking off the to-do list it set itself in its March White Paper on AI Regulation (see our blog for more details).
In the meantime, the Government has hosted the first global AI Safety Summit, which led to the Bletchley Declaration (the first international statement on frontier AI), and has responded to the 12 AI governance challenges set out in a House of Commons Select Committee Interim Report. In its response to that report, the Government confirmed (amongst other things) that:
- it will provide an updated regulatory approach to AI in its response to the consultation that accompanied the AI White Paper. This was originally due by September but is now expected by the end of the year;
- a central AI risk function designed to monitor AI risks (another action set out in the White Paper) has been established within DSIT (the Department for Science, Innovation and Technology);
- the Government is taking an evidence-based approach to regulation, particularly around foundation models. The White Paper stated that no AI-specific legislation would be introduced immediately, and the Government will use its response to the White Paper consultation to set out its latest thinking. DSIT has, however, continued to work with other government departments to develop the UK’s regulatory approach. Interestingly, while the Government has no stated plans to introduce new legislation, this has not stopped a private members’ bill (the Artificial Intelligence (Regulation) Bill) being introduced to Parliament on 22 November. It passed its first reading in the House of Lords on the same day. While it is not common for private members’ bills to become law, if this one did make it through the legislative process, it would (in particular) create an AI Authority to monitor risks, accredit auditors and ensure relevant regulators take account of AI. It would also impose obligations on those who develop, deploy or use AI (including generative AI) – for example, they would need to designate an AI officer;
- the overarching objective of the Frontier AI Taskforce, which is to enable the safe and reliable development and deployment of advanced AI systems, has only become more pressing. The Taskforce (now the AI Safety Institute) will therefore become a “permanent feature of the AI ecosystem”; and
- the Government remains committed to international engagement. It will continue to play a proactive role in initiatives such as the G7 Hiroshima AI Process, OECD AI governance discussions, Council of Europe Committee on AI negotiations and international standards development processes. It will also continue international discussions at future AI safety summits (planned for next year in South Korea and France).
The ICO also launched a consultation yesterday on the guidance and toolkits available to organisations on the topic of AI, reminding us of the importance of keeping pace with existing laws and guidance, as well as monitoring potential new rules coming down the line.