THE LENS
Digital developments in focus

What can we learn from the UK’s AI Summit?

The UK held the world’s first AI safety summit last week at Bletchley Park – a location famous for codebreaking (it was where Alan Turing’s team broke the Enigma code in World War II) and one often linked to the birth of modern computing.

The goal was for this to be a global summit, bringing key figures together to address some of the biggest AI risks we face. There was much focus in the run-up to the summit on who would be there (Ursula von der Leyen, Kamala Harris, Elon Musk) and even more on who would not (Joe Biden, Xi Jinping, Emmanuel Macron, Olaf Scholz). In the end, approximately 100 world leaders, tech bosses and AI experts from all over the world, including the US, the EU and (some have said controversially) China, did attend.

What did it cover?

The introduction to the summit confirmed that it would focus on: 

  • Two types of AI – frontier AI, such as LLMs, and narrow AI with potentially dangerous capabilities, for example in bioengineering; and 
  • Two types of risk – loss-of-control risk and misuse risk, the latter including the use of AI in cyber or biological attacks.

Key takeaways 

While many of the news headlines focussed on the UK Prime Minister’s stint as a chat show host interviewing Elon Musk, the summit (and the week more generally) did provide some key AI takeaways:

  • The Bletchley Declaration: the Declaration was signed by 28 countries (including the US and China) and the EU on day one of the summit. Described as the first ever international statement on frontier AI, it recognises that AI presents huge opportunities for the world, but must be developed in a way that is “human-centric, trustworthy and responsible.”
  • A number of AI Safety Institutes: the UK Government used the run-up to the summit to confirm that its Frontier AI Taskforce will evolve into the AI Safety Institute. It described the institute as a new “global hub based in the UK and tasked with testing the safety of emerging types of AI.” Interestingly, the US also chose the summit to launch its own AI Safety Institute, confirming that it will evaluate known and emerging risks of AI.
  • Agreement to collaborate on testing: in a statement that builds on the Bletchley Declaration, governments (including the UK, EU and US, but not China) and major AI companies (including AWS, DeepMind, Meta, Microsoft and OpenAI) “recognised that both parties have a crucial role to play in testing the next generation of AI models, to ensure AI safety – both before and after models are deployed.” This includes “collaborating on testing the next generation of AI models against a range of potentially harmful capabilities, including critical national security, safety and societal harms.” The AI Safety Institutes mentioned above will play a crucial role in this testing.
  • More summits! France will host the next full summit in a year’s time, with South Korea hosting a mini-virtual summit in the interim. 
  • AI “State of the Science” report: the “Godfather of AI” Yoshua Bengio will lead the first frontier AI “State of the Science” report, which will be a key input for those future summits. The report will provide a scientific assessment of existing research on the risks and capabilities of frontier AI and set out priority areas for further research to inform future work on AI safety. Bengio will be advised by an Expert Advisory Panel made up of representatives from countries attending the summit and other partner countries.
  • Arguably more focus, and action, from governments on AI: this was highlighted by Joe Biden’s wide-ranging Executive Order on Safe, Secure and Trustworthy AI, which he signed last Monday, just before the summit kicked off. These detailed plans on how to regulate AI show that the US is keen to retain control over regulation, as well as innovation, in this area. Last Monday also saw the G7 announce agreement on international guiding principles on AI and a voluntary code of practice for AI developers, while the Italian Prime Minister confirmed that this summit would be used as the base for the G7 event being held in Italy next year.

In addition to the main summit, there were a whole host of AI Fringe events, from dinners to panel discussions, a number of which we attended. The panel discussions from these events are available online, including a scene-setting introduction from DSIT Secretary of State Michelle Donelan, a panel with industry and academic experts on how to successfully navigate the AI hype cycle, and an end-of-week look back at “what we have learnt” from (amongst others) the Prime Minister’s Representative for the Summit.

Comment

Even by AI’s fast-paced standards, last week was a busy one, both in terms of developments to keep track of and events to attend. The summit was largely seen as a success; while views differ on the extent of that success, press reports have been mostly positive. No one can deny that it brought representatives from all over the world together to talk about the risks around AI and how to regulate it, with some acting on those risks – the timing of the US Executive Order on AI seems hard to ignore.

One criticism levelled at the summit was that it looked too far ahead, at existential rather than current risks. It was therefore interesting to hear Matt Clifford, the Prime Minister’s Representative for the summit, counter this at an AI Fringe event on Friday. He stated that the long-term models and risks under discussion were not ‘killer robots’ but models expected to be released in 2024, and that the cyber and bioengineering risks raised are ones we will need to consider next year.

Another criticism is that the “agreement” with tech companies is voluntary, at a time when some countries are looking to introduce binding obligations. It will therefore be important to monitor how the AI Safety Institutes, and global regulation, develop in this area.
