THE LENS
Digital developments in focus
3 minute read

Biden makes new deal with leading AI companies

As the AI-related headlines keep coming, governments and regulators across the globe are starting to take action. The EU’s AI Act has taken much of the focus to date, while the UK Government is moving at pace to flesh out its own, pragmatic regime. But has the US just beaten everyone to the post? On 21 July 2023 it secured voluntary commitments from seven leading AI companies to help manage the risks posed by AI.

Who are the seven companies? 

The seven companies are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. It is, however, hoped that other organisations will follow suit.

What are the commitments?

The White House fact sheet on the arrangement states that “These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI – safety, security, and trust – and mark a critical step toward developing responsible AI”. The companies have committed to:

  • Ensuring products are safe before introducing them to the public: This involves agreeing to internal and external security testing of their AI systems pre-release, and sharing information on managing AI risks (for example, on attempts to circumvent safeguards and on technical collaboration) across the industry and with governments and other stakeholders.

  • Building systems that put security first: The companies must invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights. Model weights are the numerical parameters, learned during training, that determine how an AI model behaves, and they are the most essential part of an AI system – in other words, the part hostile states or competitors would most want to steal (the first sketch after this list gives a concrete picture of what weights are). Weights must only be released when intended and when the security risks have been considered. The companies must also facilitate third-party discovery and reporting of vulnerabilities in their AI systems – a robust reporting mechanism is important so that issues spotted (or persisting) after an AI system is released can be found and fixed quickly.

  • Earning the public’s trust: Users must know when content is AI-generated, and companies must develop robust technical mechanisms (such as a watermarking system) to enable this (the second sketch after this list illustrates the underlying idea). The commitments around watermarking have caught the headlines, and are intended to tackle problems around deepfakes and to reduce the dangers of fraud and deception. To build trust, the companies have also committed to: 
    • publicly report their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use; 
    • prioritise research on the societal risks that AI systems can pose (e.g. to avoid harmful bias and discrimination, and protect privacy); and 
    • develop and deploy advanced AI systems to help address society’s greatest challenges (such as cancer prevention and climate change).
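
To make the “model weights” point above concrete: weights are nothing more than numbers learned from data. The following is a minimal, purely illustrative Python sketch (not drawn from any of these companies’ systems) in which a toy model learns the rule y ≈ 2x + 1, ending up with just two weights.

```python
# Toy example only: an AI model's "weights" are just numbers learned from data.
# This model learns y = 2x + 1; its entire set of "weights" is the pair (w, b).

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # (x, y) training pairs

w, b = 0.0, 0.0  # the weights, before training
lr = 0.05        # learning rate

for _ in range(2000):          # simple gradient descent on squared error
    for x, y in data:
        err = (w * x + b) - y  # prediction error on this example
        w -= lr * err * x      # nudge the weights to reduce the error
        b -= lr * err

print(f"learned weights: w={w:.2f}, b={b:.2f}")  # approximately 2.00 and 1.00
```

A frontier model works on the same principle but with billions of such numbers; stealing them amounts to stealing the model itself, which is why the commitment singles them out.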
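
On watermarking: the schemes being discussed embed statistical signals in a model’s outputs themselves, which is beyond a short sketch. The hypothetical Python example below instead shows the simpler provenance idea behind the commitment – attaching a verifiable “AI-generated” marker to content so that alteration or relabelling can be detected – assuming a shared secret key that, in practice, a provider would hold.

```python
# Toy example only: real watermarking proposals embed statistical signals in a
# model's word choices. This sketch shows the simpler provenance idea - attach
# a verifiable "AI-generated" marker so alteration or relabelling is detectable.

import hashlib
import hmac

SECRET_KEY = b"demo-key-not-real"  # hypothetical key held by the AI provider


def tag_as_ai_generated(content: str) -> str:
    """Append a keyed marker identifying the content as AI-generated."""
    sig = hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()[:16]
    return f"{content}\n[ai-generated:{sig}]"


def verify_tag(tagged: str) -> bool:
    """Check the marker: fails if the content was altered or never tagged."""
    body, sep, marker = tagged.rpartition("\n[ai-generated:")
    if not sep or not marker.endswith("]"):
        return False
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()[:16]
    return hmac.compare_digest(marker[:-1], expected)


out = tag_as_ai_generated("A summary drafted by a model.")
print(verify_tag(out))                            # True
print(verify_tag(out.replace("model", "human")))  # False - content was altered
```

Even this simple scheme shows the limitation commentators point to: strip the marker and nothing in the content itself reveals its origin.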

The fact sheet also references the international work being done on AI, citing (amongst other things) the G7 work in this area, and the UK’s “leadership” in hosting a Summit on AI Safety.

How have the commitments been received?

Unsurprisingly, while the US Government and the companies involved are very positive about these developments, there has been some scepticism amongst commentators given the voluntary, and sometimes vague, nature of the commitments. For example, while it is good that the companies have publicly committed to testing, they already carry out testing (‘red-teaming’) of their models before release, and the commitment gives no detail on what that testing will look like or who will carry it out. Likewise, a number of the companies already release certain information about their AI models while holding back other information (citing competition and safety concerns), so it is unclear whether these commitments will require them to disclose more than they would be happy to. There are also concerns around how successfully companies can protect against deepfakes, whether or not watermarks are used.

The White House also recognises that while these commitments are a step in the right direction, minimising the risks around AI will require new laws, oversight and enforcement. A variety of new AI laws are coming into play at state level, and the Biden-Harris Administration has also committed to “continue to take executive action and pursue bipartisan legislation”.

Tags

ai, emerging tech