Digital developments in focus

In the global race to regulate AI, will the EU get there first?

As pressure to regulate AI continues to mount around the world, the European Parliament has approved its negotiating position on the EU AI Act, bringing the landmark piece of AI-specific legislation one step closer.

The EU Commission published its initial proposal of the EU AI Act in April 2021, and the Council agreed its negotiating position on 6 December 2022. Now that the Parliament’s position is also agreed, the interinstitutional trilogue process is expected to commence imminently. The aim is for the final text of the EU AI Act to be formally adopted before the end of 2023.

Background to the EU AI Act

The Act proposes a cross-cutting, risk-based approach to AI regulation. Instead of opting for blanket legislation covering all AI systems, it allocates AI uses into one of three risk categories: (i) unacceptable risk, which is prohibited in the EU entirely (for example, government-operated social scoring); (ii) high risk, which is subject to specific legal requirements (for example, an AI tool that scans CVs to rate job applicants); and (iii) applications not explicitly banned or listed as high-risk, which are largely unregulated (for example, consumer-facing chatbots). This cross-cutting approach with strict categories is different to the "sector-specific" risk-based approach in place in the UK (see the Regulation section of our Regulating AI series blog for more information regarding the UK's plans).

The Act provides for substantial fines (detailed below) as well as other remedies, such as requiring the withdrawal of the AI system from the EU. It also has a broad extra-territorial reach, meaning providers and deployers of AI systems based outside the EU may still be caught by the legislation if they place services with AI systems on the EU's single market or their AI systems produce outputs that are used within the EU.

Key updates from the EU Parliament

On 14 June 2023, the EU Parliament agreed its negotiating position on the EU AI Act. The EU Parliament's proposals suggest a number of key updates, including:

  • Definition of AI: From the first draft, the definition of "AI systems" has been contentious. The original definition proposed by the Commission was criticised as being so broad as to capture simple software. The EU Council and the EU Parliament have sought to narrow this definition, with the EU Parliament seeking to align it more closely with the OECD's definition, which focuses on machine learning capabilities and autonomy.
  • General purpose AI: Given the recent interest in, and increased availability of, generative AI tools, the EU Parliament has introduced specific rules for such tools, which will be subject to varying transparency and testing requirements. Before they can be placed on the EU market, providers of "foundation models" will be required to implement certain safeguards in their models and register them in a central database managed by the EU Commission. Foundation models are AI models that are trained on broad data at scale, are designed for generality of output, and can be adapted to a wide range of distinctive tasks. The proposals include implementing design, data governance, cybersecurity, performance, and risk mitigation safeguards.
  • Expansion of prohibited AI practices and high-risk AI: The EU Parliament has proposed that the list of high-risk AI systems in Annex III of the EU AI Act be expanded to include AI systems intended to be used: (i) "for influencing the outcome of an election or referendum or the voting behaviour of natural persons" in such elections or referenda; and (ii) by very large online platforms (as designated under the Digital Services Act 2022) in their recommender systems. In addition, the EU Parliament has controversially included real-time biometric identification in publicly accessible spaces as a prohibited AI use case. This is likely to be heavily negotiated in the upcoming trilogue process, as many EU member states want to allow law enforcement to use this type of technology.
  • Fines: Whilst the EU Council's negotiating position already provided for fines of up to the higher of €30 million or 6% of annual global revenue, exceeding those available under the GDPR, the EU Parliament has increased these penalties further. It proposes penalties of up to the higher of €40 million or 7% of annual global revenue for carrying out a prohibited AI use case. It has also included specific fines for foundation model providers: such providers who breach the EU AI Act could receive a fine of up to the higher of €10 million or 2% of annual global revenue.

Next steps

Following the EU Parliament's adoption of its negotiating position, negotiations between the EU Commission, the EU Parliament and the EU Council will commence, a process known as "trilogues", in which these institutions work together to finalise the legislation. Trilogues vary in duration; the more complex the legislation, the longer they take. Following the trilogues, the EU AI Act will be adopted, possibly by the end of this year. However, organisations will still have some time to prepare, as the Act has a two-year implementation period: the Regulation currently states that it will "apply from 24 months following the entering into force of the Regulation."

