Digital developments in focus

Are you ready for the EU AI Act? Final vote in the European Parliament passed today

The EU AI Act has passed one of its final hurdles before becoming law. While provisional political agreement on the Act was reached before Christmas (see our blog), further work was required to iron out some of the ‘technical’ details, and it still needed a number of additional EU Council and Parliament votes. The final European Parliament vote (originally planned for April) took place today, and MEPs endorsed the new law by a large majority. It now just needs to go through some linguistic checks, and to be formally endorsed by the European Council, before it is published in the Official Journal and enters into force 20 days later.

What does this mean in practice?

Now is therefore the time to start getting ready to comply with the EU AI Act. For example, you should check:

Are you in scope? 

  • The Act has wide extra-territorial reach - applying, for example, to non-EU developers (providers) and users (deployers) where the output from an AI System is used in the EU. 
  • You should also check whether your AI falls within the definition of an AI System (or a General Purpose AI Model). The definition of an AI system proved a contentious topic during the negotiations, but the final text is largely based on the OECD definition (see here for the text the Parliament approved, noting there may still be some final tidying-up changes).  

Which risk category does your AI fall into? 

  • The Act takes a risk-based approach. It bans AI which has an unacceptable risk profile (e.g. certain biometric categorisation systems are prohibited), it heavily regulates ‘high risk’ AI Systems (e.g. AI used in employment, credit scoring or medical devices) and it imposes specific transparency obligations on certain other types of AI (e.g. deepfakes and certain chatbots). 
  • You therefore need to understand what AI you are / will use within your organisation and for what purpose. 

Where do you fit into the AI supply chain?

  • You will need to know where you fit into the AI supply chain as your specific obligations will differ depending on both the type of AI involved (e.g. whether it is high risk etc.) and your role. 
  • Providers, deployers, importers and distributors all have different obligations under the Act, and which category you fit into may not always be immediately obvious. For example, if you are using, rather than providing/developing, an AI system but put your name on that AI system, you will be classed as a ‘provider’ under the Act and will have to comply with the provider obligations.

Will you be ready to comply? 

  • While the new law will be fully applicable 2 years after it enters into force (i.e. at some point in 2026?), now is the time to start preparing. It takes time to put compliance processes in place, and to bring products and services onto the market (meaning products and services currently under development should be designed with AI Act compliance in mind). 
  • Also, some of the AI Act rules will apply before the two-year transition period ends. For example, bans on prohibited practices will apply six months after the entry into force date, codes of practice will apply nine months after that date, and the rules around general-purpose AI will apply 12 months after it. Note: some of the rules around high-risk systems have a longer transition period - 36 months after that date.
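The staggered deadlines above are simple calendar arithmetic from the (as yet unknown) entry-into-force date. As a rough sketch, the following uses a hypothetical entry-into-force date purely for illustration; the actual date depends on publication in the Official Journal:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date, clamping the day for short months."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

# Hypothetical entry-into-force date, for illustration only
entry_into_force = date(2024, 8, 1)

milestones = {
    "Bans on prohibited practices": add_months(entry_into_force, 6),
    "Codes of practice": add_months(entry_into_force, 9),
    "General-purpose AI rules": add_months(entry_into_force, 12),
    "Act fully applicable": add_months(entry_into_force, 24),
    "Certain high-risk system rules": add_months(entry_into_force, 36),
}
for name, when in milestones.items():
    print(f"{name}: {when.isoformat()}")
```

Swapping in the real entry-into-force date once it is known gives the concrete compliance deadlines for each tranche of obligations.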

AI Governance

More generally, now is also the time to ensure that you have appropriate AI governance in place. While AI Act compliance is important (something underlined by the large, GDPR-busting fines the Act can impose, the biggest being the higher of €35m or 7% of worldwide annual turnover), there is a whole range of AI-related risks that are not covered by the AI Act. A good governance process, one which helps you set your risk appetite, keep track of your AI use and bring together all relevant stakeholders to identify, manage and monitor the associated risks, can help ensure that AI is developed and/or deployed in your organisation in a responsible way.
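The "higher of" fine structure means the €35m figure acts as a floor: for large businesses, the turnover-based percentage governs. A minimal illustration of the maximum-fine calculation for the most serious breaches:

```python
def max_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
    """Maximum fine for the most serious AI Act breaches:
    the higher of EUR 35m or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 1bn turnover: 7% (EUR 70m) exceeds the EUR 35m floor
print(max_ai_act_fine(1_000_000_000))  # 70000000.0

# A company with EUR 100m turnover: 7% (EUR 7m) is below the floor, so EUR 35m applies
print(max_ai_act_fine(100_000_000))  # 35000000.0
```

Lower fine tiers apply to less serious breaches; this sketch covers only the headline maximum cited above.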


ai, digital regulation