Digital developments in focus

EU AI Act to become law this summer

The final vote on the EU AI Act was passed yesterday (21st May), clearing the way for the Act to come into effect this summer. 

While political agreement was reached in December (see blog), further work was needed to finalise the details before the European Parliament and the Council's final votes. 

Parliament’s vote, which led to some further changes to the Act, took place in March (see blog), and the Council gave its final approval yesterday. The Act is expected to be published in the Official Journal ‘in the coming days’ and will enter into force twenty days after publication. 

A quick reminder of what the Act does. The Act:

  • Applies across sectors, although there are some use exceptions (e.g. for defence or R&D purposes).
  • Takes a risk-based approach – a relatively limited category of AI is prohibited (e.g. social scoring); some high-risk AI systems are heavily regulated (see below); and some types of AI carry additional transparency obligations (e.g. it must be obvious you are interacting with a chatbot, and deepfakes and other AI-generated content must be watermarked or otherwise detectable as artificially generated or manipulated). There is then a whole raft of AI deemed to pose minimal risk (e.g. spam filters), which carries no specific additional obligations other than compliance with some general rules, for example around AI literacy. 
  • Splits high-risk AI systems into two categories: 
     - The first is AI used in safety components in products (or where the AI is itself a product) covered by the EU product safety rules listed in Annex I (e.g. AI safety components in medical devices or cars), where the products must undergo a third-party conformity assessment before being placed on the market. 
     - The second is a list of eight areas (set out in Annex III) where AI will be considered high risk if it poses a significant risk of harm to the health, safety or fundamental rights of people. This includes areas such as recruitment, education, and access to essential public and private services (with the latter covering AI used in credit scoring or to price life and health insurance). 
     For both the Annex I and Annex III high-risk categories there is a whole raft of obligations for providers and deployers of such AI, including around risk management, data governance, transparency, conformity testing and human oversight.
  • Aims to balance regulation with innovation. For example, it requires member states to have sandboxes which give priority to SMEs, although there are some concerns that this balance may not have been achieved and that the rules may still stifle innovation.
  • Imposes high fines – the maximum being the higher of €35m or 7% of worldwide annual turnover.

What can you do now to prepare?

Now is the time to start preparing. We have been helping clients get ready for the AI Act, and more generally helping them ensure they have appropriate AI governance in place. 


To get ready to comply with the AI Act you need to:

  • check if you are in scope (given the Act’s wide extra-territorial reach and its definition of AI systems);
  • understand which risk category your AI falls into;
  • know where you sit in the AI supply chain – are you a provider, deployer/user, distributor or importer? Your obligations will differ depending on your role as well as your AI use and its risk category;
  • put plans in place to ensure you are ready in time. The Act is expected to become law in June, but most of its provisions will not take effect for a further 24 months. That said, certain provisions will apply much sooner – for example, the rules around prohibited AI systems will apply six months after the entry-into-force date, and the rules around general-purpose AI will apply 12 months after that date. Some of the rules around high-risk systems have slightly longer transition periods; and 
  • monitor developments in this space – much of the detail on how to comply will be provided through standards, codes of conduct and guidance and we will therefore be carefully monitoring developments in this space for our clients. 


More generally, now is the time to ensure you have appropriate AI governance in place. This involves understanding:

  • what AI you are currently using, or planning to use, and how you will track use going forward. On the first point, some of our clients have kicked off this process by launching an AI amnesty (where people are encouraged to disclose any AI they are using, including where this may be an ‘unapproved’ product);
  • your AI risk appetite, and how to tailor an appropriate governance process to ensure use stays within these parameters;
  • where AI governance sits within your organisation, and how all appropriate stakeholders will be engaged;  
  • the role your board will play, and how they will be kept informed; 
  • whether, at a more operational level, you are using any AI specific risk management frameworks (like ISO 42001 or the NIST AI Framework); and
  • how you will be engaging with, and educating, your workforce and supply chain.


