The EU has today announced political agreement on changes to the AI Act (the Act) – just over a week later than originally planned, but still only five months since the Commission first published its Digital Omnibus on AI (for details on the original Omnibus plans, see our blog).
Pressure to protect against new AI related harms while simplifying overly complex rules and boosting EU competitiveness and innovation has resulted in changes to the Act which:
- Give organisations more time to comply with the high-risk rules: the high-risk rules are viewed as some of the most onerous under the Act, and there was intense lobbying to delay their start. There are two types of high-risk AI systems:
- Annex 3 high-risk AI covers AI systems in areas like employment, education and health insurance - these rules were originally due to start this summer, but will now apply from 2 December 2027.
- Annex 1 high-risk AI covers systems embedded in products covered by sector-specific rules, like lifts, toys and medical devices. These rules have been pushed out by a year - from 2 August 2027 to 2 August 2028.
The delay is intended to ensure that technical standards and other tools to support compliance are in place before the rules apply. It is hoped, however, that this support will arrive in good time: to date, we have seen very short gaps between guidance being published and the relevant AI Act provisions starting to apply.
- Narrow what’s in scope for the high-risk rules: the Act’s Annex 1 high-risk rules apply to AI systems which: (i) are products already covered by sector-specific rules, or are used as safety components in such products; and (ii) have to undergo third-party conformity assessments under those sectoral rules. The Omnibus changes narrow what qualifies as a “safety component”, meaning that products with AI functions that only assist users or optimise performance will not automatically be subject to high-risk obligations. Instead, what matters is whether their failure or malfunction creates health or safety risks.
- Avoid duplication between sectoral and AI rules: this had been a key sticking point in the Omnibus negotiations, with some worried that the current system creates confusion and unnecessary duplication, while others were concerned that the Omnibus plans would take too many AI systems out of the scope of the Act. Agreement has, however, been reached on how EU product safety laws and the Act interact. The compromise limits the application of the Act where sectoral law has similar AI-specific requirements, and exempts the Machinery Regulation from direct applicability of the Act. The Commission also now has the power to adopt delegated acts under the Machinery Regulation which would add health and safety requirements in respect of AI systems that are classified as high-risk under the Act. While this exemption for the Machinery Regulation will be welcomed by those (including the German chancellor) who have been lobbying hard to remove industrial products from the Act’s scope, trade groups representing other sectors including technology, electronics and medical-device companies, have already expressed their disappointment that the agreement did not go further.
- Delay the watermarking obligations: the transparency rules which require AI-generated content to be marked as artificially generated or manipulated in a machine-readable format will be pushed back so that they apply from 2 December 2026. While this delay is not as long as the Commission’s original Omnibus proposal (which suggested 2 February 2027), it still gives organisations more time to comply than the original deadline of this summer.
- Ban ‘nudification apps’: the ban would apply to AI systems that generate non-consensual sexually explicit or intimate content, or child sexual abuse material. The ban applies to both providers who put such apps/systems on the market (either for that purpose, or without putting reasonable safety measures in place to prevent the creation of such content) and to deployers who use these apps/systems to create such content.
- Help innovation: changes include providing more access to regulatory sandboxes and extending the privileges and exemptions that were given to SMEs so that they also cover small mid-cap companies.
- Widen the enforcement remit of the AI Office: the AI Office will now oversee certain AI systems built on GPAI models (e.g. where the model and system are developed by the same provider) and those embedded into very large online platforms and very large search engines (as defined under the DSA). While this will be welcomed by providers who might otherwise face enforcement from multiple national regulators, its impact may be reduced by the exceptions under which national authorities remain competent – including in areas like law enforcement, border management, judicial authorities and financial institutions.
- Impact bias detection: The revised rules extend the possibility of processing sensitive personal data where this is necessary for bias detection and mitigation purposes.
While these changes were specified in the press releases from the Commission, Council and Parliament on the political agreement, we also expect to see changes in other areas, such as the AI literacy rules (see our blog here for more details).
Next steps:
The agreement is currently provisional: we haven’t yet seen the final text of the changes, and the European Parliament and the Council need to formally adopt the political agreement before it can become law. This is, however, a priority for all institutions, and in terms of timing the Parliament has stated that the co-legislators intend to adopt it before 2 August 2026 (the date on which the Act applies generally, and on which some of the high-risk rules would otherwise start to apply).