THE LENS
Digital developments in focus

Europe’s groundbreaking AI Act: are “superfines” the price to pay for trust in tech?

The EU broke new ground on 21 April 2021 by issuing the draft of its proposed harmonised legal framework on AI (the "AI Act"), the first attempt worldwide to specifically regulate this rapidly developing and often misunderstood branch of technology.

The proposal follows a wave of earlier EU policy documents on AI and builds directly on the Commission’s high-level approach to a future EU regulatory framework for AI set out in its White Paper of 19 February 2020 (see our blog on the White Paper here). The lofty ambitions of “excellence” and “trust” in the AI space outlined in the White Paper now form the core principles of the new framework. The draft AI Act is proposed as a Regulation so that it applies directly in every member state, which should help ensure that national approaches to AI do not fragment the single market.

The EU has requested feedback on its proposals, and organisations have until 29 June 2021 to respond. 

Key takeaways from the new draft law include: 

- A risk-based approach – the AI Act establishes four categories of risk: unacceptable risk (certain AI uses are banned outright); high risk (AI with an adverse impact on safety or fundamental rights – the bulk of the obligations in the AI Act attach to this category); limited risk (AI such as chatbots and deep fakes, which will be subject to transparency obligations); and minimal risk (AI which does not fall into any of the above – providers in this category are still encouraged to comply with the rules on a voluntary basis).

- GDPR-level (and higher) fines – breaches of the prohibition on AI technologies posing an unacceptable risk could attract fines of the higher of EUR 30 million or 6% of worldwide annual turnover (a short illustration follows this list).

- Extra-territorial effect – the proposed rules capture providers and users based outside the EU where their AI systems are placed on the EU market or their output is used in the EU, so UK businesses may find themselves caught by the new rules.
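To put the top-tier fine cap in concrete terms, the minimal sketch below applies the “higher of EUR 30 million or 6% of worldwide annual turnover” rule taken from the draft text; the turnover figures used are purely hypothetical.

```python
# Illustrative sketch of the AI Act's top-tier fine cap: the higher of
# EUR 30 million or 6% of worldwide annual turnover (figures from the draft;
# the turnover values below are hypothetical examples).

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Maximum fine for a breach of the unacceptable-risk prohibition."""
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

# A business with EUR 200m turnover faces the EUR 30m floor (6% would be EUR 12m);
# one with EUR 1bn turnover faces up to EUR 60m, since 6% exceeds the floor.
print(max_fine_eur(200_000_000))    # 30000000
print(max_fine_eur(1_000_000_000))  # 60000000.0
```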

The majority of obligations sit with AI providers, although distributors, importers and users may also need to act in certain circumstances. For a more detailed summary of the key features of the proposals, please see our client briefing.

A bold step by the EU 

The draft AI Act marks a bold statement by the EU on the world stage. Although the US and others are taking some steps to legislate on AI (for example, by restricting the use of facial recognition technology), the obligations proposed by the AI Act are unparalleled in their scope and ambition. The EU has therefore further cemented its digital strategy as one which prioritises comprehensive regulatory frameworks aimed at preserving fundamental rights and ethical values, with the intention that trust in new technologies, and with it innovation, will flourish as a result.

Time will tell whether the EU’s strategy will succeed. In the meantime, the AI Act has its own legislative hurdles to overcome within the EU political process. It takes time to agree new law at EU level, and once agreed there will be a two-year implementation period. It is therefore unlikely that we will see the AI Act in force until 2024 at the earliest – but both the tech sector and the broader range of sectors which use AI technologies should keep a careful watch.



Tags

ai, digital regulation, european commission