
To enforce or not? AI divides regulators

Staying abreast of enforcement trends is a critical element of the General Data Protection Regulation (GDPR) risk landscape for organisations and something we regularly discuss with clients – particularly in relation to AI. So what trends are emerging in AI-related GDPR enforcement? Are the pro-innovation statements we have seen (discussed in our blogs here and here) feeding through to the approaches of data protection authorities (DPAs) to sanctioning non-compliance?

AI enforcement is picking up (at least for some DPAs)

The last six months have seen very significant AI-related GDPR fines from the Italian DPA, including:

  • a €5m fine against Luka Inc in April in connection with the development of its Replika genAI-driven ‘companion’ tool; and 
  • a €15m fine against OpenAI in December in relation to ChatGPT. 

Both actions identified failings around appropriate lawful basis, transparency and the controller’s approach to age verification. In both cases, the fines were preceded by the DPA suspending access to the tool. With the Italian DPA subsequently prohibiting access to the DeepSeek AI chatbot and a number of DPAs launching investigations into the tool (see here), further significant fines against developers of frontier tools seem likely. 

… but is nothing new…

Large GDPR fines in connection with AI are not new. Clearview AI has been fined over €90 million to date across the UK and Europe in connection with its controversial facial recognition database – most recently by the Dutch DPA in September 2024. Actions against Clearview are ongoing, with the ICO’s appeal against the decision overturning its penalty on Clearview heard by the Upper Tribunal earlier in June (we discuss the lower court’s decision here).

Enforcement is not the only way 

Regulators are under increasing pressure to support innovation. In the UK, the newly passed Data (Use and Access) Act imposes a statutory duty on the ICO to promote innovation, and the EU has recently proposed pro-growth regulatory simplification plans (which would include simplified GDPR obligations for certain organisations) stemming from the Draghi report. 

What does this mean for enforcement? We know that a number of DPAs have, at times, favoured a more collaborative approach. For example, last month, the Irish DPA announced it has been working intensively with leading tech firms to improve the compliance of their large language model (LLM) training. This includes working with Meta on its plans to use public-facing user posts from its Facebook and Instagram services in the training of its LLMs. The ICO has also been having similar conversations with Meta (as confirmed last September). 

Having said that, enforcement action is by no means off the table. For example, when launching its AI and biometric strategy on 5 June, the Information Commissioner flagged that the ICO’s scrutiny of the AI ecosystem would be increasing, focusing particularly on areas where there is a real risk of harm, such as the unlawful training of AI models and the use of automated decision-making by employers or recruitment platforms without information rights being respected. 

It’s not all about AI

While much focus is falling on AI risks, significant GDPR fines are being issued for infringements in other areas, including:

  • Processors: earlier in June, the German federal DPA issued a €45 million fine against Vodafone for security failings and in connection with the audit and appointment of its suppliers.
  • Cookies: the Finnish DPA has issued a €1.1 million fine against a pharmacy chain in connection with its sharing of sensitive data with Google and Meta via tracking tools embedded on its website between 2018 and 2022. 
  • International transfers: TikTok has been fined €530 million by the Irish DPA in connection with its transfers of user data to China. 

Looking ahead, organisations should continue to monitor these trends and adapt their approach to risk accordingly. From an AI perspective, however, fines are likely to be focused on non-compliance by leading technology companies, frontier AI developers and cases where AI usage poses real harm to data subjects. 

