It has been a busy start to the year in the AI space, with data privacy regulators under increasing pressure to balance promoting innovation (and the potential for AI-driven growth) with respect for data subjects’ rights. We discussed the latest pro-growth statement from the Information Commissioner’s Office (ICO) in our recent blog.
Encouragingly, particularly for global businesses, a number of recent developments suggest that regulators are looking to align and collaborate across borders to promote data protection in the development of AI and balance these competing demands.
Data governance joint statement
Timed to coincide with this month’s AI Action Summit in Paris, the ICO and the data protection authorities (DPAs) of France, Australia, South Korea and Ireland signed a joint statement making commitments to build trustworthy data governance frameworks for AI.
The statement recognises the opportunities offered by AI, including for innovation and economic growth, but also acknowledges the risks and outlines the challenges posed by the “exceedingly complex” nature of the AI landscape. It is against this backdrop that the DPAs recognise that businesses need answers and legal certainty, but also “a sufficient degree of flexibility” for innovation.
The statement outlines how the five DPAs have committed to collaborate on issues around data governance, including to:
- foster a shared understanding of the lawful grounds for processing data in the context of AI training;
- share information and establish a shared understanding of proportionate safety measures for AI (tailored to particular use cases);
- monitor the technical and societal impact of AI, with a view to leveraging the experience of DPAs (and others) in policy matters;
- reduce legal uncertainty and “secure space” for innovation where data processing is essential, which may include regulatory sandboxes and sharing other best-practice tools; and
- strengthen relationships with other relevant authorities, including competition, consumer protection and IP, to facilitate consistency and synergies between different legal frameworks relevant to AI.
The statement is an encouraging show of support from the ICO for joint initiatives and information exchange with overseas regulators, indicating that it is outward-looking and keen to collaborate, engage and lead on these issues on the world stage.
New EU task force on AI enforcement
Cross-border regulatory cooperation around AI is also being seen at an EU level, with the European Data Protection Board (EDPB) announcing last week that it had extended the scope of its ChatGPT task force to focus on AI enforcement more broadly. In the same announcement it also laid the groundwork for the establishment of a ‘quick response’ team, to coordinate the actions of regulators and “support them in navigating the complexities of AI while upholding strong data protection principles”.
While these developments further point to welcome collaboration and knowledge sharing between EU DPAs, they may also indicate a greater emphasis on regulatory enforcement action around AI in the months to come. It is perhaps no coincidence that these statements follow scrutiny of the Chinese AI tool DeepSeek by a number of EU DPAs, including the Italian DPA, which has ordered a block on access to the chatbot in Italy.
French DPA issues new AI guidance
Also to coincide with the AI Action Summit, the CNIL (France’s DPA) published two new recommendations on how the processing of personal data in AI systems can be carried out in compliance with the General Data Protection Regulation (GDPR). This guidance, on transparency and data subjects’ rights, is the latest in a suite following the CNIL’s 2023 AI Action Plan. The CNIL's recommendations are both pragmatic and granular, building on the EDPB’s recent opinion on AI (discussed in this blog). For example, the CNIL recommends that AI developers:
- adapt transparency information “according to the risks for people and operational constraints”. The CNIL notes that the GDPR allows an AI developer to publish broader, more general information on its website (e.g. categories of third-party data sources) if the developer is not practically able to provide information directly to all affected individuals (e.g. due to the number of third-party data sources).
- adapt their approach to fulfilling data subject rights requests according to costs and practical constraints, with the CNIL stating that costs or practical constraints “may sometimes justify a refusal to exercise rights”, and that it would consider reasonable available solutions open to the developer.
Outlook
The pro-innovation messaging that underscores the above developments suggests that DPAs on both sides of the Channel are taking steps to embrace innovation, at least to an extent, and to avoid being seen as AI blockers by national governments and other agencies. In part, this stance may be due to a common desire to be the lead AI regulator in their jurisdiction; the DPAs have a number of stakeholders to convince that they are the right agency for the job.