THE LENS
Digital developments in focus

ICO develops generative AI guidance

Generative AI is now ever present, giving rise to new questions and challenges on both the work and home fronts. My kids' school was quick to tell parents that it was not to be used for homework, whilst my father had no qualms about using it at Christmas to devise strategies to beat me at the board game Risk! I am not sure about the social ethics of the latter (maybe I am just grumpy since I lost ☹), but, regardless, generative AI is clearly here to stay.

It is therefore timely that last week the ICO launched the first in a series of consultations on how aspects of data protection law should apply to the development and use of generative AI models. It reflects the ICO’s acknowledgement that there are a number of unanswered questions in this area on which it wants views before it reaches a conclusion. 

The ICO is planning a series of consultations on these unanswered questions, including:

  • what is the appropriate lawful basis for training generative AI models?
  • how does the purpose limitation principle play out in the context of generative AI development and deployment?
  • what are the expectations around complying with the accuracy principle?
  • what are the expectations in terms of complying with data subject rights?

This first consultation picks up the first of these questions and considers the most appropriate lawful basis for training generative AI models on web-scraped data. The ICO considers that five of the six lawful bases under the GDPR are unlikely to be available in this context, and so the consultation focuses on legitimate interests.

The consultation sets out how the ICO considers the legitimate interest assessment should be addressed in this context, following the now well-trodden path of considering whether there is a valid interest, whether the processing is necessary and whether the individual's rights override the identified interest.

The ICO pays particular attention in its analysis to the nature of the risks to individuals and how these can be mitigated. It notes that the extent to which a developer can mitigate risks to individuals during deployment will depend on how the model is put on the market, for instance whether it is deployed by the initial developer or a copy of the underlying model is made available to third parties.

In conclusion, the ICO flags that developers using web-scraped data to train generative AI models will need to:

  • Identify and evidence a valid and clear interest.
  • Consider the balancing test particularly carefully when they do not or cannot exercise meaningful control over the use of the model.
  • Demonstrate how the interest they have identified will be realised, and how the risks to individuals will be meaningfully mitigated, including how individuals can exercise their information rights.

The ICO will use the responses received to update its existing guidance on AI in due course.

We are writing an article for Privacy Laws & Business on data scraping more broadly, which will also consider this consultation in greater detail, so look out for that in their March UK Report.

None of this, of course, will resolve the social ethics of using generative AI for board games; perhaps we will have to develop our own house rules on that ahead of next Christmas…

“The impact of generative AI can be transformative for society if it’s developed and deployed responsibly.” – Stephen Almond, Executive Director for Regulatory Risk at the ICO

Tags

dp, big data, data, data analytics