THE LENS
Digital developments in focus

ICO takes enforcement action over generative AI chatbot

On 6 October, the ICO issued Snap Group Limited (Snap), which operates the popular Snapchat messaging platform, with a preliminary enforcement notice over potential failures to properly assess the privacy risks posed by its generative AI chatbot ‘My AI’. The My AI chatbot, powered by OpenAI’s GPT technology, was the first example of generative AI embedded into a major messaging platform in the UK.

The ICO has been warning organisations for some time now that they must address the potential privacy risks generative AI can create before rushing to adopt the technology. 

It issued guidance on generative AI back in April, setting out eight questions developers and users should ask.

In June Stephen Almond, Executive Director of Regulatory Risk, warned that the ICO “will be checking whether businesses have tackled privacy risks before introducing generative AI – and taking action where there is risk of harm to people through poor use of their data.” He went on to say “[t]here can be no excuse for ignoring risks to people’s rights and freedoms before rollout.” 

I have subsequently seen him speak at a number of conferences, where he has said that organisations are not heeding this warning. It is therefore unsurprising that the ICO has now taken enforcement action. What is perhaps less expected is that the action has taken the form of a preliminary enforcement notice, rather than a monetary penalty notice (i.e. a fine).

Snap is yet to make its representations to the ICO, and the regulator’s press release is keen to stress that the notice is only provisional. However, if a final enforcement notice were adopted, the ICO says Snap would need to stop offering its ‘My AI’ product to UK users until it has carried out an ‘adequate’ risk assessment (the assessment it did carry out was seen by the ICO, at least at this preliminary stage, as inadequate).

Given Snapchat is a hugely popular platform which appeals to children and young adults, it is not surprising that the ICO has kept a close eye on its use of generative AI. However, this is still a warning for all organisations to carry out, and document, an appropriate risk assessment before rolling out AI products. 

“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI’ ... We have been clear that organisations must consider the risks associated with AI, alongside the benefits. Today's preliminary enforcement notice shows we will take action in order to protect UK consumers' privacy rights.” (John Edwards, Information Commissioner)

Tags

ai, data, data analytics