The debate about keeping people (especially children) safe online keeps making headlines.
That debate has accelerated as AI tools that can generate very realistic images and videos have become easy-to-use, widely available products. Legislators are responding at pace: the UK has criminalised the creation of non-consensual intimate images and is proposing a ban on “nudification” apps; in the EU, the AI Digital Omnibus proposals would ban AI systems from producing non-consensual sexually explicit images and child sexual abuse material (see our previous blog). Many are also closely watching how Australia’s ban on under-16s having social media accounts works in practice.
While new laws are debated and rolled out, privacy regulators aren’t waiting around. They’re using their existing powers (and publishing guidance) to protect individuals online, and to remind organisations that privacy law is part of the broader framework that shapes online content and safety.
Global Privacy Assembly: statement on AI‑generated imagery
On 23 February, the Global Privacy Assembly’s International Enforcement Cooperation Working Group published a Joint Statement on AI‑generated Imagery. It was signed by 61 data protection authorities, including the UK ICO and the European Data Protection Board. The takeaway is simple: existing privacy and data protection laws apply in full to organisations developing and using generative AI tools.
Privacy laws differ across the GPA signatories’ jurisdictions, but the statement identifies four core expectations that apply (in one form or another) to anyone building or using AI content-generation systems:
- Put robust safeguards in place to prevent misuse of personal data and the creation of non-consensual intimate imagery (especially child sexual abuse material);
- Be genuinely transparent about what the system can do, what safeguards exist, what uses are acceptable, and what happens if people misuse it;
- Offer effective ways for individuals to ask for harmful content to be removed; and
- Address children’s risks specifically, with stronger safeguards and age-appropriate information.
Enforcement: regulators are using privacy powers
Privacy regulators are doing more than talking: they have also been using their enforcement powers in ways that directly affect access to online content. See, for example, the ICO’s action against Reddit and Imgur, covered in our previous blog.
In both cases, a key issue was reliance on a prohibition or age restriction in the terms and conditions to stop children accessing the platform. The ICO’s view was that such a restriction, on its own, does not reliably prevent children from accessing the service. Without additional measures (such as effective age assurance), the services therefore did not meet UK GDPR requirements when processing children’s data, as set out in the ICO’s Age Appropriate Design Code (better known as the Children’s Code).
The ICO (and several EU data protection authorities) have also opened investigations into Grok, following allegations that it generated non‑consensual sexual imagery. Parallel investigations by Ofcom in the UK under the Online Safety Act (OSA), and by the European Commission under the Digital Services Act (DSA), show just how many different rules and regulators platforms need to navigate.
The CJEU’s judgment late last year in Russmedia is another important piece of the puzzle. The Court held that the platform operator was a joint controller for personal data included by users in posts (in that case, an advert for sexual services showing an image of an individual who had not consented). That could have wide-reaching implications for platforms, and is a useful reminder that “online safety” duties don’t replace privacy compliance. For example, while platforms may benefit from the hosting exemption under the DSA, the Russmedia case makes it clear that there’s no equivalent exemption under the GDPR. In practice, this underlines the need, from a GDPR perspective, for platforms to prevent, or quickly identify and remove, unlawfully posted personal data. For more on this case, see our March Data Privacy Newsletter.
ICO and Ofcom: joint statement on age assurance
Most recently, the ICO and Ofcom (as the regulator of the OSA) published a joint statement on age assurance on 25 March 2026. From a privacy perspective, the message is familiar: the GDPR applies alongside the OSA, so organisations in scope need to comply with both. The more interesting point is in the ICO’s accompanying commentary. The ICO says that (as in the Reddit and Imgur cases) an age restriction in website terms is not enough to meet the Children’s Code. More significantly, it states that where a service includes such an age restriction, effective age assurance will generally be needed for GDPR compliance: without it, the organisation will typically have no lawful basis to process the personal data of children who nonetheless access the site.
The ICO has also said that sites and apps relying mainly on self-declaration (for example, a tick-box or “I confirm I’m over 18” prompt) will be a key focus for it this year. It has identified 17 high-risk platforms to include in its review.
The bigger picture: a growing regulatory matrix
Online safety rules are already complicated, and they’re only going to get more so. One thing is clear, though: privacy regulators are getting much more involved in this space. So, when you’re planning your approach to online safety, make sure you’re also keeping up with data privacy developments, as well as the more specific online safety laws.
