
Tech firms told to ‘tame toxic algorithms’ as Ofcom publishes child safety consultation

On 8 May, Ofcom published its child safety consultation as part of its ongoing implementation of the Online Safety Act (‘OSA’).

Announcing the consultation, Ofcom’s chief executive claimed the proposed measures ‘will deliver a step-change in online safety for children in the UK’ – though they have also been criticised as lacking ambition.

The consultation includes guidance for platforms in assessing whether they are in scope of the child safety rules, and draft Children’s Safety Codes containing the measures Ofcom expects platforms to follow if so. 

Who is caught by the OSA’s child safety duties?

The child safety duties under the OSA apply to two kinds of regulated service:

  • user-to-user services (‘U2U’), meaning services on which a user can encounter content uploaded to, shared on, or generated on the service by other users; and 
  • search services,

in each case where the service is ‘likely to be accessed by children’.

To determine whether a service is ‘likely to be accessed by children’, its provider is required by the OSA to conduct a ‘children’s access assessment’ (‘CAA’). The CAA asks two key questions.

Question 1 - Access

The first question is whether children (i.e., under 18s) can access the service, or part of it. Crucially, providers can only conclude that children cannot access their service if they use age verification or estimation techniques such that children are “not normally able to access” it. Ofcom’s guidance makes clear that this will only be the case where providers are deploying “highly effective” age assurance and “effective” access controls.

Examples of techniques Ofcom suggests include credit card checks, the use of open banking, and photo ID matching. In any event, for a technique to suffice, it must meet Ofcom’s criteria for technical accuracy, robustness, reliability, and fairness, as set out in the guidance. Ofcom will not accept self-declaration of age by users, or restrictions on child access in a platform’s terms of use, as sufficient.

Question 2 - Volume of child users

The second question is whether the service has, or is likely to attract, a significant number of child users.

The OSA does not define ‘significant’, other than to say that it can include a number significant in proportion to the overall UK user base, and that the question must be answered using evidence of the platform’s actual use (rather than intended use). Ofcom’s advice for providers is to “err on the side of caution” given the intent of the OSA, and that “even a relatively small number or percentage of children could be a significant number.”
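By way of illustration only, the two-stage logic of the CAA can be sketched as follows. This is a minimal sketch, not anything prescribed by Ofcom: the function and field names are hypothetical, and, as noted above, the OSA gives ‘significant’ no numeric threshold.

```python
from dataclasses import dataclass


@dataclass
class ServiceProfile:
    """Hypothetical inputs to a children's access assessment (CAA)."""
    highly_effective_age_assurance: bool
    effective_access_controls: bool
    has_significant_child_users: bool      # evidence of actual use
    likely_to_attract_child_users: bool


def children_likely_to_access(s: ServiceProfile) -> bool:
    """Sketch of the two-stage CAA; both questions must be answered 'yes'."""
    # Question 1 - Access: children count as unable to access the service
    # only where highly effective age assurance AND effective access
    # controls are deployed (self-declared ages and terms-of-use
    # restrictions do not suffice).
    if s.highly_effective_age_assurance and s.effective_access_controls:
        return False
    # Question 2 - Volume: a significant number of child users, actual or
    # likely. No numeric threshold is defined; Ofcom advises providers to
    # "err on the side of caution".
    return s.has_significant_child_users or s.likely_to_attract_child_users
```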

Ofcom expects providers that conclude they are not ‘likely to be accessed by children’ to keep a written record of the evidence relied on. Any such provider must conduct a fresh CAA: (i) within the succeeding 12 months (and once a year thereafter); (ii) before making a significant change to their service’s design or operation which is relevant to use by children; (iii) if they have evidence of the reduced effectiveness of any age assurance used by the service; and (iv) if there is a significant increase in child users of the service.

What child safety measures has Ofcom recommended?

If the answer to both questions in the CAA is yes, the child safety duties apply, and providers will be expected to conduct a children’s safety risk assessment of their platform, and accordingly follow the applicable measures in Ofcom’s Children’s Safety Codes. 

The draft versions of the Codes contain more than forty recommended measures (which may apply depending on the size and risk level of the service), including:

  • using highly effective age assurance techniques to prevent children from encountering ‘primary priority content’ (i.e., pornographic content or content promoting suicide, self-harm or eating disorders); 
  • reducing the prominence of ‘priority content’ (which includes content which is abusive, incites hatred, or promotes violence) in recommender feeds; 
  • providing age-appropriate user support materials;
  • enabling children to provide negative feedback on content recommended to them; and
  • signposting children to support during the user journey, such as when they search for harmful content.

The Codes also contain measures relating to: (i) internal governance and compliance (such as appointing an accountable individual, and tracking evidence of new kinds of harmful content); (ii) content moderation (such as ensuring teams are well-resourced and trained, and having internal content policies); (iii) complaints management; and (iv) user controls (such as the option for children to decline group chat invitations or mute users). These, among others, mirror the measures set out in Ofcom’s initial illegal harms consultation.

What are the penalties for failure to comply?

In announcing the consultation, Ofcom’s chief executive indicated that companies in breach would be ‘named and shamed’, and noted that the regulator “won’t hesitate to use [its] full range of enforcement powers to hold platforms to account”. Those powers include imposing large fines (up to the greater of £18 million or 10% of the provider’s global annual revenue), and obtaining restriction orders which can significantly impede or altogether prevent a provider from continuing to operate.
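For illustration, the ‘greater of’ fine cap described above works out as follows (a minimal sketch of the arithmetic only; the function name is ours):

```python
def max_fine_gbp(global_annual_revenue_gbp: float) -> float:
    """Upper limit on an OSA fine: the greater of GBP 18 million or 10% of
    the provider's global annual revenue."""
    return max(18_000_000, 0.10 * global_annual_revenue_gbp)


# e.g. a provider with GBP 1bn global annual revenue faces a cap of GBP 100m,
# since 10% of its revenue exceeds the GBP 18m floor:
assert max_fine_gbp(1_000_000_000) == 100_000_000
```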

What’s next?

Ofcom’s consultation will be open for responses from stakeholders until 17 July 2024. Final Codes and guidance are then expected to be published in spring 2025, with the Codes coming into force in the third quarter of 2025. Services are expected “to be held to full compliance shortly after”.

"Our groundbreaking laws will hold tech companies to account in a way they have never before experienced… To platforms, my message is engage with us and prepare. Do not wait for enforcement and hefty fines – step up to meet your responsibilities and act now" - Michelle Donelan, Technology Secretary


Tags

ofcom, online safety act, online harms, child safety, ai, digital regulation, emerging tech