
Lessons in online safety: Ofcom’s £1 million fine against AVS and end-of-year review

Earlier this month, Ofcom issued its most significant penalty to date – a £1 million fine against adult website provider AVS Limited (AVS) for age assurance failings. Alongside the fine, Ofcom released two papers analysing the impact of the Online Safety Act (OSA) regime and businesses’ responses over the last year: a summary of the technology sector’s response to the UK’s online safety rules (the Sector Response Report), and an analysis of the risk assessment records the regulator has received to date (the Risk Assessment Report).

While there is plenty to digest from these publications, we have extracted five key takeaways for in-scope organisations:

1. Age checks must be ‘highly effective’ (and not easily circumvented)

Ofcom’s Sector Response Report welcomes the widespread adoption of ‘highly effective age assurance’ by in-scope providers, including the top 10 most popular adult sites, following the 25 July deadline for them to do so. Monitoring compliance with these requirements will remain a priority for Ofcom in 2026, with its focus set to fall on the top 100 UK adult sites. The fine against AVS demonstrates that Ofcom will enforce where ‘highly effective’ measures are not in place: AVS’s approach, which relied on age verification via the upload of a photo of an adult, was found to be too vulnerable to circumvention by children. AVS was also fined an additional £50,000 for failing to respond to Ofcom’s request for information (RFI).

2. Risk assessments should justify risk levels, outline effective mitigations and be updated for new AI 

During 2025, Ofcom requested and received 104 risk assessments from in-scope providers, with 11 providers then asked to revisit their assessments. The Risk Assessment Report identifies shortcomings in the documents received and highlights five key areas for improvement:

  • Assess all kinds of illegal and harmful content. Providers took inconsistent approaches to risk assessment, with many not separately assessing the different kinds of illegal and harmful content identified in Ofcom’s guidance (e.g., content harmful to children).
  • Analyse features, functionalities and other characteristics. Many service providers did not identify the risks posed by their service’s particular features and functions (e.g., encrypted messaging).
  • Demonstrate confidence in controls. Weaknesses were identified in service providers’ explanations of how their safety measures work and how they are known to be effective, especially in relation to content moderation controls and recommender system interventions.
  • Base decisions on relevant evidence. There was a lack of evidence to justify the assignment of risk levels, particularly for low-risk assignments.
  • Implement appropriate risk governance. 69 of the 104 risk assessments reviewed by Ofcom did not name the person responsible for the assessment, which Ofcom’s guidance says is necessary to fulfil record-keeping duties under the OSA regime. 

The Risk Assessment Report confirms that Ofcom expects to see improvement in these areas when it requests risk assessments from providers next summer, and that it will take enforcement action where necessary improvements have not been made. Ofcom will also be monitoring whether providers adequately assess risks before making changes to their services, including adding generative AI functionalities. 

3. Focus on child protection to increase in 2026 

While welcoming the new child safety measures announced by social media providers, Ofcom calls on those operating the most popular services to share more comprehensive information on their measures and how they work in practice. Scrutinising these measures is a priority for 2026, with the regulator planning to issue an RFI to Meta, TikTok, Pinterest and YouTube early in the year.

4. More scrutiny for content moderation 

Ofcom is concerned that ‘over-moderation’ is resulting in some non-harmful content being made inaccessible to children and intends to engage directly with the providers involved. At the same time, evidence suggests that illegal hate speech and terrorist content is persisting on social media platforms, so Ofcom is conducting a detailed review of one platform’s moderation systems, which may be extended to other platforms in 2026. 

5. Do more to protect women and girls 

Following the publication of its guidance in November, Ofcom is continuing to focus on how service providers can do more to protect women and girls online, and plans to publish a progress report in May 2027. 

Commentary

Against the backdrop of a slew of online harms news in the last week, including the European Commission’s €120m fine against X under the Digital Services Act (DSA) (see this blog) and the Australian social media ban for under-16s taking effect, these publications show that online safety enforcement in the UK is getting off the ground. While Ofcom has been criticised by some in the press and by peers for its cautious approach to date, these latest reports (and the AVS fine) can be seen as a warning shot to industry – Ofcom has been clear that in 2026 it expects ‘meaningful, measurable improvements’ from providers. In-scope organisations must therefore ensure they are prioritising OSA compliance and engagement with Ofcom. That said, the US response to the EU’s DSA penalty against X shows the tightrope Ofcom is walking, particularly in relation to enforcement against the largest players.
