The government has committed to making the Online Safety Bill, which will give effect to the new regulatory framework outlined in the response, ready in 2021. This follows criticism from the House of Lords regarding the urgency with which a new regime was needed, and the fact that the Covid-19 pandemic has meant that “the risks posed by illegal and harmful content and activity online have also been thrown into sharp relief as digital services have played an increasingly central role in our lives”.
A new duty of care, aimed at making companies take responsibility for user safety, is intended to improve safety for users of online services and prevent people from being physically or psychologically harmed as a direct consequence of content and activity on those services. Amongst other things, in-scope companies will need to assess the risks associated with their services and take reasonable steps to mitigate the harms they have identified.
Who is in scope?
The new regulatory framework will apply to companies whose services host user-generated content or facilitate interaction between users, one or more of whom is based in the UK. This means that as well as applying to, for example, content shared publicly on social media platforms, it will apply to online instant messaging services and closed social media groups.
The new rules may also apply in the context of online message boards or comment sections, but in many cases providers of these services will benefit from an exemption for low-risk businesses with limited functionality. For example, user comments on digital content will be exempt provided that they are in relation to content directly published by a service. A number of other exemptions will be provided, including for business-to-business services and services used internally by organisations. The government estimates that only a small proportion of UK businesses (fewer than 3%) will fall within the scope of the legislation.
What constitutes harmful content and what will companies need to do about it?
One question at the forefront for any company within scope of the new regulatory framework will be what exactly constitutes harmful content and activity. The response states that the legislation will provide a general definition and that this will include only content or activity which gives rise to a reasonably foreseeable risk of harm to individuals, and which has a significant impact on users or others. Secondary legislation will set out a limited number of priority categories of harm posing the greatest risk to users. There will be differentiated expectations with regard to different categories of content and activity, described in the response as: that which is illegal; that which is harmful to children; and that which is legal when accessed by adults but which may be harmful to them. Under the new regulatory framework there will be a tiered approach:
- Category 2 services: Most services will be Category 2 services. Providers of Category 2 services will need to take proportionate steps to address illegal content and activity (in each case meeting the definition of harm) and to protect children from content that would be harmful to them, such as violent or pornographic content.
- Category 1 services: A small group of services described as high-risk, high-reach services will be designated as “Category 1 services”. Providers of Category 1 services will additionally be required to take action in respect of content or activity which is legal but harmful to adults, such as content about eating disorders, self-harm or suicide.
All companies in scope will also have a number of duties in addition to the core duty of care, including providing mechanisms that allow users to report harmful content or activity and to appeal the takedown of their content.
Scope and enforcement
Ofcom has been confirmed as the relevant regulator and will have the power to fine companies that fall foul of the legislation up to £18m or 10% of annual turnover, whichever is higher. Ofcom will also have the power to block non-compliant services from being accessed from the UK. The government will also reserve the right to introduce criminal sanctions for senior managers if they fail to comply with information requests from Ofcom.
The government has stated that Ofcom’s regulatory approach will be deliberately risk-based. As noted above, certain exemptions will apply, and the legislation will also include protections for journalistic content shared on in-scope services. In line with the position outlined in the White Paper, other kinds of harm will be excluded from scope where existing legislative, regulatory and other governmental initiatives are in place (including harms resulting from fraud, hacking, or breaches of intellectual property rights, data protection legislation, or consumer protection law).
A global standard?
The response recognises that tackling online harms is a global problem and states that the government is engaging with international partners to develop common approaches. The regulator is also expected to work with its counterparts in other jurisdictions to ensure effective enforcement and promote best practice at a global level. What is very clear is that, in the government’s view, the development of the online harms regime represents a step by the UK to set the global standard for a risk-based, proportionate regulatory framework.
As noted above, the government has stated that the Online Safety Bill will be ready in 2021. It also expects the Law Commission to produce recommendations on reforming the criminal offences relating to harmful online communications in early 2021, which it may incorporate into the bill. The Online Safety Bill will, however, be just one aspect of the evolving landscape that businesses will have to grapple with in 2021. The EU’s Proposal for a Regulation on a Single Market for Digital Services (Digital Services Act) was published on 15 December 2020, and we still await the UK’s Digital Strategy, originally expected in autumn 2020.
These new laws will mean no more empty gestures - we will set out categories of harm in secondary legislation and hold tech giants to account for the way they address this content on their platforms.