
Buckling up the UK’s digital seat belts: Online Safety Bill covering ‘basic protections for the digital age’ introduced to Parliament

The UK took a major step on Thursday towards establishing the world’s first online safety laws with the introduction of the Online Safety Bill to Parliament for formal legislative scrutiny. With it, the Government hopes to bring in a “new digital age which is safer for users and holds tech giants to account”.

The introduction of the Bill marks the latest development in what has been an extensive pre-legislative process starting with the Online Harms White Paper in April 2019. This was followed by a consultation process, the release of a draft Bill and the publication of a detailed report with recommendations by a parliamentary Joint Committee set up to scrutinise the draft Bill (see our previous blog posts here and here). The Bill incorporates some of the report’s recommended changes.

New duty of care for service providers

As we have previously highlighted, the Online Safety Bill applies a new duty of care to providers of user-to-user services and search services which have links to the UK. A service has links to the UK if it has a significant number of UK users, if UK users form a target market for it, or if it is capable of being used in the UK and there are reasonable grounds for believing there is a material risk of significant harm to individuals arising from content on the service or encountered in search results. Certain exemptions apply, most notably for services with limited functionality, an exemption intended to exclude ‘below the line’ content on media articles and user reviews of goods and services. The exemptions are, however, limited, so while the press and politicians’ comments have focussed on ‘regulating big tech’, smaller website or platform owners who provide elements of user-to-user functionality, such as a message board, could potentially be caught. They would, however, need to meet certain threshold conditions, such as number of users, which will be set by secondary legislation.

The scope of the core duty varies depending on the nature of the service - there are effectively two tiers of service providers. The vast majority will be Category 2 - essentially any in-scope provider that does not provide a Category 1 service. They are required to take proportionate steps to mitigate and effectively manage the risks of harm caused by illegal content, identified through mandatory risk assessments, and to protect children from content that would be harmful to them. Providers of high-risk Category 1 services are then also required to protect adults from content that is legal but harmful to them. The Bill does not specify the threshold conditions for Category 1 services, which are to be set subsequently, but the Government have indicated that these will capture a small group of services that are high-risk and high-reach.

Enforcement

Compliance will be enforced by Ofcom, which will have the power to fine companies up to ten per cent of their annual global turnover, to insist on the improvement of non-compliant practices and to block non-compliant platforms.

Latest changes to the Online Safety Bill

As mentioned, the Bill introduced to Parliament also serves as the Government’s response to the report by the Joint Committee on the draft Bill. A number of key changes have been made to the Bill as a result, for example:

Senior manager criminal liability

A key concern with the draft legislation has been how to ensure that large in-scope companies comply, including by making senior managers criminally liable for failings by the company. In the initial draft of the Bill, this power was deferred and could not be used by Ofcom until two years after the legislation was passed. That deferral period has now been shortened considerably to two months, meaning senior managers will have to ensure compliance more or less from the outset. A number of other information offences have also been introduced, covering the suppression, destruction or alteration of information requested by Ofcom; the failure to comply with Ofcom when it exercises its powers of entry and audit, or the provision of false information to it; and the failure by employees to attend, or the provision of false information at, an interview. Committing these offences can lead to up to two years’ imprisonment or a fine.

Who decides what is ‘legal but harmful’ content?

Another important focus of the Bill is how high-risk Category 1 providers address their obligations around ‘legal but harmful’ content for adults. The initial draft left the decision as to what constitutes ‘legal but harmful’ content to the providers themselves, raising concerns that they could adopt a risk-averse approach and remove content simply because it might offend someone, which could stifle freedom of speech. The categories of ‘legal but harmful’ content will therefore now be identified in secondary legislation approved by Parliament, and providers will be required to set out in their terms of service how they will deal with such content and to enforce those terms consistently. The downside of this approach is that there will invariably be a lead time between new types of harmful content arising and their designation as ‘legal but harmful’ content. The Bill seeks to mitigate this by obliging Category 1 providers to flag to Ofcom content that is not designated but presents a material risk of significant harm to an appreciable number of adults in the United Kingdom. This will, though, still need to be collated and acted upon by Ofcom and the Government, and the question remains as to whether such an approach is nimble enough to keep up with, for example, potentially harmful viral internet trends.

Interestingly, the same approach is not being taken in respect of ‘legal but harmful’ content for children. Here, providers are still required to judge for themselves whether or not content (in addition to designated content) is ‘legal but harmful’ to children. This means we may see providers taking a more zealous approach to measures such as age assurance in respect of their content.

Next steps

The Online Safety Bill will now go through the usual legislative process, with readings in both Houses of Parliament. Given the scrutiny that the Bill has already received, the Government will no doubt hope that this will be a relatively smooth process. However, achieving the balancing act of protecting individuals from criminal or harmful content and behaviour while ensuring that free speech, independent journalism and democratic political debate are not limited remains a challenge. The number of changes made recently to the Bill suggests that the Government are still grappling with this balance, not helped by the fact that the Online Safety Bill is both novel, meaning there are no helpful examples to draw upon, and sprawling in scope. Inevitably, campaigners for children’s charities will insist the Bill does not go far enough, whereas those who champion freedom of speech will view it as a significant curb on that right. The risk of trying to satisfy such competing interests is that the Bill ultimately satisfies neither. It is therefore very likely that the Bill will undergo further, and potentially significant, amendment before it becomes law. Even then, given how much has already been deferred to secondary legislation, we may not get a true picture of the law’s impact until well after it is passed, leaving affected organisations in a state of uncertainty for some time to come.

We don’t give it a second’s thought when we buckle our seat belts to protect ourselves when driving. Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age. If we fail to act, we risk sacrificing the wellbeing and innocence of countless generations of children to the power of unchecked algorithms.


Tags

big data, regulating digital