Dan Aldridge, Senior Policy Manager at BCS, The Chartered Institute for IT, has prepared a briefing on the Bill, with contributions from BCS Fellow and online harms expert Professor Andy Phippen, and Ellen Judson, a Senior Researcher at CASM, the digital research hub at the cross-party think-tank Demos.

Overview

The much-awaited Online Safety Bill completed pre-legislative scrutiny this week, with MPs and peers calling for ‘major changes’ to the Bill to ‘call time on the Wild West online’.

Dr Bill Mitchell OBE, Director of Policy at BCS, The Chartered Institute for IT, said a chief concern parents would have with the Bill was that it seemed to rely ‘entirely on the platforms’ own risk assessments and reporting’ about what is harmful.

Dr Mitchell also said that ‘the new law aimed at stopping harmful online content puts too much trust in social media companies’ judgement of danger’.

The Joint Committee on the Draft Online Safety Bill said more offences needed to be covered, including paid-for scam and fraudulent advertising, cyberflashing, content promoting self-harm and the deliberate sending of flashing images to people with photosensitive epilepsy.

The Committee’s report also said the Bill needed to be clearer about what is specifically illegal online and proposed that pornography sites should have a legal duty to keep children off them regardless of whether they host user-to-user content.

We expect the Bill to place a duty of care on social media websites, search engines and other websites to protect users from dangerous content. Big tech companies like Facebook and Twitter could face fines of up to £18m or 10 per cent of their annual turnover, whichever is greater, and could have access to their sites blocked. The Bill’s outcome could have global significance for the regulation of the internet.

Background

The Draft Online Safety Bill has been published, and the Joint Committee on the Draft Online Safety Bill - a cross-party committee of MPs and peers - is currently scrutinising it. Other select committees are also considering issues relating to the Online Safety Bill: the DCMS Sub-Committee on Online Harms and Disinformation, and the Petitions Committee, which is looking into how to tackle online abuse.

The Joint Committee published its report on 14 December 2021, setting out its recommendations for how the government should proceed with the Bill. DCMS will now consider the recommendations and produce a second draft of the Bill to go to Parliament.

At Prime Minister’s Questions in October, the PM appeared to promise a second reading of the Bill before Christmas - despite recess beginning on 16 December, just days after the Committee’s report was due - which was met with some concern. Accordingly, we expect the second draft of the Bill to be presented to Parliament in early 2022; DCMS has committed to producing it ‘as soon as possible’.

What we think is going well

The Bill has the potential to significantly change:

  • how online platforms respond to risks to users on their services, and
  • the oversight that a regulator (Ofcom) will have over what they are doing.

Until now, the primary mode of platforms acting to reduce risks to users has been ‘self-regulation’: platforms deciding what they do, and what information about the effects of their actions to share or keep private (as the leaks from Facebook whistle-blowers have shown). This has led to a situation where risks persist, but their causes, their incidence and the actions taken to tackle them are hard to assess, and users are not adequately informed about how the services they use operate.

The Bill would require platforms to produce risk assessments against certain kinds of harm and to set out what changes they will make to their systems and processes to mitigate those risks, with a particular focus on child protection. It also requires platforms to consider threats to freedom of expression and privacy when implementing safety systems, potentially giving an avenue for action if platforms over-moderate content or moderate it in discriminatory ways, as well as if they fail to act to reduce harms.

‘Category 1’ services (likely to be the largest, highest-risk platforms - think Facebook) will also have additional duties to take action on a wider range of harms, and greater duties around freedom of expression.

At a minimum, this Bill will introduce significantly greater transparency from platforms about the systems they have in place to reduce harms and what actions they are taking to reduce identified risks. Platforms would also be expected to make their policies much clearer to users of their services.

We would also expect to see not only transparency but improved systems and action on a range of online harms, since this work can be scrutinised by Ofcom, which will have significant information-gathering and enforcement powers.

What we think is concerning

However, many questions remain and a wide range of concerns have been raised:

  • The Bill leaves a lot of definitions abstract, and much of the concrete expectations for what platforms will be asked to do will be set out in secondary legislation and Codes of Practice - meaning it’s currently very difficult to assess what exactly platforms will be asked to do to reduce harms and protect rights, and whether it will be sufficient.
    • For instance, platforms will need to take into account the importance of ‘democratically important content’ - the definition of which is extremely unclear.
  • Concerns have been raised that the Bill doesn’t go far enough to actually lead to serious actions from platforms: that as it relies on platforms’ own risk assessments and own reporting on their systems, it’s unclear how far genuine audit and accountability will be possible, especially without greater independent researcher access to platform data.
  • The Bill is intended to regulate the systems and processes platforms have in place - however, much of the discourse around the Bill, and the language of the Bill itself, focuses heavily on content moderation and the removal of certain types of content. There is a worry that this will leave under-addressed the systems that amplify, encourage or incentivise harmful content and behaviour online, while pushing platforms to over-moderate content.
  • There are also specific harms that are either not explicitly mentioned in the Bill, such as violence against women, or explicitly excluded from it, such as online scams.
  • Some argue that the Bill goes too far, and risks serious encroachments on user rights in the name of safety, setting a dangerous international precedent.
  • One area of broad consensus, though, is that the powers the Bill currently gives the Secretary of State to direct Ofcom’s regulation are too great and risk compromising the political independence of the regulator.

Dr Bill Mitchell OBE, Director of Policy at BCS, The Chartered Institute for IT, said:

‘If I was the parent of a young daughter whose social life is mainly online, it wouldn’t be clear to me if this Bill really does do enough to keep her safe. The Bill seems to rely entirely on the platforms’ own risk assessments and their own reporting on how well their systems work at reducing harm. I’d want to have more reassurance around how the Bill will guarantee auditing and accountability of social media platforms is open, transparent, and rigorous.

‘For me as an IT professional, I’d like to have more clarity on how the Bill ensures a future Secretary of State can’t unduly interfere with the independence of Ofcom as the new regulator for social media platforms.

‘Lastly, for this legislation to work we also need to provide education and training for people to learn how to dissent in civilised ways on social media. That would help us deal constructively with opinions we don’t like and make us less likely to turn into some sort of incandescent rage monster.’