Tech firms could face £18m fines in UK clampdown on harmful content

Technology companies could face fines of up to 10 per cent of their turnover for failing to act on harmful content


Britain on Tuesday will announce a crackdown on technology companies that fail to protect people from exposure to illegal content such as that relating to child sexual abuse, terrorism or suicide.

Companies failing to protect people face fines of up to 10 per cent of turnover, or up to £18 million ($24m), whichever is higher.

They may also have their sites blocked and the government will have the power to apply sanctions on senior management.

Digital Secretary Oliver Dowden and Home Secretary Priti Patel are set to announce the government’s final decisions on the laws on Tuesday.

The regulations will apply to any company in the world that hosts user-generated content accessible to people in the UK, or that enables UK users to interact with others online, privately or publicly.

This includes social media, video-sharing and instant-messaging platforms, online forums, dating apps, commercial pornography websites, online marketplaces, peer-to-peer services, consumer cloud-storage sites and video games that allow interaction.

Search engines will also be subject to the new regulations.

The legislation will include protection for freedom of expression and pluralism online, allowing people to take part in society and engage in robust debate.

But the new laws will not affect articles and comments sections on news websites, and there will be additional measures to protect free speech.

Tech platforms will need to work harder to protect children from being exposed to harmful content or activity such as grooming, bullying and pornography.

The most popular social-media sites, with the largest audiences and high-risk features, will need to set and enforce clear terms and conditions that explicitly state how they will handle content that is legal but could cause significant physical or psychological harm to adults.

This includes dangerous disinformation about coronavirus vaccines, for example.

“We are giving internet users the protection they deserve and are working with companies to tackle some of the abuses happening on the web," Ms Patel said.

“We will not allow child sexual abuse, terrorist material and other harmful content to fester on online platforms. Tech companies must put public safety first or face the consequences.”

The government plans to bring the laws forward in an Online Safety Bill next year.

Category system

Different tech companies will fall into different categories, depending on how large and high-risk they are considered to be.

A small group of companies with the largest online presences and high-risk features, which are likely to include Facebook, TikTok, Instagram and Twitter, will be in Category 1.

These companies will need to assess the risk of legal content or activity on their services with “a reasonably foreseeable risk of causing significant physical or psychological harm to adults”.

They will then need to make clear what type of “legal but harmful” content is acceptable on their platforms in their terms and conditions and enforce this transparently and consistently.

All companies will need mechanisms so people can easily report harmful content or activity while also being able to appeal against content being taken down.

These companies will also be required to publish transparency reports about the steps they are taking to tackle online harm.

Examples of Category 2 services are platforms that host dating services or pornography, and private messaging apps.

Fewer than 3 per cent of UK businesses will fall within the scope of the legislation, and the vast majority of those will be in Category 2.