Under-fire YouTube to ramp up efforts to target inappropriate content

The video platform’s CEO says more than 10,000 people will be dedicated to identifying and removing extremist and inappropriate videos from 2018


Google will increase the number of its teams identifying and removing extremist content, hate speech and child cruelty from its YouTube channels following allegations of profiteering amid the failure to remove unsuitable footage.

The company has already improved its controls and removed more than 150,000 videos for violent extremism since June, 98 per cent of which were identified by technology, according to YouTube CEO Susan Wojcicki.

YouTube has developed automated software to identify videos linked to extremism and is now aiming to do the same with clips that portray hate speech or are unsuitable for children. Posters whose videos are flagged by the software may become ineligible to generate advertising revenue.

Nearly 70 per cent of violent extremist content is taken down within eight hours of it being uploaded and “we continue to accelerate that speed,” she wrote in a blog. Previous analysis cited by the UK government suggested that three-quarters of ISIL propaganda was shared within the first three hours, often reaching its target audience before the authorities had time to react.


Ms Wojcicki said that the number of staff working on removing such material would increase to more than 10,000 in 2018. Google did not respond to inquiries about how many people it currently employs in that role.

The UK government has repeatedly criticised social media platforms for failing to stem the proliferation of extremist material following a series of attacks in London and Manchester this year that left 36 people dead.

The Prime Minister, Theresa May, and her French counterpart Emmanuel Macron have promised to fine internet companies if they do not step up their efforts to remove terrorism-related content.

The industry has said that it wants to remove extremist material but must balance those demands with democratic freedoms.

Last month, the BBC and The Times found that paedophiles were posting indecent comments on videos of children, evading discovery through flaws in YouTube's reporting system.

The investigation found that adverts for major brands were appearing alongside some of the videos, which led several big brands, including Mars and Adidas, to pull advertising from the site.