Facebook accused of ‘neglecting’ its responsibilities as moderators suffer trauma

Calls for Facebook to improve its online policing model

Facebook CEO Mark Zuckerberg leaving The Merrion Hotel in Dublin after a meeting with politicians to discuss regulation of social media and harmful content. (Photo by Niall Carson/PA Images via Getty Images)

Facebook has been accused by leading think tanks of “neglecting” its policing responsibilities after its moderators complained of suffering trauma.

Experts are warning that the social media giant's moderators, who are employed by outsourcing firms, need more support, and that a change of model in the way graphic content is handled is "desperately" needed.

The accusations follow an investigation by The National that revealed some moderators have been left traumatised by terror videos and feel they have not received adequate training or access to mental health professionals, unlike Facebook's in-house staff.

Facebook employs more than 15,000 moderators globally through outsourcing companies.

Eilish O’Gara, extremism researcher at the Henry Jackson Society think tank, said the company has “neglected” the need to prioritise the removal of graphic content.

“It is absolutely vital that violent and dangerous extremist content is identified and removed from all online platforms,” she said.

“For too long, Facebook and other social media platforms have neglected their responsibility to protect the public from harmful extremist content.

“To effectively do so now, those tasked with the incredibly difficult job of identifying, viewing and de-platforming such content must be given adequate training and wrap-around psychological support in order to carry out their work meaningfully, without damaging their own mental health.”

Facebook, along with four of its outsourcing firms, is being sued by 30 content moderators from across Europe who claim they have suffered post-traumatic stress disorder as a result of the disturbing images they viewed in their job.

Hundreds of moderators currently employed by these firms are also campaigning for more support, training and better working conditions.

Alex Krasodomski-Jones, director of the Centre for the Analysis of Social Media at the Demos think tank, believes Facebook needs to change its policing model.

"It’s been known for some time that content moderation is a draining, difficult job requiring moderators to engage with some of the most horrifying content that can be found online,” he said.

"When you industrialise the process and demand moderators review hundreds of pieces of content like this every day, the impact is magnified: the number of reports of stress and trauma among workers doing this job is growing.
"Automating this work is only a partial solution: algorithms simply are not sophisticated or transparent enough to be trusted with decisions around freedom of expression online.

“Better protection and support for those on the front line is vital, but a change in model is also desperately needed.

“I hope the stresses caused by this approach to moderation cause platforms to rethink how they approach the policing and curation of their spaces online, investing in new ways to empower communities to manage themselves instead of handing policing power to outsourced workers."

The Counter Extremism Project (CEP) think tank has been monitoring a rise in ISIS and far-right propaganda during the pandemic and said there needs to be a regulatory standard for the way moderation is conducted.

“There are no agreed minimum standards for content moderation. Each platform does it the way they like,” said Hans-Jakob Schindler, director of CEP.

“That is why content moderation continues to be done mostly badly and cheaply. The reason for the lack of standards, including work standards for content moderators, is that there is currently no regulation that guides these activities. As a consequence, unfortunately, Facebook seems to deploy the bare minimum of resources and outsources it.

“Moving moderation inside Facebook may slightly improve the situation, however as far as I can see, that would not sustainably solve the problem.

“Without a regulatory obligation for transparency and the ability to audit, it is impossible to say with any certainty what and how much progress on improving content moderation, including working conditions for the moderators, has been made or not made.”

Facebook's European headquarters in Dublin. (Photo by Niall Carson/PA Images via Getty Images)

Daniel Markuson, digital privacy expert at NordVPN, said organisations need to invest more in vetting content and psychological support.

“Every organisation dealing with the vetting process of user-generated content should be focusing on the further development of AI software that helps to analyse and flag inappropriate content,” he said.

“The goal is to minimise the margin of error so that the involvement of human vetters would be minimal.

“Nevertheless, automated moderation is very important in order to minimise human trauma. Moderators say they are expected to deliver 98 per cent accuracy, and that’s when they are dealing with up to 1,000 tickets a night - if that’s true, then the vetting technology has to improve significantly.”

Facebook told The National it is continually reviewing its working practices.

“The teams who review content make our platforms safer and we’re grateful for the important work that they do,” a Facebook representative said.

“Our safety and security efforts are never finished, so we’re always working to do better and to provide more support for content reviewers around the world.

“We are committed to providing support for those that review content for Facebook as we recognise that reviewing certain types of content can sometimes be difficult.

“Everyone who reviews content for Facebook goes through an in-depth, multi-week training programme on our Community Standards and has access to extensive psychological support to ensure their well-being.

“This is an important issue, and we are committed to getting this right.”

Last year, Facebook agreed to pay $52 million to 11,250 current and former US moderators to compensate them for mental health issues developed on the job.

On Thursday, Facebook revealed it would implement content moderation changes recommended by its own oversight board.

Nick Clegg, the company’s vice president of global affairs, said 11 areas would be changed as a result of the board’s report, which was released in January.

They include more transparency around policies on health misinformation and nudity, and improving automated detection capabilities.