Facebook's ‘factory’ conditions for moderators suffering trauma

Irish MP James Lawless told Facebook to address issues three years ago

Facebook's European headquarters in Dublin. (Photo by Niall Carson/PA Images via Getty Images)

The chairman of Ireland’s justice committee has raised concerns that Facebook is not “urgently addressing” content issues after dozens of moderators highlighted workplace problems.

James Lawless met Facebook CEO Mark Zuckerberg three years ago to discuss the company's use of agency staff to monitor graphic content.

The social media giant now employs 15,000 content moderators globally through third-party outsourcing agencies; many are based in Dublin with the firms CPL and Accenture.

Moderators have raised concerns about training and support after some staff suffered trauma from viewing graphic content, and 30 former employees in Europe are suing for damages.

"There are many problems stemming from decisions on content quality and tone being made in factory type conditions," Mr Lawless told The National.

"I am still not satisfied that Facebook is addressing the issue with sufficient urgency

“This is neither helpful for the broader platform, including user safety and confidence that the right calls are being made regarding challenging content, nor positive for the moderators themselves.

“I previously raised these issues with Facebook, including directly with Mark Zuckerberg in a meeting in Dublin in 2018.

"I am still not satisfied that Facebook is addressing the issue with sufficient urgency and the fact that such moderators are typically kept at a legal remove, by use of agency staff, suggests the long-term intention is not positive for them either.”

In a landmark settlement in the US last May, Facebook agreed to pay $52 million to 11,250 current and former moderators to compensate them for mental health issues developed on the job.

Hundreds of moderators currently employed by Facebook are also campaigning for more support, training and better working conditions.

Former moderators have told The National they had to assess up to 1,000 posts per shift, which included disturbing images, against hundreds of criteria and were expected to have 98 per cent accuracy.

Eilish O’Gara, extremism researcher at the Henry Jackson Society think tank, said the company has “neglected” the need to prioritise the removal of graphic content.

“It is absolutely vital that violent and dangerous extremist content is identified and removed from all online platforms,” she said.

“For too long, Facebook and other social media platforms have neglected their responsibility to protect the public from harmful extremist content.

“To effectively do so now, those tasked with the incredibly difficult job of identifying, viewing and de-platforming such content must be given adequate training and wrap-around psychological support in order to carry out their work meaningfully, without damaging their own mental health.”

Facebook has been using algorithms to remove inappropriate content where possible.

Facebook Oversight Board investigating moderation protocols

On Tuesday, Facebook’s own Oversight Board revealed it is examining how the company’s content moderation works.

Board member Alan Rusbridger told the UK’s House of Lords Communications and Digital Committee that the board will look beyond Facebook’s decisions to remove or retain content.

“We’re already a bit frustrated by just saying ‘take it down’ or ‘leave it up’,” he said. “What happens if you want to make something less viral? What happens if you want to put up an interstitial?

"What happens if, without commenting on any high-profile current cases, you didn’t want to ban someone for life but wanted to put them in a ‘sin bin’ so that if they misbehave again you can chuck them off?

“These are all things that the board may ask Facebook for in time. At some point we’re going to ask to see the algorithm, I feel sure, whatever that means. Whether we’ll understand it when we see it is a different matter.

“I think we need more technology people on the board who can give us independent advice from Facebook. Because it is going to be a very difficult thing to understand how this artificial intelligence works.”

Last week, Facebook revealed it will implement some of the content moderation changes its Oversight Board recommended in January.

Nick Clegg, the company’s vice president of global affairs, said 11 areas would be changed as a result of the board’s report.

They include greater transparency around policies on health misinformation and nudity, and improvements to automated detection capabilities.

Facebook previously told The National it continually reviews its working practices.

“The teams who review content make our platforms safer and we’re grateful for the important work that they do,” a Facebook representative said.

“Our safety and security efforts are never finished, so we’re always working to do better and to provide more support for content reviewers around the world.

“We are committed to providing support for those that review content for Facebook as we recognise that reviewing certain types of content can sometimes be difficult.”