Facebook says governments must help it curb online hate speech

The tech company's counterterrorism policy manager says it needs help with slang, nuance, language and the sheer range of extremist groups

Erin Saltman, Facebook's policy manager for counterterrorism in Europe, the Middle East and Africa, speaks on day three of the World Government Summit in Dubai on February 12, 2019. Chris Whiteoak / The National

One of Facebook’s leading voices in the Middle East has said it needs governments to get behind a movement to stop the spread of online hate speech.

Dr Erin Marie Saltman, the company's policy manager for counterterrorism in Europe, the Middle East and Africa, made her plea during a presentation at the World Government Summit in Dubai earlier today.

She said it was no longer enough simply to crack down on posts connected to hate speech, as censorship tended to have the opposite effect to the one intended, causing the message to proliferate.

Dr Saltman said that agencies from all over the world are having to move quickly to combat the constantly evolving problems created by online hate speech.

“If you had told me five years ago that I would be working for a social media company I would have laughed,” she said.

“These jobs simply didn’t exist back then. We are having to build a whole new realm of positions to tackle extremism.”

Facebook chief executive Mark Zuckerberg. Tech firms like his have come under fire for failing to crack down on hate speech. AFP

Facebook came in for major criticism last year after social media users in Myanmar used the platform to organise attacks against the Rohingya, the Muslim minority in a predominantly Buddhist country.

This led to Facebook creating a dedicated team to handle all content related to Myanmar.

“The ethnic violence in Myanmar is horrific and we have been too slow to prevent misinformation and hate on Facebook,” said Facebook product manager Sara Su, speaking at the time.

“We can’t do it alone — we need help from civil society, other technology companies, journalists, schools, government, and most important of all members of our community.”

In 2018 alone, Facebook removed 14.3 million pieces of content identified as terrorist material.

The battle against online extremism became so complicated that Facebook, YouTube, Twitter and Microsoft came together to form the Global Internet Forum to Counter Terrorism in 2017.

The body was set up to “substantially disrupt terrorists' ability to promote terrorism, disseminate violent extremist propaganda, and exploit or glorify real-world acts of violence using our platforms”.

Facebook employs more than 200 people globally who work specifically on countering terrorism. These include policy and operations teams, as well as experts in law enforcement, engineering and human rights.

Dr Saltman added that the organisation also employed 30,000 people in its security team to review and respond to harmful content, including hate speech and incitement.

“We are looking at what is causing people to join violent extremist groups,” said Dr Saltman.

“That is the same whether it’s a neo-Nazi white supremacist group, a Buddhist extremist group that might be targeting minorities in South East Asia, or Islamist extremism in another part of the world.”

Erin Saltman said the company needs help from governments and users to spot extremist language. Chris Whiteoak / The National

She said that the 30,000 people in the security team were carefully selected to be able to deal with content that was specific to certain regions.


“They have to be based in different time zones. Not only do they have to speak a language, they have to speak it very fluently,” she said.

“For example, I think my French is good, but I shouldn’t be the one reviewing French content in detail.”

She said grasping the nuances of certain cultures posed major challenges.

“Slang in hate speech and cultural context is very nuanced, we need a very good understanding of what’s being said,” she said.

“That’s especially the case when it comes to the differences between humour, sarcasm and offensive speech.”

Another measure Facebook has taken is “hashing” certain content, which involves creating a unique digital fingerprint for each piece of flagged material.

“Any time that content is shared or reshared, we can immediately identify a match, and our technology can make the decision to take it down in some instances,” said Dr Saltman.
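
Facebook has not published the internals of this system, but the matching step Dr Saltman describes can be illustrated with a minimal sketch. The Python example below uses an ordinary cryptographic hash as the fingerprint and a hypothetical known_hashes store; production systems rely on perceptual hashing (such as PhotoDNA or PDQ) so that re-encoded or slightly altered copies still match.

```python
import hashlib

# Hypothetical store of fingerprints for content already reviewed and
# removed; in practice this would be a shared industry database such as
# the one maintained through the Global Internet Forum to Counter Terrorism.
known_hashes: set[str] = set()

def fingerprint(content: bytes) -> str:
    # A digital fingerprint of the content. SHA-256 only matches
    # byte-identical copies; real systems use perceptual hashes so
    # altered re-uploads still match.
    return hashlib.sha256(content).hexdigest()

def register_removed(content: bytes) -> None:
    # Record the fingerprint of content that human reviewers took down.
    known_hashes.add(fingerprint(content))

def check_upload(content: bytes) -> bool:
    # True if a new share or reshare matches known removed content,
    # so an automated takedown or human review can be triggered.
    return fingerprint(content) in known_hashes

# Example: once an item is removed, an exact reshare is caught instantly.
removed_item = b"bytes of a removed propaganda image"
register_removed(removed_item)
print(check_upload(removed_item))              # True
print(check_upload(b"unrelated holiday photo"))  # False
```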

She said that when content is shared with added context, for example within an image, it becomes the responsibility of a human team with language skills to decide what to do with the content.

“You do see other sectors sharing content for different reasons – you see media sectors sharing it as well as academics,” she said.

“We also see activists that might share an image just to condemn it.”