Meta responds to independent review of Hebrew and Palestinian content moderation

The social media giant will fully implement 10 of 21 recommendations

An independent review of social media giant Meta's practices during the May 2021 war between Palestinians and Israel has identified a number of missteps in the company's handling of Arabic and Hebrew content circulating online.

During the conflict, a large number of Palestinians complained that posts advocating human rights and an end to the violence were censored by the company.

Chiefly, independent consultancy firm Business for Social Responsibility (BSR), which conducted the investigation into Meta's practices, concluded that the company had "overenforced", or overmoderated, Palestinian content and "underenforced" Hebrew content, even though material from both sides violated the network's community standards.

Miranda Sissons, Meta's director of human rights, said on Thursday: "BSR did raise important concerns around underenforcement of content, including inciting violence against Israelis and Jews on our platforms, and specific instances where they considered our policies and processes had an unintentional impact on Palestinian and Arab communities — primarily on their freedom of expression."

As a result, Meta said it launched a Hebrew machine-learning classifier that detects "hostile speech".

"We believe this will significantly improve our capacity to handle situations like this, where we see major spikes in violating content," Ms Sissons said.

Last year, Palestinians across the world raised red flags about how their content was being handled by Meta on Facebook, Instagram and WhatsApp.

An organisation called 7amleh (Arabic for "campaign") was launched specifically to record instances in which pro-Palestinian content was censored or deleted, or users posting such material were banned from Meta's platforms.

In a call with reporters, Meta also admitted that a human content reviewer blocked the Al Aqsa hashtag on Instagram in May last year because the employee had erroneously linked the mosque's name to "designated dangerous organisations" under US law, by which Meta abides.

"The error in May 2021 that temporarily restricted people's ability to see content on the Al Aqsa page" caused Meta to allow only "expert teams" to vet and approve keywords associated with designated dangerous organisations, the Meta response said.

Meta also said it was "assessing the feasibility" of improving how potentially violating Arabic content is detected and routed for review by dialect.

"This includes reviewing hiring more content reviewers with diverse dialect and language capabilities," Meta said.

BSR made 21 recommendations to Meta, of which the company is fully implementing 10, partially implementing four, assessing the feasibility of six and declining to act on one.

The one recommendation Meta declined to implement was to fund "public research into the optimal relationship between legally required counter-terrorism obligations and the policies and practices of social media platforms".

The suggested research would explore how "the concept of material support for terrorism should be interpreted in the context of social media and whether governments should establish different regulations or implementations for social media companies".

Meta said it instead relies on legal advice related to "relevant sanctions authorities to understand the company's legal obligations".

Updated: September 22, 2022, 3:12 PM