Social media companies grapple with how to moderate content in wake of Christchurch attacks

The speed and reach with which extremist content can be shared is causing Silicon Valley to rethink its approach

Members of the Muslim community attend the National Remembrance Service at North Hagley Park in Christchurch on March 29, 2019, held in memory of the 50 lives lost in the March 15 mosque shootings. AFP

Twitter, Facebook and YouTube have faced criticism in the wake of the Christchurch terror attacks for hosting, spreading and in some cases inadvertently promoting extremist material. In response, Silicon Valley says it is becoming more proactive in clamping down.

Speaking to The National at the company's San Francisco headquarters, Twitter representatives said they have long responded to feedback, but acknowledged the need for more transparency.

“This is an ongoing effort across the company to make sure people have a better understanding of the changes we make,” said a representative who declined to be identified. “People on Twitter have always given us feedback throughout our development process, and we've been exploring different ways to bring people in.”

Despite the size of individual social media ecosystems, several of the largest companies are looking to better cooperate to prevent the spread of content across platforms.

In 2016, Facebook, Twitter, Microsoft and Google announced they would be working together to create a database of terrorist content, known as the Global Internet Forum to Counter Terrorism (GIFCT). The partnership relies on sharing information between platforms.

In the wake of the Christchurch attack, the companies have been sharing a large amount of information to try to prevent the spread of a video the shooter reportedly made of his deadly attack, which left 50 people dead and dozens wounded on March 15.

Initially streamed live on Facebook, the 17-minute video was first reported by a user 29 minutes after it began. Twitter refused to say how long it took for the company to become aware of the footage.

Both platforms rely on a mix of artificial intelligence (AI) and human moderators to catch offensive videos, but much of the onus falls on users to report harmful content.

But what content should be removed is still something of a grey area. Just because a video is violent or gory, or depicts death or killing, it doesn't mean it will be automatically taken down.

If a video draws attention to or condemns atrocities it might be allowed to exist, but it is when it promotes or glorifies violence that it is removed. For sensitive content, both Twitter and Facebook tend to use age filters and disable autoplay functions.

A man browses the Facebook profile of Syed Areeb Ahmed, a Pakistani victim of mass shooting in Christchurch. EPA

The question of violence in videos posted online is not new. In 2017, YouTube removed footage shot by activists and journalists in Syria documenting the violence that researchers said could be crucial in future war crimes prosecutions. The company apologised after a backlash and reinstated the footage, saying it made “the wrong call”.

After the Christchurch shooting, however, Facebook immediately classified the video as terrorism, making any sharing or praise of the footage an infringement of its user guidelines. The policy even extended to clips from news reports that showed extracts of the footage, which were also removed, a company spokesperson, who also declined to be named, told The National.

Despite Facebook’s tough stance on the footage after the fact, there remains the question of how it was not flagged sooner. Facebook has been open about the fact that its AI moderation failed on March 15.

“This particular video did not trigger our automatic detection systems. To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare,” the spokesperson said.

Automatic detection also ran the risk of flagging similar content, such as video game footage, which underlined why human moderation would continue to “be part of the equation”.

The Facebook spokesperson said live videos brought their own “unique challenges”.

Even though imperfect machines cannot always make the right calls, Facebook says it is still looking at ways to speed up its review and flagging of videos posted to the site.

“Some have asked whether we should add a time delay to Facebook Live, similar to the broadcast delay sometimes used by TV stations. There are millions of Live broadcasts daily, which means a delay would not help address the problem due to the sheer number of videos,” the Facebook spokesperson said.

“More importantly, given the importance of user reports, adding a delay would only further slow down videos getting reported, reviewed and first responders being alerted to provide help on the ground.”

The company also disagreed with suggestions that Facebook Live videos should be restricted to specific types of high-profile users, such as celebrities or media professionals, rather than anyone with an account.

The spokesperson said there was a “very small number of cases where people are misusing our services”, and the “vast majority of uses of Facebook Live are meaningful and positive”.

“We recognise that the immediacy of Live brings unique challenges – especially in enforcing our policies. We know it’s important to have a responsible approach, and will continue to work around the clock on this.”

The company also said it believed the process for reporting content was “easy”, although the spokesperson said the company “welcome[s] feedback from our community”.

Currently, to report a post a user has to find the drop-down menu beside the item, select the “give feedback” option and select which aspect of the guidelines the post violates, before submitting a report for review.

As well as the speed of the response, one of the biggest questions has been about reposted instances of the footage from the attacks. The issue of stopping the spread of banned content came to the fore after 2014, when ISIS began disseminating propaganda videos online, some of which showed the murders of civilians, of Western hostages including journalists, and of a Jordanian pilot.

In the wake of the Christchurch attack, Facebook says it removed 1.5 million versions of the video within 24 hours. YouTube has not said exactly how many copies it deleted, but the company's chief product officer Neal Mohan told the Washington Post that it had removed an "unprecedented volume" and had struggled to keep on top of the deluge.

Versions were appearing as quickly as one a second, and YouTube had to disable some search functions to prevent people from finding the footage while it deleted the copies.

Stuart Macdonald, criminal law professor and director of a multidisciplinary cyberterrorism project at Swansea University, told The National that "the events in Christchurch show the size of the task facing social media companies".

Under the GIFCT system, member companies create a digital fingerprint known as a ‘hash’ every time particularly egregious material is removed. The hash is added to the shared database, allowing all member companies to automatically block the same material on their sites. Mr Macdonald said that in the two years since the joint operation began, some 100,000 hashes had been listed in the database.

“In the days following the Christchurch attack, more than 800 distinct hashes were added to the database. This gives some idea of how many distinct videos of the attack were circulating,” he says.

“For example, it has been reported that users were attempting to circumvent automated blocking by uploading the video in different languages and by posting recordings of the video playing on their computer.”
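The hash-sharing model described above can be sketched in a few lines. This is a minimal illustration, not GIFCT's actual implementation: it uses a cryptographic hash (SHA-256) for simplicity, whereas production systems rely on perceptual hashes precisely because, as Mr Macdonald notes, re-recorded or re-encoded copies change every byte and so defeat exact matching.

```python
import hashlib

# Shared database of hashes of removed material (a set, for illustration).
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    # Illustrative only: a cryptographic hash matches exact copies.
    # Real systems use perceptual hashes that tolerate re-encoding.
    return hashlib.sha256(content).hexdigest()

def remove_and_share(content: bytes) -> None:
    """One member company removes material and shares its hash."""
    shared_hash_db.add(fingerprint(content))

def should_block(upload: bytes) -> bool:
    """Any member can check new uploads against the shared database."""
    return fingerprint(upload) in shared_hash_db

remove_and_share(b"original-video-bytes")
print(should_block(b"original-video-bytes"))    # True: exact copy is blocked
print(should_block(b"re-recorded-video-bytes")) # False: a new recording slips through
```

The second check failing is exactly the circumvention tactic the quote above describes: each re-recording produces a distinct file, and therefore a distinct hash that must be added to the database separately.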

While the GIFCT system has been credited with a large reduction in material from the likes of ISIS appearing on mainstream social media, there is still concern that just 13 companies are currently signed up, nine of which have joined since its inception.

Various countries are considering, or already have, legislation to force action from social media companies. In Germany, for example, references to Nazis, their icons or supporters will not appear on Twitter, given the country’s strict laws against showing support for National Socialism.


Facebook CEO Mark Zuckerberg arrives to testify before a joint hearing of the US Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee on Capitol Hill, April 10, 2018. AFP

But some platforms have remained “hostile” to regulation.

“For example, shortly after Britain First was banned from Facebook in early 2018, it moved to Gab. This platform states that it ‘champions free speech, individual liberty and the free flow of information online. All are welcome’,” Mr Macdonald said.

Britain First still maintains a presence on this platform and Gab has become a popular site among the far right.

“Similarly, when Twitter began its aggressive suspension activity against Daesh [ISIS] supporters, a migration to Telegram occurred,” he added. Telegram is an encrypted messaging application that allows broadcasting to people who actively follow a specific channel.

The list of sites that have hosted the Christchurch terrorist’s video, or his manifesto, is long and varied, Mr Macdonald says. It includes small file sharing sites, as well as the larger social media companies.

“Whilst most smaller platforms took positive action in response to the attack, there are some that have a history of hosting questionable content and still have active links to the manifesto and the full video,” Mr Macdonald says.

Notably, Mr Macdonald added, the video only went viral after it was aired by the mainstream media – in particular a few British tabloids, as well as Sky News and Ten Daily in Australia – which “raises questions about the role of the media more widely”.

In terms of concrete changes to operations and policy, Facebook has announced both planned and actual alterations to a number of its services as a direct result of the attacks.

The company said it was improving matching technology to stop the spread of harmful videos and had used the audio-based technology it had been building to identify variants of the Christchurch video.

Facebook and Instagram also banned praise or support for white nationalism and white separatism, as part of an intensified crackdown on hate speech.

“This includes more than 200 white supremacist organizations globally whose content we are removing through proactive detection technology,” the company spokesman said.

The company is also seeking to bolster the GIFCT sharing system, experimenting with sharing URLs of removed content between companies, working to address the range of terrorist actors operating online, and attempting to “improve our ability to collaborate in a crisis”.

While the Christchurch shooting is the most recent high-profile case in which the role of social media companies has been called into question, it is unlikely to be the last. Many in Silicon Valley are actively trying to tackle the most egregious violations, but the scale of the problem, and deciding exactly where the lines lie, will remain issues for the foreseeable future.