Removing extremists from social media sites effective in limiting influence, report says
Many far-right groups re-emerge on Gab, a popular alternative site
Removing extremist groups from social media is an effective way of limiting their influence and shrinking their fan bases, a new report has found.
A study by the Global Research Network on Terrorism and Technology found radical groups do not necessarily thrive on alternative platforms once they have been removed from the mainstream.
When Facebook removed far-right group Britain First, it had 1.8 million followers on the site and was the second most-liked Facebook page in the politics and society category in the UK, after the Royal Family.
The group reformed on Gab, a popular alternative site for far-right groups, and now has only 11,181 followers.
“This represents an enormous loss of followers and reach for the group,” the report said.
“As well as a collapse in online engagement, the ban from major social media platforms has left Britain First without a gateway to a larger pool of potential recruits, or the ability to signpost users to sites such as Gab.”
Facebook banned the group for breach of terms of service after its leaders, Paul Golding and Jayda Fransen, were convicted of hate crime.
Britain First was banned from Twitter in December 2017.
The paper, called Following the Whack-A-Mole: Britain First’s Visual Strategy from Facebook to Gab, was written by researchers at Swansea University.
It says mainstream social media companies should continue to remove extremist groups that breach their terms of service.
The report says banning groups from major platforms is effective because it reduces the ability of groups to point followers to more extreme content and limits their pools of potential recruits.
The report calls on the UK and US governments to work towards better relationships with newer, smaller and fringe platforms so content can be regulated on these sites.
In April, the British government launched its long-awaited white paper on online harm, which sought to make the directors of social media companies personally accountable for the behaviour of users.
It followed the Christchurch massacre in New Zealand in which the terrorist streamed live footage of the mosque shootings on Facebook for 17 minutes.
Under the UK government’s proposals, companies would be sanctioned if they failed to stop child abuse or to prevent users from viewing or sharing extremist content.
After the Christchurch attack, Facebook launched a major crackdown on extremist groups and in April removed other groups and people for breaching its policies.
Facebook’s Community Standards state that it “does not allow organisations or individuals that engage in ‘terrorist activity’ or ‘organised hate’”, and that any content expressing support or praise for either will be removed.
Gab, which has a smaller user base of 850,000, has become a forum for the radical right.
It was used to post hate comments by Pittsburgh gunman Robert Bowers before he shot 17 people, killing 11, in the US city last year.
The report authors have revealed that since moving to Gab, Britain First’s imagery has become more extreme.
It recommends that future research investigate how this social media strategy progresses.
The paper is the latest published by the network, which aims to understand terrorist exploitation of technology and the digital space.
It is led by the Royal United Services Institute and comprises leading global think tanks and academic institutions.
It is supported by the Global Internet Forum to Counter Terrorism, an industry-led initiative that includes Facebook, Google, Microsoft and Twitter.
Updated: July 8, 2019 09:12 AM