There’s strength of hate in numbers on social media

Platforms such as Facebook and Twitter provide highly effective mechanisms for fanning the flames of antagonism, spreading toxic ideas and fuelling hate crimes, writes Justin Thomas

Photo: Flowers and messages of support around the statue of Richard Cobden in St Ann's Square, Manchester, left in tribute to the victims of the May 22, 2017 attack at the Manchester Arena. AFP / Oli Scarff

Hate is the ugliest four-letter word in the English language. This complex blend of noxious perception and destructive emotion can drive us to do the cruellest things imaginable. Hate and its close cousin indifference are key factors in assault, homicide and genocide. Tragically, hate – or at least, hate crime – appears to be on the rise.

A hate crime is an offence motivated by hostility towards a person based on some aspect of their identity, whether it is their gender, disability, race, ethnicity, religion or lifestyle. Many of us occasionally engage in unhealthily categorising our social world into "us" and "them"; hate crimes are always perpetrated against "them", the despicable other.

After the Brexit vote, for example, there was a massive increase in racially aggravated public disorder offences in the UK. The National Police Chiefs' Council reported that hate crimes rose nearly 500 per cent in the first week after a referendum campaign that had focused heavily on immigration. But even before Brexit, hate crimes in the UK had been rising steadily since 2012.

A recent research study, reported in The National last week, explored the relationship between social media and the incidence of hate crimes against refugees in Germany. Researchers studied more than 3,000 hate crimes and the factors present in each case. The team from the University of Warwick in the UK found that towns and cities with higher-than-average Facebook use experienced more attacks on refugees. Social media, they discovered, could facilitate the transformation of online hate speech into real-life incidents.

Our capacity to hate is primordial but in the information age, the vintage wine of hatred has found a disturbingly effective new bottle: social media. Platforms such as Facebook and Twitter provide highly effective mechanisms for fanning the flames of hatred and spreading toxic ideas.

One element that might contribute to the rise in hate crimes is the tendency of social media to polarise opinions. Decades of research in social psychology have shown that talking to like-minded people – those who share our views on a hot topic – tends to leave us all with a more extreme stance than the one we began with: mild irritation becomes dislike and dislike morphs into hate. This phenomenon is known as group polarisation.

When we talk to like-minded people, we tend to say “yes and” rather than “yes but”. We throw petrol on each other's bonfires until the whole forest is ablaze. Social media allows us to isolate ourselves from the ugly dissenting other and surround ourselves with people who sound just like us. The echo chamber can be good, bad or ugly and if it’s hateful, it’s likely to become even more so with time and further discussion.

Another aspect of social media that might be contributing to the rise of hate is known to social psychologists as toxic disinhibition. This concept describes the conditions that bring out the worst in people. For example, perceiving that we are anonymous or, at least, hard to identify seems to help unleash our crueller side. In a classic psychology experiment, participants cloaked in anonymity tended to administer harsher punishments to strangers than their name-tag-wearing counterparts did.

Another aspect of toxic disinhibition is known as deindividuation, a loss of self-awareness that comes from perceiving ourselves as part of a larger group. Deindividuation allows us to do things we might never do when acting alone.

For example, when one motorist honks at a hesitant driver, the honks that follow from other drivers further back in the queue tend to be far louder, longer and more aggressive. Harder evidence of deindividuation's role in hate crimes comes from studies of lynching in the US. Such research shows that the larger the lynch mob, the more gruesome the atrocity and the higher the likelihood that the lynching also involved mutilation.

When people are publicly humiliated on social media, the dynamics are very similar. The hurtfulness of the comments directed at the victim tends to intensify with the size of the online mob.

A new law in Germany has led to Facebook deleting hundreds of incendiary posts since it came into force earlier this year.

Beyond deleting offensive posts, however, further consideration needs to be given to punishing those guilty of online hate speech. In addition to greater regulation, we also need to raise societal levels of psychological literacy, so that we can better understand how groups, virtual or otherwise, shape our thoughts, feelings and actions.

Dr Justin Thomas is professor of psychology at Zayed University