Dislike: why the thumbs down option on Facebook is a fundamentally flawed idea

In a poisonous social media atmosphere with 'well-policed echo chambers', is it a good idea to give people the opportunity to downvote the opinions of people they disagree with?

Facebook's thumbs-down gesture on an iPhone. (Photo by Ted Soqui/Corbis via Getty Images)

Gesturing with our thumbs to show approval or disapproval is a human habit stretching back centuries. The understanding that thumbs-up means good, and thumbs-down bad, is also reflected across the internet, where we're encouraged to upvote things we like and downvote things we don't. Facebook, however, has always shied away from giving its two billion users the opportunity to express displeasure with a single click. "Likes" are Facebook's currency, and "dislikes" just don't exist. But in the last six months, Facebook has begun trialling downvoting in parts of the US, Australia and New Zealand, allowing people to demonstrate instant disapproval of other people's contributions. "Support comments that are thoughtful," suggests the window that pops up during the trial, "and demote ones that are uncivil or irrelevant."

Will it bury hateful content? 

Facebook users have long demanded a feature that lets them give the thumbs-down to belligerent, rude or bullying comments. In 2015, Facebook chief executive Mark Zuckerberg noted that "people have asked about the dislike button for many years". He added that "today is the day I can say we're working on it and shipping it". However, the button never materialised. Internal sources confirmed that it was rejected for fear of sowing "too much negativity".

But negativity continues to run rampant across the platform. Back in April, representatives of civil groups in Myanmar wrote an open letter to Zuckerberg, expressing deep concern about the way Facebook was being used in the country to incite violence. In a statement, the company admitted its failings, and promised to “improve our technology and tools to detect and prevent abusive, hateful or false content”. It’s part of a broader plan: in recent months, the company has sought to reverse declining user numbers in certain demographics by trying to make Facebook a more pleasant experience – reducing the number of news stories, cutting back on advertisements and encouraging people to spend time interacting with each other.

But nothing derails those interactions quite like rudeness. Facebook currently employs just one staff member per 100,000 users to deal with safety and security, so self-policing has to play a big part in keeping things civil. The system now being trialled relies on users to moderate discussions; up and down arrows sit next to each comment, along with a running vote count showing how valuable the community has deemed that particular contribution. Algorithms then use the voting data to re-order the debate. Facebook believes the algorithms will “push thoughtful and engaging comments to the top of the discussion, and move down the ones that are simply attacks or filled with profanity”.
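Facebook has not published how that re-ordering works, but the basic idea can be sketched as a simple sort by net vote score. The code below is a hypothetical illustration under that assumption, not the company's actual algorithm:

```python
# Hypothetical sketch: re-order comments by net score (upvotes minus
# downvotes), best first. Facebook's real ranking logic is unpublished.

from dataclasses import dataclass


@dataclass
class Comment:
    text: str
    upvotes: int = 0
    downvotes: int = 0

    @property
    def score(self) -> int:
        # Net community verdict on this comment.
        return self.upvotes - self.downvotes


def rank_comments(comments: list[Comment]) -> list[Comment]:
    """Return comments ordered best-first by net vote score."""
    return sorted(comments, key=lambda c: c.score, reverse=True)


comments = [
    Comment("Thoughtful reply", upvotes=12, downvotes=1),
    Comment("Profanity-laden attack", upvotes=2, downvotes=30),
    Comment("Mildly relevant aside", upvotes=5, downvotes=4),
]
ranked = rank_comments(comments)
```

In this toy model the "attack" comment sinks to the bottom of the thread purely because of its vote deficit; as the rest of the article explains, the open question is whether real users vote on civility at all, or simply on agreement.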

The downside of downvoting

Not everyone, however, believes that two negatives necessarily make a positive. "I think that upvoting and downvoting is a really bad way to do this," says Joseph Reagle, author of a book entitled Reading the Comments: Likers, Haters, and Manipulators at the Bottom of the Web. "It prompts a kind of gamification. I don't imagine that users are going to be sensitive to all of the semantics of these things and know how to use them appropriately."

Reagle is alluding to the thorny question of whether upvotes and downvotes have the effect that’s intended. The psychological theory of “operant conditioning” – that our behaviour is linked to the punishments or rewards we have received in the past – dates from the 1930s, and underlies the design of many social-media platforms. But in 2014, data scientist Justin Cheng – who now works at Facebook – completed a piece of research based on analysing 42 million online comments, and concluded that downvoting caused a spiral of negative behaviour. Downvoted authors, he concluded, go on to produce posts “of lower quality… We find that negative feedback leads to significant behavioural changes that are detrimental to the community.”


The binary nature of thumbs up, thumbs down

It's not surprising, perhaps, that reducing human emotions to a binary choice has unintended consequences. Reddit, the online community that's perhaps most associated with up/down voting systems, has frequently become a battlefield of voting contests that have little to do with the quality of people's contributions and much more to do with differences of opinion. Indeed, the practice of mass downvoting even has a name on the site: "brigading". "You see a similar thing on Amazon," Reagle says, "where people don't like the e-book version of a classic piece of literature, so they give it one star." Online commenting system Disqus, which also features up and down votes, recently polled users to find out why they downvoted comments. The most common reason, by some distance, was because they disagreed with the opinion being expressed – nothing to do with civility or abuse whatsoever.

"This surprised us," ­Disqus's Tony Hue admitted in a blog post – but it shows that downvoting systems designed to tackle bullying could end up facilitating it instead. A journalist for Slate, Rachel Withers, found herself included in one of the Facebook trials, and concluded that it's "the perfect feature for trolls and bots, lefties and conservatives… to silence opinions through effective organising and well-policed echo chambers". However, as Reagle points out, new systems may have downsides, but they still work better than the old ways. "I can only presume it's serving Facebook somehow," he says. "The most paranoid theory would be that this is a honeypot! They create this mechanism, knowing that people are going to abuse it, and then they're better able to spot the fraudulent accounts as a result."

It’s a problem unique to the age we’re living in. Never before have we had to consider how to behave and interact with thousands of strangers with whom we have fundamentally differing views. Nor do we understand the toll that systems of this kind may be taking on our mental health, with inflammatory exchanges and exaggerated reactions affecting our sense of self-worth and diminishing our capacity for empathy. It may have come to the point where Facebook has no choice but to use downvoting as a way to help us get along, but it’s also possible that the company’s mission – to connect all the citizens of the world – may be a fundamentally flawed idea.