'Fake news' continues to confuse and confound, so what's being done to stop it?

With hoaxes and misinformation cluttering the internet, we report on what is being done, and not done, to combat the phenomenon

Special Assistant to the President Christopher 'Bong' Go shows a document he described as 'fake news' during a hearing at the Senate building in Pasay City, south of Manila, Philippines, 19 February 2018. The Senate is investigating the 16 billion Philippine peso (US$304 million) acquisition of two Philippine Navy frigates as part of the Armed Forces of the Philippines (AFP) modernization program. EPA / Mark R. Cristino

The misinformation began within minutes of the Florida high school shooting on February 14. Claims on social media that students caught up in the horror were paid actors were followed by assertions that the incident had been orchestrated by the US government. Amid all the conspiracy theories masquerading as fact, one reporter from The Miami Herald, Alex Harris, was subjected to a stream of abuse based on two doctored screengrabs of her tweets. One indicated that she was trying to source pictures of dead bodies. The other contained speculation about the race of the attacker.

Neither tweet was real, but both images were shared by thousands of people who believed them to be genuine. Other faked screengrabs to emerge that day included one of a Buzzfeed article entitled “Why We Need To Take Away White People’s Guns Now More Than Ever”. Gun owners were furious at Buzzfeed for publishing such a thing. However, the article was entirely fictitious.

Screengrabs need to be treated with greater suspicion 

The ability to take a snapshot of the screen of a computer, tablet or phone is a convenient little trick. One of its many uses is to provide proof that we’ve seen something published online, and as a result screengrabs have an air of authenticity about them. There’s a widespread supposition that creating a fake would be laborious and time-consuming, but barely any skill is needed. No Photoshop wizardry is involved. No libraries of suitable fonts are required.

By using the “developer tools” that accompany web browsers such as Chrome, Firefox or Safari, changing the appearance of a webpage on your screen is as simple as editing a Word document: just highlight, delete and type whatever you like. Given that the job can be done in seconds and the fake circulated in minutes, you’d think that screengrabs would be regarded with far greater suspicion. But psychologically speaking, they’re a very convincing ruse. People seem to believe them in an instant.
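To illustrate quite how little effort is involved, here is a minimal sketch, written in TypeScript for a browser's developer console; the ".tweet-text" selector is a hypothetical placeholder rather than any site's real markup. A couple of lines like these rewrite what an element displays on the forger's own screen, ready to be screengrabbed, while the website itself remains untouched.

    // Minimal sketch: rewrite the visible text of a page element before screengrabbing it.
    // ".tweet-text" is a hypothetical selector, not the real markup of any particular site.
    const target = document.querySelector<HTMLElement>(".tweet-text");
    if (target) {
      // The change exists only in this browser tab; nothing is sent to the website.
      target.textContent = "Any words you like, attributed to anyone you like";
    }

No code is strictly needed, either: the same developer tools let anyone double-click an element in the inspector and simply type over it, which is why a screengrab alone proves very little.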

The speed with which faked screengrabs gather momentum is partly down to our reluctance to check whether they’re real, a laziness prompted (perhaps understandably) by 21st-century information overload. It’s accelerated by the fact that misinformation of all kinds can back up our existing beliefs. If a faked screengrab supports someone’s deeply held conviction that gun laws are under threat or that the media is evil, they’ll be happy to use it in support of their cause. The consequence is a substantial downgrading of the status of truth.

That challenge to truth has been big news in the past few months, as Russian interference in the 2016 US election is investigated by Robert Mueller, the former director of the FBI. But creating convincing misinformation is so easy that it has become endemic, whether it’s used to challenge the integrity of news organisations or merely to make whimsical jokes. The people behind the fakes might be driven by a sense of power, relishing their ability to alter perception of an event from the comfort of their armchair, or maybe they’re just amused by the mischief of it all. Either way, they’re spurred on by the fact that, broadly speaking, there’s no penalty for misleading people. That may now be changing in some countries.

How culpable are tech giants in the spread of misinformation?  

Earlier this month it was reported that more than 200 people had been arrested in China for “illegal internet speculation” (ranging from denigrating products and services to making up news), while in the past month French President Emmanuel Macron has stated his intention to create laws to curb the spread of misinformation during elections. There have been many calls for action from Google, Facebook and Twitter, the three organisations whose systems tend to facilitate the circulation of fake news, but historically all three have shown a public reluctance to examine the extent of their role in disseminating misinformation.

Facebook CEO Mark Zuckerberg famously described the notion that fake news stories had influenced the 2016 election as a “pretty crazy idea”, and all three companies see themselves as neutral services, shaped entirely by their users, for which they hold minimal responsibility. Facebook’s “Trending Topics” section used to be moderated by a team of 25 journalists.

These days it’s data-driven. An excerpt from a Twitter blog post from June last year outlines the company’s “hands-off” attitude: “Twitter’s open and real-time nature is a powerful antidote to the spreading of all types of false information,” it reads. “We, as a company, should not be the arbiter of truth. Journalists, experts and engaged citizens Tweet side-by-side correcting and challenging public discourse in seconds.” In other words, it’s up to us to decide what the truth is.

The story of a doctored screengrab of a journalist’s tweets is just a small example of a more alarming trend. Last month, a bogus news story suggesting that a flu vaccine was the cause of a US flu outbreak was shared well over half a million times on Facebook.

Newspapers such as the San Francisco Chronicle and Brazil's biggest newspaper, the Folha de S.Paulo, have publicly called out Facebook for passively encouraging the spread of misinformation. Even advertisers, whose money is so critical to online publishing, have become spooked. Last week, Keith Weed, the head of marketing at Unilever, expressed his concern at the "fake news and toxic online content" that its adverts sit alongside. "It's in the digital media industry's interest to listen and act on this," he said, "before viewers stop viewing".

So, what's being done?

Google, Facebook and Twitter recently promised the US Senate that they would take proactive measures against propaganda. Some say there is an algorithmic solution to weeding out falsehood, but ultimately, for misinformation to be properly challenged, the public has to develop a keener eye for it.

“Bad News”, a new online game created by researchers at Cambridge University, is designed to raise awareness of the techniques used to spread misinformation.

"We want the public to learn what these people are doing by walking in their shoes," said lab director Sander van der Linden to The Guardian last week. The truth, as they say, is out there. Whether we have the hunger to seek it out is another matter.
