Can algorithms save us from the darkness of the Internet?

Moderators around the world ‘clean up’ the worst that the internet has to offer, often at a high cost to their mental health.

When the perpetrator of the Christchurch attacks made the decision to broadcast his atrocity live on Facebook, he knew it would provoke the worst kind of human curiosity. He also knew that such curiosity would be intensified by people sharing the footage, and that social media platforms geared towards making material "go viral" would play their part.

YouTube was reportedly seeing one upload of the 17-minute video every second. Facebook later announced that it had removed 1.5 million copies in the 24 hours following the massacre. These platforms, built by engineers and mathematicians, were once again forced to make snap judgments relating to social responsibility. It's part of an ongoing struggle to make such decisions transparent, timely and, above all, correct.

Free speech and social media

"These are firms founded in the United States, in Silicon Valley and their notion of free speech comes out of a bastion of cyber libertarianism," says Sarah P. Roberts, assistant professor of information studies at UCLA. In other words, they believe that information wants to be free, and they are merely a conduit.

The stated aim of Facebook's chief executive, Mark Zuckerberg, is to give people the power "to share anything they want". But when personal expression manifests itself in shocking, violent and graphic ways, demands are inevitably placed on Facebook, Reddit and others to judge what is and is not acceptable.

"They don't want to be there," says Mark MacCarthy, senior fellow at Georgetown Law and Business School in Washington. "They don't want to be making these tricky judgments, and if I were in their shoes, I wouldn't either. But they can't go back to a posture where they claim to just be platform providers with no say over this."

As a result, deeply complex issues have had to be distilled into a series of yes-or-no judgments. A few weeks ago, The New York Times reported on the existence of complex moderation rulebooks, hundreds of pages long, that attempt to define what Facebook users are allowed to post. The ad hoc way those rules have been assembled has resulted in decisions that some claim are misguided, ignorant of local issues and lacking cultural nuance. While all but the most extreme libertarians will have approved of the decision by the major social media platforms to block videos of the Christchurch killings, such judgments are rarely that clear-cut.

Those who moderate the content

In a recent documentary about social media moderation called The Cleaners, a former Google lawyer, Nicole Wong, described the formation of the policy surrounding the footage of the execution of former Iraqi president Saddam Hussein: the video of his hanging was kept online "for historical purposes", while footage of the dead body was removed. "I've no idea if we made the right decision," Wong said. "History will tell us."

Once formulated, these policies have to be implemented by contracted third-party companies, who in turn hire thousands of moderators to manually review as many as 25,000 disturbing images and videos every day. The Cleaners addresses how the mental health of these workers has been blighted by their exposure to distressing material, and how their concerns go unheeded by target-driven bosses who are unwilling to provide any additional resources. ("It's your job to look at child pornography, you signed a contract," was one boss's response.)

From Arizona to Manila, moderators have experienced workplace breakdowns and post-traumatic symptoms as a result of looking at horrific material, just so the rest of us don't have to. It's a terrible job, but someone's got to do it. As one moderator says in the documentary: "Algorithms can't do what we do."

Can algorithms save us?

Algorithms do, however, have considerable powers. Sophisticated fingerprinting technology, of the kind that can identify a song playing on the radio or help remove copyrighted material, was used extensively in the aftermath of the Christchurch killings, and it quickly blocked hundreds of thousands of copies of the video at source. But there is a marked difference between matching known material in this way and using artificial intelligence to assess newly produced images and videos for offensive content.
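
To see what that fingerprinting involves in the simplest terms, consider the toy sketch below: known footage is reduced to compact digital signatures, and every new upload is compared against that list. This is a hypothetical illustration only; the average-hash scheme, the function names and the matching threshold are assumptions made for the sake of example, not any platform's actual system.

```python
# Minimal sketch of fingerprint-style matching (illustrative only).
# Assumes each video frame has already been decoded and downscaled to an
# 8x8 grayscale grid of pixel values (0-255); real systems use far more
# robust perceptual hashes and video-level features.
from typing import List, Set

GRID = 8  # 8x8 grid of pixels -> 64-bit hash

def average_hash(pixels: List[int]) -> int:
    """Build a 64-bit hash: each bit records whether a pixel is above the frame's mean brightness."""
    assert len(pixels) == GRID * GRID
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value >= mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two hashes differ."""
    return bin(a ^ b).count("1")

def is_blocked(frame_pixels: List[int], blocklist: Set[int], threshold: int = 5) -> bool:
    """Flag a frame whose hash sits within `threshold` bits of any known fingerprint."""
    h = average_hash(frame_pixels)
    return any(hamming_distance(h, known) <= threshold for known in blocklist)

# Hypothetical usage: fingerprints of frames from already-identified footage
# form the blocklist; frames from new uploads are checked against it.
known_fingerprints = {average_hash([200] * 32 + [20] * 32)}
recompressed_copy = [198] * 32 + [25] * 32   # near-duplicate of known footage
unrelated_frame = list(range(64))            # ordinary, unrelated content

print(is_blocked(recompressed_copy, known_fingerprints))  # True
print(is_blocked(unrelated_frame, known_fingerprints))    # False
```

Crucially, a system like this can only recognise copies of material it has already seen; it says nothing about footage that is genuinely new.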

In recent months Facebook has introduced tools to detect so-called revenge porn, and Google has released software to help curtail the spread of child sexual abuse material, but AI is currently incapable of making sophisticated judgments about, say, whether a video containing guns is news footage or terrorist propaganda. Zuckerberg told the US Congress that AI holds the key to successful moderation, but some say that this merely passes the buck to machines that will never possess this capability.

"There is a value to outsourcing some of the worst work to machines to avoid human eyeballs being exposed," says Roberts, "but machines don't make judgment calls as we think of them. They don't consider factors beyond what they're programmed to do. As somebody recently said to me: whatever the AI is doing, it's not watching videos!

"That was such a good encapsulation of the limits of these tools."

Better safe than sorry

After Christchurch, YouTube and Facebook were forced to temporarily escalate the role of AI in their systems. "They were losing control of the situation," says MacCarthy, "and so they got rid of material automatically." This "better safe than sorry" approach, however, removed many videos completely unrelated to Christchurch.

"The other problem," ­MacCarthy says, "is that this may not be something they reserve for just emergency circumstances." In this scenario, AI becomes a blunt tool of censorship – one that is completely antithetical to Silicon Valley's libertarian values.

For companies to decide what we should and should not see is evidently anti-democratic, not least because they can be (and are) subject to government influence.

Allowing machines to decide what we should and should not see grants them a level of power and control that we find unacceptable. And yet the platforms that have been created require these things to happen if we are to avoid seeing footage of some of the worst acts a human being can perpetrate.

Should social media companies save us from our own worst impulses, or allow us to follow them, and suffer all the unknown consequences? It's a problem they don't want. But it's theirs to solve.