Online bullying: what can Instagram teach us about curbing abuse on the internet?

Instagram's new tool recognises if the tone of the comment you’re about to post is abusive, and asks: 'Are you sure you want to post this?'

(File photo, May 2, 2019: Instagram logos displayed on the screen of a computer and a smartphone in Nantes, western France. Instagram on July 9, 2019 announced new features aimed at curbing online bullying on its platform, including a warning to people as they prepare to post abusive remarks. AFP / Loic Venance)

As a young boy I would frequently shout out nonsense for no reason, and my father would sternly tell me to put my brain in gear before opening my mouth. Today, in the ongoing battle to encourage people to be more civil to each other online, photo-sharing service Instagram is deploying a 21st-century take on my father’s advice.

If its artificial intelligence system senses that the tone of the comment you're about to post is abusive, a suggestion will pop up: "Are you sure you want to post this?" It's the lightest of moderation techniques; it doesn't ban people, it doesn't censor them, it merely asks them to take a moment to think about what they're doing. And it appears to be having a positive effect.
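Mechanically, the flow described above amounts to a check that runs before a comment is committed. Here is a minimal sketch in Python, with a crude keyword-based score standing in for Instagram's actual classifier; the word list, function names and threshold are all hypothetical, for illustration only:

```python
# Hypothetical stand-in for a trained toxicity classifier: score a comment
# by the fraction of its words that appear on an abusive-word list.
ABUSIVE_WORDS = {"idiot", "loser", "stupid"}  # illustrative placeholder list


def toxicity_score(comment: str) -> float:
    """Return the fraction of words that appear on the abusive-word list."""
    words = comment.lower().split()
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w.strip(".,!?") in ABUSIVE_WORDS)
    return flagged / len(words)


def should_prompt(comment: str, threshold: float = 0.2) -> bool:
    """True if the user should see 'Are you sure you want to post this?'"""
    return toxicity_score(comment) >= threshold
```

The point of the design is that nothing is blocked: a high score only triggers the question, and the user remains free to post anyway.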

Can artificial intelligence teach us about the consequences of our actions?

"From early tests of this feature, we have found that it encourages some people to undo their comment and share something less hurtful once they have had a chance to reflect," Instagram boss Adam Mosseri said in a blog post.

Reflection and careful contemplation are rare in the fast-paced world of social media, and online services have battled for many years to keep our tempers under control. Anonymity was once deemed to be the biggest problem, thanks to the so-called online disinhibition effect where using an alias gives you carte blanche to be as rude as you like.

Platforms introduced systems that linked you to your real name or to a phone number, but bad behaviour continued regardless. As those platforms became too large to moderate manually, users were asked to help out by rating comments, promoting the good ones and burying the bad. Some services asked for money for the privilege of leaving comments, while others simply gave up on user interaction altogether and closed their comment boards down. But Instagram is now showing that AI might offer a way forward by asking us to think about the consequences of our actions.

"There has been so much debate about how you program AI to understand the nuances of language," says Renee Barnes, an Australian academic and author of the book Uncovering Commenting Culture. "It's very difficult to do, but the suggestion that Instagram's system now makes – 'are you sure?' – both recognises the limits of AI and puts the onus back on people themselves, and that's a good thing. We need to take ownership to create the sort of spaces we want."

There have been many attempts to stop online bullies, but none have stuck

As toxic comments have run rampant across the web, there have been repeated calls for offenders to be banned, but exclusion is a blunt instrument that’s easily circumvented. Asking people to take a moment to reflect on their actions might be seen as a weak and excessively liberal response, but it does have a measurable effect.

"Over the years, there have been lots of little experiments with trying to slow the pace of conversation," says Joseph Reagle, associate professor of communication studies at Northeastern University in Boston in the US. "Civil was an interesting platform that did this, by asking a user to rate another comment for its civility before their own was posted."

Other experiments have involved getting people to perform a task – to type in some text or click through to another space in order to comment – and that small hump can give people a moment to reflect. Similar techniques are at work in Gmail's "Undo Send" feature, which effectively holds your email for up to 30 seconds after you've sent it in case you suddenly wish you hadn't and want to recall it. That half a minute allows you to reflect, too.
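That delayed-delivery idea is simple to express in code. The following is a hypothetical sketch, assuming a 30-second default grace period like Gmail's; the class and method names are invented for illustration:

```python
import threading


class UndoSendOutbox:
    """Holds each outgoing message for a grace period so the sender can recall it."""

    def __init__(self, delay_seconds=30.0):
        self.delay = delay_seconds
        self._pending = {}  # message_id -> Timer still inside its grace period

    def send(self, message_id, deliver):
        """Schedule deliver() after the grace period unless undo() is called first."""
        timer = threading.Timer(self.delay, self._fire, args=(message_id, deliver))
        self._pending[message_id] = timer
        timer.start()

    def _fire(self, message_id, deliver):
        # Grace period expired without a recall: actually deliver the message.
        self._pending.pop(message_id, None)
        deliver()

    def undo(self, message_id):
        """Recall a message still inside its grace period. Returns True if recalled."""
        timer = self._pending.pop(message_id, None)
        if timer is not None:
            timer.cancel()
            return True
        return False
```

The deliberate friction lives entirely in that timer: the message is already written and "sent" from the user's point of view, but the system quietly waits before making it irreversible.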

"It's an interesting approach, because it's based on the idea that not everyone who is uncivil is a troll," says Barnes, of Instagram's initiative. "Maybe they've got caught up in a discussion, or they're missing some of the social cues we have offline that let us know we're treading a bit close to the edge."

One in five comments online are now deemed 'uncivil'

The question is whether such gentle, persuasive measures can be effective in an online environment that's seen as becoming more toxic. As many as one in five online comments are now deemed to be uncivil, but why are we veering further towards impoliteness? One factor, Reagle says, is the ever-growing size of these platforms, which continue to prove that "intimacy doesn't scale".

"I don't see anything on the horizon that makes me think public discourse is going to be more tempered any time soon, beyond a move to small-scale communities," he says. Another factor is that within these huge communities incivility has become normalised, with negative comments prompting ever more negative comments. Reagle notes that offenders also double down when challenged, by downplaying the comments they've made or accusing the victim of not being tough enough. Prominent online figures, says Barnes, are partly responsible.

“Social values are created by dominant personalities within groups,” she says. “Aggression can be reinforced within a group if that behaviour continues to happen without being challenged. Once it starts it’s almost self-fulfilling.”

Where's the line between engagement and online abuse?

There appears to be an inherent contradiction between the desire of social media platforms to boost user numbers and activity, and our desire for them to be free of unpleasantness. But Barnes believes that it's very much in the platforms' interests to find solutions to these problems. "A lot of them work very hard to make these spaces more harmonious because they're making money by hosting them, and if they're not inclusive then people are less likely to go there," she says.  

Barnes is optimistic that newer, innovative measures might help. "People shrug their shoulders and say well, it's the internet, it's the Wild West, what do you expect," she says. "But we're not going to get better spaces unless we expect better spaces."

The “are you sure?” prompt isn’t the only measure Instagram has recently introduced. Another allows users to “restrict” people whose comments they find unpleasant. If you restrict someone, their comments become visible only to them: not to you, nor to anyone else. Their comments will have no effect on the wider world at all, rendering them utterly powerless. It reminds me of another of my dad’s pieces of advice when I was confronted with bullies as a child: “If you ignore them, they’ll get bored and go away.”
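The visibility rule behind “restrict” can be sketched in a few lines. This is a hypothetical illustration of the behaviour described above, not Instagram's API; the function name and data shapes are my own:

```python
# Hypothetical filter: a restricted user's comments are shown only to that
# user, so from their own view nothing has changed, while everyone else
# (including the account owner) never sees them.
def visible_comments(comments, viewer, restricted_users):
    """Return the comments a given viewer can see.

    comments: list of (author, text) pairs on a post.
    restricted_users: set of usernames the account owner has restricted.
    """
    return [
        (author, text)
        for author, text in comments
        if author not in restricted_users or author == viewer
    ]
```

The subtlety worth noting is the second clause: the restricted commenter still sees their own remark, so they get no signal that anyone has ignored them, which is precisely the point.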