Facebook, Twitter and the future of free speech on social media

Social media companies make adjustments to protect users and platforms, but approaches differ

In recent years social media companies have been grappling with how to approach regulating their platforms amid accusations that they've been used to spread fake news and incite violence in some instances. 

Few companies have experienced as rapid a rise and as sudden a fall in public trust as the social media giants.

In the early days of social media, Facebook and Twitter enjoyed altruistic reputations, buoyed by the lofty goal of bringing people closer together.

In recent years, however, especially since the 2016 US presidential election, public opinion of social media has cooled amid evidence that the platforms helped spread fake news, divulged private user data to advertisers and, in parts of the world, hosted content that incited violence.

Despite the fall from grace, one in five adults in the US use social media as their primary news source, according to the Pew Research Center.

This prevalence has prompted governments around the world to take a closer look at the companies, their practices and the vulnerability of their users.

In the US, there have been multiple congressional hearings but little in the way of tighter regulations. While pundits and politicians try to figure out what to do next, Twitter, Facebook and other big tech companies in the spotlight are trying to make their own changes to protect their platforms.

The danger of self-regulation

The sheer size and power of many social media companies trying to self-regulate has created a grey area for free speech.

The platforms’ global reach adds a layer of complexity to efforts to keep them safe, Peter Yacobucci, associate professor of political science at Buffalo State University, told The National.

“All of this is under the context of both media companies [Twitter and Facebook] being multinational and trying to come up with policy that is even remotely coherent across the US, EU, Middle East, as well as an eye towards China,” he said.

Even in the US alone, Mr Yacobucci noted, the breakneck pace of social media and technology is making it difficult to enact changes to the platforms, with or without regulation.

“I think this is where new doctrine is clearly needed. The constitutional structures established over 50 years ago for a print media don't fit,” he said. “The founding fathers wrote in an era before consumer capitalism with artificial fads and crafted pitches. Public entities have grown exponentially in their sophistication in moulding their citizens' preferences. This calls into question every element of democracy.”

Among democracies, Mr Yacobucci said, social media companies will continue to face regulatory efforts from the EU long before they face them from the US, leaving the companies to continue their attempts at self-regulation.

“They’ve essentially said to the social media platforms, ‘if you want to work within our area, you have to stop hate speech,’ so the algorithms the companies use there are much more aggressive at finding and limiting that information. The EU doesn’t have a First Amendment like the US has, nor has any other part of the world.”

As for elected officials, social media has become a partisan battleground between Republicans and Democrats in the US since the 2016 presidential election. However, the criticisms are vastly different, Mr Yacobucci said.

“Republicans have made social media, and all media, the bogeyman that is trying to suppress conservative thought,” he said. “Democrats see it differently in that they see these media companies as willing to allow false ads and posts that undermine the entire democratic process and promote hate speech.”

US President Donald Trump signed an executive order in May aimed at social media companies, which he said would “defend free speech”, shortly after Twitter flagged his tweets.

Legal challenges to his executive order are expected.

"I think the [companies] are doing a lot more than they did back in 2016," said Mathew Ingram, an award-winning journalist and chief digital writer for the Columbia Journalism Review, before cautioning that the 2016 comparison might not be a great benchmark.

“Prior to 2016 and during 2016, it wasn’t quite clear yet that disinformation on these platforms was such a huge problem,” he said.

Mr Ingram, who has spent the past 15 years writing about business, technology and new media, also noted how the atmosphere inside social media companies may have changed amid the scrutiny and the increased efforts to keep the platforms safe.

“We’ve seen Facebook staffers write open letters and blog posts talking about how troubled they are by what Facebook is or isn’t doing,” he said. “People who feel Facebook should be doing more to remove disinformation are having a hard time. They used to think the platform was for connecting people so they could share pictures of their loved ones … if that’s how you thought of your job, you don’t want to think of it as ‘I’m enabling someone to rig an election’.”

Facebook changes

With about 2.7 billion active users, Facebook is the world’s largest social network, making it a prime target both for those pushing fake news for profit and for propaganda campaigns by Russia, Iran and other countries.

One of Facebook’s initial efforts to prevent the spread of fake news began in December 2016, when the platform made it easier to flag potentially false stories and attempted to disrupt the financial incentives of those who profited from sharing them.

That, however, was just the beginning, as the company came under increased scrutiny from regulators.

Facebook has also made it a point to remove what it describes as “inauthentic behaviour” on the platform around the world.

“We’ve removed multiple pages, groups and accounts for misleading people about who they are and what they’re doing,” read a 2019 news release.

Most recently, Facebook, like Twitter, removed content from Mr Trump after his campaign published posts falsely claiming that children were “almost immune” to Covid-19.

The changes have not come without criticism; many have suggested the company was slow to remove altered and deceptive videos and, in some instances, chose simply to flag them with a warning to users.

Facebook has also faced criticism for deciding to allow political ads, although founder and chief executive Mark Zuckerberg announced a small change to the company’s policy.

“We’re going to block new political and issue ads during the final week of the campaign,” he posted on Facebook. “It's important that campaigns can run get-out-the-vote campaigns, and I generally believe the best antidote to bad speech is more speech, but in the final days of an election there may not be enough time to contest new claims.”

The social media platform also recently removed a Trump advertisement that many said contained a symbol used by the Nazis, a clear violation of the company’s hate speech policies.

Twitter changes

"It's always an election year at Twitter," read a statement, in part, provided to The National. "We prioritise the removal of content when it has a call to action that could potentially cause harm and will always take enforcement action when tweets violate the Twitter Rules."

In that same statement, Twitter also emphasised that the company’s approach was not unique to the US elections.

“Twitter is a global service and our decisions reflect that … We take the learnings from every recent election around the world and use them to improve our election integrity work.”

In 2019, Twitter announced it would no longer allow political ads, differentiating itself from its much larger rival, Facebook.

Even before his 2016 White House bid, Mr Trump was known for his prolific and controversial use of Twitter.

Although Twitter founder and chief executive Jack Dorsey originally defended his company against accusations that it was misused to spread fake news around the world, Twitter, like Facebook, has since taken several steps to protect users and the platform.

In recent months, Twitter has flagged several of Mr Trump’s tweets for containing unsubstantiated claims about mail-in voting and potentially abusive rhetoric towards protesters.

Some have criticised those decisions, but Twitter has continued to explain in detail its approach to maintaining the integrity of the service.

“In March, we broadened our policy guidance to address content that goes directly against guidance on Covid-19 from authoritative sources of global and local public health information,” read a news release.

The company has also consistently sought to identify and remove hundreds of thousands of accounts around the world that it says were linked to various governments attempting to spread misinformation and “geopolitical narratives”.

Mr Ingram warned that although social media companies may have been late to act on their platforms being misused in the run-up to the 2016 election, many variables were, and still are, at play in determining whether those efforts to spread fake news were effective.

“It's like predicting the weather,” he said, before cautioning against social media companies taking a hands-off approach.

“You have to find a middle ground, but what is that middle ground? Is it that you put warning notices on tweets; is it that you remove them but only if they talk about specific acts of violence against specific people? These are things that have taken centuries to be established in law, and now we have private corporations effectively trying to re-engineer free speech questions.”