In the name of fighting “fake news”, Singapore recently passed a law that requires websites and social media apps to remove content the government deems false or “against the public interest”, and to publish corrections.
Around the same time, Chris Hughes, one of the three co-founders of Facebook, publicly called for the company to be broken up, arguing that this would leave it with less power over society and politics around the world. He also urged the US government to regulate digital products and services, just as it does aviation and pharmaceuticals.
Is either of the above the way to deal with social media’s immense power over people and politics?
Not really. What’s needed is a plan that takes a few elements from each and a bit more besides.
At the outset, it is important to recognise that Facebook is a monopoly, in that it has no real like-for-like competitor. Add to that Facebook’s ownership of WhatsApp and Instagram, and it is clear that Mark Zuckerberg, Mr Hughes’ old college roommate, has virtually untrammelled power over 2.4 billion people across the planet.
Therefore, as Mr Hughes argued, the American authorities could legitimately do what they did with the telecommunications company AT&T in the 1980s, breaking up Facebook and creating competition in the social media marketplace. Europe is also considering if the platform is overly dominant and needs to be curbed.
But that’s hardly going to be enough. The additional step of government regulation is also necessary, and equally contentious. The Singapore model is not perfect. That said, almost every expert on technology anywhere in the world agrees that governments need to make rules in order to prevent misinformation from destroying political and social norms.
Several governments are considering or already implementing elements of bespoke regulatory schemes, even if only in the short term. However, there is little evidence right now of a comprehensive, streamlined strategy.
For instance, Sri Lanka blocked social media in the wake of the Easter Sunday attacks and again after rising anti-Muslim violence. Officials have said that this was to curtail the spread of false information that could fuel tensions.
During India's multi-phase general election, which ends on May 19, the nation's electoral authorities supplied WhatsApp with phone numbers that had previously spread "fake news" and objectionable content. The messaging service then blocked them.
Last month, the British government announced plans to appoint a new online regulator, whose job would include levying large fines against tech companies that fail to protect users against specific types of content and even blocking access to offending websites within the UK.
Regulators in France last week issued a report calling for social media companies to be bound by a legal duty of care. According to reports, the French regulators, who spent weeks inside Facebook's various offices in Europe, also recommended government intervention, noting the company’s lack of transparency about key algorithms. There are also suggestions that Facebook and other social media platforms might have to require users to provide identification before they are allowed to open accounts.
In the US, there is an expectation that individual states will enact their own regulations covering internet privacy and other issues, and that the federal government will eventually follow with nationwide rules.
China, of course, is not concerned by much of this, because it has shut out foreign tech platforms and replaced them with domestic sites and apps. Since 2017, the government has also banned unauthorised VPNs, which allow users to access the internet outside of China without restrictions.
On May 1, Russian President Vladimir Putin signed legislation that seeks to establish his country’s “internet sovereignty”, but critics say it is nothing more than a means of political control.
A clear pattern emerges from some of these piecemeal measures. First, regulation is essential and can only be executed and enforced by individual governments in their own territories. Despite concerns about the effect on free speech and the ability of minority groups to express themselves, a country can really only expect to have the internet its system allows. Inevitably, this means the worldwide web will be further Balkanised and uneven in terms of freedom.
Second, the online world should be regulated, just like the offline world. As mooted in France, similar to applying for a driving licence, social media accounts should belong to real, traceable people who register with valid, verifiable documentation. This will mean social media users will bear responsibility for their online actions. They may be feted for their insight and positivity, or penalised for hate speech, malicious misinformation or incitement to violence.
That may sound draconian, but consider the examples set by the highest court in America, a country wedded to free speech. In 1919, the US Supreme Court ruled that speech such as “falsely shouting fire in a theatre and causing a panic” was not protected. In 1969, the Court made it more difficult to convict someone simply for falsely shouting "fire": the speech also had to incite "imminent lawless action". Even so, the legal recognition of limits to free expression still holds true. The common good depends on it.