There are no "perfect" laws anywhere in the world for regulating harmful content online, but Facebook would welcome regulation, a vice president of public policy at the social media company said, echoing earlier statements by founder Mark Zuckerberg.
"We don't think there is a perfect law out there that we can point to and say everybody should do that," Simon Milner, vice president of public policy in Asia Pacific at Facebook said. "Hopefully, soon we will see an example where we can say, 'Actually, this country's got it right'. And we can all kind-of get behind that."
Disinformation, abuse and harmful content published on technology platforms pose an urgent threat to society as life increasingly moves online.
Every minute, 500 hours of video are posted to YouTube and 243,000 photos are uploaded to Facebook, according to the World Economic Forum. On Facebook alone, 11.6 million pieces of content involving child nudity and sexual exploitation of children were removed in the third quarter of 2019, a substantial increase on the previous quarter. Bullying, fake accounts used to spam or defraud, and terrorist propaganda are also spreading rapidly.
For now, Facebook is largely policing itself for harmful content.
The world's biggest social media company employs 35,000 people to develop technology that constantly scans its website and app for illegal activity, hate speech and disinformation.
"Most people are not reviewing content, they are designing technologies and iterating on those technologies to continually improve them," he said of the massive content moderation team. He claimed Facebook finds the bulk of harmful content before users see it and highlighted progress in the company's ability to monitor itself.
But the company also largely outsources its moderation, a practice facing growing backlash, a report by The National found earlier this year. Last year, more than 200 moderators signed an open letter to Facebook and the outsourcing firms used by the social media giant, raising concerns over Covid-19 after they were told to work from the office while carrying out Facebook's "most brutal job".
Mr Milner said the technology used to monitor content is improving, even though human moderation is still necessary. Three years ago, when Facebook began using machine learning to monitor for hate speech, the company caught only a quarter of such content this way. Now, 97 per cent of hate speech on Facebook is detected by these algorithms.
He acknowledged that "we don't always get it right, so that combination of technology and human review is extremely important".
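To make the 97 per cent figure concrete: it describes the share of actioned hate speech that Facebook's automated systems flagged before any user reported it, a metric often referred to as a proactive detection rate. The sketch below is purely illustrative; the function name and the sample numbers are hypothetical and do not come from Facebook's actual reporting pipeline.

```python
# Illustrative sketch only: how a "proactive detection rate" of the kind cited
# above could be computed. All names and figures here are hypothetical.

def proactive_detection_rate(flagged_by_systems: int, reported_by_users: int) -> float:
    """Share of actioned content that automated systems found before any user report."""
    total_actioned = flagged_by_systems + reported_by_users
    if total_actioned == 0:
        return 0.0
    return flagged_by_systems / total_actioned


# Hypothetical example: of 1,000,000 pieces of hate speech actioned in a quarter,
# 970,000 were caught by classifiers first and 30,000 came from user reports.
rate = proactive_detection_rate(970_000, 30_000)
print(f"Proactive detection rate: {rate:.0%}")  # prints "Proactive detection rate: 97%"
```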
Mr Milner, who made his comments during a panel at the World Economic Forum's Global Technology Governance Summit, echoed the message of Mr Zuckerberg, who for years has said he wants politicians to do more to regulate online content.
In a 2019 op-ed for The Washington Post, Mr Zuckerberg called for the regulation of "harmful content, election integrity, privacy and data portability".
Lene Wendland, a chief in the business and human rights section of the United Nations, who spoke on the same panel on Wednesday, said that "no one has gotten it exactly right" from a regulatory or business standpoint when it comes to online content.
But she commended Facebook for the human rights commitment it released last month.
In March, Facebook did not change any of its existing rules but laid out a new policy holding itself accountable to human rights as defined in international law, including the United Nations Guiding Principles on Business and Human Rights (UNGPs).
Critics said the policy was too long in the making, but Ms Wendland said it was "a clear human rights commitment" that would crucially allow Facebook to be "held to account by stakeholders".
She added that, given the policy was only a few weeks old, it would need to be continually monitored, but she sounded a note of optimism that businesses were "embracing responsibility" and "experimenting" with ways to address harm online.
The plan set out by Facebook will increase transparency from the company: it plans to report "critical human rights issues" to its board of directors, although it did not specify how those issues would be identified.
Facebook also said it would release an annual public report on how it was addressing human rights concerns stemming from its products, establish an independent oversight board and change its content policies, including creating a new policy to remove verified misinformation and unverifiable rumours that may put people at risk of imminent physical harm.