Facebook parent Meta to start labelling all AI-generated content from May

The company says it will remove the altered content only if it violates its policies or in the 'highest risk scenarios'

Meta is making changes to its policies, which it says will reduce misleading content on its platforms. Reuters

Meta Platforms, the parent company of Facebook, Instagram and Threads, will start labelling artificial intelligence-generated audio, image and video content as “Made with AI” from next month to address the issue of misleading content on its platforms.

The company said it will only label AI-generated content, removing it solely when it violates its policies or in the “highest risk scenarios”.

It admitted its existing policy, drafted in 2020, is “too narrow” because it covers only videos that are created or altered through AI.

“In the last four years, and particularly in the last year, people have developed other kinds of realistic AI-generated content like audio and photos, and this technology is quickly evolving,” Monika Bickert, Meta’s vice president of content policy, said in a blog on Friday.

Meta said it is making the changes based on feedback from its oversight board, which consulted more than 120 stakeholders in 34 countries to draft the new rules.

It also conducted a public opinion poll with more than 23,000 respondents in 13 countries. Nearly 82 per cent of respondents voted in favour of adding warning labels to AI-generated content.

Following the announcement, Meta's shares rose and were trading 3.27 per cent higher at $527.61 at 8.40pm UAE time on Friday.

The stock has gained about 14 per cent so far this year and the company had a market value of about $1.01 trillion at close on Thursday.

Globally, AI investments are projected to hit $200 billion by 2025 and could have a significant impact on gross domestic product, Goldman Sachs Economic Research said in a report in August.

But despite the AI industry's meteoric rise, authorities have been scrambling to regulate the sector as new innovations continue to outpace existing guidelines.

In December, the EU became the first major governing body to enact sweeping AI legislation with the Artificial Intelligence Act, stipulating what can and cannot be done, and announcing fines of up to €35 million ($38.4 million) for non-compliance.


To avoid restricting freedom of expression, Meta said its oversight board recommended a “less restrictive” approach to manipulated media, such as labels with context.

Labels will be applied based on Meta’s own detection of AI content, as well as when users disclose that they are uploading AI-generated media.

“A majority of stakeholders agreed that removal should be limited to only the highest risk scenarios where content can be tied to harm, since generative AI is becoming a mainstream tool for creative expression,” Ms Bickert said.

“If the digitally-created or altered content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label so people have more information and context.”

Meta said it will remove content, regardless of whether it is created by AI or a person, only in selective cases – for example, if it violates its rules against voter interference, bullying and harassment, violence and incitement, or any other policy in its community standards.

Meta also said it has a network of nearly 100 independent fact-checkers.

If they rate content as false or altered, it will be shown lower in the feed so fewer people see it, and an overlay label with more information will be added.

Updated: April 05, 2024, 5:25 PM