YouTube will soon require content creators to disclose use of generative AI

Video-sharing platform says it wants to ensure authenticity of its content to curb potentially dangerous narratives

YouTube said its upcoming generative AI labels will be part of rollouts 'over the coming months and into the new year'. EPA

YouTube creators will soon have to disclose the use of generative artificial intelligence in their videos, as the company seeks to rein in the beneficial but polarising emerging technology.

The world's biggest video-sharing platform is seeking to ensure the authenticity of its content, which creators are increasingly sprucing up with generative AI.

This move is aimed at curbing misleading and potentially dangerous narratives, the Google-owned company said in a blog post.

“This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” it said.

Viewers will be informed when content is altered or synthetic through a new label added to a video's description, YouTube said.

For certain types of content on sensitive topics, a more prominent label will be applied to the video player.

The California-based company already has some of the strictest content guidelines, prohibiting, among other things, “technically manipulated content”, but the rise of AI has forced it to re-evaluate its policies to prevent “a serious risk of egregious harm”.

“AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers, particularly if they’re unaware that the video has been altered or is synthetically created,” it said.

YouTube did not give a specific timeline for the new generative AI guidelines, which are at “the early stages”, but said they will be part of rollouts “over the coming months and into the new year”.

“Specifically, we’ll require creators to disclose when they've created altered or synthetic content that is realistic, including using AI tools,” it said.

“When creators upload content, we will have new options for them to select to indicate that it contains realistic, altered or synthetic material.”

Non-compliance will result in penalties, including the removal of content or suspension from the YouTube Partner Programme.

Interest in AI surged with the introduction of generative models, which rose to prominence thanks to ChatGPT, the language model-based tool made by Microsoft-backed OpenAI.

Its sudden rise has also raised questions about how data is used in AI models and how the law applies to the output of those models, such as a paragraph of text, a computer-generated image, or, in YouTube's case, videos.

Technology companies have begun introducing their own rules to govern the use of generative AI.

Microsoft has committed to legally protect its customers if they are sued for using the company's AI services, as the use of more advanced iterations of the technology becomes more widespread.

The company's Copilot Copyright Commitment will shield users from the risk of intellectual property infringement claims if they use the generative AI output from its Copilot AI service, it said in September.

Facebook parent Meta Platforms, meanwhile, has a number of in-product transparency tools that help people understand when they are interacting with or seeing content created by its generative AI features.

“We believe it’s in everyone’s interest to maintain a healthy ecosystem of information,” said YouTube, which released its own generative AI tools in September.

“We’re taking the time to balance these benefits with ensuring the continued safety of our community at this pivotal moment.”

Alongside the new rules on generative AI, YouTube will also be accepting requests to remove AI-generated or other altered content that “simulates an identifiable individual, including their face or voice”.

This will also apply to music content if it “mimics an artist’s unique singing or rapping voice”.

YouTube said it will continue to strengthen its AI guidelines, acknowledging the persistent threat of rogue elements using AI to slip in content that does not adhere to the platform's standards.

“We also recognise that bad actors will inevitably try to circumvent these guardrails. We’ll incorporate user feedback and learning to continuously improve our protections,” it said.

Updated: November 15, 2023, 7:26 AM