Facebook parent Meta bans political campaigns from using generative AI advertising tools

Meta hopes to blunt potential misuse of AI-ad creation tool

Social media company Meta says advertisers running campaigns for housing, employment, credit or social issues are also barred from using its generative AI tools.

Political campaigners hoping to use Meta's new generative artificial intelligence tools to create advertisements promoting candidates and causes will not be permitted to do so.

Meta, the parent company of Facebook, specified on its website who can and cannot use the much-hyped AI tools.

“As we continue to test new generative AI ads creation tools in ads manager, advertisers running campaigns that qualify as ads for housing, employment or credit or social issues, elections, or politics, or related to health, pharmaceuticals or financial services aren't currently permitted to use these generative AI features,” Meta noted on its help centre.

The development offers a glimpse of a not-too-distant future in which political campaigns create advertisements using AI and contend with the repercussions when rival campaigns try to use the technology to sway public opinion.

In April, the Republican National Committee claimed it had produced the first US political advertisement “built entirely” by AI.

The advertisement combined AI-generated images of US President Joe Biden and Vice President Kamala Harris with foreboding images of boarded-up stores, rampant crime and closed banks.

Although the advertisement was not created using Meta’s AI tools, it generated discussion and debate in the US and around the world about how AI might be used and abused to influence voters.

“What makes this threatening is that it’s so inexpensive and so easy to mass produce this and we do not yet have a good grasp on how the general public views this in terms of their ability to make discernment,” said Timothy Kneeland, a political science and history professor at Nazareth College in upstate New York.

Besides the potential impact on voters, he also emphasised what AI could mean for the economics of running a political campaign.

“You might be able to cut your campaign staff in half,” Mr Kneeland said, echoing concerns about the overall economic impact of AI on employment in the future.

However, he also said that AI has the potential to level the political campaign playing field.

“This could democratise our campaigns for people who don't have deep pockets and can't afford to raise lots of money,” he said.

In October, UN Secretary General Antonio Guterres also spoke about concerns that easily produced AI-generated content could deceive people.

“Thanks to one AI app, I had the surreal experience of watching myself deliver a speech in flawless Chinese, despite the fact that I don’t speak Chinese,” said Mr Guterres.

The move from Meta regarding its generative AI advertising tools is the latest in a string of Big Tech announcements seeking to calm fears about how AI might be used in the context of political campaigns.

In September, Google's parent company Alphabet announced it would require the disclosure of AI-generated political advertising content.

“An ad with synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do” would need to be disclosed, as would an ad “with synthetic content that alters footage of a real event or generates a realistic portrayal of an event to depict scenes that did not actually take place”, the company said in a blog post.

From a regulatory perspective, the US Federal Election Commission, an independent agency that enforces campaign finance laws, recently voted to approve a petition to address “deliberately deceptive artificial intelligence campaign ads”, although it remains to be seen whether any action will be taken.
