As the 2024 US presidential race heats up, so too does the race to handle and respond to the inevitable use of artificial intelligence in campaign advertisements.
Given the aggressive nature of politics and the quickening pace of AI development, both the tech giants and regulators may be facing an uphill battle.
Shortly after US President Joe Biden in April announced his plans to seek re-election for the Democratic Party, the Republican National Committee responded with what it described as an ad “built entirely” by artificial intelligence.
The ad depicts Mr Biden and Vice President Kamala Harris hypothetically getting re-elected in 2024, with images of the two generated by artificial intelligence, combined with fake images of the US border being overrun, banks boarded up and crime running rampant.
It ends with a solemn-looking, AI-generated image of Mr Biden sitting at his desk in the Oval Office, elbows propped up, staring down in frustration.
Not surprisingly, the Democratic National Committee wasted no time responding to the ad and its AI-generated imagery.
“In incredibly telling fashion, the RNC had to make up images,” read the news release. “Because quite simply, they can’t argue with President Biden’s results.”
Aside from the obvious political rebuttal of the ad, some say there are bigger issues to be addressed as AI becomes more prevalent in campaign advertising.
“We’re on the edge of something here and it’s good to proceed with caution,” said Timothy Kneeland, a political science and history professor at Nazareth College in upstate New York.
But that, he warned, might be easier said than done.
“It’s like a car ad where you have all the fine print at the bottom of the screen,” he said, referring to efforts at disclosing which ads are created with AI. “Do people really read it?”
Despite the potential challenges in effectively regulating and blunting the deceptive impact of AI in campaign ads, the US Federal Election Commission is in the early stages of trying to do so.
The independent US federal agency that enforces campaign laws recently voted to approve a petition to address “deliberately deceptive Artificial Intelligence campaign ads”.
Prof Kneeland, however, said the road to regulating the ads would be a lengthy one, and any attempt to crack down could result in First Amendment litigation.
Prof Kneeland pointed to a judge's recent decision that said federal agencies had previously overstepped when they attempted to work with social media companies.
In some instances, it appears various tech giants are trying to get ahead of what will probably be a flurry of AI-generated ads.
Back in September, Google's parent company Alphabet announced it would require the disclosure of AI-generated political advertising content.
“All verified election advertisers in regions where verification is required must prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events,” read a post in Google’s advertising policy section.
Google also explained in greater detail examples of AI posts that would require disclosure.
“An ad with synthetic content that makes it appear as if a person is saying or doing something they didn’t say or do,” would need to be disclosed, as would an ad “with synthetic content that alters footage of a real event or generates a realistic portrayal of an event to depict scenes that did not actually take place.”
But ads using AI in an inconsequential way, such as colour correction or red-eye correction, according to the policy, would be exempt.
We contacted Facebook's parent company Meta to see if it had any plans to require disclosure of AI-generated content.
“We’re focused on transparency on AI-generated content when it has the potential to mislead people,” the social media company said in an email.
“Our ad standards, community standards, and fact-checking program apply whether content is created by AI or a person.”
Recently, Meta introduced “AI-powered features for ad creatives”, but noted that ads for social issues, elections or politics would currently be excluded.
“This is in line with our approach for testing new products, and considering additional sensitivities around AI and regulated industries,” the email from Meta read.
At least for now, it seems US federal regulators are taking a wait-and-see approach.
Claire Rajan, a partner in international law firm Allen & Overy’s Washington office who leads the firm’s political law group, does not expect that to change overnight, but said circumstances might overtake the status quo.
“I think that if there is a continued use of deepfakes and Congress gets frustrated or upset by this, that they may well get motivated to pass law,” she added, noting that incumbent elected officials may start to enact legislation only if they’re on the receiving end of a deepfake video.
“There is legislation pending on this topic,” she said, referring to several pieces of legislation introduced by Democratic and Republican members of Congress.
Ms Rajan, who spent seven years with the US Federal Election Commission and worked to defend election laws that were challenged, said it’s not necessarily a surprise that regulators are taking a methodical approach to AI.
“It took them 60 years to regulate the railroads,” she said. “That’s our legislative process … a legislative body that does move slowly.”
Ms Rajan offered a caveat, however, as to why regulating AI might not take as long as some might expect.
“In a bipartisan way, there’s real attention and focus on it,” she said, before cautioning that the fast-paced changes already taking place with AI could pose some obstacles.
“It probably makes the most sense to bulk up existing laws. Rather than trying to regulate a particular technology, we try to regulate conduct,” she added.
The US is not alone in dealing with AI-generated ads and deepfakes seeping into political campaigns.
In Slovakia, AFP and Wired reported that an AI-generated audio recording posted to Facebook 48 hours before general elections may have contributed to the defeat of the country's liberal party.
In the UK, an AI-generated deepfake audio recording of Labour Party leader Keir Starmer supposedly being abusive to staffers was heard millions of times and continued to circulate on several social media platforms even after it was debunked.
For Prof Kneeland, the results of AI being used in such ways could expand beyond politics.
“Aside from political consequences, I think there might be social consequences and it could further erode civility and constructive communication,” he said.