Images circulating recently on Twitter featured a flustered-looking Donald Trump being arrested, devastation in Oregon after a 2001 earthquake and the Pope wearing a big white puffer jacket.
To the casual observer they look like real photographs; in some cases only their fictitious subject matter indicates that these are actually fakes - created, here, using an artificial intelligence image generator called Midjourney.
Other images created through Midjourney have shown Mr Trump in orange prison clothing cleaning a toilet block, the late John F Kennedy shaking hands with an alien in the White House and Boris Johnson, the former British prime minister, striding down a misty London street dressed like an old-style detective.
All artificially generated, they are convincing, aside from their subject matter.
While some of Silicon Valley’s biggest names, including Twitter’s owner, Elon Musk, recently signed an open letter calling for a pause on AI development amid concerns that it could pose risks to humanity, the technology is already causing a stir thanks to its ability to make such "deepfake" videos and images.
Political deepfakes are seen as particularly problematic.
"Deepfake content can be used maliciously, such as discrediting politicians or spreading disinformation," says Dr Subhajit Basu, associate professor of information technology law at the University of Leeds in the UK.
"For example, deepfake videos can create false statements or actions attributed to political candidates, influencing public opinion and undermining trust in the democratic process."
Although it may be possible to detect deepfakes, by the time a video or image has been shown to be fake, it may already have gone viral.
"This can be incredibly damaging in countries where the general population does not have ‘digital awareness’. This can contribute to the erosion of trust in democratic institutions, media, and public figures," Dr Basu says.
A politically incendiary deepfake video could be released shortly before polling day, or politicians could dismiss genuine recordings of controversial or ill-conceived comments as fakes, claiming the words were never said.
"If this threat did emerge, it would harm the ability for genuine public discussion and debate on politics as people feel less trust in what they see," Dr Basu says.
"There is, therefore, the concern that deepfakes may cultivate the assumption among citizens that a basic ground of truth cannot be established. The truth is fundamental for the functioning of a democratic society."
Aside from such societal harms, deepfakes can damage individuals, such as through "weaponised deepfakes".
"People can have their image and voice taken and used without their consent, resulting in misattribution or other negative portrayals," says Dr Alexandros Antoniou, a lecturer in media law at the University of Essex in the UK.
"This can in turn cause serious reputational harm, causing someone to lose their job or other valuable sources of income."
In 2020, The National reported how deepfake audio of the voice of a UAE-based father was used in a custody battle in the UK courts.
The manipulated recording falsely indicated that the man, a Dubai resident, had used threatening language towards his wife, who had denied him access to his children.
How to detect them?
Just as sophisticated technology is used to produce deepfakes, so there are technological methods to detect them - but it's not easy.
Dr Antoniou distinguishes between direct and indirect methods.
"Broadly speaking, direct technological methods would involve digital forensic analysis, namely the analysis of the metadata of a video to determine if it has been manipulated. This is particularly effective in detecting deepfakes that have been poorly made or not fully optimised," he says.
In the case of the deepfake audio of the Dubai resident, analysis of the audio file’s metadata by the man’s lawyers demonstrated that the recording had been doctored.
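This kind of forensic metadata check can be illustrated with a toy sketch. The example below is a simplified, hypothetical illustration (real casework uses far more sophisticated tooling): it writes a short WAV file using Python's standard `wave` module, then reads the header back to confirm that the declared frame count matches the audio data actually stored - a mismatch being one crude sign that a file has been edited.

```python
import os
import struct
import tempfile
import wave

# Create a toy one-second mono WAV file (a hypothetical stand-in
# for a piece of evidence audio).
path = os.path.join(tempfile.gettempdir(), "sample.wav")
sample_rate = 8000
frames = b"".join(struct.pack("<h", 0) for _ in range(sample_rate))
with wave.open(path, "wb") as w:
    w.setnchannels(1)        # mono
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(sample_rate)
    w.writeframes(frames)

# Forensic-style consistency check: does the header's declared
# frame count match the number of frames actually present?
with wave.open(path, "rb") as r:
    declared = r.getnframes()
    data = r.readframes(declared)
    actual = len(data) // (r.getnchannels() * r.getsampwidth())

print("declared frames:", declared)
print("actual frames:", actual)
print("consistent:", declared == actual)
```

A doctored file whose header no longer agrees with its payload would fail this check, though well-made fakes require much deeper analysis.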
Dr Antoniou says another direct detection method involves facial recognition software, with deep learning algorithms identifying deepfakes by analysing patterns and anomalies or inconsistencies in the audio and video data, such as in lighting or shadows.
Blockchain technology offers an additional option to uncover fakery, because the creation of a unique digital signature for each image and the storing of it in blockchain can indicate whether there has been tampering or manipulation.
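The fingerprinting idea can be sketched with standard cryptographic hashing - a deliberately simplified stand-in for a real blockchain, with made-up image bytes for illustration. Each image gets a SHA-256 digest, and each chain entry also commits to the previous entry, so altering any stored image breaks verification from that point on.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest of a SHA-256 hash."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical image payloads standing in for real files.
images = [b"image-bytes-1", b"image-bytes-2", b"image-bytes-3"]

# Build a minimal hash chain: each entry covers the image's
# digest and the previous entry's hash.
chain = []
prev = "0" * 64  # genesis value
for img in images:
    entry = sha256((prev + sha256(img)).encode())
    chain.append(entry)
    prev = entry

def verify(imgs, recorded_chain) -> bool:
    """Recompute the chain - any tampering changes a hash."""
    prev = "0" * 64
    for img, recorded in zip(imgs, recorded_chain):
        expected = sha256((prev + sha256(img)).encode())
        if expected != recorded:
            return False
        prev = expected
    return True

print(verify(images, chain))                     # prints True: untouched
tampered = [images[0], b"doctored!", images[2]]
print(verify(tampered, chain))                   # prints False: tampering detected
```

In a real deployment the signatures would be anchored in a distributed ledger rather than a local list, so no single party could rewrite the record.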
Dr Antoniou says indirect methods may involve considering an image’s context, such as the location or event that it is supposed to come from.
Sometimes simply looking carefully at a poorly produced fake image will reveal giveaway flaws.
He cautions that as deepfakes become more realistic, it may become "increasingly difficult" for the untrained eye to tell real and artificially generated videos apart.
Trying to ban deepfake production outright is seen as problematic because the technology can legitimately be used, for example, in film production or for satire.
"Instead of concentrating efforts at preventing the production of deepfakes, perhaps we should be looking at developing suitable measures and systems to limit their spread and impact," Dr Antoniou says.
This includes social media companies and online content-sharing platforms developing better policies to flag and remove deepfakes, and doing more to discourage their creation and sharing.
Given the potential harmful effects on individuals, Dr Basu "strongly advocate[s]" tighter legislation.
While numerous other laws, such as those to protect intellectual property or prevent harassment, may be used to combat harmful deepfakes, Dr Basu says targeted legislation is necessary.
"Governments need to create laws that directly address deepfakes, making it illegal to create and distribute such content without consent, especially in cases intended to harm or deceive others," he says.
Some authorities are taking action. China has introduced legislation aimed specifically at deepfakes, while California has enacted laws focused on deepfakes of politicians.
Individuals can also help to protect themselves by creating unique, strong passwords for all accounts and enabling two-factor authentication whenever possible.
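The first piece of that advice - unique, strong passwords - can be sketched with Python's standard `secrets` module. The length and character classes below are illustrative choices, not a formal standard:

```python
import secrets
import string

def strong_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and punctuation,
    drawn from a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # Reject draws missing a lowercase letter, uppercase letter or digit.
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw

print(strong_password())    # a fresh 16-character random password
```

Using `secrets` rather than the general-purpose `random` module matters here, as the latter is not designed to resist prediction.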
"We need to be much more cautious with personal information," he says. "We need to limit the amount of personal information and content we share online, particularly on social media, and use privacy settings to restrict access to our content.
"Regularly search for your name and images online to monitor for potential deepfake content and take action if something suspicious is found."