Does the UN need a watchdog to fight deepfakes and other AI threats?

The Security Council will meet soon to scrutinise the challenges posed by artificial intelligence, but it must work with scientists, the private sector and civil society if it wants to succeed

Both Russian President Vladimir Putin and Ukrainian President Volodymyr Zelenskyy have been impersonated online using deepfake technology. Nick Donaldson / Getty

Less than a month after Russia's invasion of Ukraine, a video surfaced on social media that purportedly showed Ukrainian President Volodymyr Zelenskyy urging his soldiers to lay down their arms and abandon the fight against Russia. While the lip-sync in the video appeared somewhat convincing, discrepancies in Mr Zelenskyy's accent, facial movements and voice raised suspicions about its authenticity.

Upon closer examination, even a simple screenshot revealed that the video was indeed a fake – a deepfake. It marked the first known instance of a deepfake video being used in the context of warfare.

Deepfakes are synthetic media – audio, images or video – manipulated to falsely portray individuals saying or doing things they never actually did.

On June 5, several Russian radio and television networks broadcast what appeared to be an address by Russian President Vladimir Putin declaring martial law and military mobilisation in the regions bordering Ukraine. But the speech was soon exposed as another fabrication – a deepfake broadcast through hacked TV and radio channels. It was convincing enough that officials in Russia's Belgorod region issued warnings cautioning the population against falling prey to a deepfake intended to “sow panic among peaceful Belgorod residents”.

The rise of deepfakes serves as a vivid illustration of the exponential growth of artificial intelligence and the challenges it poses to both national and international governance. Deepfake technology is fuelled by generative adversarial networks (GANs), a type of machine learning framework invented in 2014 that creates new content by pitting two neural networks against each other in a competitive fashion: a generator fabricates candidate content while a discriminator tries to distinguish it from real examples, and each improves by trying to outwit the other.
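For readers curious about what this competition looks like in practice, below is a minimal, illustrative sketch of a GAN training loop written in Python with the PyTorch library. The network sizes, learning rates and stand-in "real" data here are hypothetical placeholders chosen for brevity, not the configuration of any actual deepfake system.

```python
# Minimal sketch of a GAN's adversarial training loop.
# All dimensions, data and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim, batch = 16, 64, 32

# Generator: turns random noise into a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(500):
    real = torch.randn(batch, data_dim)        # stand-in for a batch of real images
    fake = G(torch.randn(batch, latent_dim))   # the generator's forgeries

    # 1) Train the discriminator to label real data 1 and forgeries 0.
    opt_D.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(batch, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_D.step()

    # 2) Train the generator to produce forgeries the discriminator labels 1.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_G.step()
```

As the discriminator gets better at spotting fakes, the generator is forced to produce ever more realistic output – the same dynamic that, at scale and trained on images of real faces, yields photorealistic deepfakes.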

The first deepfake videos were uploaded to Reddit in autumn 2017, merging the faces of Hollywood actresses onto the bodies of performers in adult videos. By 2018, GANs had advanced to the point where they could generate, for instance, highly realistic images of people who never existed. In less than two years, almost 15,000 deepfake videos had been identified online, with an alarming 96 per cent of them falling into the category of adult content. Moreover, 100 per cent of the victims depicted in these videos were women.

Disturbingly, it was reported earlier this year that paedophiles are now employing deepfakes to create explicit images of child abuse. One paedophile in Quebec, Canada, was recently convicted after police discovered 545,000 pictures and videos of children on his computer, 86,000 of them deepfakes generated from images of real children collected from social media, particularly Facebook.

Deepfake technology has also demonstrated its potential for nefarious purposes beyond exploiting individuals. It can be used to alter medical scans – inserting fake tumours or erasing real ones – or to manipulate satellite images, fabricating entire geographical features in what researchers have called “deepfake geography”. The implications are profound, posing risks not only to personal privacy but also to sectors ranging from healthcare to national security.

On November 30, 2022, OpenAI, an American artificial intelligence laboratory, released ChatGPT, an AI chatbot. Within five days, ChatGPT garnered one million users – a milestone that took Netflix three and a half years to reach. After just two months, the application boasted 100 million users, making it the fastest-growing consumer application in history until it was overtaken by Meta's app Threads this month. While the first iteration of ChatGPT, based on GPT-3.5, achieved a mediocre score (10th percentile) on the US Uniform Bar Exam, GPT-4, released on March 14, 2023, outperformed 90 per cent of aspiring lawyers attempting to pass the bar.

In a recent experiment, MIT associate professor and GCSP polymath fellow Kevin Esvelt and his students used freely accessible large language models such as GPT-4 to devise a detailed roadmap for obtaining exceptionally dangerous viruses. In just one hour, the chatbot suggested four potential pandemic pathogens, provided instructions for generating them from synthetic DNA, and even recommended DNA synthesis companies unlikely to screen orders. Their conclusion was alarming: easy access to AI chatbots will cause “the number of individuals capable of killing tens of millions to dramatically increase”.

The growing accessibility of generative AI presents not only opportunities but also immense risks, including targeted manipulation at the individual level. A recent study found that AI-generated responses to patient queries outperformed physicians' responses in both quality and empathy. Empathy – the intrinsically human ability to understand another person's feelings from their perspective rather than our own – is now being surpassed by chatbots. This should serve as a wake-up call for governments, as it opens the door to potential large-scale subversion campaigns and gives rise to a new form of warfare – cognitive warfare – in which public opinion is weaponised to influence policy and destabilise public institutions. Generative AI and tools such as ChatGPT could soon be considered weapons of mass deception.

These examples underscore the exponential pace at which AI is advancing. The challenge lies in the fact that humans and organisations tend to think in a linear fashion when considering future developments. Faced with exponential growth, such as the rapid spread of the Covid-19 pandemic, many governments demonstrated slow and ill-suited responses.

In an era defined by emerging exponential technologies, global and national governance must adapt to become more reactive and anticipatory. Strategic foresight, the ability to envision and act upon potential futures, should become a standard procedure for any organisation engaged in national and global governance. This necessitates the inclusion of diverse skills and profiles among those working within these institutions. Furthermore, effectively addressing the consequences of exponential technological transformations requires the ability to identify weak signals, highlighting the need to promote polymaths – individuals with knowledge spanning various subjects – to break free from silo thinking and groupthink.

On July 18, the UN Security Council will convene its first-ever meeting to discuss the potential threats posed by artificial intelligence to international peace and security. The UN already addresses certain aspects of this issue through, for instance, the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems (LAWS), which examines the potential impact of autonomous weapons on international humanitarian law and possible regulations or bans. However, autonomous weapons also have profound implications for strategic stability, an area hardly discussed by the GGE.

AI represents a dual-use technology even more transformative than electricity, and therefore has profound international security implications. The UN Secretary-General recently expressed support for the establishment of a UN agency on AI, similar to the International Atomic Energy Agency. Such an agency, focused on knowledge and endowed with regulatory powers, could enhance co-ordination among the burgeoning AI initiatives worldwide and promote the global governance of AI.

To succeed, however, the UN must transcend its traditional intergovernmental DNA and incorporate the scientific community, the private sector (the primary source of AI innovation) and civil society into new governance frameworks, including public-private partnerships. As was noted at the recent UN AI for Good Summit in Geneva, the city – well endowed with a governance ecosystem conducive to such initiatives – presents an ideal venue for materialising this vision.

The deepfake and generative AI quandary serves as a sobering reminder of the immense power and multifaceted security challenges posed by artificial intelligence. In pursuing responsible AI governance, we must prioritise protection against malevolent exploitation while nurturing an environment that encourages ethical innovation and societal progress.

Embracing strategic foresight, unshackling ourselves from linear thinking, and fostering diverse collaborations and security by design are crucial steps towards collectively shaping an AI-powered future that upholds ethical principles, preserves democratic values and secures the well-being of humanity in the face of transformative technological landscapes. By forging this path, we can pave the way for a more equitable, secure and prosperous society in the age of AI.

Published: July 14, 2023, 6:00 PM