UAE asks public to help tackle deepfakes

People urged to report manipulated videos and images

In 2019, a video that appeared to show US politician Nancy Pelosi speaking with a slurred voice made headlines, prompting warnings that the footage had been manipulated.

The UAE wants to help members of the public spot deepfakes, in which images or video are manipulated to fool the viewer.

The National Programme for Artificial Intelligence and the Council for Digital Wellbeing published a guide to raise awareness of both the harmful and useful applications of deepfake technology.

It also tells people how to report them to the appropriate authorities.

Deepfakes use a form of artificial intelligence called deep learning to manipulate images or video. This can create fake events that may seem very real. The availability of large amounts of data in the form of pictures and videos has allowed AI systems to be trained to create better deepfakes. As they have become easier to produce, the potential for their misuse has grown.
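As a rough illustration of how that training works, the sketch below follows the face-swap set-up often described in the literature: a single shared encoder learns facial features common to two people, while a separate decoder is trained for each person, so one person's expressions can later be redrawn with the other's face. The layer sizes, image resolution and data here are illustrative assumptions, not a description of any specific tool.

import torch
import torch.nn as nn

# Shared encoder: learns facial features common to both people.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 512),  # 64x64 RGB face crops (assumed size)
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

# One decoder per identity: learns to redraw the encoded face as that person.
class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(512, 3 * 64 * 64), nn.Sigmoid())

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for many face images of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for many face images of person B

loss_fn = nn.MSELoss()
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimiser = torch.optim.Adam(params, lr=1e-3)

for _ in range(10):  # token training loop; real systems train far longer on far more data
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()

# The "deepfake" step: person A's expressions are rendered with person B's face.
swapped = decoder_b(encoder(faces_a))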

Omar Al Olama, Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications, emphasised that it is "imperative to focus on the positive and beneficial uses of advanced technologies" and to "raise community members’ awareness on their potential and diverse applications".

The UAE's Deepfake Guide aims to help individuals better understand the technology and to provide guidance on healthy tech habits.

Deepfakes for good

Examples of deepfakes used for good do exist in some industries.

The guide lists medical applications, such as generating new MRI images for training purposes. Synthesised audio deepfakes may also help someone who has lost the ability to speak because of cancer or other medical conditions that affect the vocal cords.

In the film and advertising industries, deepfakes may be used to enhance content, present news broadcasts virtually or create special effects. Customer service is another growing area, with virtual assistants used at call centres.

Are deepfakes ever illegal?

While there are 'good' examples, the technology is most often associated with bad actors.

The guide emphasises "it is important to note that deepfakes cannot be categorised as good or bad. Deepfakes are merely a tool that can be used for different purposes".

They may, however, affect reputations and the national interest. Existing UAE laws "prohibit cyberbullying, actual malice and identity impersonation".

A deepfake's capacity to "influence communities" is of particular concern as it relates to public opinion. They may also "impose a reputational threat to nations as well as cause disruptions to international and diplomatic relations if not verified promptly by concerned governments", according to the guide.

How can I spot a deepfake?

The guide advises that it is "possible for a human eye to detect signs that suggest whether a video content is forged or not".

There are six things to look out for:

1. "Irregular" or "disorganised" facial movements of the subject

2. Variations or sudden changes in lighting

3. Changes in skin tone during the clip

4. "Repetitive blinking or no blinking at all"

5. Audio that does not match the speaker's lip movements, which often signals the media has been manipulated

6. "Distortion" around the speaker's face

It is possible to detect a deepfake yourself, but as the technology improves this will become increasingly difficult. In time, the only way to tell the fake from the real will be with AI.

The council has already found that AI itself is the most effective way to detect manipulated content. It said: "The most accurate approach to detect forged contents is through a systematic screening of the deepfakes using AI-based tools that need to be regularly updated."
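The guide does not name a particular tool, so the following is only a minimal sketch of what such systematic screening can look like: sample frames from a clip and score each one with a pretrained real-or-fake classifier. The model file, input size and threshold are illustrative assumptions, not a product the council recommends.

import cv2    # OpenCV, used here only to read frames from a video file
import torch

detector = torch.jit.load("deepfake_classifier.pt")  # hypothetical pretrained real/fake classifier
detector.eval()

def screen_video(path, every_n_frames=30, threshold=0.5):
    """Return the share of sampled frames the classifier flags as likely fake."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            # Resize, convert BGR to RGB and scale pixel values to [0, 1]
            frame = cv2.resize(frame, (224, 224))[:, :, ::-1] / 255.0
            tensor = torch.from_numpy(frame.copy()).permute(2, 0, 1).float().unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(detector(tensor)).item())
        index += 1
    capture.release()
    flagged = sum(score > threshold for score in scores)
    return flagged / len(scores) if scores else 0.0

print(screen_video("suspect_clip.mp4"))  # e.g. 0.8 would mean 80% of sampled frames look forged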

Updated: July 09, 2021, 11:39 AM