The fast-developing strand of audio-visual technology known as deepfake can cause amazement and consternation in equal measure. The ability to realistically alter video footage by swapping faces, changing expressions or putting unspoken words into people’s mouths is something we find compelling, but the resulting distortion of reality can be hugely unsettling.
A perfect illustration of that inherent conflict comes in the shape of Zao, an app launched in China on Friday. Its simple premise – to place your likeness into a selection of famous scenes from movies and TV – was an instant hit; in just three days it reached the top of the Chinese app charts.
What is Zao and how does it work?
It's a pretty straightforward premise: users first upload a selfie to Zao. The app then maps their face onto that of a chosen character – usually an actor in a famous movie scene or a singer in a music video, but it can be just about anything.
The enthusiasm for Zao is easy to understand. The ability to use a single selfie to realistically replace the face of a Hollywood actor is impressive, and visual artist Allan Xia described it on Twitter as the "best application of deepfake-style AI facial replacement I've ever seen".
Results can be improved further if you submit a series of photos in which you blink or move your mouth. And because the app offers only a small selection of clips for users to apply their faces to – clips its AI has clearly been well trained on – good results are all but guaranteed.
“It retains [the] facial structure of the original actors,” continued Xia, “so the cherry-picked results more or less always look good and encourage users to share.”
But its popularity has led to a larger privacy row
Chinese messaging service WeChat was quickly filled with links to videos created using the app, while Zao’s developers spoke of servers under considerable strain and long queues for account registration. But its App Store rating showed the other side of the story: more than 4,000 reviews giving an average score of just 1.9 stars out of five.
On Monday, WeChat banned the messaging of links to the service (citing "security risks") while the China E-Commerce Research Center described the app as violating "certain laws and standards set by the nation and the industry".
Zao is just the latest deepfake app to grab worldwide attention. It's only a few weeks since a piece of Russian software called FaceApp was a viral hit, with people using it to create images of themselves at different ages, young and old. But this summer has seen bigger advances in deepfake technology at laboratory level.
Back in June, researchers at Imperial College London, in collaboration with Samsung, used AI to create a video of a person singing from a still photo and a piece of music. Meanwhile, Stanford University researchers showcased software which allowed the words being spoken in a video to be altered by merely editing the text transcript. It's remarkable to behold – but every development in this field seems to come laden with implications.
As Xia said about the Zao app: “I'm both excited and interested from a technologist / creator perspective, and morbidly cynical from a moral one.”
Could Zao's use of your pictures lead to wrongdoing?
After you’ve used your own picture to create a deepfake, your first thought tends to be: “But what if someone else did this to my picture for immoral purposes?” And so the problem immediately becomes not one of technology, but of privacy. Zao has tried to address this problem by only allowing non-selfie uploads to be used once they have been verified against a selfie taken with the front-facing camera.
But as various privacy implications began to register with Zao’s users and the app’s privacy policies were examined, there was an outcry as viral as that of the app itself. The small print stated that anything you create is covered by a licence to the developer that is “free, irrevocable, permanent, transferable, and relicense-able”.
Later, a statement was released to reassure the Chinese public that photos and videos would only be used to make improvements to Zao’s functionality, but the damage was done. Zao was suddenly viewed as sinister, and the one-star reviews began to rack up.
A similar reaction happened in the wake of FaceApp's viral popularity earlier this year, and security experts were quick to note then that such policies are standard across many popular apps. That doesn't make them right, but it does highlight how the unsettling nature of deepfake apps causes greater attention to be paid to their terms and conditions. As security researcher Elliot Alderson said on Twitter: "Don't upload your face to a random app. Yes, this is cool, but once your face is uploaded you lose your rights… They can do whatever they want."
Zao’s developers are now trying to calm the storm. “We understand the concern,” the company posted on the social media platform Weibo. “We’ve received the feedback, and will fix the issues that we didn’t take into consideration, which will need a bit of time.”
But the very concept of deepfake apps is so new to us that we simply don’t know how to react to them: should we delight in their eccentricities, or be panic-stricken about the direction in which they’re heading? For the moment, we seem to have settled on a combination of the two.