The AI pop star: could we soon be worshipping singers made up entirely of algorithms?

Experiments in artificial intelligence are now assembling an artiste that’s truly alien

A 3D rendering of an android DJ performing on turntables. Alamy

Pop stars often inhabit otherworldly personas. Artists such as David Bowie and Lady Gaga made their mark on culture by becoming almost transcendent, mystical figures. But their success has also been down to their humanity: their ability to shock, surprise and sing about things we identify with or aspire to. Experiments in artificial intelligence are now assembling an artist that's truly alien: the AI pop star. Could a non-human entertainer enchant and enthral a human audience? Could we become devoted fans of a bunch of algorithms?

The most fully formed example currently is Auxuman, a collective of five AI personas created by London artist Ash Koosha. Yona, Mony, Gemini, Hexe and Zoya released their first album back in September, with a follow-up two months later. They each have their own musical style that's computer-generated rather than composed. On the song Crossfire, Mony sings of spending days "shooting pixels off the screen", and how he "lost my friends in the crossfire". The sound is synthesised, beguiling, opaque.

The aim of the project was to build virtual entertainers that could “satisfy our endless thirst for entertainment”, but producing a cultural facsimile of pop music is far from easy. Lyrics, voice, melody, sound and image have to dovetail in a way that appeals to huge numbers of people. Humans find this notoriously difficult to achieve; millions have tried and failed. And Auxuman hasn’t hit the big time (yet). AI certainly has its work cut out.

Given the recent advances in the quality of computer-generated text, lyrics should, at least in theory, present the fewest problems. Neural networks can mine text data to produce strings of words that have rhyme, alliteration and rhythm. Koosha told the website Digital Trends that the Auxuman lyric engine was trained on articles, poems and conversations found online. "Expression on each song comes from stories we have told, ideas we have generated and opinions we have shared," he said.
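Koosha has not published the Auxuman engine, but the broad recipe (train or fine-tune a language model on a text corpus, then sample new lines from it) is well documented. As a rough illustration only, and not the Auxuman system, the sketch below prompts the freely available GPT-2 model through Hugging Face's transformers library and prints a few candidate continuations; the seed line is borrowed from Crossfire.

```python
# Illustrative sketch of neural lyric generation using the open-source GPT-2
# model via the transformers library. This is not the Auxuman engine, which
# was trained on its own corpus of articles, poems and conversations.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampling repeatable

prompt = "I lost my friends in the crossfire,"  # seed line borrowed from the song
candidates = generator(
    prompt,
    max_length=40,           # keep each candidate short, like a verse fragment
    num_return_sequences=3,  # produce a few options for a human to pick from
    do_sample=True,
    temperature=0.9,         # looser sampling gives stranger, more "lyrical" lines
)

for i, candidate in enumerate(candidates, 1):
    print(f"--- candidate {i} ---")
    print(candidate["generated_text"])
```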

The initial inspiration for a song's subject – unless it's randomised – also has to come from a human. Last week, US researcher Li Yang Ku unveiled his "Home-made Rap Machine", a web-based engine trained on 180,000 rhymes by classic MCs. You feed in a line and it supplies a rejoinder, an exercise Li described as "entertaining, but with limited success". When prompted with "I think I got coronavirus" it replied: "Put me on my head like a vinyl". Questionable.
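Li's engine learned its comebacks from those 180,000 rap lines; a much cruder way to show the same core mechanic, ending the reply on a rhyme, is to look the final word up in a pronunciation dictionary. The toy sketch below does that with the pronouncing library and a few canned templates. It is an illustration of the idea, not Li's system.

```python
# Toy rhyming-rejoinder generator: not Li Yang Ku's Home-made Rap Machine,
# just an illustration of "end the reply on a rhyme".
# Requires: pip install pronouncing
import random
import re

import pronouncing  # thin wrapper around the CMU Pronouncing Dictionary


def rejoinder(line: str) -> str:
    """Reply with a canned line whose final word rhymes with the input's final word."""
    templates = [
        "Put me on my head like a {}",
        "Spinning every record till the needle hits {}",
        "Keep it on repeat and call it {}",
    ]
    words = re.findall(r"[a-z']+", line.lower())
    if not words:
        return "..."
    rhymes = pronouncing.rhymes(words[-1])  # dictionary words rhyming with the last word
    if not rhymes:
        return "(no rhyme found for '{}')".format(words[-1])
    return random.choice(templates).format(random.choice(rhymes))


print(rejoinder("shooting pixels off the screen"))
# Possible output: "Put me on my head like a machine"
```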

A smartphone app called Alysia also claims to be able to automate lyric writing, but only for certain topics (including Love, Sadness, Joy, Girls and Boys) and the onus is on you to organise and edit the generated lines.

A budding AI pop star would need to be able to convert those words into song. While most of us can sing impromptu tunes to ourselves in an instinctive way, AI has no such instinct. In December, scientists at Amazon's research centre in Cambridge, UK, used a Google-designed algorithm to analyse language and combine it with notes into a sung melody. Human listeners then marked the results for "naturalness"; it scored 59 per cent on average. Not bad. But its actual voice – the singing, if you like – still involves synthesis with a sound determined by a human being.
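Neither Amazon's singing model nor the Google-designed algorithm it drew on is available as a few lines of code, but the underlying alignment problem, deciding which pitch and duration each syllable gets, can be sketched with simple data structures. The example below is an assumption made purely for illustration: it walks the syllables of one lyric line up a C major scale and writes the result as MIDI with the pretty_midi library. A real system would then have to synthesise an actual singing voice over those notes.

```python
# Minimal sketch of the "words to sung melody" step: give each syllable a pitch
# and a duration, then save the result as MIDI. An illustrative assumption, not
# the method used by Amazon's researchers.
# Requires: pip install pretty_midi
import pretty_midi

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # one octave of C major, as MIDI pitches

# Hand-split syllables for one lyric line (a real system would use a syllabifier).
syllables = ["I", "lost", "my", "friends", "in", "the", "cross", "fire"]

pm = pretty_midi.PrettyMIDI()
voice = pretty_midi.Instrument(program=52)  # General MIDI program 52: "Choir Aahs"

time = 0.0
for i, syllable in enumerate(syllables):
    pitch = C_MAJOR[i % len(C_MAJOR)]              # walk up the scale, one note per syllable
    duration = 0.5 if len(syllable) > 2 else 0.25  # longer syllables get longer notes
    voice.notes.append(
        pretty_midi.Note(velocity=90, pitch=pitch, start=time, end=time + duration)
    )
    time += duration

pm.instruments.append(voice)
pm.write("melody_sketch.mid")
print(f"Wrote {len(voice.notes)} notes spanning {time:.2f} seconds")
```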

Then there's the song's arrangement, combining melody and harmony in a way that's pleasing to the ear. AI was first used to generate a musical composition back in 1957, a string quartet ("Illiac Suite") composed by a vacuum tube computer at the University of Illinois. The results were pleasant, but clumsy. Fast forward 60 years, and a Sony research laboratory was using AI to compose a Beatles-style song called Daddy's Car.

Again, it felt like an impersonation of pop music rather than pop music itself. AI has managed to distinguish itself, however, in the field of muzak, which doesn’t demand our attention. An AI project called Boomy has made hundreds of thousands of ambient soundscapes that quietly move around simple musical shapes. But that’s something very different to achieving pop stardom.

Experimental musicians such as Holly Herndon use AI to push compositions in new directions. Getty.

AI is undoubtedly useful for facilitating and developing ideas. Experimental musicians such as Holly Herndon use AI to push compositions in new directions and challenge musicians to move out of their comfort zone. At the other end of the scale, AI provides engines for people who’d love to write music but need help doing so. An app called Amadeus Code bills itself as an “AI powered songwriting assistant”, generating melodies in a number of styles.

In December, Amazon unveiled DeepComposer, "the world's first musical keyboard powered by generative AI", which fleshes out melodies with accompaniments. Other services offer something similar without a keyboard, including the iOS app HumTap, and OpenAI's MuseNet, which generates music for up to 10 instruments in more than a dozen genres. It is possible for AI to generate a song from an idea. But it needs plenty of helping hands along the way.
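Neither DeepComposer's trained models nor MuseNet can be reproduced in a few lines, but the basic idea of fleshing out a melody with an accompaniment can be shown with a far simpler rule of thumb: harmonise each melody note with the triad built on it in the home key. The sketch below is that naive rule, offered as an illustration of the task rather than of how either service actually works.

```python
# Naive illustration of "fleshing out a melody with an accompaniment": harmonise
# each melody note with the diatonic triad built on it in C major. A toy rule,
# not what DeepComposer or MuseNet do; their accompaniments come from trained
# generative models.
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the C major scale


def diatonic_triad(pitch: int) -> list[int]:
    """Return the melody pitch plus the scale's third and fifth above it.

    Assumes the melody note belongs to C major.
    """
    degree = C_MAJOR.index(pitch % 12)  # scale degree of the melody note
    third = C_MAJOR[(degree + 2) % 7]   # skip one scale step up for the third
    fifth = C_MAJOR[(degree + 4) % 7]   # skip another for the fifth
    base = pitch - (pitch % 12)         # bottom of the melody note's octave
    return [
        pitch,
        base + third + (12 if third < pitch % 12 else 0),  # lift above the melody note if needed
        base + fifth + (12 if fifth < pitch % 12 else 0),
    ]


melody = [60, 64, 67, 65, 62, 60]  # C E G F D C, as MIDI pitches
for note in melody:
    print(note, "->", diatonic_triad(note))
# 60 -> [60, 64, 67]  (a C major triad built on the melody note C)
```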

But what about the physical embodiment of an AI pop star? Modern audiences are certainly willing to be entertained by non-human entities, whether it’s the cartoons that represent the band Gorillaz, or modern-day virtual entertainers such as Instagram’s Lil Miquela. But AI doesn’t choose the way it presents itself; humans do. The look of Auxuman’s five personas is not the result of a flash of computer-generated inspiration; they’re designed to look like pop stars. Across every creative aspect of pop music, AI is currently only able to facilitate or augment the stylistic choices of the humans behind it.

That may yet result in something culturally significant, according to Stephen Phillips of Australian AI firm Popgun. He believes that AI will help younger kids create their own pop personas that connect with other kids of the same age. “Once they have the tools to make music that sounds great, they’ll make music for each other, and it’ll sound incredibly genuine to them,” he said in a recent interview.

For many years now, it’s been relatively easy to get a computer to make music that sounds vaguely acceptable to the ear. We appreciate it for being clever. But it’s far harder for it to make music that we appreciate for its artistic merit and want to revisit. “Music is a complex, highly structured sequential data modality,” researchers at AI company DeepMind noted in a 2018 paper. For now, AI music is less about computers entertaining us, and more about marketing. The AI pop star is merely a puppet. Some, of course, may argue that this is what human pop stars have been all along.