Last week, the world’s leading experts in artificial intelligence converged on the Canadian city of Montreal for one of the biggest gatherings in their field.
On Saturday, the 2018 Conference on Neural Information Processing Systems (NeurIPS) hosted a workshop titled "AI for Social Good". The assembled crowd were not treated to the usual conference staples of robot demonstrations or video presentations showcasing new algorithms that promise to revolutionise healthcare systems.
Instead, they were greeted with a solo performance by the superstar cellist Yo-Yo Ma, followed by a session on artificial intelligence, ethics and the arts.
The history of artificial intelligence is littered with algorithms that were supposed to mimic the most complex feats of human creativity, from problem-solving to writing poems, composing music and painting portraits.
In 2007, a new version of Glenn Gould's legendary rendition of Bach's Goldberg Variations was released by Sony Classical. It was not a remastering from original tapes, though. Instead, a piece of software named Zenph was used to analyse the Canadian pianist's 1955 performance, then translate it to an electro-acoustic Yamaha piano.
Almost a decade later, researchers at Sony used an application called Flow Machines to dissect the scores of 13,000 popular tracks and then build new melodies. After selecting the music style of The Beatles, the French composer Benoît Carré arranged and produced a piece of music for which he also wrote the lyrics. Titled “Daddy’s Car”, it has been widely credited as the first ever pop song written by artificial intelligence.
And music is not the only creative field in which AI is being used. Last year, academics at Rutgers University in New Jersey and employees at Facebook AI Research debuted a system named Creative Adversarial Networks, which can generate visual art after studying and “learning” from pre-existing works.
Then, last month, Christie's in London became the first auction house to sell an AI-generated painting. Bidding for the work, titled Portrait of Edmond Belamy, reached $432,500 – nearly 45 times its original estimate.
However, these AI-generated artworks are, at best, pale imitations of the real thing. Art resonates with human beings because it is a vehicle for artists to express their feelings, beliefs or political ideas. Even if algorithms learn to paint exactly like Picasso, it is unlikely that they will ever be able to create anything with the power or historical weight of Guernica, nor will they be able to capture the beauty and pain of Vincent van Gogh's best work.
The human brain – specifically our ability to imagine – is at the heart of the creative process. This is precisely what artificial intelligence lacks so far, and exactly what developers and engineers should be reaching for. The aim should not be for technology to generate new paintings, poems, songs and buildings that emulate those already created by human beings. Instead, the power of technology should be harnessed to offer a truly original and personalised artistic experience.
For me, this idea brings to mind the neurological condition of synaesthesia, in which sensory perceptions cross over and people are, for instance, able to smell letters or hear colours. Far from viewing it as a handicap, many people with synaesthesia believe it to be a gift that allows them to experience art and the world around them with a vivid intensity that others can only dream of. AI has the potential to give us all the ability to touch music and smell words.
Personally speaking, I would prefer this new form of art to any computer-generated Yo-Yo Ma soundalike. After all, nothing can beat the real emotions that a performance by a virtuoso musician can evoke – for now, at least.
Professor Olivier Oullier is the president of Emotiv, a neuroscientist and a DJ