Innovation

Artificial flavouring: AI and the real future of unreal music

Cover photo by: Possessed Photography / Unsplash
Written by: Eamonn Forde
Published Nov 04, 2021
7 min read

1957 was a transitional year for music. Elvis Presley’s ‘All Shook Up’ was the biggest single of the year in the US as rock ’n’ roll stars eclipsed crooners like Pat Boone and Perry Como who had previously dominated the charts. John Lennon and Paul McCartney met for the first time that July. And Lejaren Hiller worked with Leonard Isaacson to program the ILLIAC I computer at the University of Illinois (where they both taught music) to compose the Illiac Suite, commonly regarded as the first musical score created by a computer. Music forged through artificial intelligence (AI) was becoming “real”.

In 1960, Russian researcher R. Kh. Zaripov published the first academic paper on algorithmic music composition. By 1980, David Cope at the University of California had developed his Experiments in Musical Intelligence (EMI) to analyze existing music, such as Bach’s, not only to predict what such composers might have written next but also to let random factors shape the output. It was not purely an academic exercise and was soon being put to commercial use, with Cope releasing the album Bach By Design in 1993. (EMI became known as Emmy and was subsequently advanced to become Emily Howell – a clear move to humanize these technological applications by giving them names as if they were people.)
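To make the underlying idea concrete: at its simplest, style imitation means learning statistical patterns from existing music and then sampling from them, with randomness supplying the variation. The sketch below is a deliberately minimal illustration of that principle using a first-order Markov chain – an assumption chosen for clarity, not Cope’s actual (far more sophisticated) recombinancy method.

```python
import random
from collections import defaultdict

def learn_transitions(melody):
    """Count, for each note, which notes follow it in the source melody."""
    transitions = defaultdict(list)
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)
    return transitions

def generate(transitions, start, length, seed=None):
    """Generate a new melody by repeatedly sampling a plausible next note."""
    rng = random.Random(seed)
    note = start
    output = [note]
    for _ in range(length - 1):
        candidates = transitions.get(note)
        if not candidates:          # dead end: restart from the opening note
            note = start
            candidates = transitions[note]
        note = rng.choice(candidates)
        output.append(note)
    return output

# A toy source melody as note names (purely illustrative).
source = ["C", "D", "E", "C", "E", "F", "G", "E", "G", "A", "G", "F", "E", "D", "C"]
model = learn_transitions(source)
print(generate(model, start="C", length=16, seed=42))
```

Each run with a different seed yields a different melody that nonetheless “sounds like” the source – prediction plus controlled randomness, the same basic tension EMI explored at far greater depth.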

Since then, major companies like Google, IBM (with Watson), iZotope and Jukedeck have been experimenting in this field as well as building commercial products that use AI to produce music. Even Sony, a company with interests in both technology and music, used its Flow Machines AI technology to produce ‘Daddy’s Car’ in 2016 – “a song created in the style of The Beatles”. (Spoiler: it is terrible.)

By 2017, exactly 60 years after Hiller and Isaacson’s first experiments, musician Taryn Southern used the Amper Music platform to release what was claimed to be the first album entirely created by AI. The album, with the palindromic title I AM AI, was seen as the moment AI music truly came of age.

Antimatter: the arguments against AI in music

There are, however, enormous ethical considerations and objections when AI goes beyond the notion of the fictional/artificial “pop star” and enters the realms of the real world. OpenAI has applied deepfake technology to artists – both living and dead – to imagine what new songs by them might sound like.

We have also seen AI study the human singing voice and replicate it, albeit with mixed results.  

It has even got to the stage where a deceased artist, South Korean pop star Turtleman, was reborn as a hologram and put on TV to perform a song that was only written after his death.

There exists a cocktail of artistic, legal and commercial concerns around AI music composition, often leaning towards the dystopian. Some believe this heralds the slow death of creativity, with machines replacing humans. Others raise questions about the intellectual property around AI composition and who, legally, owns it. Still others fear it will simply drive writers and musicians out of work.

Positive energy: the arguments for AI in music 

On the more utopian side is the argument that AI serves a very specific role and can co-exist with human musicians/composers. 

Time has proposed that AI could “spur a new golden age of creativity” by unlocking artistic possibilities rather than closing them down. Because “few straight-ahead pop songs are being created by AI”, the “more intriguing progress” here is made in two areas: 1) the functional (e.g. library music where a few bars are required to set a mood or feeling); and 2) the experimental (creating “musical chaos and disorientation with AI’s help”).  

Rather than being an assassin of creativity, AI could be an aid. DJ magazine cited Pioneer DJ’s rekordbox as a positive for having developed an “AI-assisted vocal detector, to avoid dreaded vocal clashes”; others see it as helping overcome “composer’s block”, which is fundamentally no different from a lyricist using a thesaurus and a rhyming dictionary or a guitarist consulting a chord or scales chart when struggling to finish a piece.

Speaking to DJ magazine about the impact of AI on musical professionals, Jonathan Bailey, the chief technical officer of plugin manufacturer iZotope, accepted that some jobs could be lost here – but only the most non-creative ones. 

“[A]s a mix engineer, if you make your living by loading up a session, getting the most generic mix in place and not really applying any of your own creativity and humanity to that and moving on to the next assignment, then I’m sorry, you are going to be replaced by technology,” he said, bluntly.  

Finally, AI can also be used as a teaching aid for musicians – helping them to learn and to extend their skills – with Yousician being a prime example.

The artist’s view: bringing AI to heel in a creative context 

French musician and producer Agoria (aka Sébastien Devaud) has worked with Bronze to unlock the creative potential of AI for his 2021 track ‘What If The Dead Dream’. The Bronze software enables users to create an infinite number of remixes – allowing one track to be twisted into endless new shapes automatically.  
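Bronze has not published its internals, so the following is only a sketch of how such a non-static track could work in principle: store the song as separate stems plus simple arrangement rules, and let each playback assemble a fresh combination. The stem names and the rule below are invented for illustration.

```python
import random

# One "song" stored as stems; every render pass draws a new arrangement.
STEMS = ["drums", "bass", "pads", "vocal", "texture"]

def render_pass(n_sections=4, seed=None):
    """Build one unique arrangement: per section, choose which stems play."""
    rng = random.Random(seed)
    arrangement = []
    for section in range(n_sections):
        active = [s for s in STEMS if rng.random() < 0.7]  # each stem ~70% likely
        if section == 0 and "vocal" in active:
            active.remove("vocal")  # example rule: hold the vocal back at first
        arrangement.append(sorted(active))
    return arrangement

# Two playbacks of the "same" track take two different shapes.
print(render_pass(seed=1))
print(render_pass(seed=2))
```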

“[W]ith machine learning and deep learning, you have a lot of iterations to define the boundaries for the AI or the machine learning process,” he said. “So for me, it was very important that the AI would not try to imitate what I could do, but would try to be herself. This way, you rediscover the song and […]  the way you could have done it.” 

This is a way to bring AI to heel in a creative world, rather than presuming that the AI dictates everything. Agoria also argues that music is evolving and exists in a constant state of flux. “[T]he lines between the finished version and the improved version are very wide,” he proposed. 

He is arguing here that when AI is applied to existing songs it is merely an extension of the European folk tradition where songs naturally evolved as they passed between travelling musicians – adapting to their specific time and location.  

Merging markets: how the human and the machine can grow together  

Ultimately, as with recommendations on streaming services that combine the algorithmic and the editorial, the best uses of AI in music are in the equal matching of the human and machine. 

The Conversation covered the moves to complete Beethoven’s 10th Symphony, noting how AI played a key role but the entire project was steered by input from humans who had carefully studied and understood his career and works. “This project would not have been possible without the expertise of human historians and musicians,” it said. “It took an immense amount of work – and, yes, creative thinking – to accomplish this goal.” 

This is something Drew Silverstein, co-founder of AI composition tool Amper as well as a film composer, has argued strongly. “One of our core beliefs as a company is that the future of music is going to be created in the collaboration between humans and AI,” he stated. “We want that collaborative experience to propel the creative process forward.”

It is that indefinable something extra that a human brings – something that cannot be programmed or predicted – that can elevate music from the functional to the exceptional. That does not mean that machines alone cannot produce interesting and functional work; it just means that without some human involvement steering them away from the obvious and into the serendipitous, they will not produce enduring work.

It is difficult to conclude without mentioning Kraftwerk. They were years ahead of their time, considering the societal and philosophical implications of what would happen if humans could become more machinelike. But from Emily Howell onwards, the direction of travel has been the other way, as the central thrust of AI is to make machines more humanlike.