1. Enhancing Creativity of Writers and Producers

There is a view that AI and Machine Learning can and will completely replace human writers and producers. It's an interesting debate, depending on which side of the fence you sit on.

The fundamental misunderstanding is the belief that AI is here to replace musicians. It is not.

AI can and has been producing songs for some time; the Illiac Suite, composed by computer in 1957, is generally credited as the first. The question is whether AI songs pass the quality test. A musical "Turing test" works like this: songs produced behind closed doors by both humans and AI are played to listeners, and for the AI to be successful, no-one should be able to predict who produced what.
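The blind-test idea can be sketched as a simple scoring exercise. The listener guesses and track labels below are entirely hypothetical; the point is just that accuracy near 50% (chance) means the AI has passed:

```python
from typing import List

def blind_test_accuracy(guesses: List[str], truth: List[str]) -> float:
    """Fraction of tracks whose origin ('human' or 'ai') listeners guessed correctly."""
    correct = sum(g == t for g, t in zip(guesses, truth))
    return correct / len(truth)

# Hypothetical panel results for six tracks: if accuracy hovers near 0.5,
# listeners are effectively guessing and the AI tracks have passed.
truth   = ["human", "ai", "ai", "human", "ai", "human"]
guesses = ["human", "human", "ai", "ai", "ai", "human"]
print(blind_test_accuracy(guesses, truth))  # 4 of 6 correct -> ~0.67
```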

While there is no doubt AI can produce music, the big limitation at present seems to be around lyrics and also larger structures such as repeated choruses. I have had the (dis)pleasure of listening to a range of AI tracks, and no lyrics have yet been able to pass my own personal Turing test.

This Nirvana AI track was generated by feeding the catalogue of existing Nirvana tracks into the algorithm. It sort of gets the sounds, but the lyrics are all over the shop and give the game away too easily.

Other reasonably successful AI platforms such as OpenAI's Jukebox are up front about the current limitations: "While Jukebox represents a step forward in musical quality, coherence, length of audio sample, and ability to condition on artist, genre, and lyrics, there is a significant gap between these generations and human-created music."

In other words, AI can provide a direction and a structure, but at present its major contribution seems to be providing writers and producers with different options and a creative path, not a finished top-10 hit.

Expect a continual stream of cool AI plug-ins for digital audio workstation (DAW) software that will enhance creativity but maybe not complete the whole song.

2. Allowing Consumers to Create Music

We are all familiar with clever apps that blend faces together, combine graphics, or predict how you might look in 20 years. These apps, put simply, take an input such as a photo, apply a variable setting, and output a calculated result.

Hey, I love blending my face with a celebrity to see what it could be.

So imagine an app that lets you select two artists, then set a ratio, such as 70% Beyoncé and 30% Drake. The app then produces a song with the catalogue characteristics of those two artists.

70% Beyoncé, 30% Drake, thank you very much

The artist characteristics are identified by an AI system trained on the artists' catalogues, which are essentially played to it.
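At its simplest, that 70/30 blend is a weighted average of whatever style characteristics the system has learned for each artist. This is only a toy sketch; real systems operate on learned representations far richer than this, and the feature names and values below are entirely made up for illustration:

```python
def blend_styles(style_a, style_b, ratio_a):
    """Linearly interpolate two style vectors; ratio_a is the weight on artist A."""
    return {k: ratio_a * style_a[k] + (1 - ratio_a) * style_b[k] for k in style_a}

# Invented "style vectors" standing in for characteristics an AI system
# might extract from each artist's catalogue.
beyonce = {"tempo": 100.0, "vocal_energy": 0.9, "synth_density": 0.4}
drake   = {"tempo": 80.0,  "vocal_energy": 0.5, "synth_density": 0.7}

blended = blend_styles(beyonce, drake, ratio_a=0.7)
print(blended)  # tempo 94.0, vocal_energy 0.78, synth_density ~0.49
```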

Some of this will be utter trash, but some output might be genius and make that TikTok content all the more compelling. And you won't need to be Calvin Harris to achieve this: these apps are coming to every music lover around the globe, but with a huge legal consequence (see Point 5 below).

3. Music and AI Inputs

The AI music app AiMi is an interesting place to start on this topic. The app produces dance music based on two inputs from the user — Energy Level and Like/Dislike. The interesting thing is that it is not a playlist but one continuous track.

The AiMi engine is built on a catalogue of samples, and the system blends the samples together in real time based on user input, in what is a very pleasant experience. There are no lyrics, which admittedly makes things a little easier.
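A minimal sketch of that kind of engine: pick the next sample whose tagged energy sits closest to the user's requested level, skipping anything the user has disliked. To be clear, this is not AiMi's actual algorithm — the sample names and energy tags are invented for illustration:

```python
# Invented catalogue: each sample carries an energy tag between 0 and 1.
samples = {
    "kick_loop_a": 0.8,
    "ambient_pad": 0.2,
    "bass_groove": 0.6,
    "hi_hat_run":  0.9,
}

def next_sample(energy: float, disliked: set) -> str:
    """Return the sample whose energy tag is nearest the requested level,
    ignoring anything the user has disliked."""
    candidates = {name: e for name, e in samples.items() if name not in disliked}
    return min(candidates, key=lambda name: abs(candidates[name] - energy))

# User asks for high energy but has disliked the hi-hat run:
print(next_sample(0.75, disliked={"hi_hat_run"}))  # -> kick_loop_a
```

A real engine would also handle tempo matching and crossfading so the result plays as one continuous track, but the selection loop above is the core of the "inputs steer the music" idea.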

The big point is that by taking consumer inputs, whether they be direct inputs, facial recognition, or other biometric data such as heart rate, AI is going to be able to produce a much more refined user experience.

Take for example Sony's recent AI patent to produce soundtracks based on a gamer's emotion, likely captured through biometric data in the PS5 controller. Or facial recognition software being able to accurately track the average age of supermarket customers, bar patrons, or elevator occupants, and possibly even track their mood.

PS5 Controller

That clever playlist curator just lost his job to something more accurate and real-time.

4. Industry Metadata

The music industry has been built on every song and album having a unique code: an ISRC for the song and a Catalogue Number for the album. More recently the UPC, or bar code, has become more important than the catalogue number, and we have also had derivatives of the ISRC to account for sections of tracks used in ringtones.

Moving forward, samples and smaller units of tracks will also need unique codes. The concept of a music generator holding a large catalogue of samples and selecting the most appropriate blend based on AI inputs (see Point 3) is a case in point.
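To make the idea concrete: give every sample its own ID, the way an ISRC identifies a track, record which samples a generated track used and for how long, and split that track's royalty pro rata. The ID scheme, names, and figures below are invented for illustration — no such sample-code standard exists yet:

```python
# Hypothetical registry mapping sample IDs to rights holders.
sample_rights = {
    "SMP-0001": "Producer A",
    "SMP-0002": "Producer B",
    "SMP-0003": "Producer A",
}

def split_royalty(usage_seconds: dict, royalty: float) -> dict:
    """Split a track's royalty across rights holders, weighted by how many
    seconds of each registered sample the generated track used."""
    total = sum(usage_seconds.values())
    payout = {}
    for sample_id, seconds in usage_seconds.items():
        holder = sample_rights[sample_id]
        payout[holder] = payout.get(holder, 0.0) + royalty * seconds / total
    return payout

# A generated track used three samples for different durations; split $100:
print(split_royalty({"SMP-0001": 60, "SMP-0002": 30, "SMP-0003": 30}, 100.0))
# -> {'Producer A': 75.0, 'Producer B': 25.0}
```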

How those royalties are received and distributed based on the available metadata will also present further challenges, but they are not insurmountable.

5. Challenging Copyright Laws

As more AI creations are released, there will be inevitable legal battles. Existing copyright laws weren't written with AI in mind and are extremely vague about whether the rights to an AI song would be owned by the programmer who created the AI system, the original musician whose works provided the training data, or maybe even the AI itself.

Some are worried that a musician would have no legal recourse against a company that trained an AI program to create soundalikes of them, without their permission.

Currently a song has to be sampled for infringement to occur. And who owns a song if it's created by a computer? A computer can't own a song under the laws as they stand.

Valerio Velardo recently put together a great summary:

  • Musicians shouldn’t be afraid of AI. Technology is always neutral. It’s how we use it that determines its ethical implications.
  • Musicians should always have the upper hand over AI. In order for this to happen, they have to be actively involved in the research and development of generative music systems.
  • Musicians should work side by side with AI music geeks. This will help steer the technology in the right direction. Being hostile towards generative music will be counter-productive; if anything, it will open up space for misuse of the technology. Without an open channel of discussion with music creators, engineers won't be able to figure out their needs and concerns.
  • Musicians should always be curious and get their hands dirty with generative music systems. They should play with systems like Magenta and be on the lookout for what's coming next.
  • Musicians should become fluent in AI. Everyone should have a basic understanding of what AI is and how it can be used in music. Even the most ‘hardcore’ musicians should push themselves to play around with code, and experiment with machine learning.
  • Copyright. The AI Music world will be fraught with legal issues. This is only just beginning.

The future of music is a fascinatingly bright one, where musicians and AI create together. The music creators who are open to AI will benefit most from the incoming revolution.

PS: If you are a bit nerdy and have 8 minutes to spare, the following video breaks down AI and Machine Learning quite well, and you will soon be using the term "Neural Networks" in your morning coffee break chatter.

Article written by Gavin Parry