
AI and Future of Music Production

amirm
Founder/Admin
In a recent thread there was a discussion around what technology would be like in 2074. I commented that by then, AI will be able to generate any music we like on demand. We could, for example, give it a seed song, then modify it with our mood, desires, and even fidelity level, and let the system synthesize it on demand. Just as AI has finally conquered human speech understanding and generation, I think the same can finally be done for music, and with very high fidelity. And with it, open up a level of creativity that doesn't exist today. How about John Lennon singing a Johnny Cash song? Or Elvis singing modern music? Or, less ambitiously, creating new songs from existing artists/music that would sound just as if they had created them?

Artists can still have a significant role here by creating the key seeds used to generate music and distributing those. And we could modify them yet again, such as slowing down a track, making it sound happier, converting it to dance music, or vice versa.
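As a very rough sketch of the simplest of these manipulations, slowing a track down can already be done locally today. The snippet below assumes the librosa and soundfile Python libraries and a placeholder file name; the mood and style changes would of course need an actual generative model, which this doesn't attempt.

```python
# Minimal sketch of the simplest "modify a seed track" idea: time-stretching.
# Assumes librosa and soundfile are installed; "seed_song.wav" is a placeholder file name.
import librosa
import soundfile as sf

y, sr = librosa.load("seed_song.wav", sr=None)        # load the seed track at its native sample rate
slower = librosa.effects.time_stretch(y, rate=0.85)   # rate < 1.0 slows the track down
sf.write("seed_song_slower.wav", slower, sr)          # write the modified version locally
```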

This "local" level of music creation could also bypass some of the issues today around copyrights in that music is not distributed. You make it locally and on-demand.

We are at the very early stages of this, but the potential has already been shown, and what it can do is stunning at times. Here is a producer creating a Dua Lipa track, for example:


Of course, the world will remain free to make music as they do now, but I think 40 years from now, that will seem antiquated to many.

What do you think?
 
If we can extrapolate the current skills and limitations of LLMs into the future...I think AI will demolish the commercial / popular music industry. Taylor Swift and Dua Lipa bangers will be popping up like mushrooms after rain, fueled by endless marketing and psychological research. The masses will get exactly what they don't even realize they want.

And on the flip side, in dirty clubs, dark garages, and dimly lit practice rooms, poor but incredibly angry musicians will create sounds that are completely original and unique, and which will push the medium forward. A lot of it will be AI-assisted, as AI can be a powerful weapon in crafty, creative hands. Maniacs like Tatsuya Yoshida, magicians like Thom Yorke or Brian Eno: their ability to stir up striking new emotions is something AI will never take away.

At least that's what I'm hoping for!
 
In a recent thread there was a discussion around what technology would be like in 2074. I commented that by then, AI will be able to generate any music we like on demand. [...]

What do you think?
I agree.

As a musician (pianist, singer, and woodwind player), I find it sad / disheartening, but I believe we'll be hearing full symphonies performed 100% in the digital realm (i.e., NO players, just sampled sounds indistinguishable from "the real thing") a LOT quicker than "2074". Same for solo instruments / soloists - a "perfect" violinist playing a "perfect" Guarneri or Stradivarius with a "perfect orchestra" that puts the Royal Concertgebouw to shame, etc., is not far away.
 
but I believe we'll be hearing full symphonies performed 100% in the digital realm
Well, for people living in the digital realm that might be so. But I doubt we will hear a computer/robot orchestra play a concert on a regular basis anytime soon. This is all about producing media in the mainstream.
 
50 years from now, the world (if it still exists at all) will be unrecognisable due to the impact of AI.

I don't think we can even guess what that will look like - just as it was impossible to predict what impact the internet would have even just 30 years ago.

AI art is just a small part of it.
 
Well, I might be a bit off topic, but AI gets a bad rap, and its potential is fantastic for medicine, conservation, traffic, finding criminals and cheats, and all sorts of things. So why not music?
Anyway, because of a fun experience I had with my son this holiday, and since this is the latest thread on AI, I'll share it. Thanks to @krabapple and his apt description, we watched the movie "Upgrade", where AI is presented with some subtle humor in a totally positive manner. I believe it was my first experience of that positivity. In other movies, from "2001: A Space Odyssey" on, AI is always there to create harm. A good ride not to be missed, even by Debbie Downer types.

Back to AI and music. Sorry to interrupt.
 
I do like this AI voice morphing used to create a new 'McCartney' song from Beatles session fragments of ideas. The first half describes the process, the second half is the song. Some lines sound more like him than others, but getting closer...
 
I'm pretty sure that people will still pay to see real humans play instruments, just as with photography. The process of creating art is part of the emotional experience for both the creator and the receiver. The challenge might well be differentiating the real artist from the fake one, of course, but we're already developing strategies around data certification. No doubt we will be able to create music entirely in the digital domain and manipulate it to our hearts' content, but I can't see that fully replacing musicians playing pianos, guitars, etc. I'm pretty sure I could generate a digital guitar sound and create a track out of it, but where is the fun in that?
More likely, progress in healthcare means that Keith Richards will still be touring in 2074!
 
much of modern pop music is so formulaic that it might as well be from AI
 
F*** AI. It has no place in any of the arts. We have brains, so we should use them by performing music. Our point in living is largely putting effort into things to get better at them; expressing our human experience and limitations; the journey. Some people are only interested in the arts; should they just become homeless? No.
 
Well, I might be a bit off topic, but AI gets a bad rap, and its potential is fantastic for medicine, conservation, traffic, finding criminals and cheats, and all sorts of things. So why not music? [...]
It's already being put to use in multiple applications, as you mention: healthcare, manufacturing, defence, and security, et al. How it is used in arts and creative applications will be interesting. Digital certification is already a thing to preserve IP and retain provenance, but that's protecting, not creating.

Thinking about the source of this topic, which gatecrashed the Susvara review: it's technically possible for, say, Roon as a user interface to collect anonymised EQ data that could reasonably identify the device model, since most people are likely to use a similar naming strategy; that data could then drive future product development. Some big assumptions there, but that's what AI can do in a manufacturing process.
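As a toy sketch of that idea (purely illustrative: Roon exposes no such API here, and the preset names and model list below are made up), matching anonymised EQ preset names against known headphone models might look something like this:

```python
# Hypothetical example: inferring a headphone model from user EQ preset names.
# The preset strings and model list are invented for illustration only.
from collections import Counter
import re

presets = ["Susvara EQ v2", "HIFIMAN Susvara Harman", "hd650 warm", "HD 650 oratory", "Susvara bass shelf"]

def normalize(name: str) -> str:
    """Crude normalization: lowercase and strip everything except letters and digits."""
    return re.sub(r"[^a-z0-9]", "", name.lower())

# Count how often known model tokens appear in the normalized preset names.
models = {"susvara": "HIFIMAN Susvara", "hd650": "Sennheiser HD 650"}
counts = Counter()
for preset in presets:
    normalized = normalize(preset)
    for token, model in models.items():
        if token in normalized:
            counts[model] += 1

print(counts.most_common())  # [('HIFIMAN Susvara', 3), ('Sennheiser HD 650', 2)]
```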
 
I've given up on the Billboard 100 since everyone is copying and sampling everyone else. Most female singers run the same filters, and I can't even tell them apart anymore. The tools will get better, but human creativity and the social aspects of concerts will still be big.
I listen to Peter Gabriel, Steve Winwood, Amy Lee from Evanescence, or bands like Kansas or Supertramp. Writing music from life experience and with ability will still beat an AI tool creating a derivative work. I want music from real people and instruments. Even movies with CGI are just too fake.
It may open doors to people who have natural talent but can't afford studio time. Production time will be shortened. As with comedians, I don't think talented musicians or actors will lose their jobs even when AI matures.
 
I think AI's involvement in music will be well beyond mimicking artists and creating songs in 50 years, and will be focused on integrating advanced tech with neuroscience.

Quite soon, AI will decode the intricate neural signatures linked to different genres, moods, and musical elements, mapping the brain's response to various musical stimuli and developing algorithms capable of replicating these patterns. That would allow the transmission of musical signals directly into the neural circuits responsible for auditory perception and emotional response. By interpreting an individual's neural responses, the system could dynamically adjust the musical content to optimize enjoyment, ensuring a unique and fulfilling auditory experience for each person.

In this scenario, traditional audio devices such as speakers, amps, headphones, and DRC will all become obsolete. Musicians of the future might instead delve into the realms of neuro-aesthetics, crafting compositions specifically designed to elicit unique neural responses and emotions, pushing the boundaries of auditory experience.
 
As someone more involved in AI/machine learning in general, I think AI for music is going to be a big nothing burger at least in the foreseeable future.

Consider this:
  • An AI model used to create new music from a prompt is only as good as the prompt's content. This implies creativity is needed to get a creative output, which doesn't change much with respect to whether new tools increase or decrease creativity.
  • Models are only ever as good as their training data. If you try to infer outside of their training set, it doesn't work (a toy example follows this post). The reason we don't notice this with our own brains is that we have thousands to millions of years of evolutionary "training" along with several decades of multimodal information collection.
  • Training new models requires inordinate amounts of computing power, and until some big advance comes along (analog photonics processing!) and makes this cheaper and faster, model training is super expensive and time-consuming to get a quality result.
  • AIs don't actually "know" anything about music, specifically music theory rules. The same is true of AIs solving math problems. There is ongoing research on how to add concrete rules and proofs to models, but it's not clear whether this is realistic or possible yet.
  • If you're concerned about AIs replacing musicians, then just get into live music, because AIs will never be able to replace that, by definition.
Basically, until something big happens, AI's impact is similar to the change brought by the ability to record individual tracks instead of a band playing live.
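As a toy illustration of the training-data bullet above (an assumed, simplified example, not tied to any music model): a model fit on one input range can look excellent in-range and still fall apart the moment you ask it to extrapolate.

```python
# Toy illustration of "only as good as the training data":
# a polynomial fit on x in [0, 1] extrapolates poorly outside that range.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 50)
y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.standard_normal(50)  # noisy sine

# Fit a high-degree polynomial: good in-range, nonsense out of range.
coeffs = np.polyfit(x_train, y_train, deg=9)
model = np.poly1d(coeffs)

print("inside training range,  x=0.5:", model(0.5))   # close to sin(pi) = 0
print("outside training range, x=2.0:", model(2.0))   # typically far from the true value of 0
```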
 
An AI model used to create new music from a prompt is only as good as the prompt's content.
I expect the seed and training to be my existing library of content. Pretty sure synthesis based on that dataset is in our future.

As for computing power, we are talking about decades from now.
 
I created a thread about this new AI generation tool; it's well worth registering and trying a few examples if you haven't seen that thread:

 
I agree.

As a musician (pianist, singer, and woodwind player), I find it sad / disheartening, but I believe we'll be hearing full symphonies performed 100% in the digital realm (i.e., NO players, just sampled sounds indistinguishable from "the real thing") a LOT quicker than "2074". [...]
Here's another musician: Rick Beato reviewing various AI-created songs, or rather song styles. His reviews start at 6:00 into the video. I actually guessed correctly which AI style he would like best. It was ambient music. :)

 