
None of it matters anymore. None of it.

Some clues for identifying AI music, summarized via screenshots. #1, regarding the yellowish color of AI-created album covers, for example, may be true. I can agree with that.
The points are explained in the video, where some examples are mentioned:
 
If you like bluegrass, watch the Béla Fleck and friends video "Wheels Up" on Tidal:




Watch Wheels Up (feat. Sierra Hull & Molly Tuttle) [Live at The Farmette in Lyons, Colorado, July 21, 2021] on TIDAL (tidal.com)




Can AI ever generate that?
It doesn't matter that it can't replace human musicians and live performance. It's a tool. We should appreciate the near-human capabilities that it does offer. That doesn't mean we need to enjoy the no-effort AI junk generated by lazy people. Creative and interesting use of the tool is its own form of human skill and at this point we have few worthwhile examples of it.

At some point, the line of averageness and mediocrity that currently separates human-guided AI compositions from the now "old school" human ones will become even grayer and more indistinct.
 
Back during the dawn of digital a friend loaned me his Casio FZ-1 synth, one of the first sampling synths:


Screwing around with the controls, I managed to land on settings that produced the same music—pitches, rhythms, sonority, the whole ball of wax—as a commercial CD I incorporated into late-night new-age broadcasts. This would have been around 1990.
 
With so much music today generated by AI, does it really matter whether your DAC and amplifier have an SNR of >120 dB? On a whim, I went to Suno and typed in the keywords that describe my favorite music genre, and what it created was surprisingly good and took only a few seconds. I'd dare say it was downright excellent compared to what I could do with real instruments and my gruff voice. The music was clear, detailed, articulate and, believe it or not, enjoyable through my HD-650 headphones and DX5ii. If modern music is simply generated from hundreds of samples leeched off YouTube, Spotify, Apple Music, and Pandora, I argue this whole hi-fi thing no longer matters. None of it.


You're making a weird category error.

What you have 'demonstrated' not to matter anymore has nothing to do with hi-fi home audio. It's about tools used to compose and generate music, not reproduce the audio at home.

Electronic music -- as in, music generated directly as electrical impulses (not as acoustic waves recorded from air and transduced to electricity) -- or even music wholly generated by a machine (e.g., sequencers) -- has been around for decades. It hasn't driven anyone to abandon the pursuit of high-quality audio reproduction at home.

Bizarre take.

[The OP continued posting to ASR but never bothered to respond further in this thread. Make of that what you will]
 
Well, for human-produced music, an experienced listener will want to be as close as possible to the concert/music-hall experience that he may have had.
Regarding AI music, who cares: there is no live-music reference.
Earbuds will be perfect.
Music-listening quality is going to be a thing of the past.
 
For my part, if I can't go watch a musician competently perform their work live, I lose interest in them as artists pretty fast.

So I have zero interest in AI generated music.

There will always be a market for those who are able to perform and write music with their human talents.

What may happen, as in other industries, is that mediocre talents will get pushed out of profitability. The danger there is that many great artists started in mediocrity and grew out of it, and new generations may have less opportunity to do so.
 
Some clues for identifying AI music, summarized via screenshots. #1, regarding the yellowish color of AI-created album covers, for example, may be true. I can agree with that.

The points are explained in the video, where some examples are mentioned:
Why would you identify AI music at all? I'd rather worry about the few million AI agents that will flood the labour market at almost zero cost, with all the consequences that entails.
 
Well, for human-produced music, an experienced listener will want to be as close as possible to the concert/music-hall experience that he may have had.

Human-produced music sold commercially is most often recorded in smallish rooms/studios, and very often the "experience" was one of multiple recording/tracking sessions edited together, rather than a single "live" performance.

Regarding AI music, who cares: there is no live-music reference.
Earbuds will be perfect.
This makes no sense.

The reference for home/personal audio reproduction isn't 'live music', the reference is the audio recording itself.

To the extent that it is conveyed without added distortion by your system, it's "high fidelity".


Music-listening quality is going to be a thing of the past.

Again, this conflates (at least) two different things.

The recording itself can be of poor quality, e.g. early Velvet Underground, but the presentation of that audio recording by your gear at home can be faithful to it, or not.
 
With so much music today generated by AI, does it really matter whether your DAC and amplifier have an SNR of >120 dB? On a whim, I went to Suno and typed in the keywords that describe my favorite music genre, and what it created was surprisingly good and took only a few seconds. I'd dare say it was downright excellent compared to what I could do with real instruments and my gruff voice. The music was clear, detailed, articulate and, believe it or not, enjoyable through my HD-650 headphones and DX5ii. If modern music is simply generated from hundreds of samples leeched off YouTube, Spotify, Apple Music, and Pandora, I argue this whole hi-fi thing no longer matters. None of it. What are we trying to accurately reproduce, exactly? Heck, with modern DSP mediocre speakers can sound pretty darn good, so what exactly are we striving to achieve?
Honestly I don’t understand your point. Music can be made without people, AI can accurately mimic instrumental waveforms, so the quality of the equipment reproducing and transducing that waveform for you doesn’t matter? Could you clarify what your complaint is?

I’m glad you like your $400 headphones.

( I guess this is an old post, but it just popped up for me. Sorry if I’m making a point others already made).
 
No doubt Beethoven is sad about this.

Conlon Nancarrow would have a chuckle.
Did you read "Stranger in a Strange Land"? One of the Martian concepts in the novel is that the physical body can "discorporate" while the mental and spiritual bodies continue to grow and function. Beethoven was like that: he reached a point where he could no longer play the piano in public as he became stone deaf, no longer alive in the performing arts where he once excelled. It is as if, after op. 100, all his compositions are effectively posthumous.

Every performance of a Conlon Nancarrow work would be/is a definitive performance of that work. There are the piano rolls, there are the pianos set up for those rolls, and there are no other options. Just press "play".
 