
None of it matters anymore. None of it.

dalbert02
So much music today is generated by AI, so does it really matter if your DAC and amplifier have an SNR of >120dB? On a whim, I went to Suno and typed in the keywords that describe my favorite music genre, and what it created was surprisingly good and only took a few seconds. I'd dare say it was downright excellent compared to what I could do with real instruments and my gruff voice. The music was clear, detailed, articulate and, believe it or not, enjoyable through my HD-650 headphones and DX5ii. If modern music is simply generated from hundreds of samples leeched off YouTube, Spotify, Apple Music, and Pandora, I argue this whole hi-fi thing no longer matters. None of it. What are we trying to accurately reproduce, exactly? Heck, with modern DSP, mediocre speakers can sound pretty darn good, so what exactly are we striving to achieve?
 
This is precisely the view of the vast majority of people these days. They listen to 'music' made by a producer with a computer, not by musicians playing actual instruments, on a crappy, tinny mobile phone speaker and seem perfectly content with it.
 
It is actually the opposite. Synthesized music is free of the ambient noise and distortion captured by a microphone. As such, its fidelity can be much higher than that of naturally recorded music, increasing the demands of reproducing it accurately.
 
It is actually the opposite. Synthesized music is free of the ambient noise and distortion captured by a microphone. As such, its fidelity can be much higher than that of naturally recorded music, increasing the demands of reproducing it accurately.
Agreed. The way in which a waveform was generated should not lessen our desire to reproduce it perfectly.

Unless we all give up listening to sounds because we've decided that we don't like those available.
 
Having said that, though :rolleyes:
In the Cowboy Junkies' famous Trinity Sessions recording... Margo Timmins was, in fact, not recorded live... well... not quite.
She was singing live, but remotely, and her presence on the single (Calrec Soundfield) mic recording that we all know and love was represented by... a Klipsch Heresy (well... the pro variant thereof). You can see "her" (it) to the right of multi-instrumentalist Jeff Bird's head in the photo below.
source: https://www.soundonsound.com/techniques/classic-tracks-cowboy-junkies-sweet-jane

[photos: the Trinity Sessions recording setup, via Sound on Sound]
 
But... fidelity to what?
If a guitarist plays a guitar, and their fingers squeak on the strings... that is, for better or worse, part of the performance. The human part.
Quiet background. Hearing every note as it decays into nothing.
 
But if the actual performance decayed into the ambient noise of the venue... how is decaying to nothing high in fidelity to the actual performance?
;)

Here's a fun thing to do sometime -- I cannot point you all to a high-res copy of this, but I am sure you all can find one. Bruce Cockburn did a wonderful album of Christmas music quite a few years ago. The last track on it is a fairly spare, close-mic'd recording of Joy to the World in Cockburn's inimitable style, with some sort of gentle percussion ("jingle bells"?). The recording is remarkable in that the guitar is allowed to fade, by the sound of it, naturally to background. This results in a delightful, lingering audio capture of the guitar body resonances as the sound fades away.

Here's the YT version just to - perhaps?!? - whet your appetite (I don't use streaming services -- I have this on CD, but that doesn't help all y'all very much!).
If you don't like Cockburn, or Christmas music for that matter... it's quite short! :) The whole track is 47 seconds long, but the decay is a good 18 seconds of it.
 
So much music today is generated by AI...
Why lay the blame on AI?
Go back a bit more and perhaps lay the blame on an IC, or a transistor or a tube.
Maybe going even further back; you can lay the blame on the phonograph.

If the 'music' claims your soul or makes your foot tap, or you get an ear-worm; does it really matter whether the source is live, or from a studio, or using an nVidia chip?
Until reproduced sound is no longer distinguishable from natural sound, there is work to do. And there is always new music to find and love.
Short of giving up on 'music' altogether, you can always stick to the 'music' you know you've enjoyed! :)
 
Having said that, though :rolleyes:
In the Cowboy Junkies' famous Trinity Sessions recording... Margo Timmins was, in fact, not recorded live... well... not quite.
She was singing live, but remotely, and her presence on the single (Calrec Soundfield) mic recording that we all know and love was represented by... a Klipsch Heresy (well... the pro variant thereof). You can see "her" (it) to the right of multi-instrumentalist Jeff Bird's head in the photo below.
source: https://www.soundonsound.com/techniques/classic-tracks-cowboy-junkies-sweet-jane

[photo attachments]
One of my favorite albums, and I'll never be able to listen to it again without remembering that image. Thanks a lot! Not!
 
But if the actual performance decayed into the ambient noise of the venue... how is decaying to nothing high in fidelity to the actual performance?
This is relevant to the recording process, not the hi-fi reproduction process.
 
It is actually the opposite. Synthesizes music is free of ambient noise or distortion captured by a microphone. As such, its fidelity can be much higher than naturally recorded music, increasing demands to reproduce it more accurately.
Even if the music is digitally created and your room is DSP-corrected, your ears remain analog, and what you hear is ultimately a subjective experience.
 
Music makes me feel something. Might be profound, may be shallow. I appreciate quality; of musicianship, songwriting, ideas or production but a glib pop song on the car radio can have me grinning and singing.
Some deeply moving music is technically awful (Daniel Johnston!)

Can AI-generated music make me feel? It probably can, and perhaps that will start to happen more often. I suspect the feelings will lean more towards shallow, simple enjoyment, but I could be wrong. Perhaps I won't know it's AI generated, in which case does it matter?

There's pleasure and extra depth from knowing an artist: the backstory, seeing them live ... I can't see AI adding much here.

Personally, I prefer my music to be made by people who care about what they are doing (and who get paid)
 
So much music today is generated by AI, does it really matter if your DAC and amplifier have an SNR of >120dB?
However they (or it) made the music you're listening to, there's no clear argument for adding distortion on playback.

SINAD of 120dB+ is unnecessary even if you're listening to the best recordings ever made, but IMO you should still have good playback gear even for lo-fi recordings.

For example, the grunge movement was often lo-fi and you could probably correctly argue many good albums were made on noticeably worse equipment than we have on our desks today. You could have made this same post about lo-fi rock music 30 years ago and it would have made just as much (or as little) sense.
 
So much music today is generated by AI ...

Hmmm, I listen to a lot of recent music and I can think of a scant few of those using AI tools (and this as deliberate experimentation using professional music tools, not slop-generators).

... does it really matter if your DAC and amplifier have an SNR of >120dB? On a whim, I went to Suno and typed in the keywords that describe my favorite music genre, and what it created was surprisingly good and only took a few seconds. I'd dare say it was downright excellent compared to what I could do with real instruments and my gruff voice. The music was clear, detailed, articulate and, believe it or not, enjoyable through my HD-650 headphones and DX5ii. If modern music is simply generated from hundreds of samples leeched off YouTube, Spotify, Apple Music, and Pandora, I argue this whole hi-fi thing no longer matters. None of it. What are we trying to accurately reproduce, exactly? Heck, with modern DSP, mediocre speakers can sound pretty darn good, so what exactly are we striving to achieve?

And this proposition gets sillier: samples have been used for decades. Nothing to do with recent 'AI' trends. Neither samples nor electronic sources preclude dynamics, clear sonics, a low noise floor and so on. Per your "surprisingly good" and "downright excellent", slop can be technically state-of-the-art, can't it? Not to mention that these recognisable simulacra are par for the course now. That doesn't stop it being hackneyed shite -- take a look at 'Sofia Kroun' per Suno, with its London Calling typography and jarringly smooth-skinned 'singer', then listen to the bland beat, tepid riffage and dim-witted lyrics.

Now if your taste does lean to clichéd slop like the samples offered on Suno, that's a matter of taste, not reproduction quality. Contradicting yourself, you actually listened on hi-fi headphones. Not that there's anything wrong with deliberate lo-fi, but that's a different discussion.
 
So much music today is generated by AI, so does it really matter if your DAC and amplifier have an SNR of >120dB? On a whim, I went to Suno and typed in the keywords that describe my favorite music genre, and what it created was surprisingly good and only took a few seconds. I'd dare say it was downright excellent compared to what I could do with real instruments and my gruff voice. The music was clear, detailed, articulate and, believe it or not, enjoyable through my HD-650 headphones and DX5ii. If modern music is simply generated from hundreds of samples leeched off YouTube, Spotify, Apple Music, and Pandora, I argue this whole hi-fi thing no longer matters. None of it. What are we trying to accurately reproduce, exactly? Heck, with modern DSP, mediocre speakers can sound pretty darn good, so what exactly are we striving to achieve?
How the music was created is orthogonal to the need to reproduce the music accurately.


Unless you think AI generated music doesn't sound worse with stacks of noise and distortion.


Personally, I'll never knowingly own AI-generated music. I have no objection to musicians using AI tools, though -- to help them in their creative process -- as long as their vision for that music comes from them.

Any more than I object to synths, creative use of auto-tune, effects boxes, drum machines -- or any other music technology.
 