
Proper Definition of High-Resolution Music

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
For the 5% with the 5% recordings what is the perceptual quality difference? If rating on a scale of 0-100% with higher being max fidelity how much is lost without that last 5%? Does it drop you to 95% of what is possible? Does it have an overly large effect and cause your perception of the recording quality to drop to 80%?

I wish I had a reference to a published, peer-reviewed answer to that. Anecdotally, for me personally, the perceived three-way difference in distortion levels (live vs CD vs Hi-Res) varies a lot with the genre of the music: from noticeable on European symphonies to unbearable on gamelan (in the case of gamelan, hypothesizing that a sufficiently convincing Hi-Res recording is technically achievable). For, e.g., a-girl-and-a-guitar, and virtually all pop music, there is no difference between CD and Hi-Res that I can discern.
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
You are probably aware of a rather long thread discussing, among other things, the technical aspects of hi-res: https://www.audiosciencereview.com/...qa-creator-bob-stuart-answers-questions.7623/.

I narrowed down that thread to another one, and then this newer thread to a single post, which I believe captured the core issue:
https://www.audiosciencereview.com/...higher-sampling-rates.7939/page-2#post-194272

IMHO, the core issue is that, formally speaking, a limited-duration, non-perfectly-periodic piece of music can't be Fourier-transformed to a finite spectrum representation, and thus the Nyquist–Shannon sampling theorem, which relies upon the Fourier transform, is, strictly speaking, not applicable to music.

Thus, we must assume that a sampled digital representation of real-life music contains distortions, as compared to the original analog sound. Mathematically, increasing the sample rate and number of bits per sample makes this distortion lower.
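The bit-depth half of that claim is easy to check numerically. Here's a quick numpy sketch (a uniform quantizer applied to a full-scale test sine; this is purely illustrative, not any standard's reference implementation):

```python
import numpy as np

def quantization_snr_db(bits, n=1 << 16):
    """Quantize a full-scale sine to `bits` bits and measure the resulting SNR."""
    t = np.arange(n)
    x = np.sin(2 * np.pi * 997.0 * t / n)        # test tone
    levels = 2.0 ** (bits - 1)
    xq = np.round(x * levels) / levels            # uniform mid-tread quantizer
    noise = xq - x
    return 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))

snr16 = quantization_snr_db(16)   # ~98 dB, the familiar 16-bit figure
snr24 = quantization_snr_db(24)   # ~146 dB: each extra bit buys ~6.02 dB
```

Note this only shows the word-length side of the claim; what higher sample rates buy is discussed further down the thread.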

Most of the time listeners can't perceive the difference in distortion between 44/16 and, say, 192/24. Sometimes they can: see A Meta-Analysis of High Resolution Audio Perceptual Evaluation.

The 5% figure keeps coming up. Like: the difference can only be perceived by 5% of listeners, on 5% of music. A naive approach is to multiply these probabilities: the thinking goes that on average ~0.25% of listening sessions would be affected by the regular vs high fidelity differences.

But, and there is a big but! For a particular listener in the 5%, whose favorite music genres happen to fall in the 5% too, the number of affected listening sessions can be much higher, closer to 100% actually.

And vice versa, for a listener in the 95%, or for a listener whose favorite music genres happen to be in the 95%, the advantages of hi-res are immaterial. For them, the promise of hi-res is 100% snake oil.

While true in theory, it would be nice if you could post examples/parts of songs for which 44/16 did not do the job correctly, meaning there was some audible distortion related to the digitization process.
 

Soniclife

Major Contributor
Forum Donor
Joined
Apr 13, 2017
Messages
4,500
Likes
5,417
Location
UK
While true in theory, it would be nice if you could post examples/parts of songs for which 44/16 did not do the job correctly, meaning there was some audible distortion related to the digitization process.
Along with blind test logs showing the degradation is audible.
 

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,981
Likes
4,838
Location
Sin City, NV
But, and there is a big but! For a particular listener in the 5%, whose favorite music genres happen to fall in the 5% too, the number of affected listening sessions can be much higher, closer to 100% actually.

And vice versa, for a listener in the 95%, or for a listener whose favorite music genres happen to be in the 95%, the advantages of hi-res are immaterial. For them, the promise of hi-res is 100% snake oil.
I think the real question would be: is the 5% of the music those 5% prefer even mastered to a level where it's possible to realize the difference? Obviously tastes are inherently irrational and subjective, but while recent recordings are almost always better - I rarely find the musical content to be... in fact, it's probably right around that 5% mark. ;)

As seen in the demographics of many large ABX tests... the majority of people obsessed with getting that last 1% out of their audio chain - are somewhat affluent, middle aged (or older), men... who statistically speaking, can't even hear well enough to make 16/44 a necessity. I'd venture to guess that many of the current consumers of 24/192 media also have a large vinyl collection that they prefer to listen to through tube amps on exotic (but inaccurate in many cases) loudspeakers. They certainly aren't employing DSP in their signal chain for that... so likely the room is horrible as well.

In the interest of full disclosure, I have a 5W pentode amp and some whizzer cone speakers which I occasionally dig out to listen on. I still prefer the sound of digital with room correction through accurate studio monitors 99% of the time... but for that 1%, nostalgia rules the day. In fact, the majority of hi-res content I have is actually sourced from vinyl... go figure.
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
I think the real question would be: is the 5% of the music those 5% prefer even mastered to a level where it's possible to realize the difference?

No, let's leave mastering out of this. Let's have any example from real-world music, whatever genre, whatever mastering technique used, that shows a 44.1/16 recording with more than, say, 0.2% distortion when compared to the analogue original.
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,636
Likes
7,497
To the OP, I would argue against the definition of high-resolution music being expanded to the recording equipment or quality - not because I disagree with the OP's overall view, but rather because I agree with the OP.

To me, "high-resolution" applied to music is a digital concept. It clearly refers to increased resolution aka precision of sampling, via higher sample rates and longer digital words (aka higher bit depth).

The problem with this definition is that it is based on the fallacious "stair-step" concept of digital sampling, which mistakenly thinks (or acts as if it thinks) that sampling any given frequency more than twice will somehow make the sample more "accurate" and therefore increase the "sharpness" or "musical resolution."

This is a false premise, and so expanding that false-premise definition to take into account the frequency response capabilities of microphones and other recording equipment seems like the wrong way to remedy the problem with our concept of "high-res."

Which brings me to Sergei's point, below, which I think exemplifies the flawed assumptions that drive the high-res discussion. Nyquist-Shannon sampling theory remains correct - you need only sample at (just over) twice a frequency in order to properly encode it digitally and decode it back to analogue. Increasing the sample rate does not reduce distortion.

A finite sample will indeed always produce quantization error, and yes, any "error" in recording/encoding a musical signal can technically be considered to be distortion. But digital quantization error comes from bit depth, not sample rate, and it manifests as noise, not distortion - and crucially, that noise is at an extremely low level; it does not rise to the level of obvious audibility the way that, say, an amplifier's or speaker's distortion might do at high volume or at certain frequencies, or the way a stylus and cartridge might transmit sibilant distortion when tracking certain records, or the way a recording might have distortion from microphone overload.

With dither, the quantization error noise of a digital recording is easily randomized sufficiently that its perceived audibility is even lower; and with noise-shaped dither, it's transformed into something even less perceivable.
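A quick numpy sketch of that effect (illustrative only; amplitudes are expressed in LSB units, and the tone is deliberately only a few LSBs tall so the quantization error is easy to see):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 16
t = np.arange(n)
# A very quiet tone, only a few LSBs in amplitude
x = 4.5 * np.sin(2 * np.pi * 1000.3 * t / n)

plain = np.round(x)                                   # quantize straight to the grid
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)
dithered = np.round(x + tpdf)                         # add TPDF dither, then quantize

err_plain = plain - x
err_dith = dithered - x

spec_plain = np.abs(np.fft.rfft(err_plain))
spec_dith = np.abs(np.fft.rfft(err_dith))
```

The undithered error spectrum piles up in harmonic spikes correlated with the tone; the TPDF-dithered error is spread into a flat, signal-independent noise floor - a slightly higher total noise power, traded for no correlated distortion, which is exactly the trade being described.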

If this were not true, then SACD/DSD would be impossible: DSD has a native bit-depth of 1, and a native noise floor of only -6dB! With noise-shaping dither, though, the original analogue signal can be accurately reconstructed from the 1-bit digital samples, so much so that DSD is considered not only high fidelity but also high-resolution.
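A first-order delta-sigma modulator shows the principle in a few lines (real DSD uses higher-order modulators and far better reconstruction filtering; this is only a toy sketch under those simplifying assumptions):

```python
import numpy as np
from scipy.signal import firwin, lfilter

osr = 64                                  # oversampling ratio, DSD64-style
fs = 44100 * osr
n = fs // 10                              # 100 ms of signal
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)    # 1 kHz test tone

# First-order delta-sigma modulator: 1-bit output, error fed back
state = 0.0
stream = np.empty(n)
for i in range(n):
    stream[i] = 1.0 if state >= 0 else -1.0
    state += x[i] - stream[i]

# Low-pass the +/-1 stream back down to the audio band
taps = firwin(1001, 20000, fs=fs)
recovered = lfilter(taps, 1.0, stream)
```

Even this crude 1-bit stream, low-passed back to the audio band, tracks the input tone closely: the quantization noise has been pushed up out of band, which is the whole trick.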

When it comes to sample rate, any sample rate more than twice that of the highest frequency that needs to be recorded is unnecessary and does not add anything to the recording or to the resulting playback of the decoded analogue signal.

As a practical matter, of course we want a sample rate that is somewhat more than the highest desired recording frequency, so there is headroom to implement digital filtering to prevent aliasing from frequencies in the original source that are higher than the sample rate can capture - this is why the CD standard has a sample rate higher than 40kHz (2x20kHz). But while a higher sample rate provides more room for the antialiasing filter to work, the higher sample rate does not capture the audible-range frequencies any better than the lower sample rate does: 44.1kHz samples frequencies up to 20kHz exactly as accurately as 192kHz does. This is not my listening opinion - this is what digital sampling theory predicts, and that theory is mathematically and empirically verified.
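That prediction is directly testable: sample a tone well below 20 kHz at 44.1 kHz, upsample by the exact rational ratio to 192 kHz, and compare against sampling at 192 kHz in the first place (a scipy sketch; the 10 kHz tone and one-second duration are arbitrary choices):

```python
import numpy as np
from scipy.signal import resample_poly

f0 = 10000.0                                    # a 10 kHz tone, well below 20 kHz
t44 = np.arange(44100) / 44100
t192 = np.arange(192000) / 192000

x44 = np.sin(2 * np.pi * f0 * t44)
x192 = np.sin(2 * np.pi * f0 * t192)

# 44100 * 640 / 147 == 192000, so this is an exact rational resample
up = resample_poly(x44, 640, 147)

# Compare away from the edges, where the FIR filter has settled
mid = slice(10000, -10000)
err = np.max(np.abs(up[mid] - x192[mid]))
```

The residual is down at the resampling filter's ripple level - the 44.1 kHz samples already contained everything needed to rebuild the tone.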

So the definition of high-resolution audio is not so much flawed in and of itself - rather, it's a pretty accurate definition. It's just that what it defines is not especially meaningful when it comes to determining audio quality.


You are probably aware of a rather long thread discussing, among other things, the technical aspects of hi-res: https://www.audiosciencereview.com/...qa-creator-bob-stuart-answers-questions.7623/.

I narrowed down that thread to another one, and then this newer thread to a single post, which I believe captured the core issue:
https://www.audiosciencereview.com/...higher-sampling-rates.7939/page-2#post-194272

IMHO, the core issue is that, formally speaking, a limited-duration, non-perfectly-periodic piece of music can't be Fourier-transformed to a finite spectrum representation, and thus the Nyquist–Shannon sampling theorem, which relies upon the Fourier transform, is, strictly speaking, not applicable to music.

Thus, we must assume that a sampled digital representation of real-life music contains distortions, as compared to the original analog sound. Mathematically, increasing the sample rate and number of bits per sample makes this distortion lower.

Most of the time listeners can't perceive the difference in distortion between 44/16 and, say, 192/24. Sometimes they can: see A Meta-Analysis of High Resolution Audio Perceptual Evaluation.

The 5% figure keeps coming up. Like: the difference can only be perceived by 5% of listeners, on 5% of music. A naive approach is to multiply these probabilities: the thinking goes that on average ~0.25% of listening sessions would be affected by the regular vs high fidelity differences.

But, and there is a big but! For a particular listener in the 5%, whose favorite music genres happen to fall in the 5% too, the number of affected listening sessions can be much higher, closer to 100% actually.

And vice versa, for a listener in the 95%, or for a listener whose favorite music genres happen to be in the 95%, the advantages of hi-res are immaterial. For them, the promise of hi-res is 100% snake oil.
 

daftcombo

Major Contributor
Forum Donor
Joined
Feb 5, 2019
Messages
3,687
Likes
4,068
Specification for High resolution audio:
It should create orgasmic delight in listeners.

Okay, spec chosen. Job done. ISO9000 certification upcoming no doubt.

I would advise any audiophile-wannabe to go and listen to a very good hi-fi system once. I mean, not with a totaldac and strange speaker design. But something flat with good dispersion & transparent electronics.

Once you realize that it sounds good, that you have a lot of details, instrument separation, that you can hear dynamic changes, etc. but you got no orgasmic delight, you will be less ready to upgrade and spend $$$. Even if having a good stereo is still super-nice.
 

Eirikur

Senior Member
Joined
Jul 9, 2019
Messages
318
Likes
509
When it comes to sample rate, any sample rate more than twice that of the highest frequency that needs to be recorded is unnecessary and does not add anything to the recording or to the resulting playback of the decoded analogue signal.
Worse even, the "inaudible" content may actually interfere directly with the reproduction of the audible range!
I found a nice example describing 2.2kHz + 16kHz tones interfering in a tweeter to produce a significant 11.6kHz intermodulation peak. When that 16kHz tone is near or above your hearing threshold, you are left with significant audible distortion that doesn't get masked in any way by the main tone.
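That 2.2 kHz + 16 kHz example is easy to reproduce with a toy nonlinearity (a numpy sketch; the cubic term and its 0.1 coefficient are made-up stand-ins for a misbehaving tweeter, not a measurement of any real driver):

```python
import numpy as np

fs = 96000
n = fs                                    # 1 s of signal -> 1 Hz FFT bins
t = np.arange(n) / fs
f1, f2 = 2200.0, 16000.0
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# A mildly nonlinear "tweeter": add a small cubic term
y = x + 0.1 * x ** 3

spec = np.abs(np.fft.rfft(y)) / (n / 2)   # normalized so a unit sine reads 1.0
# Cubic distortion creates, among other products, one at f2 - 2*f1 = 11600 Hz
imd = spec[11600]
```

The cubic term puts a product at f2 − 2·f1 = 11.6 kHz, only ~22 dB below the main tones, far from either tone and unmasked, just as described.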

Depending on the particular units in use, this will undoubtedly also happen with "pairs" of sonic and ultrasonic tones, e.g. the accidentally captured 28kHz ultrasonic of a roadie cleaning his glasses :)
Just look at the YouTube videos Amir made analyzing hi-res content and you'll know it's not hypothetical.
 

CuteStudio

Active Member
Joined
Nov 23, 2017
Messages
119
Likes
65
For me Hi-res is really just lossless.

By lossless I mean without information missing. Technically 44/16 doesn't have to have missing information (sort of!); it's just a form of data compression we inherited from the fact that the CD was a bit too small when Philips brought it out - already at that time 48/24 was the standard. It's a type of sparse sampling, and most decent filters can put it all together again with reasonable accuracy.

There are two types of lossy storage methods for me: one is stuff like MP3s, where the audio data is grouped into sets to save space; the other is simpler - all the information above a certain level is missing.
Even some hi-res stuff at 192/24 is still lossy due to the missing data beyond this level; in analog terms it's known as 'clipping', and it is the majority case in digital audio for many genres today. More modern gear prevents the clip with some very clever non-linear compression. Some compression is inevitable, but when the waveform looks like the fortifications of a castle it's time to admit that some of the data is lost: hence 'lossy'.

Lossy waveforms of either type are not really HiFi in my view.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
I wish I had a reference to a published, peer-reviewed answer to that. Anecdotally, for me personally, the perceived three-way difference in distortion levels (live vs CD vs Hi-Res) varies a lot with the genre of the music: from noticeable on European symphonies to unbearable on gamelan (in the case of gamelan, hypothesizing that a sufficiently convincing Hi-Res recording is technically achievable). For, e.g., a-girl-and-a-guitar, and virtually all pop music, there is no difference between CD and Hi-Res that I can discern.

Just a thought. Gamelan music might be the type to have many more intersample overs than most music does. If you have some, try reducing the level by 4 dB in an audio editor. Then play it, making up the gain in analog if you can. See if that helps. Of course, it is possible that poor mixing and mastering practices let the ill effects of intersample overs get into the recording before it got to you. In which case you won't be able to remove them.
 

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
While true in theory, it would be nice if you could post examples/parts of songs for which 44/16 did not do the job correctly, meaning there was some audible distortion related to the digitization process.

No, let's leave mastering out of this. Let's have any example from real-world music, whatever genre, whatever mastering technique used, that shows a 44.1/16 recording with more than, say, 0.2% distortion when compared to the analogue original.

Along with blind test logs showing the degradation is audible.

Sorry no ABX logs this week - way too busy with other things. Yet I'll give you something right away.

You have probably heard the Pas de Deux from Tchaikovsky's Nutcracker. It is one of the most frequently performed pieces of classical music; Deezer lists 100+ CD-quality renditions of it. I haven't listened to all of them, but I did listen to about a dozen over the years. None of them sounded like the live renditions played by a competent - let alone brilliant - orchestra!

The explanation is simple. When played live, this piece, in my estimate, has a dynamic range of 60 to 80 dB, depending on orchestra, venue, and seat. Let's take 60 dB as a safe estimate everybody could agree on. 60 dB is ~10 bits of PCM, which leaves just 6 bits to encode the quiet opening of the piece on a CD. This is seriously Atari-quality-sound territory.
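That back-of-envelope arithmetic follows from the ~6.02 dB-per-bit rule (a sketch only; as noted elsewhere in the thread, dither and noise shaping complicate the "bits left over" picture considerably):

```python
import math

def bits_for_db(db):
    """PCM bits needed to span a given dynamic range (20*log10(2) ~= 6.02 dB/bit)."""
    return db / (20 * math.log10(2))

span = bits_for_db(60)      # ~9.97 bits to cover a 60 dB dynamic range
left = 16 - round(span)     # ~6 bits nominally left for the quiet opening
```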

Naturally, Atari quality is unacceptable, so the whole piece gets compressed: to about 24 ±4 dB in the CD renditions I've heard. The emotional impact of the Pas de Deux critically depends on the dynamic range. When compressed so severely, the piece leaves a listener wondering - what was all that about? When heard live, there are no such questions - the piece is undeniably "orgasmic", as is popular to say on this forum.

It also happens to have sharp transients, especially between the 3:30 and 3:40 marks. When heard live, the transients cut through the wall of sound like a sword, seemingly right to the listener's heart. On CD, it is like - meh, what was that again? Those who have heard it live, especially performed by a fine European-school orchestra, must understand what I'm describing.

So, a couple of years after I deemed the selection of classical-music SACDs acceptable, I donated my entire collection of classical-music CDs. If you ever visit Los Altos, California, you will probably still find these CDs in its central public library.
 

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
Just a thought. Gamelan music might be the type to have many more intersample overs than most music does. If you have some, try reducing the level by 4 dB in an audio editor. Then play it, making up the gain in analog if you can. See if that helps. Of course, it is possible that poor mixing and mastering practices let the ill effects of intersample overs get into the recording before it got to you. In which case you won't be able to remove them.

My take on this is that the gamelan orchestra is an ancestral form of the symphony orchestra. In recording, the same considerations apply. The CD format just doesn't cut it, IMHO.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
Sorry no ABX logs this week - way too busy with other things. Yet I'll give you something right away.

You have probably heard the Pas de Deux from Tchaikovsky's Nutcracker. It is one of the most frequently performed pieces of classical music; Deezer lists 100+ CD-quality renditions of it. I haven't listened to all of them, but I did listen to about a dozen over the years. None of them sounded like the live renditions played by a competent - let alone brilliant - orchestra!

The explanation is simple. When played live, this piece, in my estimate, has a dynamic range of 60 to 80 dB, depending on orchestra, venue, and seat. Let's take 60 dB as a safe estimate everybody could agree on. 60 dB is ~10 bits of PCM, which leaves just 6 bits to encode the quiet opening of the piece on a CD. This is seriously Atari-quality-sound territory.

Naturally, Atari quality is unacceptable, so the whole piece gets compressed: to about 24 ±4 dB in the CD renditions I've heard. The emotional impact of the Pas de Deux critically depends on the dynamic range. When compressed so severely, the piece leaves a listener wondering - what was all that about? When heard live, there are no such questions - the piece is undeniably "orgasmic", as is popular to say on this forum.

It also happens to have sharp transients, especially between the 3:30 and 3:40 marks. When heard live, the transients cut through the wall of sound like a sword, seemingly right to the listener's heart. On CD, it is like - meh, what was that again? Those who have heard it live, especially performed by a fine European-school orchestra, must understand what I'm describing.

So, a couple of years after I deemed the selection of classical-music SACDs acceptable, I donated my entire collection of classical-music CDs. If you ever visit Los Altos, California, you will probably still find these CDs in its central public library.
I agree about how most recordings of the Nutcracker sound. But you are describing a mastering issue more than a 16-bit problem. You'd have to hear a recording of it where no compression was used and the recording chain is quiet enough. Good luck finding that.
 

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,981
Likes
4,838
Location
Sin City, NV
No, let's leave mastering out of this. Let's have any example from real-world music, whatever genre, whatever mastering technique used, that shows a 44.1/16 recording with more than, say, 0.2% distortion when compared to the analogue original.
I don't think "leaving mastering out" is really an option. There's no doubt that a live performance contains elements that are impossible to reproduce perfectly in a recorded performance. It's equally possible for there to be environmental factors which exceed the limitations of the microphone and/or recording medium, as it is for effects/distortions to be added afterward in the studio which exceed the reproduction environment's resolution.

The real question (to me at least) is whether those differences add or subtract from the result when experienced through 'normal' hifi gear - since having separate 3D transducers for each instrument, properly located in an identical acoustic space, to precisely recreate the original performance isn't possible/feasible. Or more to the point, even if it were possible - it would be no less complicated and expensive to set up than simply hiring the band/orchestra to perform it again.
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
Sorry no ABX logs this week - way too busy with other things. Yet I'll give you something right away.

You have probably heard the Pas de Deux from Tchaikovsky's Nutcracker. It is one of the most frequently performed pieces of classical music; Deezer lists 100+ CD-quality renditions of it. I haven't listened to all of them, but I did listen to about a dozen over the years. None of them sounded like the live renditions played by a competent - let alone brilliant - orchestra!

The explanation is simple. When played live, this piece, in my estimate, has a dynamic range of 60 to 80 dB, depending on orchestra, venue, and seat. Let's take 60 dB as a safe estimate everybody could agree on. 60 dB is ~10 bits of PCM, which leaves just 6 bits to encode the quiet opening of the piece on a CD. This is seriously Atari-quality-sound territory.

This is guesswork - why don't you load it into Audacity and check the dynamic range?
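The same check can be scripted instead of eyeballed in Audacity. A sketch (the 50 ms window length is an arbitrary choice, and the synthetic signal below merely stands in for a real recording you'd load from disk, e.g. with something like soundfile):

```python
import numpy as np

def dynamic_range_db(x, fs, win_s=0.05):
    """Peak level vs. the quietest 50 ms window, in dB."""
    win = int(fs * win_s)
    n = len(x) // win * win
    frames = x[:n].reshape(-1, win)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    rms = rms[rms > 0]                        # skip digital silence
    return 20 * np.log10(np.max(np.abs(x)) / np.min(rms))

# Synthetic stand-in: a loud passage followed by one 60 dB quieter
fs = 44100
t = np.arange(fs) / fs
loud = 1.0 * np.sin(2 * np.pi * 440 * t)
quiet = 0.001 * np.sin(2 * np.pi * 440 * t)
dr = dynamic_range_db(np.concatenate([loud, quiet]), fs)
```

Here dr comes out near 63 dB: peak at 0 dBFS against the quiet section's RMS.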
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
I don't think "leaving mastering out" is really an option. There's no doubt that a live performance contains elements that are impossible to reproduce perfectly in a recorded performance. It's equally possible for there to be environmental factors which exceed the limitations of the microphone and/or recording medium, as it is for effects/distortions to be added afterward in the studio which exceed the reproduction environment's resolution.

The real question (to me at least) is whether those differences add or subtract from the result when experienced through 'normal' hifi gear - since having separate 3D transducers for each instrument, properly located in an identical acoustic space, to precisely recreate the original performance isn't possible/feasible. Or more to the point, even if it were possible - it would be no less complicated and expensive to set up than simply hiring the band/orchestra to perform it again.

We are discussing here the differences between an analog recording and a 44.1/16 digital conversion of that recording. For that comparison, mastering is irrelevant.
 

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,981
Likes
4,838
Location
Sin City, NV
We are discussing here the differences between an analog recording and a 44.1/16 digital conversion of that recording. For that comparison, mastering is irrelevant.
I must have read both the OP and subsequent posts differently then... I was under the impression that it was a discussion of "the proper definition of high resolution music..." (and whether that was possible/necessary). I'll see myself out then. :rolleyes:
 

Wombat

Master Contributor
Joined
Nov 5, 2017
Messages
6,722
Likes
6,459
Location
Australia
I would think that high-definition music is the level which cannot be distinguished, via critical listening trials, from any higher-definition level.

OR,

The highest definition currently available.

It is a matter of definition.
 

Soniclife

Major Contributor
Forum Donor
Joined
Apr 13, 2017
Messages
4,500
Likes
5,417
Location
UK
My take on this is that the gamelan orchestra is an ancestral form of the symphony orchestra. In recording, the same considerations apply. The CD format just doesn't cut it, IMHO.
When you have the time it would be great to see some evidence of this. I assume you have a hi-res recording that you don't find unbearable, so downsample it to 16/44 and test it.

P.S. When comparing different sample rates and bit depths, are you best off upsampling the lower to match the higher rate, to make the ABX software's job easier with less chance of a tell?
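The downsample-to-16/44 step in that test can be scripted as well. A sketch (TPDF-dithered truncation to 16 bits after an exact rational resample; the 192 kHz tone at the end is just a hypothetical stand-in for a real hi-res file):

```python
import numpy as np
from math import gcd
from scipy.signal import resample_poly

def to_cd_format(x, fs_in, seed=0):
    """Downsample to 44.1 kHz and quantize to 16 bits with TPDF dither."""
    g = gcd(44100, fs_in)                  # e.g. 44100/192000 reduces to 147/640
    y = resample_poly(x, 44100 // g, fs_in // g)
    rng = np.random.default_rng(seed)
    d = rng.uniform(-0.5, 0.5, len(y)) + rng.uniform(-0.5, 0.5, len(y))
    q = np.clip(np.round(y * 32767 + d), -32768, 32767)
    return q.astype(np.int16)

# Hypothetical hi-res capture: 2 s of a 1 kHz tone at 192 kHz
fs = 192000
t = np.arange(2 * fs) / fs
cd = to_cd_format(0.5 * np.sin(2 * np.pi * 1000 * t), fs)
```

The int16 result can then be written out and fed to the ABX tool alongside the original.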
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
I must have read both the OP and subsequent posts differently then... I was under the impression that it was a discussion of "the proper definition of high resolution music..." (and whether that was possible/necessary). I'll see myself out then. :rolleyes:

No need, and I'm sorry if I sounded harsh. I merely wanted to point out that whatever happened during mastering should be digitized correctly, and I don't remember ever having seen a music sample that clearly demonstrates a weakness of the 44.1/16 format.
 