
Proper Definition of High-Resolution Music

digicidal

Major Contributor
Joined
Jul 6, 2019
Messages
1,018
Likes
1,345
Location
Sin City, NV
#61
Here's an interesting demonstration:

As far as whether or not higher resolution is required in order to preserve the audible portion of the audio information... hard to say, as that depends on whether or not you consider inaudible information meaningful. It's clear that the dynamic range is adequate, as the source material is limited to at least a few dB less than 16/44 allows (depending on the noise reduction used) - however, the frequency range could be considered inadequate.

Again, the real issue is whether anything outside the ~96 dB dynamic-range and ~20 Hz-20 kHz frequency-range limitations actually affects the enjoyment of the recording to any meaningful extent. Even if it were detectable, I would argue that the 'losses' (if any) are insignificant when compared to the inherent differences between live instruments in more optimal spaces and recorded audio played via transducers in less optimal spaces (regardless of analog or digital, at any resolution).

Edit: Considering this paper (which I haven't purchased, but members could get), it would appear that even the dynamic-range limitation might be considered nearly inadequate... although I'd guess 16/44 with dithering would reach ~120dB?
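For reference, the back-of-envelope numbers can be checked in a few lines of numpy: an ideal N-bit quantizer gives roughly 6.02N + 1.76 dB of dynamic range (about 98 dB at 16 bits), and flat TPDF dither actually costs a few dB of measured SNR; the ~120 dB figure refers to noise-shaped dither pushing the residual noise out of the most sensitive band. A minimal sketch (illustrative signal values, not from any real recording):

```python
import numpy as np

def quantization_dr_db(bits: int) -> float:
    # Theoretical DR of an ideal N-bit quantizer: full-scale sine vs. quantization noise
    return 6.02 * bits + 1.76

def quantize(signal, bits, dither=False, rng=None):
    # Quantize a [-1, 1] float signal to N bits, optionally adding TPDF dither first
    if rng is None:
        rng = np.random.default_rng(0)
    step = 2.0 / (2 ** bits)  # LSB size
    x = signal
    if dither:
        # TPDF dither: sum of two uniform noises, spanning +/- 1 LSB peak
        x = x + (rng.uniform(-0.5, 0.5, len(x)) + rng.uniform(-0.5, 0.5, len(x))) * step
    return np.round(x / step) * step

fs = 44100
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 1000 * t)  # full-scale 1 kHz tone

q = quantize(sine, 16, dither=True)
noise = q - sine
snr_db = 10 * np.log10(np.mean(sine ** 2) / np.mean(noise ** 2))
print(round(quantization_dr_db(16)))  # -> 98
print(snr_db)  # ~93 dB: flat TPDF dither trades a few dB of SNR for freedom from distortion
```

Noise shaping then redistributes that dither noise toward 20 kHz, which is what lets the perceived dynamic range approach the ~120 dB mentioned above.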
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
2,734
Location
Zg, Cro
#62
As far as whether or not higher resolution is required in order to preserve the audible portion of the audio information... hard to say, as that depends on whether or not you consider inaudible information meaningful.
In what way can inaudible information be meaningful in the context of listening to music?
 

digicidal

#63
In what way can inaudible information be meaningful in the context of listening to music?
Apparently it's directly pertinent in the context of arousal:
It is known that alpha-band power in a human electroencephalogram (EEG) is larger when the inaudible high-frequency components are present than when they are absent. Traditionally, alpha-band EEG activity has been associated with arousal level.
...
The participants did not distinguish between these excerpts in terms of sound quality. Only a subjective rating of inactive pleasantness after listening was higher for the excerpt with high-frequency components than for the other excerpt.
A ridiculously small sample size and a few other issues would perhaps prevent any broad conclusions from being drawn... but it's still interesting.

I can't find it at the moment, but there is a video (linked on here) with a presentation by an ESS engineer describing a similar result found during their work on sigma-delta filter refinement, or something along those lines.

EDIT: Found it... granted, it's from a manufacturer - so you may need the salt shaker handy - but it's an interesting (if anecdotal) contribution to the theory:
Essentially it's a 'guess' as to why some people inside ESS appeared to 'hear' noise/artifacts outside the audible range.
 

Krunok

#64
The participants did not distinguish between these excerpts in terms of sound quality. Only a subjective rating of inactive pleasantness after listening was higher for the excerpt with high-frequency components than for the other excerpt.

I can't really make anything out of that statement. To me it doesn't seem to be in the domain of audio science but something else.
 

Eirikur

Senior Member
Joined
Jul 9, 2019
Messages
318
Likes
451
#65
Here's an interesting demonstration [of perceived higher frequencies on a vinyl record]:
While an interesting demonstration, I would add this very enlightening comment below that video:
periurban (10 months ago):
This test was fascinating! It is extremely unlikely that any audio was deliberately recorded to tape with frequencies above 20kHz, and I speak as a semi-pro musician who has recorded music to analogue and digital and had my own vinyl record cut. It simply isn't possible or desirable to cut a vinyl record with any meaningful frequencies above 20kHz. Indeed, the engineers in mastering suites and cutting lathe operators will be at great pains to make sure nothing above 20kHz or so will get anywhere near the cutting lathe. In my day the lathe head was cooled with liquid helium, and the higher the energy of the upper frequencies the more helium they had to send to the cooling mechanism in the cutting head. There was a filter there designed to cut out anything above 16kHz.
The typical frequency response of an analogue studio tape recorder was often designed to roll off after 20kHz, although many (especially at 30ips) did not! Instead they introduced unwanted harmonics. You can see this here http://www.endino.com/graphs/ where the input signal would be flat, but you can see the frequency response on some recorders tailing off at 20kHz (as they should!), but on some it is ramping up! The author doesn't say, but I would take that emphasis in the upper range to be a measure of the additional distortion at those difficult to handle frequencies. Electronics begin to struggle with any energetic signal above 20kHz.
The signals you are seeing in this video are undoubtedly harmonics related to the fundamentals in the audio range, and they are being produced by the playback. These harmonics are not encoded into the groove. There hasn't ever been a lathe that can cut high frequencies like that, and even if there were your needle wouldn't be able to resolve them and play them back.
So, what about the Disney record that was digitally recorded? Simple. It was mastered properly for vinyl, in the modern era, with the benefit of experience. The other tracks (where the frequency response can be seen extending upwards of 20kHz) were not. Even though the Disney record is better you can still see a few times where the needle is struggling to track the fundamentals and produces odd bursts of upper frequency harmonics.
The effect of those upper frequencies (that no-one can ever hear) is to take away energy from the ones you can hear. That's why a lot of old (and cheaply made) records (such as the ones played here) sound bad.
Edit: the Enoch Light record is actually 4-channel and uses the Electro-Voice EV-4 (a.k.a. Stereo-4) quadraphonic system, according to Discogs.

This article on JVC's CD-4 system confirms that such encoding schemes do employ high-frequency content (20-45 kHz):
JVC's system squeezes four separate channels into the two walls of the conventional LP record groove by some ingenious electronics. The two front channels are recorded like a normal stereo record, with a frequency range up to 15,000 cycles per second. The two other signals, however, are recorded over a high-frequency range of 20,000-45,000 cps using a 30,000 cps signal as a "carrier," similar to FM broadcasting.
So, a high-quality record after all - just not with hi-res stereo!
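The multiplexing idea in that quote is easy to sketch numerically. Below is a toy numpy model - using simple AM where the real CD-4 system used an FM-like scheme, and with made-up signal frequencies - just to show how a second channel can ride above the audible band on a 30 kHz carrier and be recovered by demodulation:

```python
import numpy as np

fs = 192_000                      # sample rate well above the 45 kHz band edge
n = fs // 10                      # 0.1 s window (all tones land on exact FFT bins)
t = np.arange(n) / fs

front = np.sin(2 * np.pi * 1000 * t)       # baseband "front" content (< 15 kHz)
rear = 0.5 * np.sin(2 * np.pi * 400 * t)   # "rear" content to be carried ultrasonically

fc = 30_000                                # 30 kHz carrier, as in CD-4
groove = front + 0.3 * (1 + rear) * np.sin(2 * np.pi * fc * t)  # baseband + AM sidecar

# Demodulate: mix back down with the carrier, then low-pass by zeroing FFT bins
mixed = groove * np.sin(2 * np.pi * fc * t)
spec = np.fft.rfft(mixed)
spec[np.fft.rfftfreq(n, 1 / fs) > 5000] = 0
recovered = 2 * np.fft.irfft(spec, n)
rear_est = recovered - recovered.mean()    # remove the carrier's DC term

print(np.max(np.abs(rear_est - 0.3 * rear)))  # essentially zero: rear channel recovered
```

The AM stand-in also makes the article's trade-off visible: the carrier term eats into the groove's amplitude budget, which is exactly why CD-4 discs had to be cut quieter.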
 

digicidal

#66
While generally correct, it's apparently not physically impossible to cut a groove even for 122 kHz. I would generally agree with the last bolded comment - although I would also argue that the incredibly high noise floor and limited dynamic range of old, cheaply made recordings have much more to do with their bad sound than an unfiltered frequency response does.
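periurban's claim that the ultrasonic content in these captures is generated during playback rather than cut into the groove is easy to illustrate: any mild nonlinearity in the stylus/cartridge chain turns an audible fundamental into harmonics above 20 kHz. A toy numpy sketch with an assumed cubic nonlinearity (not a model of any real cartridge):

```python
import numpy as np

fs = 96_000
n = 9_600                                  # 0.1 s; 12 kHz lands on an exact FFT bin
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 12_000 * t)     # audible 12 kHz fundamental

# Mild odd-order nonlinearity as a crude stand-in for tracking distortion
distorted = clean - 0.1 * clean ** 3       # sin^3 identity puts amplitude 0.025 at the 3rd harmonic

amps = np.abs(np.fft.rfft(distorted)) / (n / 2)
freqs = np.fft.rfftfreq(n, 1 / fs)
third = amps[freqs == 36_000][0]           # 36 kHz: ultrasonic, created by the playback chain
print(20 * np.log10(third))                # -> about -32 dB re full scale
```

Nothing at 36 kHz exists in `clean`; it appears purely as a distortion product, which is periurban's point about the spectrograms in the video.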

The participants did not distinguish between these excerpts in terms of sound quality. Only a subjective rating of inactive pleasantness after listening was higher for the excerpt with high-frequency components than for the other excerpt.

I can't really make anything out of that statement. To me it doesn't seem to be in the domain of audio science but something else.
It simply means that they weren't able to hear a difference between the selections when tested - i.e. they were audibly equivalent. Regardless, when asked to pick the one that left them "feeling better" after listening, they picked the one with the high-frequency elements. Essentially the test was about how our brains are capable of responding to sound that our ears are incapable of discerning. The EEG measurements confirm there was neural activity in response to the information, and the waveform confirms the presence/absence of the information in the media - despite there being no audible difference between the two.

This shouldn't really be all that difficult to conceive... after all, we've been using infrasonics in warfare for some time now, as well as high-frequency sound for 'age-restricting' deterrents and other forms of pest control. On both ends of the audible range, research is ongoing.
 

digicidal

#68
For further consideration, on the topic of "what is there" rather than "what we hear": again, I'm not arguing whether it's heard (with the ears, at least), but it's possible that there is something detectable in some form that is deterministic in a qualitative sense regarding musical instruments.

For me that's the interesting thing to consider: is a recording made for the purpose of "archiving a performance" - in which case perhaps even 24/192 is 'inadequate' - or is the purpose merely to "extract the necessary information" required to create a synthetic representation of that performance - in which case 16/44 should be plenty?

Taken to a visual analogy: if documenting the presence of people at a life event is the desire, a photograph should suffice (and memory will fill in the rest for those who were present). If the desire is to synthetically represent the mannerisms and conversations occurring at the time, then video would be required. However, if the desire is to communicate the entire environment to someone who wasn't present, then a VR presentation would be required (presumably crafted from a system of multiple cameras of multiple resolutions and microphones covering the entire venue). That way a person who was not present could walk around and observe the environment as a whole, as if they were a participant.

The bottom line is that not all applications of data are obvious at the time the data is captured - the more that is captured, the more that can theoretically be achieved at a later date. That being said, nothing will change human hearing, so from a practical standpoint CD audio is essentially perfect, and certainly adequate.
 

Krunok

#69
It simply means that they weren't able to hear a difference between the selections when tested - i.e. they were audibly equivalent. Regardless, when asked to pick the one that left them "feeling better" after listening, they picked the one with the high-frequency elements. Essentially the test was about how our brains are capable of responding to sound that our ears are incapable of discerning. The EEG measurements confirm there was neural activity in response to the information, and the waveform confirms the presence/absence of the information in the media - despite there being no audible difference between the two.

This shouldn't really be all that difficult to conceive... after all, we've been using infrasonics in warfare for some time now, as well as high-frequency sound for 'age-restricting' deterrents and other forms of pest control. On both ends of the audible range, research is ongoing.
I know what it means. What I find hard to believe is that somebody was "feeling better" because of some very low-amplitude ultrasonic noise present in the recording. I would like to see some more research, with properly conducted blind tests, supporting that thesis before I start to believe it.
 

Krunok

#70
For further consideration, on the topic of "what is there" rather than "what we hear": again, I'm not arguing whether it's heard (with the ears, at least), but it's possible that there is something detectable in some form that is deterministic in a qualitative sense regarding musical instruments.
Well, this is an audio forum, so "what is there" is not relevant - "what we hear" is.
 
Joined
Nov 23, 2017
Messages
73
Likes
41
#71
No, let's leave mastering out of this.
Mastering is the elephant in the room; we always tiptoe around it. But there it is. Big. In the room.

Resolution is, by implication, a description of how close the recording is to the original analog waveform of the mix when transcribed to the medium. There's really no point in worrying about whether 250 or 500 points represent a section of waveform when the mastering has clip-pressed it to the rail. No point at all.

For much music today you get an army of polishers putting a lovely high shine on something that a cursory listen, and a view in a waveform editor, reveals as something that needs to be flushed down the pipes.
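The "clip-pressed to the rail" point can be made concrete with a crest-factor measurement: once the master is clipped, no bit depth or sample rate brings the dynamics back. A toy numpy sketch (synthetic two-tone "mix" and a made-up mastering gain):

```python
import numpy as np

def crest_factor_db(x):
    # Peak-to-RMS ratio in dB; a squashed master sits close to 0 dB
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

t = np.arange(44_100) / 44_100
mix = 0.9 * np.sin(2 * np.pi * 220 * t) + 0.4 * np.sin(2 * np.pi * 1760 * t)

# "Loudness war" master: crank the gain, then clip hard at the rail
mastered = np.clip(4.0 * mix, -1.0, 1.0)

print(crest_factor_db(mix))       # a healthy few dB of headroom
print(crest_factor_db(mastered))  # squashed toward 0 dB, at any resolution
```

Whether the clipped waveform is stored with 250 or 500 points per section, the dynamics are gone either way.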
 

digicidal

#72
Well, this is an audio forum, so "what is there" is not relevant - "what we hear" is.
So ultrasonic harmonics have no relevance? SNR below audibility levels is meaningless? I would argue that although you are correct, according to Webster's first definition of "audio", a discussion of "sound" should be at least somewhat relevant on an audio site, regardless of human audibility - especially since we're interested in measurements over subjective analysis, and test equipment can certainly measure much more than is audible.

I would think it's at least as relevant as a discussion of chemistry or anthropology would be on a medical site.
 

Krunok

#73
Mastering is the elephant in the room; we always tiptoe around it. But there it is. Big. In the room.

Resolution is, by implication, a description of how close the recording is to the original analog waveform of the mix when transcribed to the medium. There's really no point in worrying about whether 250 or 500 points represent a section of waveform when the mastering has clip-pressed it to the rail. No point at all.

For much music today you get an army of polishers putting a lovely high shine on something that a cursory listen, and a view in a waveform editor, reveals as something that needs to be flushed down the pipes.
I didn't say mastering doesn't matter; I said I would accept any sample, with whatever mastering technique and whatever genre, to prove an audible difference between an analog recording and one digitized at 44.1/16.
 

Krunok

#74
So ultrasonic harmonics have no relevance? SNR below audibility levels is meaningless? I would argue that although you are correct, according to Webster's first definition of "audio", a discussion of "sound" should be at least somewhat relevant on an audio site, regardless of human audibility - especially since we're interested in measurements over subjective analysis, and test equipment can certainly measure much more than is audible.

I would think it's at least as relevant as a discussion of chemistry or anthropology would be on a medical site.
Yep, ultrasonic harmonics have no relevance - in fact you are better off without them. DACs filter them out so amps and speakers don't have to deal with them.

Yep again: SNR below the audibility level speaks only to the quality of the engineering, as there is no impact on sound quality. Where you set the audible threshold is another issue, of course.

Audio and sound are two very different terms - I don't think anybody here wants to discuss ultrasound measurements etc.
 

Eirikur

#75
And how exactly does this taking away happen? [upper frequencies cannibalizing audible ones]
Good question and not entirely clear to me either.

Let me try: in the context of the remark (a vinyl medium), any physical space on the record used for non-audible frequencies is taken away from a better/smoother cut of the audible part?

From the same article (dated March 1973!):
One claimed disadvantage of JVC's system is that the grooves on the disc have to be wider than normal because of the extra information they contain. Originally it was thought that CD-4 albums would be only able to contain 15 minutes of material per side, but recent developments have boosted this figure to 25 minutes for popular material, 30 minutes for classical (and there are relatively few programs which exceed these limits).
and also
CD-4 records also have to be cut at a quieter level than regular records in order to accommodate the additional groove information. This process makes the signal-to-noise ratio very low (i.e., the dominance of the source sound ("signal") over any other sound reproduced during playback ("noise") will not be as great with a CD-4 record). However, JVC looks forward to ameliorating this problem in the near future along with that of the cutting speed.
(word of the day in bold)
 

Krunok

#76
Good question and not entirely clear to me either.

Let me try: in the context of the remark (a vinyl medium), any physical space on the record used for non-audible frequencies is taken away from a better/smoother cut of the audible part?
I don't think that is how CD-4 worked.
 
#77
prove an audible difference between an analog recording and one digitized at 44.1/16.
I found that there was quite a difference between listening to the raw 44.1/16 and having it upscaled to 88.2/24 and then listening to it.
This suggests there are two aspects to the rate/quantisation discussion:
1. Is there any info lost that may be useful
2. Upsampling is a form of 'decompression' (in the data set size realm) and should offer better sound.

For me the upsampled 88.2 was a more relaxed listen, with better long-term listenability.
 

Eirikur

Senior Member
Joined
Jul 9, 2019
Messages
318
Likes
451
#78
I found that there was quite a difference between listening to the raw 44.1/16 and having it upscaled to 88.2/24 and then listening to it.
Main question: how was it upsampled? Almost any DAC these days will internally expand to 24 or 32 bits first (with or without dither) and will upsample to whatever rate is needed to generate a smooth and faithful analog representation in volts.
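To be clear about the word-size part of that: widening 16-bit samples to 24 bits just appends zero bits - it is lossless and adds no information. A couple of lines illustrate it:

```python
def widen_16_to_24(sample_16: int) -> int:
    # Shift left by 8: the eight new low-order bits are all zeros
    return sample_16 << 8

# Perfectly reversible, so no information is gained or lost either way
assert widen_16_to_24(-12345) >> 8 == -12345
assert widen_16_to_24(0x7FFF) == 0x7FFF00
```

Dither only matters when going the other direction (truncating a longer word back down).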

This suggests there are two aspects to the rate/quantisation discussion:
1. Is there any info lost that may be useful
Nope for the sampling rate - not in any DAC worth its salt.
Quantization to 16 bits may leave a tiny amount of audible noise in soft passages at high volume settings; this cannot be recovered by upsampling plus an increased word size. The algorithm may, however, apply a noise-shaped dither that takes off the edge, even though it actually reduces fidelity to ~15.5 bits. Examples here.

2. Upsampling is a form of 'decompression' (in the data set size realm) and should offer better sound
No, it is not; upsampling is (or should be) a representation of the same data at a different time quantization. As an example, this is how Audacity upsamples a 16764 Hz sine; the top is 44.1 kHz and the bottom 352.8 kHz. As you can see, those few samples are sufficient to recreate the full wave.
[screenshot: Audacity waveforms of the 16764 Hz sine at 44.1 kHz (top) and 352.8 kHz (bottom)]


Depending on the algorithm used, upsampling can act as a filter - e.g. Windows has a low-latency, fast SRC algorithm, at the expense of accuracy.

For me the upsampled 88.2 was a more relaxed listen with a better long term listenability.
Some people here would ask if you ABXed this; I'll just say it can be true, as the upsampler algorithm may have done some noise shaping. It is even possible that the higher sample rate hides some PLL jitter in some digital transmission in your system.
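The Audacity comparison above can be reproduced numerically: ideal (sinc) upsampling of a band-limited sine recreates the continuous wave essentially exactly. A numpy sketch using FFT zero-padding (the window length 3675 is my choice so that the 16764 Hz tone completes a whole number of cycles, avoiding leakage; real resamplers use windowed-sinc filters instead):

```python
import numpy as np

fs_lo, factor, f0 = 44_100, 8, 16_764   # 44.1 kHz -> 352.8 kHz, the test tone above
n_lo = 3_675                            # exactly 1397 cycles of 16764 Hz fit in this window
x_lo = np.sin(2 * np.pi * f0 * np.arange(n_lo) / fs_lo)

def fft_upsample(x, factor):
    # Ideal band-limited interpolation of a periodic signal: zero-pad the spectrum
    return factor * np.fft.irfft(np.fft.rfft(x), factor * len(x))

x_hi = fft_upsample(x_lo, factor)
reference = np.sin(2 * np.pi * f0 * np.arange(factor * n_lo) / (factor * fs_lo))
print(np.max(np.abs(x_hi - reference)))  # tiny: the sparse samples encode the whole wave
```

The upsampled curve lands on the directly-computed 352.8 kHz sine to within floating-point noise - no new information appears, exactly as stated above.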
 

tmtomh

Active Member
Joined
Aug 14, 2018
Messages
227
Likes
546
#79
Regardless of what one might or might not hear, it is inaccurate and IMHO misleading to refer to upsampling as decompression. Upsampling does not decompress anything or add anything. Whatever was lost in the analogue-to-digital conversion (or the digital downsampling) that produced the file you are playing cannot in any way be recovered by upsampling.

Now, it certainly is possible that a DAC presented with a 16/44.1 signal might produce a slightly different analogue output than the same DAC presented with a 24/88.2 signal - because the DAC's internal processing, algorithms, or filters might be applied differently depending on the resolution and bit depth of the signal it receives. Even so, I would expect any competently designed DAC to produce an extremely similar analogue result from either digital signal.

Again, I am not trying to tell anyone that they are crazy or deluded - we perceive what we perceive. I am just speaking about what happens with the musical source (and therefore indirectly about the potential likely and unlikely cause(s) of why we might perceive what we perceive).
 

Blumlein 88

Major Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
7,838
Likes
9,421
#80
Here's an interesting demonstration:

As far as whether or not higher resolution is required in order to preserve the audible portion of the audio information... hard to say, as that depends on whether or not you consider inaudible information meaningful. It's clear that the dynamic range is adequate, as the source material is limited to at least a few dB less than 16/44 allows (depending on the noise reduction used) - however, the frequency range could be considered inadequate.

Again, the real issue is whether anything outside the ~96 dB dynamic-range and ~20 Hz-20 kHz frequency-range limitations actually affects the enjoyment of the recording to any meaningful extent. Even if it were detectable, I would argue that the 'losses' (if any) are insignificant when compared to the inherent differences between live instruments in more optimal spaces and recorded audio played via transducers in less optimal spaces (regardless of analog or digital, at any resolution).

Edit: Considering this paper (which I haven't purchased, but members could get), it would appear that even the dynamic-range limitation might be considered nearly inadequate... although I'd guess 16/44 with dithering would reach ~120dB?
A lot of the info in that paper was used in this article by Amir:
https://www.audiosciencereview.com/forum/index.php?threads/dynamic-range-how-quiet-is-quiet.14/

If you didn't see this recent thread, some of the same info was discussed there:
https://www.audiosciencereview.com/...-from-recording-to-listening-room.8205/page-4
 
