
Can we hear the bottom bits of 24?

OP
E

earlevel

Addicted to Fun and Learning
Joined
Nov 18, 2020
Messages
550
Likes
779
I think your demonstration is great, @earlevel.
Correct me if I misunderstand, but you're not out to teach electronics theory, only to demonstrate the reality (futility?) of signals that are -138dB rms.

@sarumbear - give "level_test_24bit.wav" about 120dB of gain and play it back (rms should then be -18dB). Hope you like vintage Nintendo?
Yes, that's a good way to put it. The 16-bit is probably not surprising to anyone, but it does give some confidence things are working as expected. Someone might think they can similarly hear the quietest 24-bit sounds, just at a lower level than the 16-bit test. But no, no one will hear that.

In reality, most people won't check it out, and a few will accuse me of trying to trick them. Or they'll say it doesn't matter whether the signals can be heard, that it messes up the soundstage even when it's too low to be heard. But I'm just putting it out there; I don't even ask people if they can hear it.

To be clear, all the files are just digital gain shifts of the same thing. The 5-bit is there to learn what to listen for, safely.
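
For anyone who wants to reproduce that, here is a rough sketch in Python of the idea (not earlevel's actual generator: apart from level_test_24bit.wav and the -138 dB RMS figure quoted above, the filenames, sweep range, and 5-/16-bit target levels are illustrative guesses):

```python
# Rough sketch: all three test files are the same sine sweep, only shifted
# down by digital gain so the RMS lands near a chosen level. Filenames other
# than level_test_24bit.wav and the 5-/16-bit targets are made up.
import numpy as np
from scipy.io import wavfile

fs = 48000                      # sample rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)    # 10-second sweep
sweep = np.sin(2 * np.pi * (200 + 900 * t / t[-1]) * t)   # linear sweep, 200 Hz -> 2 kHz

def rms_dbfs(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Target RMS levels; only the -138 dB figure comes from the thread
targets = {"level_test_5bit.wav": -24.0,
           "level_test_16bit.wav": -90.0,
           "level_test_24bit.wav": -138.0}

for name, target_db in targets.items():
    gain = 10 ** ((target_db - rms_dbfs(sweep)) / 20)
    y = (sweep * gain).astype(np.float32)   # 32-bit float WAV keeps the tiny values intact
    wavfile.write(name, fs, y)
    print(f"{name}: RMS = {rms_dbfs(y):.1f} dBFS")
```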
 

sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
7,604
Likes
7,323
Location
UK
The 16-bit is probably not surprising to anyone, but it does give some confidence things are working as expected. Someone might think they can similarly hear the quietest 24-bit sounds, just at a lower level than the 16-bit test. But no, no one will hear that.
That is my point. A reference file that nobody should hear is not a reference. Anything that does not exist is not a reference.
 
OP
E

earlevel

Addicted to Fun and Learning
Joined
Nov 18, 2020
Messages
550
Likes
779
I respect what you do. Teaching is a noble thing to do.

Eye charts work because they show you an error directly. Silent audio tracks do not. To use your eye-test analogy, you produced an extremely black board and printed letters on it in differing shades of black. You then ask people to switch on the lights and see those lines.
OK...feel free to make any last comments if you wish; I think we're looking at two different things. None of the files are silent. If the "24-bit" sweep sounds silent, that is the very point to be concluded from it. For the 16-bit file, I think few will conclude it's silent, but they may conclude something about how easy (or not) it is to hear on their own system. That signal is precisely the same magnitude as the difference between a higher-res audio file and its reduction to 16-bit, the dithered noise floor. But it's purposely more ear-catching, so people don't have to wrestle over how much of the noise is their system and how much is the audio file's noise floor.

Recording engineers routinely work with mixes of 32-bit float samples (the actual mix engine is likely 64-bit, but the busses are usually 32-bit, and can be 64-bit). They get nervous when they (or mastering engineers) need to reduce it for distribution. No worries, most have come to accept that dithering the truncation produces very good results, and it's a matter of checking a box on the export.

But there are similar situations where that choice isn't so simple. I won't derail things here by going into them, but I'd be happy to discuss them with you privately if you want. The point I'm getting at is that there are reasons to discuss the magnitude of error in a 24-bit truncation. And if you want to have some vague feeling for how loud that is, a test signal might be handy for reference. If a person can't hear the signal, which you appear to agree they won't, then perhaps that person may conclude it's not worth the distraction of doing awkward things to improve the integrity of the LSB.

Of course, the fact is there's virtually always (exceptions being digital gain and reverb fades) ample Gaussian noise to sufficiently dither 24-bit, but "virtually" makes it a non-starter for audio professionals, and many people feel their studio noise floor is far below the 24-bit level (true story). And again, that's why I encourage them to play the "24-bit" sweep.
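
As a generic illustration of what "checking the dither box" amounts to (a minimal sketch, not any particular DAW's export code): TPDF dither added before truncating a float mix to 16 bits, and the RMS of the resulting error, i.e. the dithered noise floor the 16-bit test signal is pitched at.

```python
# Minimal sketch of dithered truncation: TPDF dither added before rounding a
# float mix to 16 bits, then the RMS of the error relative to the original.
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(0, 2, 1 / fs)
mix = 0.5 * np.sin(2 * np.pi * 440 * t)          # stand-in for a 32-bit float mix

def quantize(x, bits, dither=True):
    q = 2.0 ** -(bits - 1)                       # LSB size for full scale +/- 1.0
    d = (rng.random(x.shape) - rng.random(x.shape)) * q if dither else 0.0   # TPDF, +/- 1 LSB
    return np.round((x + d) / q) * q

y16 = quantize(mix, 16)
err = y16 - mix
print(f"error RMS: {20 * np.log10(np.sqrt(np.mean(err ** 2))):.1f} dBFS")
# about -96 dBFS: the dithered 16-bit noise floor discussed above
```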
 

Holmz

Major Contributor
Joined
Oct 3, 2021
Messages
2,020
Likes
1,242
Location
Australia
Let's not forget that you can detect tones 20db below the noise floor. Do with that as you wish.

Where is ^that^ noise floor?

If it is pink noise at 85 dB(A) and the FFT is 1k points, so that the bins are ~90 Hz wide, then the tones may be larger than they appear
(like the things in the side-view mirrors).

If the noise is at 20 dB, then we sort of need to know the tone’s SPL?
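
A back-of-envelope sketch of that bandwidth point (assuming white noise for simplicity; pink noise tilts the numbers, and the ~90 Hz bin width above implies roughly a 96 kHz sample rate):

```python
# "Below the noise floor" depends on measurement bandwidth: broadband noise
# power spreads across FFT bins, so a tone 20 dB below the *total* noise can
# still poke above the *per-bin* noise.
import numpy as np

fs = 48000
N = 1024                       # FFT length -> bin width ~47 Hz at 48 kHz
bin_width = fs / N
noise_total_db = 0.0           # reference: broadband noise level, 0 dB
# White noise spread evenly over fs/2: per-bin level drops by 10*log10(bin count)
per_bin_db = noise_total_db - 10 * np.log10((fs / 2) / bin_width)
tone_db = noise_total_db - 20  # tone 20 dB below the broadband noise

print(f"bin width     : {bin_width:.1f} Hz")
print(f"noise per bin : {per_bin_db:.1f} dB")
print(f"tone          : {tone_db:.1f} dB")
print(f"tone sits {tone_db - per_bin_db:.1f} dB above the per-bin noise")
```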
 
OP
E

earlevel

Addicted to Fun and Learning
Joined
Nov 18, 2020
Messages
550
Likes
779
That is my point. A reference file that nobody should hear is not a reference. Anything that does not exist is not a reference.
This is the puzzling thing. You say,

A reference file that nobody should hear...

True, we expect that.

...is not a reference

Not correct. Anyway, I only explained that I included them "for reference"; this seems like a silly point to argue, that I'm using the wrong words (OK for 5- and 16-bit, but not for 24-bit because it's not expected to be heard? What? If I didn't want someone to refer to it, why would I even create it?)

Anything that does not exist is not a reference.

What exactly doesn't exist? It's a file. And it contains a signal. Don't answer, I can't believe I spent this much time on this trivial detail.
 


audio2design

Major Contributor
Joined
Nov 29, 2020
Messages
1,769
Likes
1,830
Where is ^that^ noise floor?

If it is pink noise at 85 dB(A) and the FFT is 1k points, so that the bins are ~90 Hz wide, then the tones may be larger than they appear
(like the things in the side-view mirrors).

If the noise is at 20 dB, then we sort of need to know the tone’s SPL?

Hence why I said "do with that as you wish" :) We can detect tones 20 dB below noise, but you still need to be at a level of detectability.

To make matters more complicated, that low level tone may not be detectable without noise, but detectable with noise.
 

JRS

Major Contributor
Joined
Sep 22, 2021
Messages
1,158
Likes
1,006
Location
Albuquerque, NM USA
Interesting points, for sure, a lot to discuss. But I'd say there are fundamental limitations on two basic fronts:

The electronics side (Johnson noise, shot noise...).

The human side. Well, obviously it's reasonable to argue that the electronics is a sufficiently limiting factor, and it doesn't matter if there is one super-human who can outdo our hearing expectations. But setting that aside, there are intuitive limits on hearing as well. For example:

There has to be enough sound pressure to deflect the eardrum. Other masses too, but if the eardrum doesn't deflect, it's a non-starter. My only point here is that it's intuitive that there is some minimum. Then we can talk about exactly what it might be.

And, the body itself makes noise—again, I'm just making intuitive arguments, I don't know the physiology well enough, but I've read that if human hearing were much better, we'd be kept awake by the blood flowing through our veins. I think maybe the brain does or could ignore some things, but my point here is that even humans have a noise floor, even if they could find a noise-free room.

Besides having a floor, there is a ceiling. And it doesn't do any good to speculate that some super-human might have a far greater ceiling than normal people. For instance, it's no good arguing that someone in the world might happily listen to music at 140 dB SPL, because WTF would they be listening on :). That means, without question, that the lowest bit is far, far below 0 dB SPL. So a super-human would need extreme downward sensitivity (in a battle with the noise floors), since upward dynamic range won't come into play significantly.

So while it's not possible to rule out that some human has far better hearing than most (there is no "black swan" way of disproving that), we can be confident that between the noise floors of electronics and humans, not to mention practical environments, there must certainly be some limitation, and it seems to be within the 24-bit range and not outside of it.

PS—Just making deductions based on my limited knowledge of hearing, would love to hear thoughts from others
My thoughts as well--there are physical limits which impose a limit on gain. 0 dB SPL supposedly corresponds to an oscillation amplitude on the order of a hydrogen atom, transduced by opening an ionic channel in the membrane, which in turn leads to changes in the transmembrane voltage across the capacitance of the fluid membrane (water-fat-water). Just as our eyes in a completely dark-adapted state can reliably detect 4 or 5 photons, and some can detect 1 with greater than 50% accuracy, there you are dealing with the absolute limits of nature. At 0 dB, the world is a very quiet place indeed, and one's heartbeat is clearly audible. The quietest place on the planet is a Microsoft lab which measures -26.4 dB. It is said that one's bones grinding while walking can be heard, as can a squishy sound accompanying eye movement. Unfortunately, the article didn't mention any efforts to measure these to determine, if not 0, then what the absolute limit of audibility is.

BTW, Microsoft uses it for listening to the vibration of capacitors, among many other interesting pursuits. The point is we are likely at the physical limits.

Dynamic range is likewise limited. It's not hard to bury the needle of the FM coding of amplitude--things don't get louder. Even theoretically, human neurons are electro-mechanical systems with an estimated absolute upper firing rate of 1000 Hz, and likely don't exceed 500 Hz, corresponding to a dynamic range of about 35 dB. Mechanical coupling of the eardrum and cochlear fluid is adjustable, as is the amount of tonic inhibition present during adaptation--roughly analogous to the ASA of film. All told, maybe 100 dB of usable dynamic range. At that point output is at the rails, and there ain't no more no matter how much Capt. Kirk may implore us. So there may be extraordinary-hearing folk walking about who, sitting in a mechanically isolated 30-foot-cubed anechoic chamber in Redmond, might pick up the penultimate bit, but I'm not convinced. Interestingly, it is true that detection is improved when picking out a signal amongst noise compared to the same signal against silence. I completely understand that, as it's likely easier to pick up the absence of noise (the signal) than the weak signal itself.
 

JRS

Major Contributor
Joined
Sep 22, 2021
Messages
1,158
Likes
1,006
Location
Albuquerque, NM USA
My question would be: what is the self-noise of the human hearing apparatus? We already know that this is variable, and that those with tinnitus have elevated levels of self-noise. Has this parameter been measured? I've seen Fletcher-Munson curves showing limits of audibility depending on frequency. Maybe this should be [or already is] a thread here. I'm older and know that my hearing ability is compromised. When it's late at night and there's nothing going on, I can hear some low-level hiss emanating from my nervous system. I'm sure I'm not alone. And I know that my audio gear, when there are pauses between tracks, has less self-noise than my ears at 4:00 in the morning.
There is some evidence that stochastic resonance is exploited deliberately to improve hearing. In other words, feature recognition is enhanced by background noise. My own brain, often a noisy place, sees it perhaps as picking out a spot against a white background vs. a clear/neutral one.
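
A toy illustration of the stochastic resonance idea (a hard threshold detector, not a model of the cochlea; all parameters here are arbitrary): a sub-threshold sine is invisible to the detector with no noise, best recovered at a moderate noise level, and swamped again when the noise gets too large.

```python
# Toy stochastic resonance demo: a sub-threshold sine only triggers a hard
# threshold when some noise is added, so the detector output correlates best
# with the hidden signal at an intermediate noise level.
import numpy as np

rng = np.random.default_rng(1)
fs = 1000
t = np.arange(0, 5, 1 / fs)
signal = 0.4 * np.sin(2 * np.pi * 5 * t)       # peak 0.4, below the threshold of 1.0
threshold = 1.0

for noise_rms in (0.0, 0.1, 0.3, 0.6, 2.0):
    x = signal + rng.normal(0, noise_rms, t.shape)
    detected = (x > threshold).astype(float)   # 1 whenever the threshold fires
    # correlation between the detector output and the hidden signal
    corr = np.corrcoef(detected, signal)[0, 1] if detected.std() > 0 else 0.0
    print(f"noise RMS {noise_rms:.1f} -> correlation {corr:.2f}")
```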
 

JRS

Major Contributor
Joined
Sep 22, 2021
Messages
1,158
Likes
1,006
Location
Albuquerque, NM USA
How is it even possible for 'hearing 24 bits' not to be a moot point when even the very best DAC which has ever been tested on this forum can only achieve something like 22 bits of resolution?
Don't underestimate the magic of biology. The gain of the rod photoreceptors can be approximated by 1 mV / 5hf, where h is Planck's constant and f is a visible-light frequency: f = 3×10^8 / 0.5×10^-6 = 6×10^14 Hz, so 5hf = 5 × 6.6×10^-34 × 6×10^14 ≈ 2×10^-18 J, which puts the gain at about 10^15 under ideal circumstances. That's likely a bit of a stretch, but 100 billion is reasonable under ordinary circumstances, vs. a really good photomultiplier tube at 100 million. Not sure about solid state, but damn, that's impressive for a squishy carbon-based system.
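
Redoing that back-of-envelope in code, purely as a sanity check of the arithmetic (assuming h ≈ 6.63e-34 J·s and ~500 nm light, as above):

```python
# Energy of ~5 visible-light photons versus a ~1 mV receptor response.
h = 6.63e-34            # Planck's constant, J*s
c = 3.0e8               # speed of light, m/s
wavelength = 0.5e-6     # ~500 nm, green light
f = c / wavelength      # ~6e14 Hz
energy_5_photons = 5 * h * f
print(f"f = {f:.2e} Hz, 5hf = {energy_5_photons:.2e} J")
print(f"'gain' ~ 1 mV / 5hf = {1e-3 / energy_5_photons:.1e}")   # ~5e14, i.e. about 10^15
```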
 

JRS

Major Contributor
Joined
Sep 22, 2021
Messages
1,158
Likes
1,006
Location
Albuquerque, NM USA
Re-read what I wrote. I said the SPL difference between a quiet hall and the cannon ball explosion.
144 dB would be more like the difference between a quiet orchestra in a seated arena and a Saturn V with all 5 F-1 engines at full bore, which has been measured at 200 dB.
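
For reference, the "144" is simply the 24-bit word length expressed in dB, at roughly 6.02 dB per bit:

```python
# Dynamic range of an N-bit word: 20*log10(2**N), about 6.02 dB per bit.
import math

for bits in (16, 24):
    print(f"{bits}-bit range: {20 * math.log10(2 ** bits):.1f} dB")
# 16-bit range: 96.3 dB
# 24-bit range: 144.5 dB
```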
 

JRS

Major Contributor
Joined
Sep 22, 2021
Messages
1,158
Likes
1,006
Location
Albuquerque, NM USA
OK...feel free to make any last comments if you wish; I think we're looking at two different things. None of the files are silent. If the "24-bit" sweep sounds silent, that is the very point to be concluded from it. For the 16-bit file, I think few will conclude it's silent, but they may conclude something about how easy (or not) it is to hear on their own system. That signal is precisely the same magnitude as the difference between a higher-res audio file and its reduction to 16-bit, the dithered noise floor. But it's purposely more ear-catching, so people don't have to wrestle over how much of the noise is their system and how much is the audio file's noise floor.

Recording engineers routinely work with mixes of 32-bit float samples (the actual mix engine is likely 64-bit, but the busses are usually 32-bit, and can be 64-bit). They get nervous when they (or mastering engineers) need to reduce it for distribution. No worries, most have come to accept that dithering the truncation produces very good results, and it's a matter of checking a box on the export.

But there are similar situations where that choice isn't so simple. I won't derail things here by going into them, but I'd be happy to discuss them with you privately if you want. The point I'm getting at is that there are reasons to discuss the magnitude of error in a 24-bit truncation. And if you want to have some vague feeling for how loud that is, a test signal might be handy for reference. If a person can't hear the signal, which you appear to agree they won't, then perhaps that person may conclude it's not worth the distraction of doing awkward things to improve the integrity of the LSB.

Of course, the fact is there's virtually always (exceptions being digital gain and reverb fades) ample Gaussian noise to sufficiently dither 24-bit, but "virtually" makes it a non-starter for audio professionals, and many people feel their studio noise floor is far below the 24-bit level (true story). And again, that's why I encourage them to play the "24-bit" sweep.
Perhaps a more persuasive way to make the demo: start briefly at -5 dB to adjust to, say, a normal listening volume of about 80 dB. Then drop in 6 dB decrements to the 16-bit limit. Hold there and have the user turn up the volume to make that clear, then start the -6 dB drops again. My guess is that by the time one gets to -20 bits they won't hear squat. Then drop to 21: won't hear squat. 22: won't hear squat. 23: won't.... And then ask again if they heard it. I believe demonstrating that it's not just the last bit, but perhaps the last 4, that are inaudible with their system maxed may help drive the point home.

BTW, glad you could join us tonight. We should leave the nitwits to their own devices, where -24 bits is plain as day.
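
A rough sketch of such a stepped demo in Python (the filename, tone frequency, and step length are my own choices; the point is just the 6 dB per step ladder):

```python
# Stepped-level demo: a tone that starts near a comfortable level and drops
# 6 dB per step -- roughly one bit per step -- so the listener hears exactly
# where, on their own system, the steps disappear.
import numpy as np
from scipy.io import wavfile

fs = 48000
step_seconds = 2
tone = np.sin(2 * np.pi * 1000 * np.arange(0, step_seconds, 1 / fs))

steps = []
level_db = -5.0                      # start near a normal listening level
while level_db > -150:               # walk down past the 24-bit floor (~ -144 dB)
    steps.append(tone * 10 ** (level_db / 20))
    level_db -= 6.0

wavfile.write("stepped_level_demo.wav", fs, np.concatenate(steps).astype(np.float32))
print(f"{len(steps)} steps of {step_seconds} s, 6 dB apart, ending at {level_db + 6:.0f} dBFS")
```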
 
OP
E

earlevel

Addicted to Fun and Learning
Joined
Nov 18, 2020
Messages
550
Likes
779
Perhaps a more persuasive way to make the demo: start briefly at -5 dB to adjust to, say, a normal listening volume of about 80 dB. Then drop in 6 dB decrements to the 16-bit limit. Hold there and have the user turn up the volume to make that clear, then start the -6 dB drops again. My guess is that by the time one gets to -20 bits they won't hear squat. Then drop to 21: won't hear squat. 22: won't hear squat. 23: won't.... And then ask again if they heard it. I believe demonstrating that it's not just the last bit, but perhaps the last 4, that are inaudible with their system maxed may help drive the point home.

BTW, glad you could join us tonight. We should leave the nitwits to their own devices, where -24 bits is plain as day.
I like that; I will try it if I ever have the need, or if I write an article about such a thing. The 16- and 24-bit tests were related to a discussion of dithering 16- and 24-bit audio, but your idea is much better for giving an idea of how much each bit level contributes.

Your hearing comments were fascinating—do you work in a related field?
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
16,022
Likes
36,344
Location
The Neitherlands
Hence why I said "do with that as you wish" :) We can detect tones 20 dB below noise, but you still need to be at a level of detectability.

To make matters more complicated, that low level tone may not be detectable without noise, but detectable with noise.

I have tried this in the past by mixing white noise (at -10 dB) with a 1 kHz tone at lower levels, switching the tone on and off.
I did not simulate using quantization noise or shaped noise, though that might skew the results.
It was hard to detect a tone well below the noise level.
I was more interested in music down at noise levels, so I mixed music with its peaks at the 0 dB noise level and could only hear noise.

So while one can detect tones switching on and off (think Morse code), it is not the case with music. If I am not mistaken, that's what audio is all about.
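
Something along those lines could be generated like this (a sketch based on the description above; the filename, duration, and gating pattern are my own choices, and a few Gaussian noise peaks may exceed 0 dBFS in the float file):

```python
# White noise at -10 dBFS with a 1 kHz tone gated on and off underneath it at
# a chosen level below the noise. Listen for the on/off pattern.
import numpy as np
from scipy.io import wavfile

rng = np.random.default_rng(2)
fs = 48000
dur = 8.0
t = np.arange(0, dur, 1 / fs)

noise = rng.normal(0, 1, t.shape)
noise *= 10 ** (-10 / 20) / np.sqrt(np.mean(noise ** 2))    # noise at -10 dBFS RMS

tone_db_below_noise = 20                                    # try 10, 20, 30 ...
tone = np.sqrt(2) * 10 ** ((-10 - tone_db_below_noise) / 20) * np.sin(2 * np.pi * 1000 * t)
gate = (np.floor(t) % 2 == 0).astype(float)                 # tone on for 1 s, off for 1 s

wavfile.write("tone_under_noise.wav", fs, (noise + tone * gate).astype(np.float32))
```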
 

Tangband

Major Contributor
Joined
Sep 3, 2019
Messages
2,994
Likes
2,797
Location
Sweden
We are talking about the audio signal, not internal processing, which is mostly 32-bit floating point anyway.
Yes, but you need to convert the microphone signal to digital when recording real instruments. This is not done with 32 bits; it is done with much less resolution, depending on the A/D and the microphone noise.
When using reverbs, EQ plugins, exciters, compression effects and so on, these plugins in a DAW are not all 32-bit floating point; they are often made with much less than 24-bit resolution. Different reverbs use sampling techniques when they are made, before they are used in a DAW; sometimes those samples are just 16-20 bit resolution. All of these faults add up.
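
One caveat on "all of these faults add up": independent noise floors add in power, not in dB, so the worst stage tends to dominate. A quick illustration with made-up stage noise floors:

```python
# Independent noise floors combine as a power sum; the numbers here are
# hypothetical stage noise floors, just to show the shape of the result.
import math

stage_floors_db = [-110.0, -100.0, -96.0]   # e.g. A/D, one plugin, another plugin (made up)
total_power = sum(10 ** (db / 10) for db in stage_floors_db)
print(f"combined noise floor: {10 * math.log10(total_power):.1f} dB")
# about -94.4 dB: dominated by the worst stage, the quieter ones barely matter
```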
 
Last edited:

audio2design

Major Contributor
Joined
Nov 29, 2020
Messages
1,769
Likes
1,830
So while one can detect tones switching on and off (think Morse code), it is not the case with music. If I am not mistaken, that's what audio is all about.

We appear to be exploring theoretical limits in this thread. Hence why I didn't draw conclusions.
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
16,022
Likes
36,344
Location
The Neitherlands
Isn't detecting a tone in music a practical issue, depending on the noise spectrum/type, tone amplitude and frequency, as well as training?
Theory is fine but then there is practice as well. :)
 

audio2design

Major Contributor
Joined
Nov 29, 2020
Messages
1,769
Likes
1,830
Isn't detecting a tone in music a practical issue, depending on the noise spectrum/type, tone amplitude and frequency, as well as training?
Theory is fine but then there is practice as well. :)

Oh absolutely, but this thread is more about theoretical limits versus "music" limits. Theoretical limits define what is possible, not what actually happens of course, but if you know the theoretical limits, it gives you better boundaries for further experimentation.
 

Cbdb2

Major Contributor
Joined
Sep 8, 2019
Messages
1,553
Likes
1,534
Location
Vancouver
Ever record anything? There's always noise: room noise, instrument noise (especially electric guitars), mic noise, preamp noise; synths etc. have electronics noise. If you get a signal that's 80 dB above the noise you're doing a good job; you'll never get 100 dB S/N.
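
Putting that in bit terms with the usual SNR ≈ 6.02·N + 1.76 dB rule of thumb for an ideal N-bit converter:

```python
# Effective number of bits implied by a given SNR, using SNR = 6.02*N + 1.76 dB.
for snr_db in (80.0, 100.0):
    enob = (snr_db - 1.76) / 6.02
    print(f"{snr_db:.0f} dB SNR ~ {enob:.1f} effective bits")
# 80 dB ~ 13.0 bits, 100 dB ~ 16.3 bits -- nowhere near 24
```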
 