Why we hear what we hear

Yes! Check the old "negative feedback is bad" trope :) My audiophile bingo card is almost full now; I'm only missing "copper sounds warm" and "silver is bright and detailed"...
Too late. I already got 'House!' with 'vague hand-waving about hearing timing errors of microseconds'.

Probably a new game will start soon, though.
 
Watched the video in fast mode. The introductory biology part is OK and covers already-known facts. At the end of the video he talks about the speed of transistors and about cables storing energy, which covers low-level details. This part is not totally wrong, but it is technically irrelevant for modern semiconductors and for cables in audio use. And the speed of transients is of course measurable, so his conclusion that we cannot measure this is wrong. On the other hand, it may not be visible in a level-versus-frequency plot.
That's a typical argumentative strategy of golden ears and snake-oil marketing: start with reasonable theories and statements, then smoothly blend in irrelevant and wrong statements and conclusions.

If I were Adam Audio I would kindly ask him to remove or redo the video without my company's name on it.
 
our ears are sensitive to a few microseconds
This is an important claim, and the exact nature of this sensitivity, if it exists, is crucial. I remember looking into this years ago, and IIRC it was about discrimination of two full-amplitude signals in sequence, not music in general or phase distortion at all.

Two big logical problems with claims like this...

1) suppose we really are sensitive to any phase or timing distortion down to a few microseconds. How do you hear that distortion over the 100-1000x greater time distortion your speakers or headphones are adding?

2) if this stuff is audible, how come people can only hear it in sighted tests?

This is the worst kind of nonsense, the stuff that sounds good unless you know something to begin with.
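To put rough numbers on point 1, here is a minimal sketch in Python. The 4th-order Linkwitz-Riley lowpass at 2 kHz is just an illustrative assumption, not a measurement of any particular speaker:

```python
# Rough scale check: group delay of a typical crossover vs. "a few microseconds".
# The LR4 lowpass at 2 kHz is an illustrative assumption.
import numpy as np
from scipy import signal

fc = 2000.0                                    # crossover frequency, Hz (assumed)
# LR4 lowpass = two cascaded 2nd-order Butterworth lowpass sections
b1, a1 = signal.butter(2, 2 * np.pi * fc, btype='low', analog=True)
b, a = np.polymul(b1, b1), np.polymul(a1, a1)

w = 2 * np.pi * np.logspace(1, 4, 2000)        # 10 Hz .. 10 kHz, in rad/s
_, h = signal.freqs(b, a, worN=w)
tau = -np.gradient(np.unwrap(np.angle(h)), w)  # group delay = -dphase/domega

print(f"group delay at low frequencies: {tau[0] * 1e6:6.1f} us")   # ~225 us
# Hundreds of microseconds from the crossover alone, before the drivers,
# the cabinet, and the room add their own (larger) timing errors.
```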
 
From the summary it appears the video boils down to three very common misconceptions in audio:

1. As @kemmler3D and others have noted, he claims that human hearing is "faster" than it actually is.
2. He mysteriously ignores the time/phase effects of multiple speaker drivers, multiple speakers, and room reflections, which are orders of magnitude greater than the effects he's going on about.
3. He doesn't understand or can't grok that musical "speed" and sonic frequency are the same thing.

That's it. It's the very common set of related misunderstandings about the practical requirements and impacts of speed, a.k.a. phase.
 
That video is based on one paper. The background description of the hearing process is OK, but the video maker's shallow understanding of the topic leads him to make and accept leaps and assumptions that aren't supported by the research.

If anyone is really interested in a more accurate overview, I recommend the sixth edition of the standard primer on the topic, An Introduction to the Psychology of Hearing. It's technical from a biomed POV but people who are used to reading audio science research should have few problems with it.

This question of hearing "speed" is complicated in that there are different kinds of hearing speed. We can process extremely short interaural time differences, but our ability to resolve frequency is limited in part by the length of the cochlea, by the speed at which the neurons in our inner ears fire, and by the speed of the movement of stereocilia in the viscous fluid of our cochlea. That speed is much slower than the speed of the interaural time differences we can process. A lot of presumptions get made based on the short interaural time differences we can resolve as part of our process of localization, but we don't resolve all aural information at those kinds of time thresholds.
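For scale, here is the back-of-envelope version of that distinction; the head radius, speed of sound, and the ~10 µs threshold are round-number assumptions from the localization literature, not exact figures:

```python
# Back-of-envelope: the largest ITD the head can produce vs. the ~10 us JND.
# Head radius, speed of sound, and the threshold are round-number assumptions.
import math

r = 0.0875                # head radius, metres (~8.75 cm, assumed)
c = 343.0                 # speed of sound in air, m/s

# Woodworth spherical-head model: extra path length = r * (theta + sin(theta))
theta = math.pi / 2       # source directly to one side
max_itd = r * (theta + math.sin(theta)) / c

print(f"maximum ITD       : {max_itd * 1e6:5.0f} us")   # ~656 us
print(f"best-case ITD JND : ~10 us (commonly cited)")
# The microsecond figure is a *difference* threshold for localization,
# not a general "frame rate" at which hearing resolves everything.
```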

As to what degrees of these things are perceptible at what thresholds and in the presence of what kind of maskers, those are heavily studied areas in hearing research and paint a much more complicated and conditional picture than this video would lead one to believe.
 
In general I do not like meta-analysis papers.

But there is something to the discussion of timing and general transient system behaviour that many here seem not to like.
There is IMO more to a system than just the frequency response and SINAD of tones.
Although a lot of that (or most of it) is in the speaker.
 
If I were Adam Audio I would kindly ask him to remove or redo the video without my company's name on it.
Maybe Adam Audio should be more cautious about who they sponsor or advertise with. ;)


JSmith
 
Maybe Adam Audio should be more cautious about who they sponsor or advertise with. ;)


JSmith
Possibly true. If I owned an audio company I wouldn't offer such merchandise at all, as you never know who will end up using it. ;)
 
In general I do not like meta-analysis papers.

But there is something to the discussion of timing and general transient system behaviour that many here seem not to like.
There is IMO more to a system than just the frequency response and SINAD of tones.
Although a lot of that (or most of it) is in the speaker.

I watched this video like four or six weeks ago, not again recently, so I may not accurately be remembering everything in it.

If I remember correctly, he says something like we can hear differences of 10 µs, which is true when it comes to interaural time differences, but then he applies that threshold to transient attack time, which doesn't have anything in particular to do with interaural time difference, and further asserts that we don't and can't measure speeds like that in our systems. I think it's a misapplication, a misunderstanding of the difference between how our ear/brain processes interaural time differences and how it hears transient attacks and uses information from them to resolve complex waves, that leads him to take that 10 µs threshold and apply it across the board to all hearing functions.

In fact, interaural time differences are weak contributors to stream segregation, which is a hugely important part of auditory perception by which we cognitively separate complex waves into streams of separate, continuous sound objects (it's why we perceive the string section playing a continuous linear melody in a symphony, with other instruments playing other lines, instead of just perceiving separate sound events). But our ability to separate out the same auditory information reaching one ear from a second instance of that auditory information reaching the same ear is much slower than the ITDs we can resolve, more like 30 or 40 msec. (Remember, a big part of our hearing is mechanical and, as fast as it is, and it's a system built for speed, it does take time for the ear drum and the ossicle bones and the basilar membrane and the stereocilia and all the moving parts of the ear to move, come to rest, and move again. Yup, our hearing system has ringing and delays in coming to rest.) We don't use ITD to hear pitch, for example. In fact, although humans can hear sounds up to frequencies as fast as 20,000 cycles per second, our ability to resolve those sounds as having a pitch starts to decay above 4 kHz, because our hearing isn't fast enough (up to about 4 kHz or 5 kHz the neurons associated with each inner hair cell will fire phase-locked to the signal frequency, but above that the time it takes for the hair cells to move, return to place, and move again is too long for firing to happen at the same point in the signal's phase).

So, the question of hearing and time resolution is multifaceted, and ITD time thresholds aren't important or meaningful to all aspects of hearing and sound perception. Applying that time frame to all hearing phenomena other than ITD and localization isn't accurate.

Also, as I recall, he says we don't and can't measure speeds that fast in bench and other common audio tests. I'm not an audio engineer and I maybe know less about it than I do about hearing and auditory perception, and certainly less than many people here, but I think the rise time of a 10 kHz square wave is in the tens of nanoseconds, so looking at the rise time of a 10 kHz square wave will tell you about a system's instant-on speed resolution up to the threshold of our hearing, at speeds much faster than the ITDs we can resolve. We also look at impulse responses and group delay and other things that relate to transient timing in a system; it's not true that we don't have routine ways of measuring these sonic and system characteristics at human thresholds of time resolution and beyond. But someone here can correct me if I'm incorrect on this point.
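For anyone curious what "looking at group delay" amounts to in practice, a minimal numpy sketch; the one-pole test system here is a stand-in assumption, not a real measurement:

```python
# What "looking at group delay" amounts to: FFT the impulse response,
# unwrap the phase, differentiate. The one-pole test system is a stand-in.
import numpy as np

fs = 96000                                    # sample rate, Hz (assumed)
n = np.arange(4096)
h = np.exp(-n / 50.0)                         # fake "measured" impulse response
h /= h.sum()                                  # normalize to unity DC gain

H = np.fft.rfft(h)
f = np.fft.rfftfreq(len(h), d=1 / fs)         # frequency axis, Hz
tau = -np.gradient(np.unwrap(np.angle(H)), 2 * np.pi * f)   # seconds

band = (f > 100) & (f < 20000)
spread = (tau[band].max() - tau[band].min()) * 1e6
print(f"in-band group delay spread: {spread:.1f} us")
# The measurement grid itself is ~10 us per sample at 96 kHz, and
# interpolation resolves timing far below one sample period.
```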

Finally, it IS true, and sometimes frustrating for people who want the whole experience of sound reproduction to fit into a neat, easily defined box of phenomena related only to the characteristics of the sound stimulus, that different individuals can show considerable variation in sensitivity and thresholds for some hearing phenomena. So, if test subjects are presented with a tone and then the tone again, but 180 degrees out of phase from the original presentation, there will be different neural patterns: our hearing responds differently to differences of absolute phase. But while some of us seem to be sensitive to that in the context of musical listening on audio systems, others of us (myself included) struggle to be sensitive to the difference.

Our hearing is an incredibly complex process that's evolved over millions of years and, frankly, is only partially understood. Not only do we have much more sophisticated abilities to study real-time brain and neural functioning today than we had years ago, but things about how the ear works are still being discovered.

For example, we didn't even know about the active gain and frequency selectivity and non-linear compression done by the outer hair cells in our cochlea (and controlled at least in part from the brain in real time) until the 1970s, and the control processes by which it works are still not understood. That's why I think the sometimes knee-jerk assumption that "X can't be heard" is a little dangerous: there are a lot of differences among individuals in both hearing and sensitivity, hearing and sensitivity are plastic and can be trained, we don't really understand everything about how we hear, in particular about how the brain controls the ear, and hearing thresholds with complex tones are often different from hearing thresholds with sinusoidal tones. But I also don't think it gets us closer to understanding to leap to conclusions like the OP does about auditory time resolution and about sound/time measurements of equipment. A lot of those assertions don't seem to be wholly supported by the evidence we have so far, or are being drawn from an incomplete or mistaken understanding of hearing.
 
This is where I hope @j_j chimes in.
I don't know... he ignored it the first time it was posted :)

I didn't watch the video, only read a summary on the Sound Science subforum on Head-Fi:
Kind of what I expected :)
 
I don't know... he ignored it the first time it was posted :)

I didn't watch the video, only read a summary on the Sound Science subforum on Head-Fi:
Kind of what I expected :)

Well, when it comes to the "we didn't know": we knew in the 1930s the phenomenon that he says we didn't know until the 1970s, compression, for example. Ditto the "frequency selectivity" and the compression aspect of how the ear actually reacts. Look at Fletcher's masking studies from the 1930s. It's clear there. That's a touch before the 1970s, to say the least. Yes, we know more about the mechanism, but once people finally listened to Zwicker's recap of Fletcher, the question of frequency selectivity was solidly established. That's before I was born. I'm 71, you know.

The "hearing with complex tones", what do you call the difference between "noise masking" and "tone masking". Noise is, in the real sense, simply a kind of complex tone. Zwicker, Hellman, Zwislocki all addressed that. Now, Zwicker got it wrong, I must say, searching for the "magic power law", such is the nature of research, but I dare say that's well in hand now. Co-articulation? That's new? Really? It is correct that most masking models FOR CODECS did not consider co-articulation until the 2000's, that was a pure limit of computational power, and that's now long since passed.

The bit about inner hair cell firing? Yes, that is exactly coupled to a fall-off in pitch sensitivity, but it's above about 800Hz where the problem starts, not 4kHz. There's nothing mysterious about it IF YOU UNDERSTAND HOW THE BASILAR MEMBRANE WORKS, which some people continue to deny, but I can't do anything about that. Simply put, the excitation about the center frequency at any point on the basilar membrane changes phase very quickly at the peak response of the cochlear filter, yes, at low frequencies, that means the excitation has a sudden phase shift at the point where the actual tone is present. Nothing special there. This works to some extent to 2kHz, and just barely to 4kHz. Above that, variations in cochlear stimulation amplitude are what's left, which is less accurate. So? Why does this matter?

Then there's the idea that we can't hear phase of a continuous tone at high frequencies, no we can't, BUT we can very well hear the onset of a signal envelope in a modulated tone or sudden onset right up to whatever one's frequency limits are. So don't extend that result where it doesn't belong, and for ITD's that takes over at 2kHz, give or take.

As to auditory perception being "complex", that's a different issue. When we address the detectors (basilar membrane, etc.), is the OP asserting that information undetectable at the actual input to the auditory nerve somehow gets to the brain via some other path? Really. Do tell me about that. This is mixing cognition with the auditory periphery, and trying to make an appeal to ignorance out of an unwarranted mixing of systems.

As far as "don't and can't measure on the bench"? That's beyond preposterous. We can measure things much faster than the ear, even using redbook standard.

Measuring "speeds" and the like includes the issue of the detector SNR. Even redbook is very close to 16 bits, and the inner hair cells, which are the detectors, are doing very well to get to 5 or 6 bits. The bandwidth of redbook is 20khz, the widest bandwidth on the cochlea is about 6 kHz. Bandwidth and SNR set time resolution, end of discussion. I think this thread pointed to a talk I gave on the issue. Apparently the writer thinks that facts are a "leap". So it goes.

My position is that the claims described a few posts above are misinformed as to date, time, and understanding, misrepresent the actual state of knowledge, deny how and when the mathematics can be applied, confuse the different levels of auditory perception, in particular by trying to argue that the later cognitive systems can somehow extract things that aren't captured, and then play the fallacy of ignorance over and over and over and over again.

Along the way, the "we can't measure" is confused between preference and ability to distinguish, and broadened to include much that goes into far more complex issues than simple thresholds of hearing. Certainly 1960s "tests" are just about useless (the old 1 kHz sine-wave test says just about nothing), but apparently the author is not aware that measuring a system's impulse response and stationarity of response, to an extent obviously unimagined, is trivial today, using, for instance, a 20/96 ADC/DAC and MATLAB (or Octave), or any number of on-market measurement systems. Ditto for distortion, noise floor, etc. Now, of course, 'noise floor' has to be measured as a function of frequency. No kidding. Measurements do not provide "one number", and haven't since the 1980s. So let's leave that one out of the next attempt at the fallacy of ignorance.
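For the curious, that kind of measurement really is a few lines these days. A sketch of the exponential-sweep method for capturing an impulse response; the sweep parameters are arbitrary choices, and the "device under test" here is simulated rather than a real loopback:

```python
# Impulse response via exponential sine sweep (Farina method), numpy/scipy only.
# Sweep parameters are arbitrary; the "device under test" is simulated here.
import numpy as np
from scipy.signal import fftconvolve

fs, T = 96000, 5.0                       # sample rate and sweep length (assumed)
f1, f2 = 20.0, 20000.0                   # sweep range, Hz
t = np.arange(int(T * fs)) / fs
R = np.log(f2 / f1)
sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1))

# Inverse filter: time-reversed sweep, attenuated to undo the sweep's
# pink-ish energy distribution over frequency
inv = sweep[::-1] / np.exp(t * R / T)

# In a real measurement, 'recorded' is what comes back from the DUT.
# Here we fake a DUT: unity gain plus a small echo 50 samples later.
echo = np.concatenate((np.zeros(50), sweep[:-50]))
recorded = sweep + 0.1 * echo

ir = fftconvolve(recorded, inv, mode='full')
ir /= np.max(np.abs(ir))
peak = int(np.argmax(np.abs(ir)))
print(f"echo at +50 samples: {20 * np.log10(abs(ir[peak + 50])):.1f} dB re. peak")
# Should land near -20 dB: the fake echo recovered from the deconvolution.
```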

Finally, the last paragraph about "simplistic", which is, frankly, a deeply offensive professional insult, completely ignores my positions, which have been expressed for more than 30 years now, and expressed with little subtlety or reticence. Obviously, the statements are not informed, but they most certainly appear to me to be yet another attempt to dismiss testable facts, much like the "teach the controversy" attempts to sell creationism in place of evolution, and various other offensive attempts to convince the public that "science knows nothing".

So, that's what I think in a nutshell. The whole bit is a classic "Gish Gallop", for starters, one that leans heavily on the fallacy of ignorance. It's not simplistic; it is based on the disinformation tactics that we see everywhere in the press and social media today.

To explain a Gish Gallop, please see: https://en.wikipedia.org/wiki/Gish_gallop
To explain the fallacy of ignorance: https://en.wikipedia.org/wiki/Argument_from_ignorance also https://en.wikipedia.org/wiki/Irrelevant_conclusion , https://en.wikipedia.org/wiki/Fallacy_of_composition and finally: https://en.wikipedia.org/wiki/Equivocation

Which is heavily used throughout the piece, along with attempts at https://en.wikipedia.org/wiki/Forcing_(magic) via the equivocation, allowing the response of "but that's not what I meaaaannnnttttt".

All of them classical propagandistic conduct.
 
1) suppose we really are sensitive to any phase or timing distortion down to a few microseconds. How do you hear that distortion over the 100-1000x greater time distortion your speakers or headphones are adding?

In some fashions we are. That's several orders of magnitude, like about 4, longer than redbook CD can manage.

2) if this stuff is audible, how come people can only hear it in sighted tests?
*ding* Give that man a cigar!
 
In some fashions we are. That's several orders of magnitude, like about 4, longer than redbook CD can manage.
So, years ago, trying to play devil's advocate and justify high-res formats, I "discovered" that the sampling rate of redbook is slower than certain timings people are capable of hearing. A smoking gun!

It was kindly explained to me that the time resolution of redbook is much better than the sampling rate suggests, because it's a product of the bit depth as well as the sampling rate. While this makes sense on some level, I have to admit I'm not totally clear on what timing information we can hear is and isn't possible to capture in redbook... mostly because I'm not totally clear on what kind of timing distortion can be audible.
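A concrete way to see the "resolution is not the sample period" point: shift a band-limited burst by 1 µs, far below the 22.7 µs sample period at 44.1 kHz, quantize to 16 bits, and recover the shift. A minimal sketch; all signal parameters are arbitrary illustrative choices:

```python
# Demo: a 1 us shift (far below the 22.7 us sample period of 44.1 kHz)
# survives 16-bit sampling and is recoverable. All parameters illustrative.
import numpy as np

fs = 44100
t = np.arange(8192) / fs
f0 = 5000.0                                             # burst carrier, Hz
burst = np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 0.05) / 0.01) ** 2)

def delayed(x, delay_s):
    """Apply a fractional delay via the FFT phase-shift theorem."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1 / fs)
    return np.fft.irfft(X * np.exp(-2j * np.pi * f * delay_s), n=len(x))

a = np.round(burst * 32767) / 32767                     # 16-bit quantization
b = np.round(delayed(burst, 1e-6) * 32767) / 32767      # shifted by 1 us

# Estimate the shift: parabolic interpolation around the correlation peak
xc = np.correlate(b, a, mode='full')
k = int(np.argmax(xc))
frac = (xc[k - 1] - xc[k + 1]) / (2 * (xc[k - 1] - 2 * xc[k] + xc[k + 1]))
est = (k - (len(a) - 1) + frac) / fs
print(f"estimated delay: {est * 1e6:.2f} us")           # ~1.00 us
```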
 
In fact, interaural time differences are weak contributors to stream segregation, which is a hugely important part of auditory perception by which we cognitively separate complex waves into streams of separate, continuous sound objects (it's why we perceive the string section playing a continuous linear melody in a symphony, with other instruments playing other lines, instead of just perceiving separate sound events). But our ability to separate out the same auditory information reaching one ear from a second instance of that auditory information reaching the same ear is much slower than the ITDs we can resolve, more like 30 or 40 msec. (Remember, a big part of our hearing is mechanical and, as fast as it is, and it's a system built for speed, it does take time for the ear drum and the ossicle bones and the basilar membrane and the stereocilia and all the moving parts of the ear to move, come to rest, and move again. Yup, our hearing system has ringing and delays in coming to rest.) We don't use ITD to hear pitch, for example. In fact, although humans can hear sounds up to frequencies as fast as 20,000 cycles per second, our ability to resolve those sounds as having a pitch starts to decay above 4 kHz, because our hearing isn't fast enough (up to about 4 kHz or 5 kHz the neurons associated with each inner hair cell will fire phase-locked to the signal frequency, but above that the time it takes for the hair cells to move, return to place, and move again is too long for firing to happen at the same point in the signal's phase).
If it takes time to move, then it seems like it is analogous to shaking a bowling ball?
But the ear can hear to 20 kHz, so it obviously can wriggle around at that rate, which is a period of 0.05 msec, or 50 µs.

For example, we didn't even know about the active gain and frequency selectivity and non-linear compression done by the outer hair cells in our cochlea (and controlled at least in part from the brain in real time) until the 1970s, and the control processes by which it works are still not understood. That's why I think the sometimes knee-jerk assumption that "X can't be heard" is a little dangerous: there are a lot of differences among individuals in both hearing and sensitivity, hearing and sensitivity are plastic and can be trained, we don't really understand everything about how we hear, in particular about how the brain controls the ear, and hearing thresholds with complex tones are often different from hearing thresholds with sinusoidal tones. But I also don't think it gets us closer to understanding to leap to conclusions like the OP does about auditory time resolution and about sound/time measurements of equipment. A lot of those assertions don't seem to be wholly supported by the evidence we have so far, or are being drawn from an incomplete or mistaken understanding of hearing.
It is certainly complex, but the idea that attack and timing matter seems more germane than steady-state behaviour.
In fact, the feedback coming back from the brain to the ear sort of squelches steady-state sound after a while.
 
Most of it is based on "The Human Auditory System and Audio", a paper published by Milind N. Kunchur.
Interesting, at least (it's more a collection of 218 previous studies on the matter).
Well, that would be a death sentence from my point of view.
 
Attacks are indeed more important. Of course, since the ear does a time-frequency analysis, nobody should be surprised, and nobody who works in hearing is the slightest bit surprised.

The question is "what part of that gets to the auditory nerve", and that, frankly, is not that hard to know once the math is examined. Now, there ARE some issues involving anti-aliasing/anti-imaging filters at 44.1 kHz that might barely, maybe, conceivably (although never shown in the real world) interact with the compression mechanism, basically as pre-echo, which is most certainly an issue with codecs. At 48 kHz the plausibility cannot be utterly dismissed, but it's very, very farfetched. At 50 kHz, you're fine. The impulse response of even a ridiculous filter design is in the clear.
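For anyone wanting to see the pre-echo in question: the impulse response of a steep linear-phase lowpass near Nyquist rings before its main peak. A minimal sketch; the tap count and cutoff are arbitrary illustrative choices:

```python
# Pre-ringing of a steep linear-phase anti-imaging filter near Nyquist.
# Filter length and cutoff are arbitrary illustrative choices.
import numpy as np
from scipy import signal

fs = 44100
taps = signal.firwin(511, 20000, fs=fs)   # linear-phase FIR lowpass at 20 kHz

peak = int(np.argmax(np.abs(taps)))       # main peak (centre tap, linear phase)
thresh = np.abs(taps[peak]) * 10 ** (-60 / 20)        # -60 dB re. the peak
first = int(np.argmax(np.abs(taps) > thresh))         # first tap above threshold

print(f"ringing above -60 dB starts {(peak - first) / fs * 1e3:.2f} ms "
      f"before the main peak")
# A minimum-phase design pushes this ringing after the peak instead; whether
# such pre-echo is ever audible at 44.1 kHz is the contested part above.
```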

All of that, of course, applies only to somebody who can still hear to 20 kHz or so, which in the modern (too loud) world is a very limited class of young people. By the time you're 30, you're SOL anywhere, except maybe if you lived on the African plains with no firearms.

There is no doubt that young people MAY be able to hear somewhat over 20 kHz, but as a sensation, not a tone, which is not surprising, because that's the very entrance to the cochlea as far as detector location goes, and that's what gets mangled most thoroughly, and fastest, by modern life.
 