Yes! Check the old "negative feedback is bad" trope.
Yeah, if you complain about feedback, you lost the plot...
Too late. I already got 'House!' with 'Vague hand waving about hearing timing errors of microseconds'.
Yes! Check the old "negative feedback is bad" trope.
My audiophile bingo card is almost full now; I'm only missing the "copper sounds warm and silver is bright and detailed" one...
That's a typical argumentative strategy of golden ears and snake-oil marketing: start with reasonable theories and statements, then smoothly blend in irrelevant or wrong statements and conclusions.
Watched the video in fast mode. The biological part at the beginning is OK and covers already-known facts. At the end of the video he talks about the speed of transistors and cables with regard to stored energy, which covers low-level details. This part is not totally wrong, but it is technically irrelevant for modern semiconductors and for cables in audio use. And the speed of transients is of course measurable, so his conclusion that this cannot be measured is wrong. On the other hand, it may not be visible in a level-versus-frequency plot.
This is an important claim, and the exact nature of this sensitivity, if it exists, is crucial. I remember looking into this years ago and IIRC it was about discrimination of two full-amplitude signals in sequence, not music in general or phase distortion at all.
our ears are sensitive to a few microseconds
Maybe Adam Audio should be more cautious about who they sponsor or advertise with.
If I was Adam Audio I would kindly ask him to remove or redo the video without my company's name on it.
Possibly true. If I owned an audio company I wouldn't offer such merchandise at all, as you never know by whom it will be used.
Maybe Adam Audio should be more cautious about who they sponsor or advertise with.
JSmith
In general I do not like meta-analysis papers.
But there is something about the discussion of timing, and general transient system behaviour, that many here seem not to like.
There is IMO more to a system than just the frequency response and the SINAD of tones.
Although a lot of that (or most of it) is in the speaker.
I don't know... he ignored it the first time it was posted.
This is where I hope @j_j chimes in.
This ^
if this stuff is audible, how come people can only hear it in sighted tests?
I don't know... he ignored it the first time it was posted
Relationships between physical sound, auditory sound perception, and music perception
If I can offer a couple of slide decks/talks here: First, on context, environment, etc.: http://www.aes-media.org/sections/pnw/pnwrecaps/2013/apr_jj/ Pay particular attention to the discussion around slide 14 of http://www.aes-media.org/sections/pnw/ppt/jj/heyser.pptx When anything (context, the wind... (www.audiosciencereview.com)
I didn't watch the video, only read a summary on the Sound Science subforum on Head-Fi:
Kind of what I expected.
Testing audiophile claims and myths
So, what’s he getting at? ... I think he’s trying to remind us that audio perception is incredibly rich and layered, and there’s a lot going on that standard measurements can’t fully explain. And that’s why subjective impressions still matter, even if they don’t always align perfectly with what... (www.head-fi.org)
Well, when it comes to the "we didn't know": we knew in the 1930's the phenomena that he says we didn't know until the 1970's, compression, for example. Ditto the "frequency selectivity" and the compression aspect of how the ear actually reacted. Look at Fletcher's masking studies in the 1930's. It's clear even then. That's a touch before the 1970's, to say the least. Yes, we know more about the mechanism, but once people finally listened to Zwicker's recap of Fletcher, the question of frequency selectivity was solidly established. That's before I was born. I'm 71, you know.
The "hearing with complex tones", what do you call the difference between "noise masking" and "tone masking". Noise is, in the real sense, simply a kind of complex tone. Zwicker, Hellman, Zwislocki all addressed that. Now, Zwicker got it wrong, I must say, searching for the "magic power law", such is the nature of research, but I dare say that's well in hand now. Co-articulation? That's new? Really? It is correct that most masking models FOR CODECS did not consider co-articulation until the 2000's, that was a pure limit of computational power, and that's now long since passed.
The bit about inner hair cell firing? Yes, that is exactly coupled to a fall-off in pitch sensitivity, but it's above about 800Hz where the problem starts, not 4kHz. There's nothing mysterious about it IF YOU UNDERSTAND HOW THE BASILAR MEMBRANE WORKS, which some people continue to deny, but I can't do anything about that. Simply put, the excitation around the center frequency at any point on the basilar membrane changes phase very quickly at the peak response of the cochlear filter; yes, at low frequencies, that means the excitation has a sudden shift at the point where the actual tone is present. Nothing special there. This works to some extent to 2kHz, and just barely to 4kHz. Above that, variations in cochlear stimulation amplitude are what's left, which is less accurate. So? Why does this matter?
Then there's the idea that we can't hear phase of a continuous tone at high frequencies, no we can't, BUT we can very well hear the onset of a signal envelope in a modulated tone or sudden onset right up to whatever one's frequency limits are. So don't extend that result where it doesn't belong, and for ITD's that takes over at 2kHz, give or take.
As to auditory perception being "complex", that's a different issue. When we address the detectors (basilar membrane, etc.), is the OP asserting that information undetectable by the actual input to the auditory nerve somehow gets to the brain via some other path? Really. Do tell me about that. This is now mixing cognition with the auditory periphery, and trying to make an appeal to ignorance out of an unwarranted mixing of systems.
As far as "don't and can't measure on the bench"? That's beyond preposterous. We can measure things much faster than the ear, even using redbook standard.
Measuring "speeds" and the like includes the issue of the detector SNR. Even redbook is very close to 16 bits, and the inner hair cells, which are the detectors, are doing very well to get to 5 or 6 bits. The bandwidth of redbook is 20 kHz, the widest bandwidth on the cochlea is about 6 kHz. Bandwidth and SNR set time resolution, end of discussion. I think this thread pointed to a talk I gave on the issue. Apparently the writer thinks that facts are a "leap". So it goes.
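(To put rough numbers on that bandwidth/SNR point, here is a minimal back-of-the-envelope sketch. It assumes the usual Cramér-Rao-style rule of thumb that time-delay resolution scales as 1/(2*pi*B*sqrt(SNR)); the bandwidth and SNR figures are the illustrative ones above, not precise models of either the format or the ear.)

```python
# Rough time-resolution estimate from bandwidth and SNR, using the
# rule-of-thumb sigma_t ~ 1 / (2 * pi * B * sqrt(SNR)).
# All numbers are illustrative assumptions, not measurements.
import math

def time_resolution(bandwidth_hz, snr_db):
    """Approximate timing resolution of a detector with the given bandwidth and SNR."""
    snr_linear = 10 ** (snr_db / 10)
    return 1.0 / (2 * math.pi * bandwidth_hz * math.sqrt(snr_linear))

# Redbook-style channel: ~20 kHz bandwidth, ~96 dB SNR (16 bits)
print(f"redbook : {time_resolution(20_000, 96) * 1e9:6.2f} ns")

# Cochlear filter as detector: ~6 kHz bandwidth, ~36 dB (about 6 bits)
print(f"ear     : {time_resolution(6_000, 36) * 1e6:6.2f} us")
```

The redbook estimate comes out around a tenth of a nanosecond and the ear estimate around half a microsecond, i.e. redbook sits roughly four orders of magnitude below the few-microsecond figures being argued about.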
My position is that the claims described a few posts above are misinformed as to date, time, and understanding; misrepresent the actual state of knowledge; deny how and when the mathematics can be applied; confuse the different levels of auditory perception, in particular by trying to argue that the later cognitive systems can somehow extract things that aren't captured; and then play the fallacy of ignorance over and over and over and over again.
Along the way, the "we can't measure" is confused between preference and the ability to distinguish, and broadened to include much that goes into far more complex issues than simple thresholds of hearing. Certainly 1960's "tests" are just about useless (the old 1 kHz sine wave test says just about nothing), but apparently the author is not aware that measuring a system's impulse response and stationarity of response, to an extent obviously unimagined, is trivial today, using, for instance, a 20/96 ADC/DAC and MATLAB (or Octave), or any number of on-market measurement systems. Ditto for distortion, noise floor, etc. Now, of course, noise floor has to be measured as a function of frequency. No kidding. Measurements do not provide "one number", and haven't since the 1980's. So let's leave that one out of the next attempt at the fallacy of ignorance.
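(As a concrete illustration of how routine such a measurement is, here is a minimal sketch of a swept-sine, Farina-style impulse-response measurement with NumPy/SciPy. The "device under test" is just a toy low-pass filter standing in for a real playback-and-capture chain, and the sample rate and sweep length are arbitrary choices.)

```python
# Minimal swept-sine (log sweep) impulse-response measurement sketch.
# Replace the toy filter below with a real record-while-playing capture.
import numpy as np
from scipy import signal

fs = 96_000                 # 96 kHz capture, as mentioned above
T = 2.0                     # sweep length in seconds
f1, f2 = 20, 20_000
t = np.arange(int(T * fs)) / fs

# Exponential (log) sweep and its amplitude-compensated inverse filter
R = np.log(f2 / f1)
sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1))
inverse = sweep[::-1] * np.exp(-t * R / T)

# Toy stand-in for the system being measured (a gentle low-pass)
b, a = signal.butter(2, 18_000 / (fs / 2))
measured = signal.lfilter(b, a, sweep)

# Convolving with the inverse filter collapses the sweep into an impulse
# response; harmonic distortion products separate out ahead of the main peak.
ir = signal.fftconvolve(measured, inverse)
ir /= np.max(np.abs(ir))
print("impulse response peak at sample", int(np.argmax(np.abs(ir))))
```

From the recovered impulse response one can read off rise time, ringing, group delay and so on, with timing limited by bandwidth and noise rather than by the sample period.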
Finally, the last paragraph about "simplistic", which is, frankly, a deeply offensive professional insult, completely ignores my positions that have been expressed for more than 30 years or so now, and expressed with little subtlety or reticence. Obviously, the statements are not informed, but most certainly appear to me to be yet another attempt to dismiss testable facts, much like the "teach the controversy" attempts to sell creationism in the place of evolution, and various other offensive attempts to convince the public that "science knows nothing".
So, that's what I think in a nutshell. The whole bit is a classic "Gish Gallop", for starters, that leans heavily on the fallacy of ignorance. It's not simplistic, it is based on disinformation tactics that we see everywhere in the press and social media today.
To explain a Gish Gallop, please see: https://en.wikipedia.org/wiki/Gish_gallop
To explain the fallacy of ignorance: https://en.wikipedia.org/wiki/Argument_from_ignorance also https://en.wikipedia.org/wiki/Irrelevant_conclusion , https://en.wikipedia.org/wiki/Fallacy_of_composition and finally:
https://en.wikipedia.org/wiki/Equivocation
Which is heavily used throughout the piece, along with attempts at: https://en.wikipedia.org/wiki/Forcing_(magic) via the equivocation, allowing the response of "but that's not what I meaaaannnnttttt".
All of them classical propagandistic conduct.
1) suppose we really are sensitive to any phase or timing distortion down to a few microseconds. How do you hear that distortion over the 100-1000x greater time distortion your speakers or headphones are adding?
*ding* Give that man a cigar!
2) if this stuff is audible, how come people can only hear it in sighted tests?
So, years ago, trying to play devil's advocate and justify high-res formats, I "discovered" that the sampling rate of redbook is slower than certain timings people are capable of hearing. A smoking gun!
In some fashions we are. That's several orders of magnitude, like about 4, longer than redbook CD can manage.
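(Since this particular "smoking gun" comes up a lot, here is a minimal sketch of why the 44.1 kHz sample period of about 22.7 µs does not limit timing resolution: a 10 µs inter-channel delay applied to a band-limited signal survives sampling and 16-bit quantization, and can be read straight back off the cross-spectrum phase. The noise test signal, the delay, and the band limits are illustrative choices only.)

```python
# A 10 us inter-channel delay is less than half a sample at 44.1 kHz, yet it
# is fully preserved by band-limited sampling and 16-bit quantization.
import numpy as np

fs, n, delay_s = 44_100, 4096, 10e-6
rng = np.random.default_rng(0)
x = rng.standard_normal(n)              # band-limited stand-in for "music"

# Ideal fractional delay: a pure phase ramp in the frequency domain
freqs = np.fft.rfftfreq(n, 1 / fs)
y = np.fft.irfft(np.fft.rfft(x) * np.exp(-2j * np.pi * freqs * delay_s), n)

# Redbook-style 16-bit quantization of both "channels"
def q16(s):
    return np.round(s * 32767) / 32767

scale = np.max(np.abs(x))
xq, yq = q16(x / scale), q16(y / scale)

# Recover the delay from the slope of the cross-spectrum phase
cross = np.fft.rfft(yq) * np.conj(np.fft.rfft(xq))
band = (freqs > 100) & (freqs < 20_000)
slope = np.polyfit(freqs[band], np.unwrap(np.angle(cross[band])), 1)[0]
print(f"recovered delay: {-slope / (2 * np.pi) * 1e6:.3f} us")   # ~10.000 us
```

The delay comes back at essentially 10 µs even though no sample lands anywhere near that offset; the timing of a band-limited signal is carried by the waveform, not by the sample grid.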
If it takes time to move, then it seems like it is analogous to shaking a bowling ball?
I watched this video like four or six weeks ago, not again recently, so I may not accurately be remembering everything in it.
If I remember correctly, he says something like we can hear differences of 10 µs, which is true when it comes to interaural time differences, but then he applies that threshold to transient attack time, which really doesn't have anything in particular to do with interaural time difference, and further asserts that we don't and can't measure speeds like that in our systems. I think in the first case it's a misapplication, or a misunderstanding of the difference between how our ear/brain processes interaural time differences and how it hears transient attacks and uses information from them to resolve complex waves, to take that 10 µs threshold and apply it across the board to all hearing functions.
In fact, interaural time differences are weak contributors to stream segregation, which is a hugely important part of auditory perception by which we cognitively separate complex waves into streams of separate, continuous sound objects (it's why we perceive the string section playing a continuous linear melody in a symphony, while other instruments play other lines, instead of just perceiving separate sound events). But our ability to separate one instance of auditory information reaching an ear from a second instance of that same information reaching the same ear is much slower than the ITDs we can resolve, more like 30 or 40 msec. (Remember, a big part of our hearing is mechanical and, as fast as it is, and it is a system built for speed, it does take time for the ear drum and the ossicle bones and the basilar membrane and the stereocilia and all the moving parts of the ear to move, come to rest, and move again. Yup, our hearing system has ringing and delays in coming to rest.) We don't use ITD to hear pitch, for example, and in fact, although humans can hear sounds up to frequencies as high as 20,000 cycles per second, our ability to resolve those sounds as having a pitch starts to decay above 4kHz, because our hearing isn't fast enough (in fact, up to about 4kHz or 5kHz the neurons associated with each inner hair cell will fire phase-locked to the signal frequency, but above that the time it takes for the hair cells to move, return to place, and move again is too slow for it to happen at the same point in the signal's phase).
It is certainly complex, but the idea that attack and timing are important seems more germane than steady-state behaviour.
So, the question of hearing and time resolution is multifaceted, and ITD time thresholds aren't important or meaningful to all aspects of hearing and sound perception. Applying that time frame to all hearing phenomena other than ITD and localization isn't accurate.
Also, as I recall, he says we don't and can't measure speeds that fast in bench and other common audio tests. I'm not an audio engineer and I maybe know less about it than I do about hearing and auditory perception, and certainly less than many people here, but I think the rise time of a 10kHz square wave is in the tens of nanoseconds, so looking at the rise time of a 10kHz square wave will tell you about a system's instant-on speed resolution up to the threshold of our hearing, at speeds much faster than the ITDs we can resolve. We also look at impulse responses and group delay and other things that relate to transient timing in a system, so it's not true that we don't have routine ways of measuring these sonic and system characteristics at human thresholds of time resolution and beyond. But someone here can correct me if I'm incorrect on this point.
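(On the group-delay point: it is indeed a routine bench measurement, and it also puts point 1 earlier in the thread into perspective. Here is a minimal sketch using a fourth-order 40 Hz high-pass as a crude stand-in for a loudspeaker's low-frequency roll-off; the filter and the numbers are illustrative only, not a model of any real speaker.)

```python
# Group delay of a toy 4th-order Butterworth high-pass at 40 Hz, a rough
# stand-in for a loudspeaker's bass roll-off. Illustrative numbers only.
import numpy as np
from scipy import signal

fs = 48_000
b, a = signal.butter(4, 40 / (fs / 2), btype="highpass")

w, gd_samples = signal.group_delay((b, a), w=2048, fs=fs)
gd_ms = gd_samples / fs * 1e3            # convert from samples to milliseconds

for f_target in (50, 100, 1000, 10_000):
    i = int(np.argmin(np.abs(w - f_target)))
    print(f"{w[i]:8.0f} Hz : group delay {gd_ms[i]:8.3f} ms")
```

Near the roll-off, even this toy system delays components by several milliseconds relative to the midrange, i.e. hundreds to thousands of times the few-microsecond figures under debate.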
Finally, it IS true, and sometimes frustrating for people who want the whole experience of sound reproduction to fit into a neat, easily defined box of phenomena related only to the characteristics of the sound stimulus, that different individuals can show considerable variation in sensitivity and thresholds for some hearing phenomena. So, for example, if test subjects are presented with a tone and then the same tone again but 180 degrees out of phase from the original presentation, there will be different neural patterns. Our hearing responds differently to differences of absolute phase. But while some of us seem to be sensitive to that in the context of musical listening on audio systems, others of us (myself included) struggle to be sensitive to the difference.
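(For anyone who wants to try the absolute-polarity comparison themselves, here is a minimal sketch of a stimulus pair, assuming the python-soundfile package is available. One deliberate choice: a pure sine is useless for this, because inverting it is just a half-period time shift, so the sketch uses a two-harmonic tone whose positive and negative half-cycles have different shapes. Whether the flipped copy is distinguishable blind is exactly the individual-variation question above.)

```python
# Minimal absolute-polarity test pair: an asymmetric two-harmonic tone and its
# polarity-inverted copy, written to WAV files for a blind comparison.
# File names, frequencies and levels are arbitrary illustrative choices.
import numpy as np
import soundfile as sf   # assumes the python-soundfile package is installed

fs = 44_100
t = np.arange(int(2.0 * fs)) / fs

# The quadrature second harmonic makes the positive and negative half-cycles
# differ in shape, so a polarity flip changes the waveform, not just its timing.
x = 0.5 * np.sin(2 * np.pi * 220 * t) + 0.25 * np.sin(2 * np.pi * 440 * t + np.pi / 2)

# 50 ms linear fade-in/out to avoid clicks at the start and end
fade = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.05)
x *= fade

sf.write("polarity_A.wav", x, fs)    # original polarity
sf.write("polarity_B.wav", -x, fs)   # same signal with absolute polarity flipped
```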
Our hearing is an incredibly complex process that's evolved over millions of years and, frankly, is only partially understood: not only do we have much more sophisticated abilities to study real-time brain and neural functioning today than we had years ago, but things about how the ear works are still being discovered.
For example, we didn't even know about the active gain, frequency selectivity, and nonlinear compression done by the outer hair cells in our cochlea (and controlled, at least in part, by the brain in real time) until the 1970s, and the control processes by which it works are still not understood. That's why I think the sometimes knee-jerk assumption that "X can't be heard" is a little dangerous: there are a lot of differences among individuals in both hearing and sensitivity, hearing and sensitivity are plastic and can be trained, we don't really understand everything about how we hear (in particular about how the brain controls the ear), and hearing thresholds with complex tones are often different from hearing thresholds with sinusoidal tones. But I also don't think it gets us closer to understanding to leap to conclusions like the OP does about auditory time resolution and about sound/time measurements of equipment. A lot of those assertions don't seem to be wholly supported by the evidence we have so far, or are drawn from an incomplete or mistaken understanding of hearing.
Well, that would be a death sentence from my point of view.
Most of it is based on the paper "The Human Auditory System and Audio", published by Milind N. Kunchur
Interesting, at least (it's more of a collection of 218 previous studies about the matter)