Human hearing is shit; that has been established by plenty of tests.
But that is not the story.
We have huge brains that work in a particular manner.
The data collected from our ears is subject to very powerful real-time processing: we hear fuck all but derive a lot.
This is not unusual; it is one of the characteristics of human brain function and applies to all of our input senses. For example, we stream audio and video data, but what we perceive is sounds and objects we recognise (and don't start me off on language). The amount of processing power required to get there is staggering.
So why are human listening tests so out of sync with the way we work?
Whether one can hear a bare frequency means bugger all. A more accurate test (just pulling this out of my ass) might be a voice one has heard a million times with your test tone superimposed, versus the same voice clean.
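To make that concrete, here is a rough sketch of how you could build such a stimulus pair: mix a low-level test tone into a recording of a very familiar voice, so the question becomes "does this sound like the voice you know" rather than "can you hear a beep". The filenames, tone frequency and level are all made up for illustration; I'm assuming numpy and the soundfile library for the I/O.

```python
import numpy as np
import soundfile as sf  # assumption: pip install soundfile

# Hypothetical input: a recording of a voice the listener knows very well.
voice, rate = sf.read("familiar_voice.wav")   # made-up filename
if voice.ndim > 1:
    voice = voice.mean(axis=1)                # fold to mono for simplicity

# Test tone: 3 kHz sine, 30 dB below the RMS level of the voice (both arbitrary).
freq_hz = 3000.0
level_db = -30.0
t = np.arange(len(voice)) / rate
tone = np.sin(2 * np.pi * freq_hz * t)
tone *= 10 ** (level_db / 20) * np.sqrt(np.mean(voice ** 2))

# Two stimuli: the clean voice, and the voice with the tone superimposed.
sf.write("voice_clean.wav", voice, rate)
sf.write("voice_plus_tone.wav", np.clip(voice + tone, -1.0, 1.0), rate)
```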
Likewise, A/B tests make little sense if the sample is not very, very familiar.
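And on the A/B (really ABX) side, a minimal sketch of what a blind run on a familiar clip looks like: on each trial X is secretly either A or B, the listener says which one it matches, and the answer key stays hidden until the end. Actual playback is out of scope here; this only generates and scores the trials.

```python
import random

def make_abx_trials(n_trials, seed=None):
    """Answer key: for each trial, X is secretly either "A" or "B"."""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n_trials)]

def score_abx(key, responses):
    """Count how many trials the listener identified X correctly."""
    return sum(k == r for k, r in zip(key, responses))

# Example: 20 trials on a clip the listener knows inside out.
key = make_abx_trials(20, seed=1)
# `responses` would come from the listener; faked here just to show scoring.
responses = key[:14] + ["A"] * 6
print(score_abx(key, responses), "of", len(key), "correct")
```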
Unfortunately, the way our brains work also has distinct disadvantages when it comes to testing a sense in isolation. All that data streaming in constantly is effectively mixed at processing time: sound, vision, taste and touch are all part of the fabric of memory (sight has its own dual, advanced pre-processor, so it tends to dominate). In short, we get things wrong a lot, by design. That does not mean we do not detect the noise; it means you need to run a LOT of tests on LOTS of people before drawing any conclusions.
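On the "LOT of tests" point, the arithmetic is worth a quick sketch (standard library only): with a handful of trials you simply cannot separate detection from lucky guessing, while the same hit rate over many more trials rules guessing out.

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of k or more correct out of n trials by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 7/10 correct looks good but is easily luck; 35/50 at the same hit rate is not.
print(p_at_least(7, 10))    # ~0.17 -> could easily be guessing
print(p_at_least(35, 50))   # ~0.003 -> guessing is a poor explanation
```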
I'm not very knowledgeable about acoustic testing, so this is just my opinion; if there is already a body of tests that treat brain function as one of the test drivers, then my apologies.