This topic may have been beaten to death, but I have some personal data to share that I think is lesser known.
Let's start with the problem set. There are thousands upon thousands of audiophiles who believe in thousands of audio products making an improvement or having an audible effect, products that audio science and engineering dismiss as having no value.
The problem we face is the argument that so many people can't be wrong, and that it is science that has the blind spot here. They point to experiences where they routinely hear night-and-day differences. Yes, those experiences are not bias controlled, and we could end the argument there, but I think there is more to it. It is something I personally learned from reading an online blog (sorry, the name/link escapes me for the moment).
Let me start with a story of my own. I was at the RMAF show, in the Synergistic suite. Most of you know that they produce some of the most eyebrow-raising products when it comes to claimed efficacy. I had heard they were going to do an A/B demo of everything they sell, so I sat in a packed room as they proceeded to do exactly that.
One of the products they sell is a set of tiny sticky tabs, each probably about the width of your thumb. When I walked in, I saw a few of them on the ceiling. They played some music, then proceeded to put a few of the tabs around the face of the loudspeaker and played the same song again. This time, the sound was more open, higher in resolution, etc.
As you can imagine, I was one heck of a skeptic. No way, no how would a tiny thing like that make such a difference. Yet there I was, and the improvement was the proverbial "night and day." Try as I did to not hear that difference, I was hearing it.
So expectation bias was not the cause. What, then, leads someone like me to hear such a fidelity improvement? The answer turned out to be simple: listening better. When we are told there is a comparison and that something will be added, we tend not to take the baseline case very seriously. I just listened to the "before" music casually. But when the thumb tacks were added, I was now seeking out every tiny difference. And of course, by searching, I would now hear things that "I did not hear before." The opening up of the music and the extra audible detail were the result of my brain being far more analytical.
I have since applied that technique on purpose. In A/B tests, I can now go back to the baseline and, with the same intent to hear more, indeed hear more. This gives me a powerful tool to get past some of this effect even in the context of sighted listening tests.
Last week, for example, there was a test of various sampling rates at our local audiophile society. The test started at 44.1 kHz, went up to 48, and then all the way up to 192 kHz. The change from 44.1 to 48 was again very "audible." I heard a silky cymbal brush (sorry, I don't know the musical term) being so much more airy at the higher sample rate. In the baseline 44.1, my recollection was that it was nowhere near as delineated. Right then I caught myself. For the rest of the tests, I focused only on that part of the music and deliberately controlled whether or not I was searching for extra fidelity. Like clockwork, I could steer my perception one way or the other. The mere act of focusing or not focusing hugely changed the perceived fidelity.
In summary, sighted A/B testing is flawed not just because it is sighted, but because it is human nature to analyze the sound for fidelity, and that search results in hearing fidelity that was always there but taken for granted in the baseline case.
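The standard remedy for both problems (sightedness and uneven analytical focus) is a blind ABX protocol: the listener hears A, B, and an unknown X, and must identify X over many randomized trials, so the same analytical effort gets applied to both cases. As a minimal sketch (the function names and trial count are just my own illustration, not any specific test software):

```python
import random
from math import comb

def run_abx(trials=16, seed=None):
    """Generate a hidden answer key: each trial, X is secretly A or B."""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(trials)]

def score_abx(key, answers):
    """Return (hits, p-value).

    p-value: one-sided binomial probability of getting at least
    this many hits by pure guessing (p = 0.5 per trial).
    """
    hits = sum(k == a for k, a in zip(key, answers))
    n = len(key)
    p = sum(comb(n, i) for i in range(hits, n + 1)) / 2**n
    return hits, p

# Example: 14 correct out of 16 is very unlikely under guessing.
key = run_abx(16, seed=1)
answers = key[:14] + ["A" if k == "B" else "B" for k in key[14:]]
hits, p = score_abx(key, answers)
print(hits, round(p, 4))  # 14 0.0021
```

A result like 14/16 (p ≈ 0.002) would be solid evidence of an audible difference; scores hovering around 8/16 are consistent with guessing, no matter how "night and day" the sighted impression was.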
Ok, that is my sermon for this Tuesday morning. I have to drive to work and will be in meetings most of the day. So be good.