If you demonstrated it under controlled conditions, then that is that. I remember how the rec.audio newsgroups would get pretty nasty. Those were the days!
The experience brought up some interesting questions for me.
The objectivists (and, again, in spirit I considered myself one of them, even if I wasn't as technically knowledgeable) would keep saying that any well designed CD player or DAC would sound indistinguishable from another. I mentioned that, while I wasn't doubting that general principle per se, I sure seemed to be hearing very distinct differences between a couple of CD players and a DAC I owned.
I did the blind testing of the CD players/DAC, then presented the results to the objectivist boys on the newsgroup (Arny, J.J. and others...). The results were positive for identifying each player, and I asked for suggestions. The objectivist/engineers looked at how I performed the test and suggested some ways to tighten up the method, especially using a voltmeter to match output levels at the speaker terminals. So I did another set of blind tests using a borrowed voltmeter to match levels. Again, I could identify the units with almost perfect accuracy. I even did a blind test where I was outside the room, away from the CD players and the switching, and still easily distinguished between two of them.
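For what "almost perfect accuracy" means statistically: a quick way to check whether a blind-test score could plausibly be chance guessing is a one-sided binomial test. A minimal sketch (the trial counts below are purely illustrative, not the actual numbers from my tests):

```python
from math import comb

def chance_probability(correct, trials, p=0.5):
    """One-sided binomial p-value: probability of scoring at least
    `correct` hits out of `trials` by guessing alone (guess rate p)."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical example: 15 correct identifications out of 16 trials.
# The chance of guessing that well is about 0.00026, i.e. ~1 in 3,850.
print(chance_probability(15, 16))
```

The conventional threshold in ABX-style listening tests is a p-value below 0.05, so a run like the hypothetical one above would be far beyond what guessing explains.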
When I presented the results of the second batch of tests, it wasn't greeted with "well, I guess you heard a difference" but rather, especially in Arny's case, "something probably went wrong...there could have been communication 'tells' between switcher and listener for instance." I explained the protocol again, which did not seem to allow for such a thing. In other words, Arny's suggestion of how the results could have been invalid seemed extremely implausible given the protocol.
So I was left with my own results, and the skepticism of folks like Arny about the results.
Which led to the interesting question of how to think about the results.
On one hand, I can understand and agree with someone like Arny. If you have technical knowledge that leads you to think a result is implausible, do you accept a blind test someone else performed and posted on the internet as a data point at all? Well, you weren't there; maybe the test did not occur strictly as described; maybe you'd spot some problem if you were there. This is one reason why science demands replication of results by other parties. So I can see the case for skepticism from someone like Arny K.
On the other hand...I performed the tests. I know that I described them correctly, and being familiar with how every step went down, I could not see an area where the type of objections made were plausible.
So do you accept your own tests as a data point, and say "well, I understand why someone who wasn't there has to be cautious in accepting my account... but I was there, so I guess I really did detect differences. Maybe one or more of the DACs involved were substandard, or coloured, or whatever"?
Or do you say "well, even though I did the blind tests exactly as the engineers described, they still doubt the results, so I won't accept the results of my own blind test"?
If the latter, it would seem any attempt at doing one's own blind test is moot, as the only results one could accept are those that matched what some objectivist/engineer expected. (But then... what of replicability?)
So it brings up interesting questions about the degree to which you can accept the results of blind tests you didn't perform. If someone on a forum describes their protocol and it seems sound, yet the results are surprising, do you accept it as evidence? And how much weight do you put on your own tests, if the results are surprising?