restorer-john
Grand Contributor
I hope @GrimSurfer's ban is temporary and he feels that he can return.
I hope so. @GrimSurfer may have other distractions or issues that may have caused the out of character outbursts.
It is perfectly okay to prefer the triode sound. Claiming it is of superior fidelity because it meets your preference is wrong.
The test where you take a recording of sound that has been run through a device, then run that recording through another whole chain and treat the result as if you were evaluating the device, is methodologically absurd.
That would be something you'd have to take into account, and much more so on the "screwed up" side than the "lying" side. But any time anyone posts the results of tests that you know had agendas going in, questions must be asked. For tests like this to have relevance to me, there should be a couple of people involved who have, first, the technical background to check the correctness of the playback chain, and second, the "scientific" mindset to ensure they are doing their best to obtain correct results.
I hesitate to even post this, because I know precisely what the response will be -- it's impossible to do, therefore it didn't actually happen, they must all have screwed up somehow or be lying -- and I don't know, maybe that's actually true.
Agreed, once the above level of detail has been applied and they show audible differences, then it's time to figure out what, at the current level of science, is being missed by known measurements. As things stand today, no undeniable results have ever been presented to make me want to believe in magic dust science.
If they do the test blind, and get statistically meaningful results, well, now you've got yourself a puzzle to figure out.
I want to be clear here, my critique isn't about the sound, it's about the methodology. Your test was sound(ish): You took the actual physical devices, tested them separately, and then tested them in conjunction, and were in principle able to determine whether the differences were caused by addition or subtraction. (But it was a sighted test relying on auditory memory, so obviously it's not actually proving anything.)
The test where you take a recording of sound that has been run through a device, then run that recording through another whole chain and treat the result as if you were evaluating the device, is methodologically absurd.
If they do the test blind, and get statistically meaningful results, well, now you've got yourself a puzzle to figure out.
I super-hate those kinds of "tests," because they rest on the assumption that the only difference between devices is an additive distortion.
Can you explain why it would be an absurd method?
The test where you take a recording of sound that has been run through a device, then run that recording through another whole chain and treat the result as if you were evaluating the device, is methodologically absurd.
I get your point.
Well, either the signal is the same as the original, or it is distorted.
They rest on the assumption that the only difference between devices is an additive distortion.
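The additive-only assumption can be made concrete with a small numeric sketch (stdlib Python; the test tone, the hum, and the soft-clipping device are all made up for illustration). An additive artifact like hum is uncorrelated with the programme signal, while a nonlinear device's residual tracks the signal itself -- so a "difference recording" treated as something the device merely added, and replayed through another chain, misrepresents what the device does.

```python
import math

N = 1000
# Test tone: 7 cycles of a sine across the buffer.
x = [0.8 * math.sin(2 * math.pi * 7 * n / N) for n in range(N)]

# Case 1: purely additive distortion -- independent hum at another frequency.
hum = [0.05 * math.sin(2 * math.pi * 50 * n / N) for n in range(N)]
y_add = [xi + hi for xi, hi in zip(x, hum)]

# Case 2: nonlinear device -- soft clipping reshapes the signal itself.
y_clip = [math.tanh(2 * xi) / 2 for xi in x]

def corr(a, b):
    """Pearson correlation, computed by hand to stay stdlib-only."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = math.sqrt(sum((ai - ma) ** 2 for ai in a))
    vb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
    return cov / (va * vb)

diff_add = [yi - xi for xi, yi in zip(x, y_add)]
diff_clip = [yi - xi for xi, yi in zip(x, y_clip)]

# Additive residual: uncorrelated with the signal.
# Nonlinear residual: strongly correlated with the signal.
c_add = abs(corr(x, diff_add))
c_clip = abs(corr(x, diff_clip))
print(c_add, c_clip)
```

Both cases produce a nonzero "difference signal", but only in the first case is that difference an independent thing the device added; in the second it is inseparable from the signal.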
I think you are being wrongly characterized in this thread as supporting the ideas of "those bad subjectivists".
Since when is blind testing about determining preferences? It seems to me the primary benefit of blind testing is to determine if there is ANY DIFFERENCE AT ALL between a given set of options. I don't particularly care if a person "prefers" one thing over another. I want to know if they can even hear a difference between the things at all (because in so many situations where they insist they can I don't really believe it). Are $500 speaker cables "better" than lamp cord? Let's first determine if any difference can be heard at all.
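The "any difference at all" question is typically scored with a one-sided exact binomial test: under the null hypothesis that the listener is guessing, each ABX trial is a coin flip. A minimal sketch, with made-up trial counts:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial p-value: probability of getting at least
    `correct` answers right out of `trials` by guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical session: 12 correct identifications out of 16 ABX trials.
p = abx_p_value(12, 16)
print(f"p = {p:.4f}")  # → p = 0.0384
```

At the conventional 0.05 threshold this hypothetical result would count as evidence of an audible difference, which is a separate question from which option the listener prefers.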
Blind testing is about preference, and has been for decades in sensory and product testing, if and when A and B can reasonably be expected to sound/taste/smell different. In the case of audio, that would be: transducers. Toole and Olive's blind tests of loudspeakers, for example, were tests of *preference*, not difference.
I am most definitely not producing a stock argument.
What I am saying is it appears you do not understand the purpose of DBT. It should have nothing to do with preference. If you set the test up for preference, then you've already decided that a difference is there; otherwise the test is pointless.
I agree. I think it clearly has utility in both cases. In the context of what we are concerned with on this website, I think most of the discussions center on the question of "difference", which likely eliminates most claimed "preferences" with regard to audio electronics.
Leaving aside
1) the issues with self-reported data, which will always be biased
2) the correct determination and use of p-values (almost always incorrect on audio forums)
3) the incorrect framing/understanding of a correctly obtained p=0.05 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4448847/)
In practice, assuming the above has been dealt with, the puzzle would be solved by checking the replicability of the experiment. That's what happens in science when a result points to "a puzzle", if
1) the puzzle is interesting enough
2) the field welcomes replication
3) the field has funds/resources for replication
Somehow, I think audio is automatically disqualified by the above constraints.
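The linked point about misreading p = 0.05 can be made concrete with back-of-the-envelope arithmetic (all numbers below are assumed purely for illustration): even a correctly obtained significant result is often a false alarm when true effects are rare among the claims being tested.

```python
# Assumed numbers, for illustration only: of all "this gear sounds different"
# claims put to a proper blind test, suppose 10% are real; tests are run at
# alpha = 0.05 with statistical power 0.80.
prior_real = 0.10
alpha = 0.05     # false-positive rate when there is no real difference
power = 0.80     # detection rate when there is a real difference

true_positives = prior_real * power          # real differences detected
false_positives = (1 - prior_real) * alpha   # guessing runs that hit p < 0.05

# Fraction of "significant" results that are actually false alarms.
false_discovery_rate = false_positives / (true_positives + false_positives)
print(f"{false_discovery_rate:.2f}")  # → 0.36
```

Under these assumed numbers, more than a third of the p < 0.05 outcomes are false positives, which is exactly why a single significant result is a prompt for replication rather than a conclusion.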
I guess when differences are so preponderant that they are obvious to everyone, preference is the only way to go.
The only reason 'difference' questions could be considered more relevant than preference on this site is because Amir mostly measures solid state/digital gear, and not electromechanical gear or 'tube' gear.
On top of this, Harman's goal was to identify a set of measurements that would help them build loudspeakers that ordinary listeners prefer, at a lower price, to gain a competitive advantage.