What's the point of doing scientific listening tests if people then have to do their own non-scientific listening tests to decide? If I am convinced by the method, shouldn't I just accept the results? - it is science after all. Isn't that like Amir doing measurements of DACs but recommending that we all do our own measurements before purchase just to be on the safe side?
As SoundandMotion already pointed out, it is about what _you_ could hear and, even more importantly, whether it is _really_ _important_ to you when listening to music.
I've asked several times (but never got an answer) what people should do if somebody somewhere could detect differences in a controlled listening test. Do they have to buy new stuff "blindly" according to those results, or should they rather try for themselves? (And, as a follow-up question, couldn't they try for themselves anyway?)
And of course one should look at the hypothesis examined by any experiment and evaluate to which group of people a given test result can be extended/generalized.
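For illustration: the null hypothesis in a typical ABX test is "the listener is only guessing", and the reported p-value is just a binomial tail probability. A minimal sketch (hypothetical trial numbers, Python standard library only) of that calculation:

```python
from math import comb

def abx_p_value(correct: int, trials: int, p_guess: float = 0.5) -> float:
    """One-sided p-value: probability of getting at least `correct` hits
    out of `trials` if the listener is only guessing (null hypothesis)."""
    return sum(
        comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
        for k in range(correct, trials + 1)
    )

# Hypothetical example: 12 correct answers out of 16 ABX trials
p = abx_p_value(12, 16)
print(f"p = {p:.4f}")  # p ≈ 0.0384, below the usual 0.05 threshold
```

Note what a significant result here actually establishes: that _this_ listener, with _this_ gear and program material, could probably distinguish the stimuli. It says nothing by itself about every human being, which is exactly why the generalization question above matters.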
I could be wrong! For me the motivation behind the experiments is more important than the low level details - but most people prefer talking about the low level details. My suspicion is that people are more in love with the methodology and the lovely statistics than having any expectation that it will ever generate anything useful. It may generate lots of lovely tables and histograms that can be published and read by other people interested in the methodology, but that's not the same as something that's useful!
As said earlier, the most important part is to learn to listen, especially for evaluation purposes. When doing "blind" listening tests, there's no need to emphasize the "scientific" part, as doing it right is important in any case; less important is whether you could call it a "scientific" experiment wrt all the details.
One gets wrong/misleading results from controlled listening tests as easily as from less controlled "sighted" listening.
Thirty years later, is CD transparent? "Ah well, that depends what you mean by transparent...". OK, is CD audibly the same as high res? "Ah well, you see, it depends on what you mean by audibly the same...". OK, is high res worth it? "Ah well, some meta-analysis suggests that under some circumstances then there may be evidence that it could sound different. More testing is needed...". Etc.!
I understand that it might be annoying, but it is important that _my_ "transparent or not transparent" might not match _yours_. Stating something like "transparent for _every_ human being" is usually not warranted, and certainly not for CD.