This thread is an object lesson in how *not* to approach a problem armed with science.
A research question has to be formulated. It has to be precise and specific, such that the experiment tests *only* the (relationship between the) variables under scrutiny.
The logic and thinking behind the development of the research question has to be ultra-disciplined - the opposite of what is happening here.
You don't start with a social bluff, adjusting your position as you play to an evolving public gallery. This particular process is a charade. A public spectacle. It's not doing anyone any favours.
The whole experimental environment is protected in science - such that interpretation of results is not compromised. The opposite is happening here.
There was never a properly-conducted blind test in prospect. It is right to recognise that, even if a good one had been designed (and protected from undue influence), it would not address all the things a particular party is vexed about in Golden's review (assuming that was what this was all about in the first place). Chagrin is not a good basis for scientific endeavour.
Good luck resolving your differences. You will not do so on this horizon. You can't (and shouldn't be able to) police subjective audio experiences on the internet. And the debate regarding the relationship of equipment measurements (and brain phenomena such as expectation bias) to SQ is far from finished.
It won't be completed in this thread, nor in any that doesn't adopt a fresh approach. That fresh approach requires open-mindedness, dignified conduct - and *utter* methodological rigour.
Why doesn't the audiophile community deserve that?
No doubt the dogmatism and the antagonism here will continue ad nauseam.
Instead of pearl-clutching all round, why not just admit it's really about popcorn and site stats?
My serious point, in case anyone wonders, is that we could do a lot better to improve our audio understandings, and that it might be worth it.