Mat, I like you, I think you are a nice guy, but still... we are in the same endless discussion... Do sighted impressions have any relevance at all?
Cheers, and yes, I’m interested in that question.
This is still the most fundamental question in the audiophile community. To answer it we need data, not opinions. Why, after decades of this, is there not a plethora of subjective advocates showing blind test results confirming that subjective impressions are at least useful? Why the lack of confirmation? If you believe your loudspeaker assessments are good, why not take the time for blind testing, show us the data, and shut up all the people who roll their eyes when confronted with any type of subjective opinion? Why not become a subjective legend?
Although I have blind-tested probably more of my own equipment than the average ASR member, I’m sure you won’t be surprised that I don’t have the time, inclination, resources, or funding to build a Harman Kardon-style speaker shuffler in my home.
Which is the type of “carefully taking care of all the variables” data people would want, right?
Which of course makes your question rhetorical.
If we’re looking for scientific-level confidence in any subjective assessment of the actual sound of a loudspeaker, blind testing is the way to go. However, that still leaves all sorts of pertinent and interesting questions on the table.
If your demand is going to be “Only scientific-level data ought to be persuasive,” then it’s going to turn around and bite some unexamined assumptions.
Amir makes recommendations for equipment and loudspeakers largely based on how they measure. And that’s mostly how ASR members evaluate and recommend equipment.
Except the elephant in the room is that nobody - no consumer, no audiophile, virtually no ASR member - is purchasing loudspeakers or other equipment to listen in blind conditions.
The actual use-case for the equipment is sighted conditions.
So if we are going to be consistent and demand good data, the question is:
What scientific data do YOU (or Amir or anyone else) have showing how the measurements/blind testing of loudspeakers predict perception/preferences in the SIGHTED conditions in which we will be using the speakers?
As far as I know, there is no such rigorous data.
What do we do with this rather obvious yawning gap between the appeal to measurements/blind testing and their relevance to real-world use, for which we have no scientific data?
Without addressing it, most of the discussion here becomes incoherent.
And this is the gap that I am often raising here.
Something rational has to bridge that gap, something like:
“Sighted listening is less reliable than blind listening, but it can still be informative. We need to maintain the relevance of blind testing and measurements while not throwing the baby out with the bathwater by dismissing all sighted listening as uninformative.”
The problem is some here seem to take the view: “Any impressions drawn from sighted listening are to be treated as bias/fantasy until they can be verified via measurements or blind testing.”
But all that does is leave the gap - the relevance of blind listening and measurements to sighted listening - that I have pointed out.
And it can become a very facile, handwavy response whenever somebody doesn’t want to take any particular sighted listening report seriously. And it can feed into the outside view that ASR runs on some dogmatic rejection of the everyday experience of audiophiles: that their experience is all seen as bullshit by default until you’ve got the blind tests or measurements.
But once you allow for the fact that it is incoherent to fully reject sighted listening, reasonable questions arise, such as: “OK, so under what type of unscientific conditions can it be rational to provisionally accept impressions under sighted conditions?”
And that’s precisely what I’ve often tried to address, trying to fit my own experience within the above context.
For instance: if sighted listening is so unreliable, why have my impressions of loudspeakers I have owned for many years remained so consistent? I can look at posts I’ve made all the way back to 2001 and see that the descriptions I gave are precisely the ones I give today for the same loudspeakers that I have owned since then. They sound the same to me. Same with a number of my other loudspeakers.
If it’s a bias effect, then it is so persistent that it would suggest my buying on measurements would be fruitless, since whatever bias is shaping my perception would swamp anything the measurements predict.
But if it’s not a bias effect - especially if this is not that uncommon among audiophiles - then this suggests some reliability to sighted listening.
What about when there is coherence between what different audiophiles perceive in a loudspeaker? If I’m describing a loudspeaker that somebody has already heard and they find agreement with my description, then my description couldn’t have influenced what they heard. The loudspeaker sounds the same to us. What’s the most plausible explanation?
And then there are the many cases where our perception not only agrees, but is also coherent with the measurements for the loudspeaker. For instance, I’ve listened to a number of speakers that Kal has reviewed - both of us heard some of the same “issues” that later showed up in the measurements.
What’s the best explanation for that?
What’s the explanation for why Erin’s subjective evaluations of loudspeakers so often correlate well with the measurements?
We’re not going to have the scientific data to answer these questions, but much of the experience of everyday audiophiles makes these questions pertinent.
Some on ASR keep implying that my asking these types of questions is based on ignoring measurements and blind testing research, whereas it’s precisely the opposite: the types of questions I’m getting into are INSPIRED by such research! It’s looking at the implications, and the role blind testing and measurements CAN play in helping us choose our gear, but trying to do so within a coherent framework that does not become so hyper-skeptical about informal listening insights that it eats itself.
Cheers.