...This forum is focused on informing consumers, thereby influencing purchasing decisions. Companies seen as performing poorly have been mercilessly criticized and mocked. The defense of that was that such conclusions were based on measurements, not opinions. That may or may not have been a valid defense, but it was a clear one. However, if companies, particularly small ones like SVS, are now being criticized (and having their business impacted) on the basis of subjective evaluations, that deserves some careful thought, IMHO.

Despite remarkably similar preference scores, one would come away from reading ASR thinking that the M106 (5.79) is a good, recommended speaker and the SVS (5.70) is a poor, not-recommended speaker. Now, that very well might be the case! Maybe the preference scoring model is invalid, despite its good lineage. Maybe listening, even sighted listening weeks apart, is superior to the preference score at evaluating speakers. I just think we should be clear about the implications of that. Do we think blind tests, or at least level-matched listening with both speakers present at the same time playing the same music, are necessary for a critical comparison, or not? I do think that, if they’re going to trump the data, the subjective evaluations should be longer, level-matched, and done against a consistent “good” reference speaker. But that’s just my view.
With influence — either over the epistemological contours of acceptable reviewing or over readers’ purchasing decisions and, thereby, companies’ business — comes responsibility. That’s my point, as simply put as I can make it.