@whazzup I recommend that you have a look at Erin's web site, here's a good example:
https://www.erinsaudiocorner.com/loudspeakers/micca_mb42xiii/
And the point that you want to make is...?
I think @bobbooo did a thorough job of addressing these issues. However, you’re adding the conclusion that if trained listeners can’t perform as well sighted as blind then the training is of no value and, further, sighted listening is of no value. That’s a straw man. No one is arguing that. This isn’t a binary choice between “perfect” and “worthless.”
That's a good example of a sighted listening test conducted with the intent to minimise bias (for both reviewer and reader).
Exactly the issue I had. I actually replied that I thought it was a strawman as well, but I edited it out. Thank you @Rusty Shackleford for calling it out.
There are two completely different issues here. Let's not obfuscate them.
#1 is whether being trained makes you less biased or more accurate when providing sighted listening impressions.
#2 is whether it's possible for sighted listening impressions of loudspeakers to provide useful information (albeit not as accurate/reliable as a blinded test)
#1 is still up for debate, from my perspective.
#2 has already been addressed by myself and others, including an evaluation of published evidence, and confirmed by Sean Olive.
How did Erin minimise the bias in his sighted listening? Is it because he spends more time discussing his subjective/professional opinions relative to the objective data, and Amir didn't?
I think you'll have to read studies about biases: what they are, what they do, etc. There are also a lot of popular-science explanations on YouTube (quality varies a lot), at least in French.
Aww...now you're being vague....you said it's a good example but cannot point out what's good about it?
Nowhere in the review does he mention doing blind testing. So the only difference is that he listened to it before taking the measurements? And that is good enough for you? So we're only talking about the bias of knowing the measurements before the listening tests? That is the difference between Amir and Erin?
I'm not being vague; it seems that your knowledge of the subject is limited.
On Erin's site the visual presentation is different too. Again, I think your knowledge is limited. The subject is very interesting, and it changed a lot of things in my life.
Nah, I read through some of your posts on the SVS thread again. It appears you're just unhappy Amir didn't do more work to relate his subjective opinions to the data and Erin did. So it's really not about the validity of 'sighted testing'. You just want people to hand you the knowledge.
Sure, I have limited knowledge in a lot of areas. That's why I ask simple, straightforward questions to get straightforward answers.
So what is YOUR interpretation of the role of a critical listener and how much weight do you place on their sighted evaluations?
Sure, you can disregard my questions 2a/2b, but still, would you care to address questions 1 and 3? Your interpretation of the study and Olive's words, of course.
Amir has said that, whatever definition of critical/trained listener we come up with, research based on those definitions doesn't apply to him, because his skills are superlative. Given that, there's no research that will settle this debate.
Not my understanding of what Amirm has said repeatedly in this thread.
But hey-ho.
Whether you do a test blind or sighted, there is no guarantee of correctness. Every test has a margin of error. Turn a sub on and off in your room. Do you need a double-blind test to trust what it does in your room? No. A blind test would give the same result as a sighted test.
Sighted tests have a higher error rate than blind tests, all else being equal. But they are also extremely fast and low effort. For this reason, the industry uses them as quick tests and then performs occasional double-blind tests as a backstop. No different from a lab test at a doctor's office, which is quick but has lower accuracy than one sent to an external lab.
Again, look at Olive research:
Ranking of speakers G, D and T did not change in sighted versus blind. Only speaker S changed.
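As a side note, the degree to which a sighted ranking agrees with a blind one can be quantified with a rank correlation. The sketch below uses Spearman's formula; the speaker labels follow the post (G, D, T, S), but the rank orders themselves are illustrative placeholders, not Olive's actual data.

```python
# Illustrative sketch: measuring agreement between a sighted and a blind
# ranking with Spearman rank correlation. The rankings below are made up
# to mimic the post's claim ("only speaker S changed"), not real data.

def spearman(rank_x, rank_y):
    """Spearman correlation for two rankings with no ties."""
    n = len(rank_x)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_x, rank_y))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

speakers = ["G", "D", "T", "S"]
sighted = [1, 2, 3, 4]   # rank under sighted listening (illustrative)
blind   = [1, 2, 4, 3]   # only S and its neighbour swap under blind

print(spearman(sighted, blind))  # -> 0.8, i.e. the rankings largely agree
```

A correlation of 1.0 would mean the two rankings are identical; values near 0 would mean sighted results tell you little about blind preference.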
Good engineering is about optimization: getting 90% right for 10% of the effort/cost. We are not purists here with infinite budget and time to double-blind test speakers.
If we could show sighted tests to be mostly wrong, as they are in electronics, then sure, we would not attempt them. But the reality is that the difference between speakers is quite large and is able to overpower listener bias when said listener a) has no stake in the outcome and b) has better detection thresholds for impairments.
As I have said, we do this in the industry all the time where the outcome really matters. Jobs and company reputations are at stake. Yet we do it because the risk/reward is appropriate.
...as far as #1, the Harman training makes no claim that it reduces sighted bias; the study they conducted shows that bias affected the experienced listeners more than the inexperienced listeners. Finally, Harman to this day still conducts blind testing with their trained listeners; there would be no need to do that if they felt the trained listeners wouldn't be biased in any way.
Dr. Olive's posts made all of this pretty clear, in my opinion. So since blind testing isn't feasible at this time, the main takeaway of the thread is that the CTA-2034 measurements are king, followed by the subjective impressions.
We define a trained listener as someone who has normal audiometric hearing and has passed Level 8 or higher in our How to Listen Training software. We also look at their performance in actual tests in terms of how discriminating and consistent they are in rating products.
And yes, achieving level 10 in How to Listen doesn't guarantee you will be good at evaluating loudspeakers, which is why we monitor your performance in this task. But my experience indicates that the training, combined with actual experience in tests, tends to correlate with performance in tests.
That is a difficult question to answer. How can you test the reliability of a sighted test? How do you test reliability other than by repeating the evaluation? As we've shown, even trained listeners are biased by price, brand, design, etc. I would certainly want to see measurements that reinforce what the listener is reporting.
Now, as applied to the subjective review crowd in general, I can see a protest "We are fine provisionally accepting sighted speaker impressions from Amir (a "trained listener") but forget about the stereophile/absolute sound, untrained audiophiles and all the other riff-raff. That stuff is useless."
Except I don't find it to be useless. When I encounter or audition a new speaker I often look up what reviewers and other audiophiles are saying about it. I don't care much about reading someone's emotional reaction "I was swept away...blah, blah..." I pay attention to whether the person is characterizing the sound, and how well they do it. When I see a consensus happening on the general character of a speaker, I most often find it to be "accurate" to my own impressions of that speaker. (Sometimes this is when I've heard the speaker after being intrigued by the descriptions/reviews I'm reading, but often enough I'm looking for these subjective impressions after I first heard the speaker and formed my own impression). When I find a reviewer who seems to be pretty accurate in this way and/or whose tastes I have divined over time, I can find their subjective take on a speaker to be somewhat informative.
Apart from this being a test of four, and only four, loudspeakers, simple 'ranking' is a crude measure. Alternatively, a takeaway from that graph is that blinding the test reduced the strength of preference for the first two and dislike for the third, enough that the four actually become remarkably similar to each other, preference-wise. In fact, the error bars for A, B, C, D all appear to overlap once the test is blinded.
A point *against* sighted testing.
You have said that there are perhaps at best a few hundred 'trained listeners'. I wanted to know when and how we are to know who is a 'trained listener' and what degree of faith we are expected to put in their sighted evaluations.
You are now adding the claim that the (undisputed) 'quite large' differences between speakers 'overpower' cognitive bias when the listener 1) has 'no stake' (which strikes me as problematic: one element of cognitive bias is that it is not necessarily conscious; that the listener *feels* they are stake-free is not sufficient) and 2) is good at detecting impairments -- presumably either a native talent, or a product of... listener training?
I don't think clarity has been achieved here. And I don't give a tinker's damn for the excuse that the 'industry' takes shortcuts because it lacks infinite money etc. I want to know what the value to consumers is of any given person's 'sighted' report of loudspeaker quality. If there are in fact only 'a few hundred, if that' trained listeners in the world, the safe presumption is that a random person's sighted report, such as encountered endlessly in audiophile forums and publications, and of course loudspeaker marketing, is worthless.
You have said that there are perhaps at best a few hundred 'trained listeners'. I wanted to know when and how we are to know who is a 'trained listener' and what degree of faith we are expected to put in their sighted evaluations.
First, you don't have to put faith in me. You all have to decide what you put your faith in. For me, Sean said it best:
In ranking objective/subjective evaluations in terms of how reliable and trustworthy they are, I would say:
#1 A well-controlled double-blind listening test.
#2 Meaningful Objective Measurements that Predict #1
#3 A sighted listening test.