
The frailty of Sighted Listening Tests

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,554
Likes
1,701
Location
California
Olive doesn’t say “highly desirable.” He says “must be done blind.”

Can I clarify, are we looking at the same paper? If it's this one, then the language Olive uses is "you would be well advised."
[Attached screenshot: excerpt from Olive's paper containing the "you would be well advised" wording]

Nonetheless, I can completely agree that if you want to know how a speaker "truly" sounds, the listening tests should be done blind. I already accept that, and I think you and everyone else here does too. But what doesn't necessarily follow from Olive's statement (either one) is that if a listening test is NOT done blind, then the results are completely invalid. His experiment doesn't demonstrate that at all, nor was it designed to test that hypothesis.

My interpretation of the paper was:
1) Speaker preference ratings are affected by sighted listening.
2) The magnitude of the "sighted" effect is 0-2 units on the preference scale.
3) If you want to eliminate this 0-2 unit variation, you'll need to perform listening tests blinded.

However, we’re also not talking about the difference between one of the top-measuring speakers and one of the bottom-measuring speakers. We’re talking about the case of several of the top-performing speakers here, heard weeks apart in sighted listening.

Insofar as Amir’s “recommendeds” are valuable and can impact a company’s business, it seems that some evidence that he can make such distinctions under such circumstances is needed.

That sounds a little complicated to me...
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
15,891
Likes
35,912
Location
The Neitherlands
That's a lot of questions; I'll do my best.

... answers ....

O.K., now it is clear what you were looking for. It really wasn't clear to me before; it seemed open and vague.

Suppose ASR only posted plots and perhaps some comments on the plots (talking about the speakers).
I can tell you with absolute certainty that, while I understand how the plots are taken and what they show, and while the plots do tell me something about the tonal character and directivity, I honestly cannot predict from all these plots how the speakers will sound in my room.
I can guesstimate how they will sound in an open field and how the tonal balance might change when I walk around them.
I can't say how they will sound in my room, though. I'm way too inexperienced for that.

In this case I can look for reviews, perhaps even from people I kind of trust.

If Amir decided to post only plots, I would not be much wiser. When he also comments on a speaker's sound (in his room), he is simply describing how he hears it. It is additional info that adds to the whole review.
I don't give a crap whether he does this sighted or blinded, nor whether he compares it to anything else (I haven't heard his speakers, so I have no reference anyway).
His impressions and comments may tell me more than all the plots.

When other reviewers review speakers, you can be 100% certain the listening will be sighted. You can't be certain about the reviewer's motives or taste either (unless you know other things he has reviewed and agree with those assessments).

Blind testing takes away some biases. That's all. Nice if you have something to prove.
As Olive said, there is a place for sighted listening as well.

I get the defensiveness from Amir, and why he feels compelled to spend time defending his integrity and acknowledging that there are sometimes discrepancies and inconsistencies. I don't know if it is worth a thread and the many hours that could be spent on more useful things.

I suppose those who have read the discussed papers and comments already know Olive's standpoint on this. If not, he hasn't made things clear enough.

Personally I think it is interesting to debate the pros and cons of blind vs. sighted tests.
To me it's a waste of time to argue about the site's owner and whether, and how much, he agrees with someone else.
I don't even care how good a listener he is. I really don't. I only care how something will sound to me in my house. I don't give a crap about other people's research either. I read it, digest it, and if it makes me curious I'll follow up on it, audition the product, etc.

All the research I have read, and my own experience, tells me that to test/compare with a high degree of accuracy a test should be:
blind, level matched (checked by measurement), with well-chosen test material, well-defined test parameters, and optimal conditions; it should be repeatable and statistically relevant; and it must be clear how the test was performed and how the conclusions were drawn.
One can draw certain conclusions from such a test. It won't tell me whether I'll like a speaker or whether it works well in my home; I may only have a feeling that it might or might not.
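For the "statistically relevant" item, here is a minimal sketch of what that check could look like for a simple blind ABX run, assuming Python with scipy is available; the trial counts are made up for illustration and are not from any of the papers discussed here.

```python
# Minimal sketch: is a blind ABX result better than guessing?
# The trial counts below are hypothetical, purely for illustration.
from scipy.stats import binomtest

n_trials = 16   # number of ABX trials in the session
n_correct = 12  # times the listener correctly identified X

# One-sided binomial test against chance (p = 0.5).
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"{n_correct}/{n_trials} correct, one-sided p-value = {result.pvalue:.3f}")
# A small p-value (e.g. < 0.05) would suggest the listener heard a real
# difference under these controls; a large one means "could be guessing."
```

With only 16 trials, a listener has to get roughly three quarters of them right before chance can be ruled out, which is part of why properly controlled tests feel so demanding.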

A sighted test in my home will tell me all I want/need to know and is very easy and quick to do. That is where sighted listening has its place.
I don't care what others think about that, or if Amir is inconsistent now and then for whatever reason. I expect the man to be honest with himself and have no reason to think he is not.
Everyone who voices an opinion has fans, followers, people who don't care, and haters. That's life.
I can understand that others may feel differently about all this, but I cannot put myself in their shoes.
 

Racheski

Major Contributor
Forum Donor
Joined
Apr 20, 2020
Messages
1,116
Likes
1,699
Location
Chicago
Not at all; you read what I wrote, and I clearly state that I'm not an expert (in plain text). I even proposed bringing in Dr. Olive to exchange with us.

Opinions < scientific papers < meta-analyses (which may lead to scientific consensus). Sure, a single study is not enough; that's why I shared pkane's post. Now, I speak French, so what I can share is limited. With good faith it's easier to find studies about cognitive biases.

Now you're trying to bring it down to me personally; I won't answer that, it is unnecessary.
Actually he is extending an olive branch to you in an effort to better understand your perspective. Many forums would have banned, ignored, or called you some very unsavory things at this point, but folks here are actually engaging with you.
 

MattHooper

Master Contributor
Forum Donor
Joined
Jan 27, 2019
Messages
7,194
Likes
11,808
I think there is an interesting range of issues bound up in this thread. One far "pole" would be the remit of this site, where "science" is emphasized, while the other pole would be "what any of us view as practical, given inevitable limitations and compromises."

In the first case, insofar as this site markets itself as audioSCIENCEreview, and insofar as a strong emphasis is placed on an appeal to scientific thinking and methodology to separate it from the woo-woo crowd, posters like Rusty Shackleford strike me as bringing honest and reasonable pressure to those claims. I mean, let's face it, if we REALLY want to claim a strong foothold in science, the type of questions/issues being posed by Rusty is pretty much softball compared to the level of rigor and skeptical pressure many claims here (including in Amir's reviews) would come under in a 'real' scientific community. Really hewing to the level of rigor a scientific discipline demands, including reviews by skeptical peers taking your methods apart piece by piece, is a real crucible relative to the cautious questioning from Rusty and others as to the tenability of the sighted listening reports.

On the other hand, we all have to deal with issues of practicality. Presumably it is impractical for Amir to bother blind testing the speakers (?).

So, in regard to at least the sighted descriptions, we have to let go of pretenses to science. Which, I think, we pretty much agree upon (?).

Then we are left asking whether the subjective/sighted speaker (or other) descriptions are of any worth. Here again, for me, practicality decides.
Are sighted, subjective descriptions distinctly less reliable than carefully controlled blind tests? Presumably, yes. Are they therefore wholly unreliable, useless, able to convey no information? I don't think we have to go that far, and it seems lots agree. We know speakers sound different from one another. Sometimes very different. So while, as everywhere else in life, results have bias hovering over them as a possible spoiler, the subjective impressions are at least plausible, and in a practical manner we can enjoy them and provisionally accept the descriptions as getting at something of the character of the speakers.

Basically for me the practicality issue is of a piece with many other epistemic issues that are guided by the helpful heuristic "Extraordinary Claims Require Extraordinary Evidence." The more a claim seems to be dubious given what we know scientifically or through general experience, the more cautious and rigorous we will want to be in demanding evidence. Where plausible claims can be provisionally accepted. Otherwise we could never get anything done or make any decision all day long without subjecting the claim to scientific experiment. You tell me you bought a new OLED TV, I say "ok, accepted." I know they exist and are readily available at the local Best Buy. You don't even have to show me your receipt. You say you just bought a living T-Rex from the local pet shop, well...then I can't fall back on the plausibility of the claim and I need to demand much more rigorous evidence.

Same when someone claims to hear a difference between speakers. Ok, fine, cool; in practical terms we can proceed as that's entirely plausible: tell me what they sounded like.

You tell me you can hear differences between AC cables and then, sorry, that's implausible on various fronts, and it's time to demand more rigorous evidence (measurements/blind testing).

So, I'm good with the subjective impressions from Amir. I enjoy them and hope they remain a feature of the reviews.
 

Rusty Shackleford

Active Member
Joined
May 16, 2018
Messages
255
Likes
550
preload said: [post quoted in full above]

I was quoting the blog post:

[Attached screenshot: excerpt from Olive's blog post]


The preference scale is very small. So 1-2 units is a pretty large effect. The question is, "Will two speakers that measure as similarly as the Revel and SVS likely vary by 2 units?" Saying "yes" is a pretty strong claim, given the scale. Going from "recommended" to "not recommended" is even larger. It may be true! But I think more than sighted listening might be necessary to demonstrate it.
 

Rusty Shackleford

Active Member
Joined
May 16, 2018
Messages
255
Likes
550
solderdude said: [post quoted in full above]

MattHooper said: [post quoted in full above]

Both of your posts are very thoughtful, and I agree with much of what each of you has written.

My concerns are:

1) If Amir claims that his sighted listening can detect things the measurements cannot, that’s an epistemological claim that runs counter to the accepted wisdom on this site. It means that there are some things better determined by listening, even sighted listening, than by measurements. This surely opens up room for broader subjective claims, which can no longer be shut down with objective claims.

2) If Amir claims that his sighted listening, as opposed to others’, is valid only because he is a trained listener, then there needs to be some evidence to support that. We also need the type of training that’s acceptable clearly specified. Perhaps we also need hearing test results. But it cannot be that only Amir is able to make valid subjective claims based on sighted listening. Openness, fairness, and reproducibility matter.

3) Amir has taken pride in ASR’s reviews impacting manufacturers, positively and negatively. He took credit for Schiit’s changes in measurements and introduction of new product lines. Given that, the “recommended” versus “not recommended” distinctions matter. So if he’s going to say that, between two very similarly measuring speakers, one is “recommended” and one is “not recommended” — and further claim that this is more than just his personal preference and is of greater value than others’ — there needs to be some proven validity to those distinctions, which brings us back to 1 and 2.
 

youngho

Senior Member
Joined
Apr 21, 2019
Messages
486
Likes
799
1) If Amir claims that his sighted listening can detect things the measurements cannot, that’s an epistemological claim that runs counter to the accepted wisdom on this site. It means that there are some things better determined by listening, even sighted listening, than by measurements. This surely opens up room for broader subjective claims, which can no longer be shut down with objective claims.

Rusty, if you read Kevin Voecks' comments on listening tests for the Salon 2, I think you might agree that he seems to feel that some things are better determined by listening than by measurements. Similarly, Floyd Toole has commented in his book, "A recent listening test proved its worth when it revealed that a loudspeaker having excellent looking spinorama data (Section 5.3), which normally is sufficient to describe sound quality, was not rated highly as expected. The problem was found to be intermodulation distortion, an extremely rare event, associated with the ways sounds from a woofer and tweeter combined in a concentric arrangement--so constant vigilance and listening are essential." FLOYD TOOLE. AND KEVIN VOECKS. FROM HARMAN.

Furthermore, you are setting up a straw man argument about your perception of "the accepted wisdom on this site."

2) If Amir claims that his sighted listening, as opposed to others’, is valid only because he is a trained listener, then there needs to be some evidence to support that. We also need the type of training that’s acceptable clearly specified. Perhaps we also need hearing test results. But it cannot be that only Amir is able to make valid subjective claims based on sighted listening. Openness, fairness, and reproducibility matter.

I highly advise that you take the radical step of simply not reading that section of his posts.

3) Amir has taken pride in ASR’s reviews impacting manufacturers, positively and negatively. He took credit for Schiit’s changes in measurements and introduction of new product lines. Given that, the “recommended” versus “not recommended” distinctions matter. So if he’s going to say that, between two very similarly measuring speakers, one is “recommended” and one is “not recommended” — and further claim that this is more than just his personal preference and is of greater value than others’ — there needs to be some proven validity to those distinctions, which brings us back to 1 and 2.

I strongly suggest that you consider not following this forum further, if you are so bothered, or even better, listen to both very similarly measuring speakers and offer your own opinion, making sure that you include some proof of validity.

Young-Ho
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,385
Location
Seattle Area
It was not a study simply about how employees of his company might prefer their own products.
Every study has a motivation. The motivation for this test was clear: typical sales and marketing people and speaker designers being so proud of their hearing ability that they had no use for the likes of Sean Olive or controlled testing. You know, standard subjectivists. They have golden ears because they say they do, with no verification.

So Sean set out to demonstrate that they could be wrong. Here are the selection criteria for listeners:

[Attached screenshot: the paper's listener selection criteria]


This is 100% in compliance with the goal of the research: to find out whether that bias impacts listening test results. It did. Interestingly, it was not a home run, as it did not change the ranking of the speakers. But it did show the preference level changing somewhat.

So what you state above is simply incorrect. You have to read the research and understand what it is trying to do. This is a conference paper and not peer reviewed so you can't expect the exacting standards used in J. AES papers. People are free to express opinions and Sean provided some generalities.

The bottom line is that three Harman speakers were used and judged by 40 Harman employees. That puts it way, way outside any normal study of speaker-preference bias among the general public, where listeners have no relationship with the products being shown.

Importantly, nothing in the paper reads on what we are talking about. No one had critical listening skills. What they call "experienced" just means industry experience or taking a few tests. That is not who we are talking about.

If the test wanted to include true trained listeners, it would have had a test to find such people. Nothing like that is in the paper. People were taken at face value that they are "experienced."

Here is a fun conjecture on my part: I am confident Sean would not want to perform such a study on the small pool of critical listeners used in later studies and show them that what they do outside of blind tests is worthless! That would be friendly fire of the worst kind.
 

Racheski

Major Contributor
Forum Donor
Joined
Apr 20, 2020
Messages
1,116
Likes
1,699
Location
Chicago
Both of your posts are very thoughtful, and I agree with much of what each of you has written.

My concerns are:

1) If Amir claims that his sighted listening can detect things the measurements cannot, that’s an epistemological claim that runs counter to the accepted wisdom on this site. It means that there are some things better determined by listening, even sighted listening, than by measurements. This surely opens up room for broader subjective claims, which can no longer be shut down with objective claims.
Where does he specifically claim this?

2) If Amir claims that his sighted listening, as opposed to others’, is valid only because he is a trained listener, then there needs to be some evidence to support that. We also need the type of training that’s acceptable clearly specified. Perhaps we also need hearing test results. But it cannot be that only Amir is able to make valid subjective claims based on sighted listening. Openness, fairness, and reproducibility matter.
He has discussed the Harman trained-listening courses he has undergone ad nauseam. Why are they not valid?
3) Amir has taken pride in ASR’s reviews impacting manufacturers, positively and negatively. He took credit for Schiit’s changes in measurements and introduction of new product lines. Given that, the “recommended” versus “not recommended” distinctions matter. So if he’s going to say that, between two very similarly measuring speakers, one is “recommended” and one is “not recommended” — and further claim that this is more than just his personal preference and is of greater value than others’ — there needs to be some proven validity to those distinctions, which brings us back to 1 and 2.
Where does he take credit for manufacturers introducing new product lines? How do you know that his binary Yes/No recommendation at the end of the reviews has such a profound influence on manufacturers?
 
OP

patate91

Active Member
Joined
Apr 14, 2019
Messages
253
Likes
137
amirm said: [post quoted in full above]


In fact:
Brand loyalty is ONE of the biases (as far as I know, we all have preferred brands).
They used 4 speakers; yes, 3 were Harman.
As for experienced listeners, until you have a way to prove your claims, it's pure speculation. I'll try to find the link to the article about Glenn Gould and the ability to hear things.
 
OP

patate91

Active Member
Joined
Apr 14, 2019
Messages
253
Likes
137
Where does he specifically claim this?


He has discussed the Harman trained-listening courses he has undergone ad nauseam. Why are they not valid?

Where does he take credit for manufacturers introducing new product lines? How do you know that his binary Yes/No recommendation at the end of the reviews has such a profound influence on manufacturers?


For your first question, you'll have to read the recent speaker reviews.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,385
Location
Seattle Area
My concerns are:

1) If Amir claims that his sighted listening can detect things the measurements cannot, that’s an epistemological claim that runs counter to the accepted wisdom on this site.
There is no such "wisdom." The fact remains that even small differences in measurements can change the nature of a speaker's sound rather strongly. Anyone who thinks measurements should be all we do is mistaken about the science and certainly shares no wisdom with me.

I have no ability to tell the sound of two similarly "good measuring" speakers by just looking at the graphs. Similar-looking measurements may translate into different perceptions. Take these two speakers, starting with the JBL M2:

[Attached plot: JBL M2 spinorama (full spin)]


And here is Revel Salon 2:

[Attached plot: Revel Ultima2 Salon2 spinorama (re-measured in 2017)]


The Salon 2 was tested in an informal blind test of audiophiles on AVS: https://www.audiosciencereview.com/...ootout-between-jbl-m2-and-revel-salon-2.1844/

The result was that the Salon 2 was preferred. This is so even though its response is less perfect than the JBL M2's.

What would you like to happen in our forum? That we only measure and pick the M2 as being better than the Salon 2? Clearly this runs afoul of the above blind test, whose thread included the likes of Dr. Toole and Kevin Voecks from Harman.

So I listen. It is a double check on the measurements. That the speaker you bought doesn't pass my listening test is an issue you have to get over. Your personal bias and distaste for what I do are not our issue. Learn to deal with them by not reading the reviews, or the whole forum for that matter, as was just nicely noted. Heaven knows if I'm going to listen to someone, it won't be an angry mob running with talking points from research they have not read or understood.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,385
Location
Seattle Area
In fact:
Brand loyalty is ONE of the biases (as far as I know, we all have preferred brands).
It is, and it's the core one examined in this study. To test other factors, they would have had to pick listeners with no relationship to the speakers. That is my situation here, so the evidence you have put forward doesn't count.

It shouldn't have counted anyway, because I know all about the study, its author, and its motivation, and I still do what I do. I have thought this through, but you clearly have not.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,385
Location
Seattle Area
They used 4 speakers; yes, 3 were Harman.
The only non-Harman speaker was "T".

[Attached chart: blind vs. sighted mean loudspeaker ratings]


Its rating on the right did not change. So the study implicitly shows that when the relationship to a speaker was taken away, the preference score remained the same, sighted or blind.

Now, this is too small a sample to push the conclusion to its limit, but directionally that is what it says. And with that, it removes any reason for you to have put it forward. I suggest reading research papers in the future instead of running with talking points.
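As a minimal sketch of that small-sample caveat, here is how a paired blind-vs-sighted comparison for a single speaker could be checked, assuming Python with scipy; the per-listener ratings below are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch: with few listeners, a "no change" result between blind and
# sighted ratings for one speaker carries a wide confidence interval.
# Ratings below are hypothetical placeholders, not the paper's data.
import numpy as np
from scipy import stats

sighted = np.array([4.1, 3.8, 4.5, 4.0, 3.9, 4.2])  # per-listener sighted ratings
blind   = np.array([4.0, 4.0, 4.3, 3.8, 4.1, 4.1])  # same listeners, blind

diff = sighted - blind                      # per-listener sighted-minus-blind shift
t_res = stats.ttest_rel(sighted, blind)     # paired t-test
ci = stats.t.interval(0.95, len(diff) - 1,  # 95% CI for the mean shift
                      loc=diff.mean(), scale=stats.sem(diff))

print(f"mean shift = {diff.mean():+.2f}, p = {t_res.pvalue:.2f}")
print(f"95% CI for the shift: ({ci[0]:+.2f}, {ci[1]:+.2f})")
```

With a handful of listeners the interval stays wide, so an unchanged score is directional evidence rather than a firm conclusion, which is the point being made above.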
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,385
Location
Seattle Area
As for experienced listeners, until you have a way to prove your claims, it's pure speculation.
Prove to whom? An unreasonable, inexperienced person (by his own declaration)? Not going to happen, nor is it a goal for me.

It is not like you have common sense and can skip over my subjective testing. You are instead so fixated on it, you are about to slash your wrist over it.

Not reading my subjective impressions is a simple solution you are not taking to end your suffering.

I am going to listen to speakers. I am going to try to correlate measurements with what I hear. I am going to modify their response with EQ to determine if flaws can be fixed for the customers that own these products or will buy them. And I am going to give a verdict as to whether I would personally buy such a speaker or not.

If this kind of thing bothers you too much, I can show you the door.
 

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,554
Likes
1,701
Location
California
I was quoting the blog post:

[Attached screenshot: excerpt from Olive's blog post, quoted above]

Ah, thanks for clarifying that. Definitely would need to go with the "you would be well advised to test blind" found in the original academic paper then (not the internet blog post).

The preference scale is very small. So 1-2 units is a pretty large effect. The question is, "Will two speakers that measure as similarly as the Revel and SVS likely vary by 2 units?" Saying "yes" is a pretty strong claim, given the scale. Going from "recommended" to "not recommended" is even larger. It may be true! But I think more than sighted listening might be necessary to demonstrate it.

To be clear, the effect of "sight" bias varied between 0 and 2 on the preference scale. For some loudspeakers and conditions, there was 0 bias caused by being unblinded. And the average across tests was more like 1. Moving from, say, a 5 to a 6 is a "noticeable improvement," but it's not THAT big of a deal on the scale. But here's the thing - the 0-2 unit "bias effect" was created under extreme levels of bias (i.e. if you like your job here, you know what the answer is). So when generalizing the results over to Amir, where the bias is less extreme, I would expect the influence of bias to, in turn, be LESS than 1, on average. Personally, I think that's really good!

But I get the point you're trying to make. You're saying that for you, even the possibility of a 1-2 unit difference is not acceptable when company sales can be affected. I don't necessarily agree, but I hear you. And I'm saying that listening impressions are still valid and usable even when they're not blinded - they're just not quite as authoritative or reliable.
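As a minimal sketch of the arithmetic behind that 0-2 unit figure, the "sight" effect for each speaker is just the difference between its mean sighted and mean blind ratings, averaged across speakers. The numbers below are hypothetical placeholders, not Olive's data; the zero shift for "T" only mirrors the point discussed earlier in the thread.

```python
# Minimal sketch: per-speaker sighted-minus-blind shift on the preference scale.
# Mean ratings below are hypothetical placeholders, not the paper's data.
sighted = {"S1": 6.2, "S2": 5.8, "S3": 5.1, "T": 4.0}  # mean sighted ratings
blind   = {"S1": 5.4, "S2": 5.6, "S3": 4.3, "T": 4.0}  # mean blind ratings

shifts = {spk: sighted[spk] - blind[spk] for spk in sighted}
avg_shift = sum(shifts.values()) / len(shifts)

for spk, d in shifts.items():
    print(f"speaker {spk}: sighted minus blind = {d:+.1f}")
print(f"average shift = {avg_shift:+.2f} preference units")
```

On a hypothetical set like this the individual shifts range from 0 to 0.8 and the average lands well under 1, which is the kind of magnitude being described.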
 
OP

patate91

Active Member
Joined
Apr 14, 2019
Messages
253
Likes
137
More about experienced listeners:

"Experience is one of those variables among listeners that is very difficult to quantify. For example, musicians are experienced listeners but,
is experience in focusing on musical attributes equivalent to that of focusing on timbral and spatial attributes?
Some evidence suggests that it is not. Gabrielsson found that musicians who were not also audiophiles,were not especially good judges of sound quality[4]. The famous pianist Glenn Gould
came to appreciate the insights of non musicians[5]. Our own tests have confirmed this. So, listeners with different backgrounds could be expected to have differing abilities or preferences in subjective evaluations. This is a nenormously broadtopic, but we thought that it would be interesting to take a first step towards understanding the importance of this variable."
 

Racheski

Major Contributor
Forum Donor
Joined
Apr 20, 2020
Messages
1,116
Likes
1,699
Location
Chicago
For your first question, you'll have to read the recent speaker reviews.
I have read them, and I can't find such a claim; that is why I am asking. Instead of assuming I haven't read the reviews, it would be helpful if you could point out the specific claim rather than dismissing my question entirely.
 