respice finem
Major Contributor · Joined Feb 1, 2021 · Messages: 3,727 · Likes: 7,188
But one could call it reliably biased...Calling sighted listening reliable is actually quite extreme...
(Bolding is mine.) But often enough, people here really are making statements that read as fairly extreme sceptical positions on sighted listening, where implicit or explicit in those statements is the proposition that sighted listening perception is inevitably polluted by biases.
People are misreading and/or misinterpreting the meaning of a reliable observation.
I do not see that as an extreme position at all, maybe mildly sceptical in the way science has to be.
It is a result of scientific research that listening perception is easily (and regularly) polluted by biases, and that without controls in place (being sighted is an obvious lack of control) the outcome is highly unreliable.
This "inevitably" is unnecessary. All sighted listening impressions are unreliable because bias can never be ruled out. So sometimes sighted listening impressions may be accurate, but without a reliable way to know when or whether that is the case, they are inherently dubious. In other words, while sighted listening impressions are not as random as, say, picking lottery numbers, the outcome is similar: they can't be relied upon to be accurate in any given case.

So I might change it to: the proposition that any sighted listening impressions are inevitably the result of biases.
There certainly is correlation, as otherwise - as you pointed out - listening would be random.

The data clearly shows that results of blind and sighted listening are correlated, as basic common sense demands.
If there was zero correlation, it would mean that blind tests were useless for evaluating speakers that are to be listened to sighted in the home. It would mean that sighted bias is so overwhelming that the actual sound has no impact at all on sighted listeners' perception of the sound. It would mean that, for maximum enjoyment, speakers should be purchased on the basis of non-sonic factors such as looks and cost only. Measurements (of the sound) and blind testing would both be useless for determining how listeners reacted to the speakers when they make sound, only non-sonic factors would matter.
Of course this absurd situation is not the case.
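The zero-correlation argument above can be made concrete with a toy calculation. The ratings below are invented for illustration (they are not Toole's data): the sighted scores are shifted by bias yet still track the blind scores, giving a clearly positive correlation.

```python
# Hypothetical mean preference ratings for four speakers (0-10 scale).
# These numbers are made up for illustration; they are NOT Toole's data.
blind   = [7.0, 6.0, 5.0, 4.0]   # ratings under blind conditions
sighted = [8.0, 6.5, 4.5, 5.5]   # sighted ratings, nudged by looks/price bias

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(blind, sighted)
print(f"correlation (blind vs sighted): {r:.2f}")
```

If sighted ratings were pure bias with no dependence on the actual sound, r would hover near zero; a strongly positive r is exactly the "correlated but biased" picture described in the posts above.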
Why are you patting him on the back for using a straw man argument? Cheap debating tactics. "Well said" is almost the opposite of what he deserves for his game of malicious misrepresentation.

Well said @MarkS. It's a pretty stark contrast between the quote from Toole here and implying that sighted listening is essentially like playing the lottery (which obviously is highly exaggerated).
Do you openly endorse straw man debating tactics? Discussions in bad faith?
Hi Mark, I see it differently. The "visually identical" speakers seemed to correlate well with the blind listening tests. We don't know whether they measure similarly or not, but sighted and blind results were seemingly correlated. The "US" vs sub/sat do not appear correlated in my view: the sighted test showed a significant preference for the US, the blind listening test a slight preference for the sub/sat. How is that correlated? I think your example actually shows the fallibility of sighted listening, if I'm reading the limited data correctly. I wouldn't consider sighted listening "useless", but rather "fallible", based on this sample at least.

The limited data presented in Floyd Toole's book clearly shows a correlation between blind and sighted results. Here is one relevant figure illustrating this:
[Attachment 523357: figure of blind vs sighted listening results]
The limited data presented in Floyd Toole's book... <big snip>
I notice the same thing. Since "impact" uses "time" in the denominator, are you sure the "different feeling" is due to higher SPL, or could it be due to better time-domain performance (shorter time in the denominator)? My guess is it is both.

There is one instance where things get messy and you only learn to interpret them with both listening* and measuring.
A glorious example, impact.
My factory default x-over for the semi-actives I use is at about 240Hz.
At this setting I can have all the impact I want, etc.
Now, I like playing with analog el. x-overs (and also building some) so I can test other x-over points as well.
And here comes the experience: keeping the same FR, EQ for the lows, etc., the works, lowering the x-over point also reduces impact, to the point there's none at 80Hz or so (F3 is at 31Hz or so).
REW shows the same FR with the usual sweeps, but measuring the same song under the exact same conditions with the SPL chart, the differences at MAX and at the peaks are obvious. And it makes perfect sense: one can't use a 7" mid to push air.
Anything can be useful if you put it to work for you and not the other way around.
*Should I say "feeling" instead? Impact is physical, you can't escape it or fake it!
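The sweep-vs-SPL-chart observation above can be sketched numerically. The two waveform snippets below are invented stand-ins (not real measurements) for the same musical transient under the two crossover settings; the point is only that two signals can have nearly the same average (RMS) level while their instantaneous peaks, the quantity an SPL chart reports as MAX/peak, differ by several dB.

```python
import math

def to_db(ratio):
    """Convert a linear amplitude ratio to decibels."""
    return 20 * math.log10(ratio)

def peak_and_rms_db(samples):
    """Return (peak, RMS) level of a sample block, in dBFS."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return to_db(peak), to_db(rms)

# Made-up snippets: A keeps a sharp transient (higher crossover point,
# larger driver delivers the hit), B has the transient blunted.
snippet_a = [0.9, -0.5, 0.3, -0.2, 0.1, -0.1, 0.05, -0.05]
snippet_b = [0.5, -0.45, 0.4, -0.35, 0.3, -0.25, 0.2, -0.15]

peak_a, rms_a = peak_and_rms_db(snippet_a)
peak_b, rms_b = peak_and_rms_db(snippet_b)
print(f"A: peak {peak_a:.1f} dB, RMS {rms_a:.1f} dB")
print(f"B: peak {peak_b:.1f} dB, RMS {rms_b:.1f} dB")
```

This is why a swept-sine FR can look identical for both settings while a peak comparison on real program material exposes the difference in "impact".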
I have some thoughts but are conflicting, to be entirely honest I don't know what to think.I notice the same thing. Since "impact" uses "time" in the denominator are you sure the "different feeling" is due to higher SPL or could it be due to better time domain performance (shorter time in the denominator). My guess is it is both.
I agree and find it puzzling that MarkS and others would claim that the data are favorable evidence for the reliability of sighted tests. If I remember correctly, Floyd Toole said that loudspeakers 1 and 2 differed only in their crossover circuit. Depending on the details of how the tests were conducted, the "sighted" tests may have been effectively blind anyway between those two speakers.

Hi Mark, I see it differently. [...]
The significant increase in preference for speakers 1 and 2, and moderate increase for 4 (the expensive, likely impressive-looking ones) along with the significant decrease for 3 (the cheap, small one) in sighted tests clearly demonstrates bias.
Imprecision of language, perhaps. I don't think my comment is a strawman. The claim was that the data presented demonstrate meaningful positive correlation between sighted and blind tests. I don't agree with that interpretation of the data and stated why.

It is interesting how difficult this