
[YouTube] The Big Measurement & Listening Mistake Some Hi-Fi Reviewers Make - SoundStage! Real Hi-Fi


dfuller

Major Contributor
Joined
Apr 26, 2020
Messages
3,335
Likes
5,050
I personally find the pre-measurement listening to be good information. It's a decent way to correlate (or not) measurements with what someone heard - after all, we are listening to speakers at the end of the day, not reading the measurements. Obviously we have to factor in the listener's biases for that kind of subjective test, but it's still useful information, at least to me - as long as the flowery language is kept to a minimum.
 

MarkS

Major Contributor
Joined
Apr 3, 2021
Messages
1,062
Likes
1,502
What do you do then when the reviewer says it sounded fine but measurements show it was bright? What do you believe and why?
If it's a reviewer whose opinions I had previously found to correlate with my own, then I would find that to be an interesting data point. Perhaps the measurements are showing something that is not as audible as I might have previously believed, or perhaps the reviewer's room set-up is having a bigger effect than I might have previously believed. I would definitely want to hear the speaker for myself.
 

TLEDDY

Addicted to Fun and Learning
Forum Donor
Joined
Aug 4, 2019
Messages
631
Likes
858
Location
Central Florida
Since my 80-year-old, abused hearing cuts off at (maybe) 9 kHz... well, do not trust my subjective evaluation. If I have decent measuring gear, trust that more.

Listen - measure; measure - listen...WTF as long as it makes sense.
 

Robin L

Master Contributor
Joined
Sep 2, 2019
Messages
5,208
Likes
7,587
Location
1 mile east of Sleater Kinney Rd
Same question. What happens if said measurements don't correlate with prior listening test results? And how do you know prior test was not influenced by person not liking horns, flat panels, ported vs not, bookshelf vs stand mount, price, etc.?
How do we know that the reviewer, knowing that the speakers are from Klipsch, is positively prejudiced in their favor? How do we know that the reviewer, knowing that the speakers are from JBL, is negatively prejudiced against them?
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,381
Location
Seattle Area
How do we know that the reviewer, knowing that the speakers are from Klipsch, is positively prejudiced in their favor? How do we know that the reviewer, knowing that the speakers are from JBL, is negatively prejudiced against them?
What is your answer to your questions?
 

jae

Major Contributor
Joined
Dec 2, 2019
Messages
1,208
Likes
1,508
saying measurements are a source of bias is like saying knowing a dish's ingredients can bias a person's opinion of the taste.
I essentially thought the same thing. If I apply this same logic to medicine, his argument would be that I should not tell a patient how a medication works or its potential side effects in order to avoid them having an adverse psychosomatic or nocebo reaction to its administration. Who is to say whether or not said reaction was natural or induced if the result is to be treated the same? He simultaneously has his own assumptive biases while proselytizing in criticism of another.

The scientific method itself has an inherent, mandatory bias in order to function and be a useful tool: the act of formulating a hypothesis implies one has either observed something under a set of biased conditions or supposes something may be true or likely based on the bias of previous scientific study or understanding. The goal is to eliminate as much bias as reasonably possible so as to reach consistent and repeatable conclusions, and I would argue that an extreme focus on objective measures and conventions is one way to do this and will almost always be the lesser of all evils if the goal is to eliminate biases.

Most people who even care about these measurements in the first place will probably not bother wasting their time trying, let alone buying, a product unless there's at least some kind of measurement available, so to whom is his advice useful? His opinions seem aimed at reviewers, yet those who actually know what to do with these measurements don't really need a reviewer's subjective opinion to make decisions anyway. So this seems like a personal gripe of his more than actual advice for the average hobbyist.

Notice how none of these reviewers releasing these "measurement talk" videos can demonstrably prove they know anything about taking or interpreting measurements beyond the superficial, cite peer-reviewed publications or even books on psychoacoustics or sound reproduction, show how to manually correct the room around them, etc. Shame on all of these YouTubers with "fence sitter" opinions who dismissively mention studies on the unreliability and limitations of human hearing while totally ignoring the wealth of information we do have, information that is reproducible and proven by decades of rigour. They don't dare invoke the science when it disproves their babble, yet will freely cite it when it suits them and cry foul.

The trend I'm seeing in these YouTube videos is everyone coming out of the woodwork with these "apologist" videos: "Yeah, yeah... measurements do matter... BUT..." followed by absolutely nothing of value, or some nonsense opinion. The part of their viewer base (or the new viewers) interested in measurements will think, "Hey, this guy is experienced! He's open-minded and looks at both approaches to audio!", while those who only care about subjective experiences or think measurements don't matter are all but completely vindicated. They think, "See, even someone who thinks measurements matter says objective reviewers are biased, and my subjective opinion is the only thing that matters!"

The end result is that they appease the largest audience possible and do not particularly offend anyone. Good for viewership and to promote whatever it is they are selling.
 

sigbergaudio

Major Contributor
Audio Company
Forum Donor
Joined
Aug 21, 2020
Messages
2,639
Likes
5,396
Location
Norway
I didn't even watch the video, but if we discuss reviews based on listening sessions in isolation, and how to keep such a review as unbiased as possible, that would be easier to do without knowing how the speaker measures.

A chef knowing the ingredients of a dish would certainly be affected by that knowledge when tasting it.

Not sure how the patient analogy works; the job of the patient isn't to review the drug.
 

DVDdoug

Major Contributor
Joined
May 27, 2021
Messages
2,916
Likes
3,831
I like BOTH and I appreciate the way Amir does it. In a perfect world it would be nice to have a panel blind-listen but nobody does that routinely and it's just not economical.

If I have to choose I'll take the measurements... With measurements, at least you get consistent, comparable results (assuming the same test set-up), and you can compare them across products. You can measure things that you can't hear, which can be a good thing, but it can also be "misused".
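
To make the "comparable" point concrete, here is a minimal sketch (hypothetical data, not from any actual measurement rig or review workflow) of how the same flatness number could be computed from any exported frequency-response curve and then compared across speakers on an equal footing:

```python
# A rough illustration with made-up responses: the same metric applied to
# two speakers measured the same way gives directly comparable numbers.
import numpy as np

def flatness_db(freq_hz, spl_db, lo=300.0, hi=10_000.0):
    """Standard deviation of the response (in dB) around its own mean
    within a band -- a crude 'how flat is it' figure."""
    band = (freq_hz >= lo) & (freq_hz <= hi)
    return float(np.std(spl_db[band]))

# Hypothetical exported measurements: frequency in Hz, SPL in dB.
freq = np.geomspace(20, 20_000, 200)
speaker_a = 85 + 0.5 * np.sin(7 * np.log10(freq))   # fairly flat response
speaker_b = 85 + 3.0 * np.sin(5 * np.log10(freq))   # much wavier response

print(f"Speaker A flatness (std dev): {flatness_db(freq, speaker_a):.2f} dB")
print(f"Speaker B flatness (std dev): {flatness_db(freq, speaker_b):.2f} dB")
```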

Subjective reviews could be OK if they were done blind by a consistent, trusted reviewer, or better yet a panel of trusted reviewers reporting independently. But even with blind listening, the reviewer might be in a good mood or a bad mood, etc. And there are so many problems with real-world subjective reviews... They are never done blind, they almost never give a "rank" or a "score", and they will almost never admit that a lower-priced item beats a higher-priced item. They usually use "audiophile terminology" instead of words that have actual meanings (noise, distortion, frequency response). The magazines rarely give a negative review. Sometimes they'll say "good for the money," but I've NEVER read "not worth the price," although it might be implied... Some websites might give negative reviews, but I don't visit many audio review websites (and like everybody else, I no longer read magazines).
 

John Atkinson

Active Member
Industry Insider
Reviewer
Joined
Mar 20, 2020
Messages
165
Likes
1,022
That’s one thing that always bugged the hell out of me about Stereophile reviews. The subjective reviewer praises a speaker to high Heaven only to have JA come in and show through measurements that it’s actually a hot mess.

I have written many times in the magazine that Stereophile's reviewers only see the measurements after they have submitted their review text to the editor, a protocol that continues to be practised by my successor as editor, Jim Austin. We want the magazine's reviewers to describe what they hear, not what they think they should have heard.

JA should call out the reviewer but he seems to demur and waffle.

In almost all cases, I have not listened to the product so have no reason to gainsay the reviewer. When I have auditioned the product, I write a followup review, as with the Audio Research preamplifier in the August issue.

John Atkinson
Technical Editor, Stereophile
 

Robin L

Master Contributor
Joined
Sep 2, 2019
Messages
5,208
Likes
7,587
Location
1 mile east of Sleater Kinney Rd
What is your answer to your questions?
The answer is we don't know how much brand allegiance goes into reviews of audio gear. Most reviewers were predisposed to say very nice things about Advent speakers when they were new. Stereophile favors Wilson Audio speakers no matter what they measure like. I've seen this sort of thing going on ever since I first got interested in audio. Some might [already have] said the same thing about you and Topping products. Mind you, the good measurements of Topping are doubtless the reason you favor those components; they consistently measure well. But I've seen subjective reviews where brand allegiance is obvious and not related to measurable performance. Call it sighted bias.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,381
Location
Seattle Area
I have written many times in the magazine that Stereophile's reviewers only see the measurements after they have submitted their review text to the editor, a protocol that continues to be practised by my successor as editor, Jim Austin. We want the magazine's reviewers to describe what they hear, not what they think they should have heard.
Sadly we don't know what they hear. We know what they are writing. And possibly what they think they are hearing. These are different than the actual sound that arrived at their ear drum.

Now if you put them through a test that showed correlation between what they think they hear and reality, then we would have something to go by. As it is, we have an essay written by them with no idea of its reliability.
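
For what it's worth, the kind of check being described is standard practice in listening research: a blind forced-choice run (ABX or similar) scored against chance. A minimal sketch, with hypothetical trial counts rather than anyone's actual results:

```python
# Score a blind forced-choice session: how likely is this result by guessing?
from math import comb

def p_value_at_least(correct, trials, chance=0.5):
    """Probability of getting `correct` or more right by pure guessing."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

trials, correct = 16, 12          # hypothetical session: 12 of 16 correct
p = p_value_at_least(correct, trials)
print(f"{correct}/{trials} correct; p = {p:.3f} of doing this well by chance")
# A small p (e.g., below 0.05) suggests the listener really does hear
# what they claim to hear; otherwise the report tells us little.
```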
 

John Atkinson

Active Member
Industry Insider
Reviewer
Joined
Mar 20, 2020
Messages
165
Likes
1,022
Now if you put them through a test that showed correlation between what they think they hear and reality, then we would have something to go by.

See my essay on this subject at https://www.stereophile.com/content/who-watches-watchers, in which I quoted myself from a 1980 article: "The problem confronting the magazine reviewer when organizing the necessary listening tests to accompany/reinforce the measured behavior of a device under test is complex. . . . Unlike the reaction of an oscilloscope, that of a listener involves interaction: what he is hearing; what he had been expecting to hear; the identity of the equipment; the emotional effect of the music program; the emotional effect of other competing stimuli (a recent cup of coffee, a not-so-recent visit to the toilet); the apparent expectations of his fellow listeners; the ultimate purpose of the test; the desire for self-consistency and hence self-esteem; all these can—but needn't always—color the listener's assessment. Obviously, this will affect the reliability of any conclusion, both when used to predict the same listener's reaction to the same piece of equipment, and when used to predict other people's reactions."

John Atkinson
Technical Editor, Stereophile
 

goat76

Major Contributor
Joined
Jul 21, 2021
Messages
1,269
Likes
1,385
So, who watches a single review of a speaker and runs to the store and buys it in a heartbeat, with or without measurements?

I highly value all user opinions I can come by no matter if they come from reviewers, current happy users, or people who have had the speakers but got rid of them for some reason. I know exactly what I like and don't like about loudspeakers and pay extra attention to stuff that repeatedly comes up both good and bad.
 

MaxBuck

Major Contributor
Forum Donor
Joined
May 22, 2021
Messages
1,515
Likes
2,116
Location
SoCal, Baby!
The responses in this thread surprised me. In the same way that we can be biased by the brand, the way the speakers look, what someone else has told us, etc, we can obviously also be biased by having seen measurements of said speakers.

The measurements will stay the same regardless of whether we view them before or after listening, while the listening experience may be affected by the knowledge of the measurement results. So if you want the listening session to be as unbiased as possible, you'd want as little information as possible, including about how they measure.
I'm almost entirely unconcerned about whether I may be "biased" by objective measurements when I go to listen to speakers. What I care about is whether I like the sound they produce. (Actually, I also care a great deal about whether my wife likes their sound.)
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,381
Location
Seattle Area
See my essay on this subject at https://www.stereophile.com/content/who-watches-watchers, in which I quoted myself from a 1980 article: "The problem confronting the magazine reviewer when organizing the necessary listening tests to accompany/reinforce the measured behavior of a device under test is complex. . . . Unlike the reaction of an oscilloscope, that of a listener involves interaction: what he is hearing; what he had been expecting to hear; the identity of the equipment; the emotional effect of the music program; the emotional effect of other competing stimuli (a recent cup of coffee, a not-so-recent visit to the toilet); the apparent expectations of his fellow listeners; the ultimate purpose of the test; the desire for self-consistency and hence self-esteem; all these can—but needn't always—color the listener's assessment. Obviously, this will affect the reliability of any conclusion, both when used to predict the same listener's reaction to the same piece of equipment, and when used to predict other people's reactions."
All true and the only conclusion is to stop using purely subjective reviews. If our lives depended on them being right, who would rely on them? No one I assume.

Or alternatively, give them the measurements in advance so that they have some solid basis for their reviews. They can still disagree, but then they had better try hard to prove to themselves that this is so.

It is just remarkable to me that we can sometimes be proud of the fact that we send someone to find a place in a town they don't know without a GPS. Consumers want reliable data, not a demonstration of how incredibly well the reviewer can figure out a speaker's performance, literally blind.

This kind of thing creates constant conflict with your measurements, with you on the losing side of all of those disagreements. This shouldn't be. The measurements need to be king, and the reviewer's job is to prove or disprove them, not to opine something random for all the reasons you mention.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,381
Location
Seattle Area
All of this assumes that it is so easy to be biased by measurements. Most of the time measurements don't paint a simple black-and-white picture that would sway you that way. By studying them first, you learn what to look for and also learn what is or is not an audible problem.
 

Koeitje

Major Contributor
Joined
Oct 10, 2019
Messages
2,292
Likes
3,880
He evaluates a speaker knowing full well the brand, color, design, manufacturer, etc., but those don't count, while knowing the measurements does? If bias is such a bad thing in speaker evaluations, then he should never review a product sighted.

Somehow, measurements which provide a reliable indication of sound are a bad bias, but the others are not.
Just because you cannot control for bias in terms of brand, price, whatever doesn't mean you also shouldn't control for the bias coming from the measurements. I believe @hardisj always listens before measuring for this reason.

It also doesn't stop you from listening again after looking at the measurements, apart from it costing extra effort.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,381
Location
Seattle Area
Just because you cannot control for bias in terms of brand, price, whatever doesn't mean you also shouldn't control for the bias coming from the measurements. I believe @hardisj always listens before measuring for this reason.
And I think he is dead wrong. There is zero useful information in that evaluation prior to measurements, unless you show me his certificate and qualifications for producing reliable information from uncontrolled, random listening that way.

Once again, proper measurements are not an improper bias. They are the most reliable information we have to predict listener preference. Throwing them out is like telling your doctor to diagnose you from just your verbal description, without any diagnostic tests. You wouldn't, right? So why do it with speaker measurements?
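
On the "predict listener preference" point: published work such as Olive's 2004 preference-rating model regresses blind-test preference scores on measured quantities like smoothness, narrow-band deviation, and low-frequency extension. The sketch below shows only the general shape of such a model; the weights, intercept, and inputs are placeholders for illustration, not the published coefficients:

```python
# Illustrative only: a weighted sum of measured quantities standing in for a
# real preference model. Weights are PLACEHOLDERS, not Olive's coefficients.
import math

def predicted_preference(smoothness, nb_deviation_db, lf_extension_hz,
                         w_smooth=4.0, w_dev=-2.5, w_bass=-2.0, intercept=8.0):
    # Smoother response raises the score; larger narrow-band deviations and
    # a higher (worse) bass cutoff lower it, hence the negative weights.
    return (intercept
            + w_smooth * smoothness
            + w_dev * nb_deviation_db
            + w_bass * math.log10(lf_extension_hz / 20.0))

# Two hypothetical speakers measured on the same rig:
print(predicted_preference(smoothness=0.9, nb_deviation_db=0.3, lf_extension_hz=35))
print(predicted_preference(smoothness=0.6, nb_deviation_db=0.8, lf_extension_hz=80))
```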

The only answer would be that you don't think measurements are good or useful. If so, then the measuring Erin is doing is a waste of time.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,381
Location
Seattle Area
It also doesn't stop you from listening again after looking at the measurements, apart from it costing extra effort.
You just said that without measurements there is less bias. Why would you then be OK with a more biased assessment in another round of listening? Which generates more reliable information? If it is the first round, then you shouldn't suggest a second round. If it is the second, then why did you say the first round was better?
 