
I cannot trust the Harman speaker preference score

Do you value the Harman quality score?

  • 100% yes

  • It is a good metric that helps, but that's all

  • No, I don't

  • I haven't decided


OP
sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
7,604
Likes
7,323
Location
UK
You shouldn't focus on preference scores. It was only intended to help naive people interpret spinorama measurements and reduce it to a single number.
There you are, the man said it. Do not focus on the score. Stop treating it as gospel, and stop sorting speakers by it alone.
 

Purité Audio

Master Contributor
Industry Insider
Barrowmaster
Forum Donor
Joined
Feb 29, 2016
Messages
9,162
Likes
12,432
Location
London
Who treats it as gospel? It is just one useful metric.
Keith
 

Vuki

Senior Member
Joined
Apr 8, 2018
Messages
343
Likes
393
Location
Zagreb, Croatia
Stereo is 1950s technology; the research focus now is immersive audio.
[image: Doc Brown "Back to the Future" meme]
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
There is no difference between the first two options of the poll. "It's a good metric that helps, but that is all" means it's 100% valuable.

This whole thread is an exercise in quixotism. The OP is slaying dragons that just aren't there.

You are confused; there are no dragons in Don Quixote...
 

Sean Olive

Senior Member
Audio Luminary
Technical Expert
Joined
Jul 31, 2019
Messages
334
Likes
3,065
That alone only makes me more convinced that you took the easy way by choosing mono. You eliminated the variability of stereo ratings, but in doing that you're throwing the baby out with the bath water.



I have given 3 reasons for that: possible incorrect positioning of the speaker relative to boundaries, possible incorrect listening axis, possible incorrect position of the listener.
I can see several other problems with listening to a single speaker located left or right.



I have seen photos and schematics of several people performing listening tests simultaneously. They can't all be sitting on axis, or in the best acoustical place in the room.

I have performed in-room measurements with both speakers and mic/listener at different locations that I can upload if you wish.
They show that woofer distance to front wall, to side wall, to floor, to ceiling and to listener will affect the response in the bass and sub-bass.
Not taking that into account is a mistake.



The most obvious example would be the Danish manufacturer Dali, whose speakers are designed to be positioned with no toe-in. When listened to on axis they will sound "coloured" (see the cursor position at 45° in the measurement below):

[Stereophile measurement graph for the Dali Rubicon 8]

https://www.stereophile.com/content/dali-rubicon-8-loudspeaker-measurements

Dipoles are designed to make use of the front wall, corner horns must go in corners, on-wall speakers on walls.



Again you're taking the easy way by removing a variable. I understand that this makes the testing more feasible, but it also makes it invalid.



That deals with my earlier comment regarding people listening off axis.
Controlling variables in experiments is important for drawing valid conclusions about the measured effects and their causes. Designing tests and measurements to provide sensitive, discriminating and reproducible results is also important. Mono tests provide that. Stereo tests do not.

We found that speakers that score well in mono generally score well in stereo, and that there is little to gain from doing routine tests in stereo, and very much to lose. It wasn't a question of taking the "easy way" out. It was a question of making the best decision from a purely scientific rationale based on the data we had. You have a different opinion. I can live with that.

I am well aware that the bass will change as you move a speaker in a room. No need to send me measurements. I have made thousands myself. So tell me, how would you design a controlled listening test to compare four loudspeakers based on your in-room measurements? What criteria would you use for their optimal location? Would your criteria be a potential experimenter bias? Are your criteria the same as what the designer intended, and does the room you are using match the room for which the speaker was intended? Is it worse to treat all speakers the same by putting them in the exact same position (assuming they are monopoles in the bass) versus putting them in different positions based on some arbitrary criteria you have come up with from measurements? :)

How would you account for positional biases in the blind tests from the fact that the listener can now identify the different speakers purely from different localization cues? And by virtue of the different positions you have also created a different set of reflection patterns that may enhance or degrade the perceived sound quality of each loudspeaker. How would you deal with that?

It seems to me that your proposed methodology would have so many uncontrolled experimental biases it would never get past the first stage of scientific peer review -- but that's just my opinion.
 

Mad_Economist

Addicted to Fun and Learning
Audio Company
Joined
Nov 29, 2017
Messages
543
Likes
1,618
There you are, the man said it. Do not focus on the score. Stop treating it as gospel, and stop sorting speakers by it alone.
If this had been the contention at the start of this thread, it would have been a very brief set of mutual agreements - nobody is advocating using the predicted preference score as the sole assessment element of speakers. However, what it actually started with was...
The more I look into it the less I can trust the Harman speaker quality score. IMHO it is a totally meaningless metric. I know the background, I read all the papers even before Harman was involved, and their patent. However, it works so badly that IMHO it is a meaningless metric.
...which is incorrect. The preference rating has a meaningful correlation with speaker preference. It's not sufficiently predictive to be the sole parameter we assess. It is, however, also not meaningless.
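
For anyone who hasn't seen the model spelled out, it reduces the spinorama to four numbers. A minimal sketch in Python, assuming the commonly cited coefficients from Olive's 2004 AES papers; the docstring definitions are simplified reminders, not Harman's actual code:

Code:
# Sketch of the published Olive preference model, with the commonly cited
# coefficients. Computing the four input metrics from a spinorama is not
# shown here.
def predicted_preference(nbd_on, nbd_pir, lfx, sm_pir):
    """Predicted preference rating on (roughly) a 0-10 scale.

    nbd_on  -- narrow-band deviation of the on-axis curve (dB)
    nbd_pir -- narrow-band deviation of the predicted in-room response (dB)
    lfx     -- log10 of the -6 dB low-frequency extension (Hz)
    sm_pir  -- smoothness (regression r^2) of the predicted in-room response
    """
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

Flatter curves (small NBD), deeper bass extension (small LFX) and a smoother in-room estimate (SM near 1) push the number up; distortion, compression and maximum SPL never enter the model at all, which is one more reason it can only ever be a coarse ranking.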
 

Mad_Economist

Addicted to Fun and Learning
Audio Company
Joined
Nov 29, 2017
Messages
543
Likes
1,618
75.8% of the (naïve) voters say that they value the score. I'll remember that next time someone calls subjectivists gullible or biased...
This is simply the nirvana fallacy. The score is somewhat valuable. Its imperfection does not make it without value - which is good, because all the models, whether explicit (like the predicted preference rating) or internal to our heads (we're assessing these measurements ourselves with some conception of how they correlate with perception, aren't we?), are flawed.
 


Sean Olive

Senior Member
Audio Luminary
Technical Expert
Joined
Jul 31, 2019
Messages
334
Likes
3,065
There is no difference between the first two options of the poll. "It's a good metric that helps, but that is all" means it's 100% valuable.

This whole thread is an exercise in quixotism. The OP is slaying dragons that just aren't there.
Agreed. HARMAN does not use the metric. That is all you need to know. It is a metric that is helpful but it does not tell all.
 

MattHooper

Master Contributor
Forum Donor
Joined
Jan 27, 2019
Messages
7,322
Likes
12,270
We've tested both large and smaller Martin Logan dipoles in the same room and got similar results. The listening test results are completely predictable from the anechoic spinorama measurements: poor octave-to-octave balance, resonances that are visible in many of the curves. The speaker is very directional and the balance changes as you move off-axis. It seems to sound worse as you sit more on-axis (it is too bright or harsh in the upper mids), but even off-axis it sounds colored.

That was always my problem with the ML speakers. I always found them a bit too aggressive sounding in the upper mids. (Back when I did more extensive auditioning of electrostatics, I ended up with Quad ESL 63s which to me were more comfortable to listen to).
 

ROOSKIE

Major Contributor
Joined
Feb 27, 2020
Messages
1,936
Likes
3,525
Location
Minneapolis
Agreed. HARMAN does not use the metric. That is all you need to know. It is a metric that is helpful but it does not tell all.
Thank you for saying that. I (and others) have tried to tell people that my money was on you not using that score, and I am glad you do not.

---------------
For fun.
I voted "It is a good metric that helps, but that's all", though I would cross out the "but that's all" part, as it doesn't seem required and the "but" turns me off here.

So far it seems to track well enough - especially given confirmation that it is intended to be a solid & simple starting point for a novice buyer - not something advanced users should stop at, nor something Harman itself is targeting.
A few speakers have generated a higher score than my own subjective preference for them, yet with most of those speakers I could totally see why others might jump in.

Really, the speakers I have tried that score over a "5" (or, with a subwoofer, in the mid "7-7.5" range or above) have been reasonably good to extremely good, and when they have fallen short I have generally been able to see why.
Along with the full data set and spin, I do think listening is going to be very important for the folks here at ASR, as there has been a lot of variation; as many here understand, the score ignores some really important things, and some other things just have to be assessed by the specific listener.
I think the casual buyer would be happy 99% of the time with a fairly well ranked speaker here with a good spin.

I have had one (or maybe two) major anomalies, with the JBL 4309 being the largest, in that I love it as much as the best speakers that rate in the "6" range, yet it scores fairly low.

Really though if you smooth that rating out and take it lightly then a casual buyer is going to get a decent speaker. Audiofools like you & me are going to be digging much much deeper.
 

Sancus

Major Contributor
Forum Donor
Joined
Nov 30, 2018
Messages
2,926
Likes
7,643
Location
Canada
BTW, this is what that room looks like now.
Curious -- what's with the center rear surround? I was under the impression that a speaker placed directly behind the listener could trigger front/back reversal and basically sound like it was in front.

Really though if you smooth that rating out and take it lightly then a casual buyer is going to get a decent speaker. Audiofools like you & me are going to be digging much much deeper.

The value of the score to me has always been the fact that I don't have time to read hundreds of spinoramas. So if I'm looking for a speaker according to some criteria (budget, size, whatever), the score helps narrow the most likely choices. It doesn't relieve you from the requirement to actually make a choice yourself, or to read the spinoramas of closely ranked speakers, or to consider other factors like SPL output. People who compare individual speakers by scores to the decimal point are missing the point entirely. And yes, some people do this on the forum.

But it does save a lot of time, and will save even more when the review list is a thousand speakers, which it will be some day.
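
To make that coarse-filter use concrete, here's a rough sketch; the speaker entries, price field and the half-point tie band are made up for illustration, not anyone's actual database:

Code:
# Hypothetical shortlist: filter by budget and a score floor, then treat
# scores within half a point as ties so nobody sorts by the decimal point.
speakers = [
    {"model": "Speaker A", "price": 400, "score": 5.1},
    {"model": "Speaker B", "price": 700, "score": 6.4},
    {"model": "Speaker C", "price": 650, "score": 6.3},
]

def shortlist(speakers, max_price, min_score=5.0, tie_band=0.5):
    candidates = [s for s in speakers
                  if s["price"] <= max_price and s["score"] >= min_score]
    # Coarse score band first (higher is better), cheaper first within a band.
    return sorted(candidates,
                  key=lambda s: (-round(s["score"] / tie_band), s["price"]))

for s in shortlist(speakers, max_price=700):
    print(s["model"], s["score"], s["price"])

Bucketing the scores means a 6.3 and a 6.4 land in the same band and get separated by price instead, which is exactly the "don't compare to the decimal point" caveat. The shortlist still has to be checked against the actual spinoramas and your SPL needs.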
 

MattHooper

Master Contributor
Forum Donor
Joined
Jan 27, 2019
Messages
7,322
Likes
12,270
Agreed. HARMAN does not use the metric. That is all you need to know. It is a metric that is helpful but it does not tell all.

Sean, I understand that a Spinorama measurement would give the most detailed picture of a loudspeaker, and hence allow for more reliable predictions of listener preference. But how much can you (or "we") infer from something like Stereophile's measurements? Do they give you good enough indications as to how a speaker would tend to score in blind tests?

As a wonky example: Here are the Devore O/96 measurements from Stereophile. When I auditioned a ton of different speakers (including Revel, Kef, Paradigm and others), in regular ol' sighted tests I really enjoyed the Devores. I don't think I'm "special" in regards to what I would likely choose in blind tests, though. :)

(They seem fairly flat on axis, but came in for quite a bit of criticism in the comment sections for the rest of the measurements)

 

Sean Olive

Senior Member
Audio Luminary
Technical Expert
Joined
Jul 31, 2019
Messages
334
Likes
3,065
I would add to the claim that a speaker that tests well in mono will test well in stereo, by saying that a speaker that listens well in mono will listen well in stereo.

I do a lot of testing and listening in mono, stereo, and LCR using three identical speakers.
In addition, I test and listen to mono outdoors very frequently, and stereo occasionally.
Here's a snip of the presets currently in use for such:
[attachment: screenshot of the test presets]


There is no question in my mind that critical speaker evaluation is best done in mono, and best done outdoors.


I don't see how stereo can ever be fairly evaluated, due to the vagaries of rooms, recordings, individual preferences for ambiance vs imaging, and the radiation patterns of the major speaker types (point source, dipole, planar, open baffle, omni, line, etc.).
My guess is that this evaluation impossibility will always exist for stereo, and multi-channel too, until a particular standard listening-room design (size, acoustics, et al.) is somehow adopted by the marketplace.

Hats off to the work you guys accomplished evaluating mono.
The principles you came up with I trust fully :)
As for the preference algorithms/scores that have been pulled from them, I can't say the same.
Of course, testing outdoors doesn't test the off-axis response because there are no reflections. It solves loudspeaker positional biases, but some people will argue that the results cannot be extrapolated to results in rooms.
 

Sean Olive

Senior Member
Audio Luminary
Technical Expert
Joined
Jul 31, 2019
Messages
334
Likes
3,065
Do you have recommendations for distortion beyond what @Amir and @hardisj provide? They look at distortion by frequency at 86 and 96 dB.
The problem is that it is not perceptually based and doesn't take into account masking.

While it's indicative of a problem with the speaker, it's not very good at predicting audibility or the effect on sound quality. A speaker with higher THD can sound better than one with lower THD, as has been demonstrated by researchers like Alex Voishvillo.
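
To see why, note that plain THD sums harmonic energy with equal weight, so a low-order harmonic that the fundamental largely masks counts exactly the same as a far more audible high-order one. A minimal sketch with made-up amplitudes:

Code:
import math

def thd_percent(fundamental, harmonics):
    # Classic THD: RMS sum of harmonic amplitudes over the fundamental,
    # with no perceptual weighting or masking model of any kind.
    return 100.0 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Two hypothetical spectra with identical THD: one dominated by a low-order
# (often masked, relatively benign) harmonic, one by a high-order harmonic
# that would typically be far more audible.
print(thd_percent(1.0, [0.010, 0.001, 0.0005]))   # ~1.0 %
print(thd_percent(1.0, [0.0005, 0.001, 0.010]))   # ~1.0 %, same number

The single figure cannot distinguish the two cases, which is the gap that perceptually informed distortion metrics try to close by weighting each component by its audibility.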
 