
Preference rating: What score do the speakers you use have? What can you tolerate? Where does your own preference start to deviate from the score? How far down can you still listen happily?

Simple numbers are for simple-minded people

Ouch!

To be kinder, the posters I see putting a lot of weight on preference scores often seem to be new to the hobby and may not have had a lot of exposure to different speakers in real life, especially since the decline of brick and mortar audio salons and audio shows.
 
It makes me very sad to think that newbies must rely on derivative indices without the motivating thrills (or disappointments) of live experience. I wonder if I would have entered into a lifetime of music/audio pleasures if I was starting out today.
 
First, would someone be so kind as to explain what the preference rating is? I have seen it used here, but never understood where it comes from.

My main speakers are Dynaudio Excite 14s, which have a preference rating of 4.7 without subs and measured mediocre in Amir's testing. I also have a pair of JBL A130s, which Amir raved about and which have a preference rating of 5.1 without subs. I like both, but I think I prefer the Dynaudios, and I am quite happy with them.

Which brings up another question: these two speakers seem close in preference rating, but performed quite differently in Amir's testing. How do these two measures reconcile?

I kind of view the JBL as the "reference" based on measurements and the Dynaudio as my personal preference. But I don't really know how to use this information if I were looking for another set of speakers.
 
We need a sticky explaining preference score, and the difference between measurements and a recommendation.
 
I kind of view the JBL as the "reference" based on measurements and the Dynaudio as my personal preference. But I don't really know how to use this information if I were looking for another set of speakers.

Buy what you like. Learn how this translates into design attributes that carry over across different brands.

I have both JBL 308P and Dynaudio LYD 5 monitors.

The LYD 5 remain at my DAW, paired with the matching 18S sub. I much preferred the LYD 5 to the 308P for near field mixing work.

Despite having a nearly equivalent preference score to the LYD 5, the JBL 308P are relegated to my garage gym. They're definitely the better garage-gym speaker versus the Dynaudios.

Also:

Realize that Amir's subjective tastes may not match yours. I've read enough of his reviews of speakers and headphones that I own (and tried his EQ curves) to know we have different tastes. That's fine -- his ears are his, mine are mine.
 
First, would someone be so kind as to explain what the preference rating is? I have seen it used here, but never understood where it comes from.

Which brings up another question: these two speakers seem close in preference rating, but performed quite differently in Amir's testing. How do these two measures reconcile?
The development of the method is described in these two papers; the method was patented by Harman.

To answer your second question, here is the graph showing the correlation between the predicted preference scores and the measured ratings for the data points used in the study.

[Image: olive_preference_predicted_vs_measured.png -- predicted vs. measured preference ratings]
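For concreteness, here is a rough Python sketch of how the published model computes a score from spinorama curves. It follows my reading of the Olive papers; the exact banding details and the separate "with subwoofer" variant differ, so treat it as illustrative rather than as Harman's actual code.

```python
# Sketch of the Olive preference model: score = 12.69 - 2.49*NBD_ON
# - 2.99*NBD_PIR - 4.31*LFX + 2.32*SM_PIR. Banding details approximate.
import numpy as np

def nbd(freqs, spl, f_lo=100.0, f_hi=12000.0):
    """Narrow Band Deviation: mean absolute deviation from each
    1/2-octave band's average level, averaged across bands."""
    exps = np.arange(0.0, np.log2(f_hi / f_lo), 0.5)
    edges = np.append(f_lo * 2.0 ** exps, f_hi)
    devs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spl[(freqs >= lo) & (freqs < hi)]
        if band.size:
            devs.append(np.mean(np.abs(band - band.mean())))
    return float(np.mean(devs))

def lfx(freqs, listening_window, sound_power):
    """Low Frequency Extension: log10 of the frequency where the sound
    power falls 6 dB below the mean listening-window level (300 Hz-10 kHz)."""
    ref = listening_window[(freqs >= 300) & (freqs <= 10000)].mean()
    low = (freqs < 300) & (sound_power <= ref - 6.0)
    return float(np.log10(freqs[low].max() if low.any() else freqs.min()))

def sm(freqs, pir, f_lo=100.0, f_hi=16000.0):
    """Smoothness: r^2 of a straight-line fit to the predicted in-room
    response over log-frequency."""
    m = (freqs >= f_lo) & (freqs <= f_hi)
    r = np.corrcoef(np.log10(freqs[m]), pir[m])[0, 1]
    return float(r * r)

def olive_score(freqs, on_axis, listening_window, pir, sound_power):
    return (12.69 - 2.49 * nbd(freqs, on_axis) - 2.99 * nbd(freqs, pir)
            - 4.31 * lfx(freqs, listening_window, sound_power)
            + 2.32 * sm(freqs, pir))
```

All four curves are assumed to be on the same logarithmic frequency grid, as in a standard CTA-2034 spin.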
 
To answer your second question, here is the graph showing the correlation between the predicted preference scores and the measured ratings for the data points used in the study.

I think the graph is only helpful for designs that deviate grossly from the mean score.

I don't think the preference score is very useful at all for speakers that score similarly.

Example using two speakers I own:

Dynaudio LYD 5: 5.67
JBL 308P: 5.64

One might think, "Oh, the score difference is only .03, they must sound similar."

Yet, that is not remotely the case -- as one would expect, given the common-sense observation that one uses a 5" driver and the other an 8" driver.
 
One might think, "Oh, the score difference is only .03, they must sound similar."

Yet, that is not remotely the case -- as one would expect, given the common-sense observation that one uses a 5" driver and the other an 8" driver.
Agreed. Take, for example, all the speakers Olive tested with predicted (i.e. calculated using the Olive formula) preference score of ~5. Their actual preference ratings "as measured" (in blind listening tests) can range from ~2 to ~7.

 
Well, the question is: if I don't use the score and I look at the spinorama and other measurements myself, can I personally make a better decision? Bearing in mind that I can't do a blind test between all these speakers, and that the measurements themselves aren't perfect predictors of blind-tested preference anyway.
 
Well, the question is: if I don't use the score and I look at the spinorama and other measurements myself, can I personally make a better decision? Bearing in mind that I can't do a blind test between all these speakers, and that the measurements themselves aren't perfect predictors of blind-tested preference anyway.

I can.

Because I have reference points by looking at the spins for speakers I currently own and have owned in the past.

So I know how the things I did or didn't like about those speakers map to some of the measurements.

This is also one of the reasons I pretty much only buy speakers, these days, from manufacturers who have the facilities to do spin-like measurements, which rules out a lot of boutique brands.

Now, if one has only listened to or owned a very small sample size of speakers in life, or owned things that weren't measured, that gets a lot harder.
 
Mine (or the general type) rank lowest in this test:

Everybody hated them.


I've had them since 1998 and feel no need to part with them.

Left and right, with and without flat EQ, 1/3-octave smoothing, in room, at 10 feet (listening position):

[Attachments: in-room frequency response plots, left and right, with and without EQ]
Martin Logan Sequels. Used to own a pair. Replaced them with Aerial Acoustics 10Ts. Current speakers are LS50 Metas, EQ'd, with an SB2000 sub and Dirac Live. Score is 5.7 with no EQ, sub, or Dirac. Goes way higher with all three.

The sub and Dirac make a huge difference. With the sub, dynamic range expands considerably and it really punches hard down low. Dirac takes away a 15 dB peak in the 60-130 Hz range, so it makes a huge difference as well. Frankly, the Metas sound pretty good just as is, but the sub + Dirac take it to another level.
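As an illustration of the kind of correction involved (not Dirac's actual filters -- the center frequency and Q below are guesses), a single peaking biquad from the RBJ audio EQ cookbook is enough to notch down a room-mode peak like that one:

```python
# Peaking EQ biquad per the RBJ audio EQ cookbook; hypothetical
# parameters chosen to cut a ~15 dB peak around 60-130 Hz.
import numpy as np

def peaking_eq(fs, f0, gain_db, q):
    """Return normalized (b, a) coefficients for a peaking filter."""
    A = 10.0 ** (gain_db / 40.0)          # amplitude
    w0 = 2.0 * np.pi * f0 / fs            # center frequency, rad/sample
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

# A broad -15 dB cut centered on ~90 Hz (guessed values, not Dirac's):
b, a = peaking_eq(fs=48000, f0=90.0, gain_db=-15.0, q=1.0)
# Apply with scipy.signal.lfilter(b, a, samples).
```

Dirac of course does far more than one minimum-phase cut, but a dominant modal peak is exactly the kind of thing even simple parametric EQ can tame.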
 
Martin Logan Sequels. Used to own a pair. Replaced them with Aerial Acoustics 10Ts. Current speakers are LS50 Metas, EQ'd, with an SB2000 sub and Dirac Live. Score is 5.7 with no EQ, sub, or Dirac. Goes way higher with all three.

I'm also a refugee from the Sequel owner club, and also had the smaller electromotion whatevers.

I'm also now running 2 smaller stand mount speakers (Dynaudio Heritage Special) with 2 subwoofers (ML Dynamo 1100X).
 
Yep. It doesn't know about bass Q tuning.

It also doesn't know about near-field issues like desk reflections.
Amir asked me a while back about making a score for near-field/desk setup. He mentioned the types of questions we would have to put out for people to answer, like seating distance, speaker elevation, speaker tilt, etc.

It's just too much of a hassle. Look at the normalized vertical off-axis plots I use (you can of course use the vertical polar plot as well) for maybe ±20°: the closer a curve is to the 0° response, the better, and narrow dips are better than wide ones.
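As a rough way to quantify that heuristic, here is a hypothetical helper that reduces the near-axis vertical curves to one deviation number per angle. The band limits and the dict-of-curves input format are my own choices, not anything standardized:

```python
# Hypothetical metric: how closely do the vertical off-axis curves
# within +/-20 degrees hug the 0-degree (on-axis) reference?
import numpy as np

def vertical_deviation(freqs, curves, max_angle=20, f_lo=500.0, f_hi=8000.0):
    """curves: dict mapping angle in degrees -> SPL array on the same
    frequency grid as freqs; must include 0 for the reference curve.
    Returns {angle: mean absolute deviation in dB}; smaller is better."""
    m = (freqs >= f_lo) & (freqs <= f_hi)
    ref = curves[0][m]
    return {a: float(np.mean(np.abs(spl[m] - ref)))
            for a, spl in curves.items() if a != 0 and abs(a) <= max_angle}
```

It won't distinguish a narrow dip from a wide one on its own, which is why eyeballing the plots is still the main tool.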

___________
I wish the data from the 70 speakers used to derive the model were available. I've made my own formula based on the original that doesn't take dispersion into account (the current formula favors narrow dispersion because of how the linear regression works), but of course I don't know whether it's better or not.

I’ve said it a lot, but I really want Amir/ASR to do something similar to what Harman did for human trials to make a better formula.
 
Look at the normalized vertical off-axis plots I use (you can of course use the vertical polar plot as well) for maybe ±20°: the closer a curve is to the 0° response, the better, and narrow dips are better than wide ones.

Been doing that for years.

Which comes back to my stance -- I get more value out of looking at spins and polars than I do out of the preference score.
 
We need a sticky explaining preference score, and the difference between measurements and a recommendation.

Explaining what it is vs. how to use it in real life, and its limitations.

Honestly, I only find it useful to weed out things that are pathologically bad.

The fact that two speakers I own, one with a 5" driver and one with an 8" driver, score within .03 points of each other shows how it becomes a bit useless for differentiating between competent speakers.
 
Well, I'm not in position to do any of the things described in the posts above, except look at vertical directivity plots.

For looking at FR, maybe something that'd be helpful is focusing on the 'more important' parts of the FR that are off and can't easily be fixed? Since the preference rating penalizes deviations equally across the FR, correct?
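One hedged way to express that idea in a metric: weight deviations by band importance instead of treating the whole FR equally. The bands and weights below are invented for illustration; nothing like them appears in the published formula:

```python
# Hypothetical frequency-weighted deviation: emphasize bands that are
# hard to fix with EQ (e.g. the midrange) over the extremes.
import numpy as np

def weighted_deviation(freqs, spl, bands):
    """bands: list of (f_lo, f_hi, weight) tuples.
    Returns the weighted mean absolute deviation in dB."""
    total, wsum = 0.0, 0.0
    for lo, hi, w in bands:
        m = (freqs >= lo) & (freqs < hi)
        if m.any():
            seg = spl[m]
            total += w * np.mean(np.abs(seg - seg.mean()))
            wsum += w
    return total / wsum

# e.g. weight the midrange twice as heavily as the extremes:
bands = [(100, 400, 1.0), (400, 4000, 2.0), (4000, 12000, 1.0)]
```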
 
This is also one of the reasons I pretty much only buy speakers, these days, from manufacturers who have the facilities to do spin-like measurements, which rules out a lot of boutique brands.
Agreed, but it pisses me off that so many of them still will not release the results of their measurements. What if they bought the equipment but never unpacked it? :eek:
 
Agreed, but it pisses me off that so many of them still will not release the results of their measurements.
The usual reasoning is that the measurements will often get misinterpreted anyway, which I would think is true most of the time... though times are changing, and even laypersons may have gained enough working knowledge from sites like this and others to interpret things, preferably for speakers they own that have been measured here.
 
Agreed, but it pisses me off that so many of them still will not release the results of their measurements. What if they bought the equipment but never unpacked it? :eek:

Well, Schiit (after enough market pressure) started publishing their AP test results.

So maybe there is hope if there is enough editorial praise given to those who do publish results.

The pro audio market seems to be better -- even companies that have both pro and consumer lines (e.g. Dynaudio, Focal, Harman) will often deign to publish results for their pro monitors, even if they're more reluctant on the consumer lines.

I'd nudge those guys first.
 
Agreed, but it pisses me off that so many of them still will not release the results of their measurements. What if they bought the equipment but never unpacked it? :eek:

On the other hand, the lack of measurements leaves a lot of work for you and Stereophile to do. If all manufacturers published accurate measurements, there would be much less demand for your (and ASR's) services.
 