
Preference rating: What is the score of the speaker you use? What score can you tolerate? Where do you start to deviate from the rating? Up to what score can you still hear differences?

On the other hand, the lack of measurements leaves a lot of work for you and Stereophile to do. If all manufacturers published accurate measurements, there would be much less demand for your (and ASR's) services.

There would always be work to verify the results.

Schiit publishing AP test results hasn't stopped ASR from doing similar tests.
 

That's why I qualified it with "accurate measurements."

I do find it quite odd that some manufacturers, whose speakers measure quite well, don't publish their results. Dynaudio is the first that comes to mind; they definitely have good testing facilities, and their products test well (some exceptionally so). Yet, it can be a struggle to find any measurements for their products.
 

I own two sets of Dynaudio speakers right now (LYD 5, Heritage Special), and I've owned 4 pairs in my life.

The Pro lineup actually has published plots available in the user docs for some of the speakers.

And, yes, if you've seen the Jupiter room you know they have the facilities to do measurements.

I have to assume they don't want the consumer products to compete on measurements.
 
Well, Schiit (after enough market pressure) started publishing their AP test results.

So maybe there is hope if there is enough editorial praise given to those who do publish results.
I will make a note of that. ;)
The pro audio market seems to be better -- even companies that have both pro and consumer lines (e.g. Dynaudio, Focal, Harman) will often deign to publish results for their pro monitors, even if they're more reluctant on the consumer lines.
I have tried to note that in print when relevant.
On the other hand, the lack of measurements leaves a lot of work for you and Stereophile to do. If all manufacturers published accurate measurements, there would be much less demand for your (and ASR's) services.
It's for the public good. Besides, it always needs corroboration.
Anyway, my tenure will long have ended before that comes to pass. :cool:
 
Present these to an audio salesperson, and ask her/him to explain which is better, by how much, why, and what the correlation is between asking prices and these graphs. Why would they want to expose themselves this way?

Data from: https://pierreaubert.github.io/spinorama/
[Attached spinorama charts: f208, F228Be, F328Be, studio2, salon2]
 

I'm probably an outlier. But, in my case, I did not purchase a pair of Contour 60s because I couldn't find published measurements (much later I was pointed towards some). I was absolutely blown away by the speakers, which were properly set up in a treated room. As a matter of principle, I passed on them.

The F208s were later purchased at about 30% of the cost. But the big thing for me was that the published measurements closely matched the independent measurements. Trust in a manufacturer goes a long way for me. And that's a hard thing to establish when I feel like they are hiding something from me.
 
I have not seen my Polk R100 calculated yet, but the next model up, the Polk R200, is 8.3 with a sub. Given that the measurements are similar except for bass extension, I assume my R100 with a sub will be about the same.

For $550, pretty much a steal. [sitting at MLP]

With 4 decades of experience, PS is certainly not perfect. However, if your personal preferences are aligned with the formula, it's also not usually bad.
 
It makes me very sad to think that newbies must rely on derivative indices without the motivating thrills (or disappointments) of live experience. I wonder if I would have entered into a lifetime of music/audio pleasures if I was starting out today.
Guess that is true if hi-fi equipment is the main hobby target; then a decades-long search may be something fun and rewarding, especially looking back at it through the rose-tinted glasses of the good old days. On the other hand, personally, I and a few other music (reproduction) fans I know wish we had had knowledge sources like Toole's book and forums like this when we started back then; it would have saved us many wasted expenses and decades of tortuous odysseys.
 
Been doing that for years.

Which comes back to my stance -- I get more value out of looking at spins and polars than I do out of the preference score.
Of course, the preference score is just an attempt at a minimal, single-number representation of three-dimensional data, which is naturally limited and has some problems. Still, for people just beginning with the hobby, it's better than nothing; I wish I had had it several decades ago when starting out myself.
 

Polk R100 spin available here: https://www.erinsaudiocorner.com/loudspeakers/polk_r100/
 
Referring to MZKM's database. I like the scatterplot:


I know that my own personal preference generally tracks the rating until around a score of 3 to 4. After that it goes everywhere, i.e. I can find that a speaker with a 3+ rating sounds better than a speaker at 4.5 (JBL 305P). Above this rating of 3 to 4 I also find that speakers, while still detectably sounding different, are generally all acceptable, and nearly identical after EQ. I doubt I can find a reason for intentionally going above 5 unless it is for bass; then, yeah, definitely go as low as possible.

So here is a question for all:

1) What is the rating of the speaker you are using?
- Unfortunately the speakers I use (yea, plural) have not been tested here but Edifier S2000 Pro seems like a close match and that is a 4.7. I also bass boost the ____ out of my speakers to reach 40Hz tho so that should give me a 5 at least simply due to that? Distortion notwithstanding.

2) Looking at that chart, what is the minimum rating you can accept for long term use without feeling discontent?
- So I look at the graph, take note of the speakers that I know I will feel "yuck" about, then move up the score axis and see where the "yuck" stops appearing. For me that is around 4, with 5 making it really safe.
- If I don't have a choice i.e. space or money constraint, I will still go for at least a 3 probably.

3) Where do you start deviating from the rating?
- Like, you like a speaker with score=4 more than a speaker with score=6. So you start deviating at 4.
- No need to remove non-ideal effects such as placement / room / nearfield / speaker size etc., because it is interesting to see when these effects start becoming more important than anechoic performance. If a score=4 sounds better than a score=6 on your table, that is data. Or a datum; do people still use that word?
- I would say I start deviating at around 3+ or 4+, depending on how bad my favorite speakers measure.

4) With EQ, up to what score can you still differentiate speakers?
- I have to mention EQ because even at a score of 5, BS22 and ELAC 6.2 have clear differences in bass volume.
- I believe I can hear differences until around 5. As in, once the speakers are >5 after EQ, I can't tell which is which. I do these experiments when comparing two speakers directly.

5) What aspects do you find important and what do you not care about?
- For me, treble can fluctuate and even get shelved low; as long as the general trend is good I am not really bothered. On-axis is not really important since I listen off-axis. Bass extension, though, I'm very particular about.
I do not have much of an ability to answer many of these questions, because I look at the various data in the spins for specific information, but I will try...

1. The speakers (Revel F206) in my media room have not been measured, but if I average the F208 and the M106, I get 6.2. The speakers (KEF Q100) in my living room rate 5.0. The speakers (Philharmonic BMR) in my office have not been scored to my knowledge.

2. This is tough, since the preference score is not terribly important to me, but judging by speakers I have owned and loved, probably 4.5.

3. I have not previously thought of things in these terms, but I did love my old B&W 805s for >20 years with a score of something near 4.5 (see question 2).

4. If a speaker has good directivity, EQ can make a huge difference, so this is impossible for me to answer. I EQ all my speakers to some degree, either with DSP or at least crude passive controls.

5. Low distortion; even, wide directivity; smooth frequency response ~15 degrees off axis; clean, tight, extended bass.

Again, I do not rate the preference score very highly. I find it to be a crude tool that can help to weed out poor performers, but above a certain score, apparently 4.5-ish, individual characteristics and (gasp!) subjective impressions carry more weight.
 
I would never buy any speaker based on how it rates in the Harman preference scale before I listened, just as I would not do so from reading a review in a magazine or on hype.

I would and do buy based on how it performs over a comprehensive set of measurements. For some years now I have been making an effort to achieve good correlation between measurements and my expectations regarding objective performance and personal preference, and I now feel reasonably confident in shortlisting speakers worth listening to that way (using measurements).
 
- First thing I look at is distortion; my speakers need to be able to play loud.
- Passing this filter, I look at the dispersion characteristics.
- FR I almost completely ignore. Only crazy spikes make me suspicious, and odd behaviour at the crossover. Or let me say it like this: there simply should be nothing too crazy. For this reason I also completely ignore the preference rating.
 
Example using two speakers I own:

Dynaudio LYD 5: 5.67
JBL 308P: 5.64

One might think, "Oh, the score difference is only .03, they must sound similar."
Because that's not what the preference score is about. The preference score just takes into account different characteristics of a speaker and then "sums them up". The result may be similar, but the properties of the speakers that led to the result can be widely different, hence they sound very different as well.

It also just estimates how likely one speaker is to be preferred by most people compared to another. So in this case it probably wouldn't make much difference at all, according to the formula. Probably half of the listeners would prefer the Dynaudio speaker and the other half the JBL one. Their choice, for sure, would be conditioned by their personal taste in sound.

So the preference score is useful if you want to establish the lower border of quality for your speaker choice. But above the chosen value it is essentially up to the listener's taste and experience. If they understand the spinorama, it also can guide their choice, of course.
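To make the "sums them up" point concrete, here is a minimal sketch of the four-variable Olive (2004) model as it is commonly implemented by third-party calculators (e.g. MZKM's spreadsheet and the spinorama site). The metric values for the two hypothetical speakers are invented for illustration, not taken from real measurements:

```python
# Olive (2004) preference-rating model, four-variable form as commonly
# implemented by third-party tools. Inputs (all derived from CEA-2034 data):
#   nbd_on:  narrow-band deviation of the on-axis curve
#   nbd_pir: narrow-band deviation of the predicted in-room response
#   lfx:     log10 of the -6 dB low-frequency extension point (Hz)
#   sm_pir:  smoothness (regression r^2) of the predicted in-room response

def preference_score(nbd_on, nbd_pir, lfx, sm_pir):
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Two hypothetical speakers (values invented): A trades bass extension for
# a flatter response; B does the opposite.
a = preference_score(nbd_on=0.25, nbd_pir=0.20, lfx=1.85, sm_pir=0.85)
b = preference_score(nbd_on=0.45, nbd_pir=0.35, lfx=1.60, sm_pir=0.80)
print(round(a, 2), round(b, 2))  # 5.47 5.48
```

Two quite different component profiles land within a few hundredths of a point of each other, which is exactly why a similar total says little about similar sound.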
 
Because that's not what the preference score is about. The preference score just takes into account different characteristics of a speaker and then "sums them up". The result may be similar, but the properties of the speakers that led to the result can be widely different, hence they sound very different as well.

Right -- but the naive reader doesn't know that.

If all the score means is that two speakers with similar scores have a similar level of aggregate "quality", but sound very different, is that useful?

That means I need to look at other data and/or listen to get the full picture -- and why I pay little attention to it. It doesn't tell me anything about how the speaker sounds.

Probably half of the listeners would prefer the Dynaudio speaker and the other half the JBL one. Their choice, for sure, would be conditioned by their personal taste in sound.

So the preference score is useful if you want to establish the lower border of quality for your speaker choice. But above the chosen value it is essentially up to the listener's taste and experience. If they understand the spinorama, it also can guide their choice, of course.

At which point, other than weeding out designs that are bad, what's the use of the score for a consumer?

Like minute differences in SINAD, I think preference score gets more attention than it deserves on ASR.
 
The development of the method is described in these 2 papers, and patented by Harman.

To answer your second question, here is the graph showing the correlation between the predicted preference vs the data points used in the study.

[Attachment: graph of predicted vs. measured preference ratings]
Agreed. Take, for example, all the speakers Olive tested with predicted (i.e. calculated using the Olive formula) preference score of ~5. Their actual preference ratings "as measured" (in blind listening tests) can range from ~2 to ~7.


Thanks for these. I was not able to read the full AES papers due to not being a member, but I think I understand what Mr. Olive did.

Is @MZKM calculating the preference ratings for each reviewed speaker from Amir's published measurement data?


I think this has all been covered here, but my summary might be:
- There are only 4 variables in the formula cited by NTK (I'll note the patent application has 5, as it uses two variants of Smoothness, SMon and SMsp). I initially assumed the preference ratings would be using many more.
- With 4 variables (or however many are in MZKM's calculations), it seems that one could possibly use the components of the preference rating to identify characteristics matching one's known preferences, similar to how one could use the spin data
- With regard to Figure 5 in NTK's post cited above: without knowing how the Measured Preference Rating is arrived at, the Predicted Preference Rating could encompass wide deviations around the regression line

- So where does the Preference Rating fit in?
- I suppose if one were proficient at reading the spin data and knew how to apply it to their own preferences, then the Preference Rating is just a redundant, summarized version with loss of detail
- But if you were starting from scratch in identifying a pool of speakers to evaluate, the Preference Rating could narrow the choices
- And there is still the matter of the 4 or 5 variables that Mr. Olive has identified as being most important to people's preference for speakers. So maybe the value of the Preference Rating is in those variables, meaning one could narrow down the data from the spins into those characteristics to make their choice? (i.e.: he is saying that those are the most meaningful characteristics, and any others can be safely ignored.)
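Following that thought, a small sketch (assuming the four-variable form of the model from NTK's post; the metric values are invented for illustration) shows how the single rating can be decomposed into per-variable contributions, so one can see which characteristic actually drives a given score:

```python
# Per-term breakdown of the Olive (2004) preference model.
# Coefficients from the published four-variable formula; the metric
# values below are hypothetical, for illustration only.
COEFFS = {"const": 12.69, "NBD_ON": -2.49, "NBD_PIR": -2.99,
          "LFX": -4.31, "SM_PIR": 2.32}

def score_breakdown(metrics):
    """Return each variable's contribution to the score, plus the total."""
    parts = {name: COEFFS[name] * value for name, value in metrics.items()}
    parts["const"] = COEFFS["const"]
    return parts, sum(parts.values())

# Hypothetical speaker: decent smoothness, modest bass extension.
parts, total = score_breakdown(
    {"NBD_ON": 0.30, "NBD_PIR": 0.25, "LFX": 1.70, "SM_PIR": 0.80})
for name, contrib in sorted(parts.items(), key=lambda kv: kv[1]):
    print(f"{name:8s} {contrib:+6.2f}")
print(f"total    {total:+6.2f}")
```

In this made-up example the LFX (bass extension) term is by far the largest penalty, which matches the common observation that the score rewards deep bass heavily; reading the components this way is close in spirit to reading the spin directly.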

I think it is still worth learning how to read the spins and of course having access to the measurements.

I won't be offended by any comments... ;)
 
Guess that is true if hi-fi equipment is the main hobby target; then a decades-long search may be something fun and rewarding, especially looking back at it through the rose-tinted glasses of the good old days.
You missed my point. The subjective experience of listening to music as offered by high quality accurate reproduction is a truly motivating factor. Specific choices, imho, require knowledge and understanding of the science and engineering underpinned by objective measurements. However, without that motivation, why would you bother with the investment of time, cost and effort?
On the other hand, personally, I and a few other music (reproduction) fans I know wish we had had knowledge sources like Toole's book and forums like this when we started back then; it would have saved us many wasted expenses and decades of tortuous odysseys.
Duh. I have been a fan (and friend) of Floyd Toole for many years and an avid follower of the evolution of electronic and acoustical measurement technology throughout my life. Of course, it would have been better to know more sooner.
 