
Master Preference Ratings for Loudspeakers

OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
I'd assume you'd get far less predictive accuracy than the 0.86 correlation factor with the preference rating metric once you have several speakers above a certain number, probably around 8 or so.
Once above a certain point other factors will come into play and sway expected results all over the place, I assume.
The correlation was measured in double-blind listening sessions. Bias (reviews, price, looks, brand, etc.) will no doubt influence sighted preference ratings, and we can't plan for bias: one person may like a speaker better because of its looks while another dislikes the same speaker for them; one person may like a speaker more because it's expensive, while another may scrutinize it more closely because of its high price.
 

sweetchaos

Major Contributor
The Curator
Joined
Nov 29, 2019
Messages
3,927
Likes
12,158
Location
BC, Canada
Any way to colour code the scatter plot with 2 colours?
Ex:
Red = active
Blue = passive

Just a quick thought...short of separating measurements into 2 scatter plots (active, passive), seeing them on one plot works for now.
If not, no biggie.
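For anyone wanting to try this locally, here is a minimal sketch of color-coding a price-vs-score scatter by speaker type with matplotlib. The speaker names, prices, and scores below are invented for illustration; the thread's actual charts live in a spreadsheet.

```python
# Illustrative sketch only: the thread's real charts are in a spreadsheet.
# Color-codes a price-vs-score scatter by speaker type; data is invented.
import matplotlib
matplotlib.use("Agg")  # off-screen rendering, no display needed
import matplotlib.pyplot as plt

speakers = [
    # (name, price_usd, preference_score, type)
    ("Speaker A",  500, 5.0, "active"),
    ("Speaker B",  300, 4.2, "passive"),
    ("Speaker C", 1200, 6.1, "active"),
    ("Speaker D",  150, 3.1, "passive"),
]

fig, ax = plt.subplots()
for kind, color in (("active", "red"), ("passive", "blue")):
    pts = [(p, s) for _, p, s, t in speakers if t == kind]
    ax.scatter([p for p, _ in pts], [s for _, s in pts], c=color, label=kind)
ax.set_xlabel("Price (USD)")
ax.set_ylabel("Preference score")
ax.legend()
fig.savefig("price_vs_score.png")
```

One `scatter` call per group gives each type its own color and a legend entry automatically.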
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,148
Likes
8,721
Location
NYC
@MZKM Curious, have you at any point tried testing how the preference rating might change using an "ideal" axis?

I think you mentioned something along these lines at one point, but in light of the LS50 review, I'm curious about speakers that are deliberately designed to be smoother off-axis, how much their preference might vary. I assume it'd be a fraction of a point for all but the biggest variations, but it's something perhaps pertinent to coaxials with the usual on-axis oddities.
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
Any way to colour code the scatter plot with 2 colours?
Ex:
Red = active
Blue = passive

Just a quick thought...short of separating measurements into 2 scatter plots (active, passive), seeing them on one plot works for now.
If not, no biggie.
Just got it:
[Attachment: Screen Shot 2020-01-28 at 8.45.10 PM.png — scatter plot color-coded active vs. passive]
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
@MZKM Curious, have you at any point tried testing how the preference rating might change using an "ideal" axis?

I think you mentioned something along these lines at one point, but in light of the LS50 review, I'm curious about speakers that are deliberately designed to be smoother off-axis, how much their preference might vary. I assume it'd be a fraction of a point for all but the biggest variations, but it's something perhaps pertinent to coaxials with the usual on-axis oddities.
The formula already heavily prioritizes the predicted in-room response, to which the on-axis response makes only a small contribution.
I have an alternate scoring that uses the Listening Window response instead of the on-axis, for speakers designed for no toe-in or for center channels meant to cover many listeners. This is just my tinkering with the formula, so it is certainly not as accurate as the actual score is for speakers meant for on-axis listening.

For the Revel center channel:
[Attachment: Screen Shot 2020-01-28 at 8.54.53 PM.png — alternate Listening-Window score for the Revel center channel]
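For reference, the underlying model here is Olive's published linear formula, and the Listening-Window variant just changes which curve feeds the narrow-band-deviation term. A sketch with made-up input values (the real scores are computed from full spinorama data, not hand-entered numbers):

```python
import math

# Sketch of Sean Olive's preference model (the basis of these scores):
#   PR = 12.69 - 2.49*NBD_ON - 2.99*NBD_PIR - 4.31*log10(LFX) + 2.32*SM_PIR
# NBD_* = narrow-band deviations, LFX = low-frequency extension (Hz),
# SM_PIR = smoothness of the predicted in-room response.
# The Listening-Window variant simply feeds NBD_LW into the first slot.
def preference_rating(nbd, nbd_pir, lfx_hz, sm_pir):
    return (12.69 - 2.49 * nbd - 2.99 * nbd_pir
            - 4.31 * math.log10(lfx_hz) + 2.32 * sm_pir)

# Made-up measurements: standard (on-axis) score vs. the LW tweak.
score_on = preference_rating(0.35, 0.30, 45.0, 0.80)  # uses NBD_ON
score_lw = preference_rating(0.30, 0.30, 45.0, 0.80)  # uses NBD_LW
```

With these invented inputs, a slightly smoother Listening Window (lower NBD) nudges the score up by a fraction of a point, which matches the intuition in the posts above.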
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,148
Likes
8,721
Location
NYC
The formula already heavily prioritizes the predicted in-room response, to which the on-axis response makes only a small contribution.
I have an alternate scoring that uses the Listening Window response instead of the on-axis, for speakers designed for no toe-in or for center channels meant to cover many listeners. This is just my tinkering with the formula, so it is certainly not as accurate as the actual score is for speakers meant for on-axis listening.

For the Revel center channel:
View attachment 47812

Seems about what I expected, thanks! Good to know for any speakers with massive on-axis deviations. Of course, the LW will almost always smooth a curve by virtue of averaging anyway, so that's to be expected.
 

SDX-LV

Active Member
Joined
Jan 11, 2020
Messages
135
Likes
144
Location
Sweden
Great work @MZKM! This list is absolutely fantastic, and I look forward to getting even more data in this format over time :)

Even if the formula is not perfect, and even if frequency response data is not everything we need to know, this list will be an awesome first filter to discard stinkers from potential purchases :) And if the formula does prove to be great, then there is even more value to this :)
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
Thanks a lot for organizing this data @MZKM. The performance vs. price graphs in particular are amazing and incredibly insightful. It's going to be so much easier to choose speakers to buy!

I have two suggestions. The first one is: how about coming up with a brand new unit for the preference score itself? As far as naming goes I vote for "Olive", to honour the researcher who came up with the formula. So, for example, the KH 80 is 5.3 Olives.

Second suggestion: the idea of providing a "with sub" score ignoring LFX is great. How about taking it one step further and also providing a score that not only assumes you're using a sub, but also assumes you have room correction? In other words, a formula that assumes everything below 300 Hz is perfect because it will be EQ'd. This would effectively forgive all speakers for any deviations in bass, under the rationale that any imperfections there can easily be equalized away.
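A minimal sketch of what "everything below 300 Hz is perfect" could mean in practice: replace the sub-300 Hz bins of a measured response with the mean level above 300 Hz before computing any deviation metrics. All frequencies and dB values below are invented:

```python
# Sketch of the "sub + room EQ" score idea: before computing deviation
# metrics, pretend everything below 300 Hz sits at the target level,
# since a sub plus room correction could equalize bass flat.
# Frequencies and dB values are invented.
freqs = [50, 100, 200, 400, 800, 1600]        # Hz
resp  = [78.0, 84.0, 81.0, 80.2, 79.8, 80.1]  # dB SPL, bumpy bass

above = [d for f, d in zip(freqs, resp) if f >= 300]
target = sum(above) / len(above)              # mean level above 300 Hz

# Replace sub-300 Hz bins with the target: bass deviations vanish,
# so NBD/SM terms computed from eq_resp no longer penalize them.
eq_resp = [target if f < 300 else d for f, d in zip(freqs, resp)]
```

Scoring `eq_resp` instead of `resp` would then forgive the ±3 dB bass wobble entirely while leaving the midrange and treble untouched.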
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
@MZKM Curious, have you at any point tried testing how the preference rating might change using an "ideal" axis?
If you mean checking whether, say, 10° off-axis would be more neutral than 0°: I did try that. Tweeters are so directional that, for the speaker I tested (which had excess on-axis energy in the treble), the on-axis still earned a better score even though 10° was a bit smoother, because of the off-axis treble roll-off. Reducing the upper frequency limit to, say, 10 kHz might have given the edge to 10°, but then that is altering the formula.

So, unless we get even more precise (5° measurements), it's best to just look at the Horizontal Directivity graph as well as the Listening Window line in the Spinorama to determine what toe-in should be used. A speaker with excess on-axis treble that decays quickly off-axis, such that the Listening Window is ideal, is different from a speaker with wide dispersion but excess energy on-axis.
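The axis-hunting described here can be sketched as scoring each measured axis for flatness; as noted above, off-axis treble roll-off tends to hand the win back to the on-axis curve even when the off-axis one is smoother through a treble peak. A toy example with invented dB values:

```python
import statistics

# Sketch of picking the "most neutral" axis: a simple flatness metric
# (standard deviation from the mean level) for each measured axis.
# The on-axis curve has an excess-treble bump; the 10-degree curve is
# smoother through the bump but rolls off at the top. Values invented.
axes = {
    "on-axis": [80.0, 80.2, 79.9, 83.0, 80.1],  # excess treble energy
    "10 deg":  [80.0, 80.1, 79.9, 81.0, 77.0],  # smoother peak, rolled-off top
}
flatness = {name: statistics.pstdev(r) for name, r in axes.items()}
best_axis = min(flatness, key=flatness.get)
```

Even with the tamer treble peak, the 2–3 dB roll-off in the last bin makes the 10° curve deviate more overall, so this metric still picks the on-axis response, mirroring MZKM's result.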
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,080

This is great, thanks. Would it be possible though to make each column of the 'List' table numerically/alphabetically sortable with a click? And add two more columns of the actual performance per dollar (score divided by price) for both the full score and score assuming an ideal subwoofer? Although the price-to-performance scatter plots are great for seeing general trends, it's not that easy to visually rank the speakers' value for money by looking at them.

Also, I think you should use the price for a pair for all speakers, even center speakers like the Revel C52, especially as they can be used as front stereo or rear/side surrounds (as Amir himself does). Otherwise the price to performance is unfairly skewed in comparison to pairs of speakers.
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
This is great, thanks. Would it be possible though to make each column of the 'List' table numerically/alphabetically sortable with a click? And add two more columns of the actual performance per dollar (score divided by price) for both the full score and score assuming an ideal subwoofer? Although the price-to-performance scatter plots are great for seeing general trends, it's not that easy to visually rank the speakers' value for money by looking at them.

Also, I think you should use the price for a pair for all speakers, even center speakers like the Revel C52, especially as they can be used as front stereo or rear/side surrounds (as Amir himself does). Otherwise the price to performance is unfairly skewed in comparison to pairs of speakers.
I don't believe I can add sorting.
I tried score/$ and it just looks way too skewed, so it's deceptive.
I'll change centers to pair pricing.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,080
I don't believe I can add sorting.
I tried score/$ and it just looks way too skewed, so it's deceptive.
I'll change centers to pair pricing.

Thanks. Why do you think that score/price chart is deceptive though? I think it just shows the facts - that there is a large variation in value for money of those speakers, due to them having a large range in price (a factor of ~40 between cheapest and most expensive), yet a relatively small maximum variation in score (a factor of ~3). I do think either a bar chart like those for the scores or a simple table with values would work better than a line chart though - you could call it Performance per $ (w/ sub as well) to differentiate from the Price : Performance scatter charts. Oh and just one little thing - looks like the scale at the top of the score charts has gotten messed up somehow.
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
Thanks. Why do you think that score/price chart is deceptive though? I think it just shows the facts - that there is a large variation in value for money of those speakers, due to them having a large range in price (a factor of ~40 between cheapest and most expensive), yet a relatively small maximum variation in score (a factor of ~3). I do think either a bar chart like those for the scores or a simple table with values would work better than a line chart though - you could call it Performance per $ (w/ sub as well) to differentiate from the Price : Performance scatter charts. Oh and just one little thing - looks like the scale at the top of the score charts has gotten messed up somehow.
Because how much better is a 7 than a 6?

Is 6 points for $200 better than 7 points for $300? Is a score of 4 at $100 really the same value as a score of 2 at $50?
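A tiny numerical illustration of this objection (all prices and scores invented): dividing score by price treats the preference scale as a ratio scale with a meaningful zero, so very different speakers can tie or even invert.

```python
# Invented prices and scores, illustrating why score-per-dollar misleads:
# it treats the preference scale as a ratio scale with a meaningful zero.
speakers = {
    "two_point_at_50":    (50, 2.0),   # (price_usd, score)
    "four_point_at_100":  (100, 4.0),
    "six_point_at_200":   (200, 6.0),
    "seven_point_at_300": (300, 7.0),
}
per_dollar = {name: score / price for name, (price, score) in speakers.items()}
# The 2-point and 4-point speakers tie at 0.04 score/$, and the clearly
# better 7-point speaker loses to the 6-point one, hiding real quality gaps.
```

Since a one-point score gap near the top of the scale is not worth the same as one near the bottom, a simple ratio column would rank speakers in ways the underlying research doesn't support.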

As for the scale formatting error, are you on mobile? It looks correct for me when using Chrome for Mac, but Chrome for iOS screws it up.
 

Billy Budapest

Major Contributor
Forum Donor
Joined
Oct 11, 2019
Messages
1,863
Likes
2,797
What exactly is “master preference rating”? I am just not getting it.
 

Billy Budapest

Major Contributor
Forum Donor
Joined
Oct 11, 2019
Messages
1,863
Likes
2,797
“Master” in this context just means the main thread (master copy) for these scores, instead of going post by post to find them.
Thanks. What does “preference rating” measure and how is it derived?
 