
Master Preference Ratings for Loudspeakers

OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
@MZKM does the Estimated In-room Response tilt/trend/straightness affect the score?

If so, should we expect speakers to be penalised for their narrow directivity if there's a significant change in directivity from the bass upwards?
For example:

[Image: SFSfig5.jpg — Sonus Faber Stradivari measurement from Stereophile]

source: https://www.stereophile.com/content/sonus-faber-stradivari-homage-loudspeaker-measurements
Yes, it does take that into account. Research has shown there is a preferred amount of directivity, so it is not unwarranted. However, the formula as-is favors a flat PIR, which is not ideal. I made an offset to account for this, but we don't know if it should be used. I contacted Sean Olive on Twitter a while ago and he said he'd look into it and get back to me, but that was before all the coronavirus lockdowns happened.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
the formula as-is favors a flat PIR, which is not ideal

Does it, though? I wouldn't be so sure. I spent the last few hours thinking about this, rewriting this post several times, and I came up with a somewhat counter-intuitive conclusion. Here goes…

Recall what @daverosenthal wrote some time ago:

The SM_* "smoothness" model feature appears to use the Pearson 'r' regression correlation coefficient in a way that's, well, charitably, counter-intuituve. To wit: A speaker that had a highly-flat-and-smooth (i.e. desirable) frequency response would have a very low "smoothness" by this measure whereas a speaker that had a bumpy response with a distinct frequency-dependent tilt would score highly. The stats intuition here is that r is high when the variation in dB is well explained by the variation in frequency. In layman's terms, given a fixed amount of natural "wobble" in frequency response, the "smoothness" number will be much higher if there is a non-flat slope to the frequency response. Weird.

(In the patent, Olive notices the effect of this in the regression: "Variables that have small correlations with preference are smoothness (SM) and slope (SL) when applied to the ON and LW curves", but doesn't seem to realize the cause: on-axis (ON) and listening window (LW) frequency responses tend to be flat, not downward sloping, so the 'r' coefficient disappears.)

The final model is fit from many features with mutual correlation, so the use of this weird SM feature doesn't invalidate the model; it just means that we shouldn't think of it as measuring smoothness(!). My guess is that the more fundamental effect of SM_PIR in the final score is steering preference to speakers with gradually downward-tilting response. Finally, the NBD_* feature captures a similar concept but appears to be better engineered, which is perhaps why the "smoothness" factor plays only a small role in the final model.

That's already an amazing insight in and of itself, but I'm not sure we carefully considered all the implications.

Let's reflect on this for a moment. It helps to go back to the definition of r². To avoid confusion over the term "flat", in the following I use the term "horizontal" to mean a response that shows no overall trend/tilt and the term "well-behaved" to mean a response with no local (high/medium-Q) deviations.

Let's enumerate the possible cases (a small numerical sketch follows the list):

  • The response is perfectly horizontal and perfectly well-behaved. In this case the measurement is equal to the regression line and the mean itself, there is zero variance either way, and therefore SM=r²=0/0, i.e. undefined.
  • The response is perfectly horizontal but not well-behaved. In this case the regression line is horizontal, matching the mean of the measurement. The regression doesn't explain any of the variance and we end up with SM=r²=0.
    • This is an egregious example where "smoothness" as defined by Olive is very different from "well-behaved" as I defined it, which is misleading. This is why @daverosenthal said "we shouldn't think of it as measuring smoothness", and he's absolutely right.
  • The response is tilted and perfectly well-behaved. In this case the regression line is the same as the measurement itself, explains all the variance and we have SM=r²=1.
  • The response is tilted and not well-behaved. In this case SM will be somewhere between 0 and 1. If the local deviations are large and the tilt small, SM will move closer to 0. If the local deviations are small and the tilt large, SM will move closer to 1.
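To make this concrete, here is a minimal numerical sketch (mine, not from the paper; the frequency grid and wobble level are arbitrary) of the last three cases, treating SM as the r² of a simple linear regression of level on log-frequency:

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
freqs = np.logspace(np.log10(100), np.log10(16000), 200)  # SM band: 100Hz-16kHz
x = np.log10(freqs)                  # regress level on log-frequency
wobble = rng.normal(0, 1.0, x.size)  # fixed local deviations, in dB

def sm(y):
    # SM as r²: how much of the response variance the regression line explains.
    return stats.linregress(x, y).rvalue ** 2

print(sm(wobble))                     # horizontal, not well-behaved: ~0
print(sm(-10 * (x - x[0])))           # tilted, perfectly well-behaved: 1.0
print(sm(-10 * (x - x[0]) + wobble))  # tilted, not well-behaved: close to 1

(The perfectly horizontal, perfectly well-behaved case is deliberately left out: it is the 0/0 boundary condition.)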

So far most people here have assumed that this is too weird to have been designed that way on purpose, and that Olive must have gotten confused as to the implications of defining SM as r². But let's assume, for the sake of argument, that the behaviour I just described is completely working as intended. (After all, SM made it into the model over a whole bunch of other variables, so the variable must be doing something right.) Why, then, define a variable that way?

Here's my hypothesis: the effect of SM is to counteract NBD in the presence of a slope.

To see how, let's run through the cases again but this time we'll look at NBD at the same time:

  • The response is perfectly horizontal and perfectly well-behaved: SM undefined, perfect NBD.
  • The response is perfectly horizontal but not well-behaved. SM zero, bad NBD.
  • The response is tilted and perfectly well-behaved: perfect SM, somewhat bad NBD.
  • The response is tilted and not well-behaved. This case is more interesting. The more tilted the response, the worse NBD becomes. But at the same time, local deviations being kept the same, an increased level of tilt improves SM! Which, depending on weights, might cancel out the worsening of NBD (see the sketch below).
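Here is a rough sketch of that compensation (again mine; the ½-octave banding is my guess at the exact NBD procedure, and the tilt values are arbitrary). With the local wobble held constant, NBD worsens as the tilt increases while SM shoots up:

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
freqs = np.logspace(np.log10(100), np.log10(12000), 240)  # NBD band: 100Hz-12kHz
x = np.log10(freqs)
wobble = rng.normal(0, 1.0, x.size)  # local deviations, held constant

def nbd(y):
    # Mean absolute deviation from the band average, per 1/2-octave band,
    # averaged over all bands (my reading of the NBD definition).
    edges = 100 * 2 ** (0.5 * np.arange(15))  # 1/2-octave edges from 100Hz
    devs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = y[(freqs >= lo) & (freqs < hi)]
        if band.size:
            devs.append(np.mean(np.abs(band - band.mean())))
    return np.mean(devs)

def sm(y):
    return stats.linregress(x, y).rvalue ** 2

for tilt in (0.0, -5.0, -10.0):  # dB per decade
    y = tilt * (x - x[0]) + wobble
    print(f"tilt={tilt:6.1f}  NBD={nbd(y):.3f}  SM={sm(y):.3f}")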

In light of this I'm wondering if SM, instead of being called "smoothness" should instead be called "tilt compensation factor" or something like that. Its effect (whether intentional or not) is to compensate for the effect of tilt on other variables, most notably NBD.

This leads me to think that it doesn't make sense to look at SM in isolation. Instead, SM should always be considered in combination with NBD. Looking at the overall score weights (reproduced below):
  • NBD_ON is used, but not SM_ON. This suggests listeners preferred speakers with a well-behaved and horizontal direct sound.
  • NBD_PIR is used in combination with SM_PIR. This suggests listeners preferred speakers with a well-behaved but not necessarily horizontal PIR.
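For reference, the overall score is usually quoted (from the Olive paper; worth double-checking the coefficients against the original before relying on them) as:

Score = 12.69 - 2.49*NBD_ON - 2.99*NBD_PIR + 2.75*SM_PIR - 4.31*LFX

Note that SM_PIR is the only variable entering with a positive weight, which fits the reading of it as a tilt compensation term rather than a penalty.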

The above results are consistent with what we would expect considering what we know about loudspeaker preference in general. It makes a lot of sense.

Here's yet another way to phrase this to ensure I get my point across:
  • NBD can be thought of as something similar to the variance of the measurement.
    • It is not actually the variance because NBD de-emphasises broad trends (though not completely). Think of it as a metaphor.
  • SM quantifies how well a linear regression line, i.e. the slope, explains the variance of the measurements (the definition of r²).
  • Therefore, SM quantifies how well the slope explains NBD.

Revisiting earlier posts:

NBD_PIR not accounting for the ideal slope and treating it as a deviation (thus flat PIR being treated as being better)

This is technically correct, but it's not the whole story. NBD_PIR treats slope as a deviation, but a flat PIR is not necessarily treated as being better in the final score, because SM_PIR counteracts the effect of NBD_PIR treating the slope as a deviation.

Now this is all assuming the 'smoothness' (SM variable) in the spreadsheet is correct, which I suspect it might not be; as has been noted, changing the target slope from -1.75 does nothing to the SM value, nor does changing the slope of the 'measured' PIR

It is true that changing the target slope (SL) does nothing to the SM value. But that doesn't mean the score doesn't take slope into account. It is taken into account, but in a much more subtle, implicit, hidden way: through the respective weights of SM_PIR and NBD_PIR.

Bottom line: there are good reasons to believe that the behaviour of SM in the Olive paper, while counter-intuitive, is valid and not a typo or omission. Therefore, it can be argued that attempting to "fix" the formula (e.g. by hardcoding an offset) could be doing more harm than good.

(Whether Olive actually intended for SM to behave that way is a good question. The paper states "Smoothness (SM) values can range from 0 to 1, with larger values representing smoother frequency response curves", which definitely reads like Olive does not understand what SM actually is. But that doesn't really matter - Principal Component Analysis doesn't care about intent, and it's the resulting model and its performance that matters, not what went through Olive's brain when he defined the variables.)

I believe this might also explain why it's quite hard to figure out why a given speaker obtained a given score by looking at the score components. Indeed, NBD_PIR and SM_PIR mean nothing in isolation - it's the combination of them that matters. This also means that the "breakdown" chart that @MZKM publishes is difficult to interpret when it comes to the PIR components. One way to solve that problem could be to add SM_PIR and NBD_PIR together and present it as a single variable on the radar chart.
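As a purely hypothetical illustration (the function name is mine, and it relies on the weights quoted above), the combination could be as simple as summing the two weighted contributions:

Code:
def pir_term(nbd_pir: float, sm_pir: float) -> float:
    # Net contribution of the PIR components to the overall score,
    # using the commonly quoted weights: -2.99 for NBD_PIR, +2.75 for SM_PIR.
    return -2.99 * nbd_pir + 2.75 * sm_pir

Plotting that single number would show the net PIR effect instead of two mutually-cancelling components.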
 
Last edited:
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
edechamps said:
(post quoted in full above)

We already know that Olive's formula does not take theoretical perfections into account; a perfect LFX score is ~14Hz, if I recall correctly, and not lower.

Obtaining an undefined smoothness is just not realistic, so its possibility shouldn't have influence.

Also, why would SM counteract NBD_PIR? As we know, slope has no influence on Smoothness.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
Obtaining an undefined smoothness is just not realistic, so its possibility shouldn't matter.

I just mentioned it for completeness and to clearly show where the boundary conditions lie. I agree such a case cannot occur in practice because there is no such thing as a perfectly horizontal, perfectly well-behaved response.

As such, why would SM counteract a realistic horizontal PIR?

Good question. If the PIR is perfectly horizontal and almost perfectly well-behaved, NBD_PIR will be excellent, but SM_PIR will be zero. There are two possible ways to interpret this result:
  • It's a case that the model does not handle well, because not many speakers have a horizontal PIR in practice.
    • Actually, if a speaker has a horizontal PIR because it's too bright and has a tilted-up ON, the model will penalize the speaker based on NBD_ON, so it might still work out to a reasonable result in that case. One could even argue the zero SM_PIR is reinforcing the correlation.
    • If a speaker has a horizontal ON and a horizontal PIR, that's a constant-directivity speaker. The model might not be able to deal with such speakers well because perhaps there weren't that many CD speakers in the test sample.
  • Or: the model does handle that case correctly, and purposefully penalizes the speaker for having a horizontal PIR, because listeners have expressed a preference for a tilted PIR. In which case everything is working exactly as intended. Take the same speaker and tilt the PIR down (keeping local deviations the same): NBD_PIR will worsen slightly but SM_PIR will improve tremendously. I would expect the overall score to improve, though I'll admit I have not run the numbers.
 
Last edited:

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
You edited your post, so responding to the new one:

We already know that Olive's formula does not take theoretical perfections into account; a perfect LFX score is ~14Hz, if I recall correctly, and not lower.

I don't see how that relates to the point I was trying to make in any way.

Obtaining an undefined smoothness is just not realistic, so its possibility shouldn't have influence.

Agreed, and I did not suggest it should. The model breaks down if fed perfect data, but that doesn't matter, because perfect data does not exist in practice.

Also, why would SM counteract NBD_PIR?

Have you actually read my post? I tried to explain that in at least 3 different ways. I'm not sure how I could have made it any clearer. It would help if you could point out exactly what sections of my post you're struggling with.

As we know, slope has no influence on Smoothness.

That's not quite correct, IMHO. Assuming my hypothesis is correct, if you take any horizontal response (other than a perfect line, to avoid hitting the boundary condition), and then simply tilt it, while keeping local deviations the same, you will find that SM will improve. And the more you tilt, the more it will improve. That's because, as you keep tilting the response, the linear regression explains more and more of the variance of the response, which, by definition, means higher r². Hence: slope does have an influence on SM.
 
Last edited:
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
That's not quite correct, IMHO. Assuming my hypothesis is correct, if you take any horizontal response (other than a perfect line, to avoid hitting the boundary condition), and then simply tilt it, while keeping local deviations the same, you will find that SM will improve. And the more you tilt, the more it will improve. That's because, as you keep tilting the response, the linear regression explains more and more of the variance of the response, which, by definition, means higher r². Hence: slope does have an influence on SM.
Ah, you are right. If I adjust it so that the target slope is closer to the actual slope, SM decreases, and it increases when I alter the target to be steeper.

Now I see why you say it’s balancing it out.

• Less steep than target: SM is lower, NBD is higher.
• More steep than target: SM is higher, NBD is lower.

Even after reading that Wiki article, I'm having difficulty understanding why, though. When you tilt it, the data points should still be the same distance from the regression line, as the slope of the regression should change identically.
 
Last edited:

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
Even after reading that Wiki article, I'm having difficulty understanding why, though. When you tilt it, the data points should still be the same distance from the regression line, as the slope of the regression should change identically.

Yes, this is quite head-banging, and it's why it took me hours to write that post - I ended up fooling myself multiple times as to the precise meaning of r². @daverosenthal had it exactly right from the beginning, but it took me a while to fully understand what he was saying.

The crucial point is this: 1-r² is the variance around the model (i.e. around the linear regression line; SSres in the Wikipedia article) divided by the original variance of the data (SStot in the Wikipedia article). Interpretations of SM thus far were wrong (except @daverosenthal's, of course) because they didn't take that divisor into account. (It's quite possible Olive made the exact same mistake, but again, intent does not really factor into the model.)

Now, consider this: if you increase the slope of the response, the variance of the response itself (i.e. without regression) increases. That's because variance is dumb and just sees slope as points being farther apart, hence more variance. But, assuming you keep local deviations the same, the variance around the new regression line stays the same of course. Therefore, SSres stays the same, but SStot increases, and thus, r² increases. QED.

Another way to look at this is: r² describes how well a model fits the data compared to a baseline model that is just a straight horizontal line through the mean. From that perspective, it's quite apparent that if the points are already evenly distributed around the mean (i.e. a horizontal response) in the first place, a linear regression will not improve on that baseline model (it won't explain the variance), therefore r²=0. But if there is a slope, then a sloped linear regression line fits the data better than a straight horizontal line through the mean, thus r² > 0.
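To put numbers on it (invented purely for illustration): say the local wobble contributes SSres = 10 dB² around any best-fit line. For a horizontal response, the best-fit line is the mean itself, so SStot = 10 and r² = 1 - 10/10 = 0. Now tilt the same response so the slope alone adds 40 dB² of variance: SSres is still 10, but SStot = 50, so r² = 1 - 10/50 = 0.8.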
 
Last edited:
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
edechamps said:
(post quoted in full above)
Yeah, I just asked on Reddit, and it was explained to me that a shallow slope means a weaker linear relationship between the axes and a steep slope means a stronger one. That made it very easy for me to understand. Your last paragraph also helped.
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
Amir has asked that I add speaker sensitivity to my posts. I have already been adding these to my spreadsheets, using the unweighted 300Hz-3kHz average of the on-axis response, as this is what SoundStage uses (Stereophile uses B-weighting; not sure of the frequency range), but does anyone have any other suggestions?

• Frequency Range (NBD uses 100Hz-12kHz, SM uses 100Hz-16kHz)
• Weighting (none vs A vs B vs etc.), I've already made calculations for both (thankfully Wikipedia had formulas)
• Response used (on-axis vs listening window vs offset predicted in-room)
If predicted in-room, as I asked here, use the target slope or its actual slope? For sensitivity, though, I would need to pick a point or something to reference from the original, maybe 100Hz?

I think there is an IEC protocol, but I cannot find a free copy.
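For concreteness, this is essentially the calculation in question (a sketch; it assumes the response is available as parallel NumPy arrays and that a simple mean of dB values over a log-spaced grid is acceptable):

Code:
import numpy as np

def sensitivity(freqs, spl_db, lo=300.0, hi=3000.0):
    # Unweighted average of the on-axis SPL (in dB) between lo and hi,
    # per the SoundStage-style convention mentioned above.
    mask = (freqs >= lo) & (freqs <= hi)
    return float(np.mean(spl_db[mask]))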
 
Last edited:

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,337
Likes
6,709
MZKM said:
(post quoted in full above)

Awesome. Sensitivity is one of the most important metrics for me, and while it's easy enough to look up, it's not always consistent.
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,449
Likes
4,818
I am probably going to say something completely idiotic, so apologies in advance. But isn't this just some form of patent obfuscation? I fail to see why doing things this way is better than assessing the trend to see if it has what has been defined as the desirable characteristic, then detrending the response before computing the other required characteristics, such as the variance.

If we are at the point where we are wondering what the author understood and what he did not, only to end up saying it doesn't matter anyway…
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
I fail to see why doing things this way is better than assessing the trend to see if it has what has been defined as the desirable characteristic, then detrending the response before computing the other required characteristics, such as the variance.
Well, I asked Olive about the slope; he said he would look into it (he hasn't gotten back to me, though I don't blame him considering the climate). But we see now that we don't need to alter the formula: it works as intended.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,792
Likes
37,693
MZKM said:
(post quoted in full above)
Didn't Toole et al. match speakers over the 200-400Hz range for their blind testing? Something to do with this being the octave most people instinctively use as a reference to decide whether a speaker is bright or dull, lightweight or full-sounding. That might be worth considering as the comparison point of reference. OTOH (I need to look it up), I think Toole suggests using 500Hz-3kHz for level-matching speakers in his book. This is close to the 300Hz-3kHz you mention above.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
@MZKM does the harmonic distortion measurement have an impact on the "Preference Rating" (no sub)?

Is the difference between "Preference Rating" and "Preference Rating w/ sub" about how a speaker performs in the sub-150Hz region with or without the help of a sub?

I also wonder if the reflex tuning (under-, critically- or over-damped) shouldn't impact the "Preference Rating" (no sub).
Group delay should too but only for more demanding listeners.
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
tuga said:
(post quoted in full above)
Distortion does not.
"w/ sub" simply assumes an ideal bass response.
Group delay does not, and is not something Amir measures.
 

Pjetrof

Active Member
Joined
Feb 10, 2020
Messages
281
Likes
115
Location
Belgium, Antwerp
Can we filter the file, so the highest preference score is on top?
 
OP
M

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
Can we filter the file, so the highest preference score is on top?
Sadly, Google Sheets doesn’t allow that for published sheets. I can make a new tab that’s sorted by score, but that would be clutter. However, the bar graphs (Preference Rating & Preference Rating w/sub) are sorted by score.

You can sort/filter by score here
 