
Speaker Equivalent SINAD Discussion

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
@dukanvadet raised a very important point about these preference ratings in the original speaker announcement thread that hasn't been addressed so far:

One problem is that I think [these preference ratings] would give an advantage to speakers that can be used full-range. A smaller speaker that is meant, or likely, to be crossed over to a subwoofer may get a bad score, but would turn out really good once you use it with a sub?

To take this into account, I suggested having two separate scores for speakers - one that encompasses the full 20 Hz to 20 kHz range, and another with a higher lower limit (the standard subwoofer crossover frequency of 80 Hz seems like a good choice), so their performance in both 2.0 and 2.1 systems can be accurately judged.

As the other three variables in Sean Olive's algorithm ignore data below 100 Hz, this can easily be done by doing a second calculation of the preference rating with a slight modification only to the LFX (Low Frequency Extension) variable as follows:

1. If the -6 dB point in the Sound Power Curve relative to the mean Sound Power SPL of the speaker being measured is 80 Hz or below, then use a value of 1 Hz for the LFX calculation i.e. assume it's paired with an ideal subwoofer. So in this case LFX = log10(1) = 0, and no deduction to the score in the formula for the speaker's preference rating will be made for this lack of extension below 80 Hz, as a sub would handle these frequencies.

2. If the -6 dB point in the Sound Power Curve relative to the mean Sound Power SPL is above 80 Hz, then use this actual -6 dB point for the LFX calculation, in order to penalise speakers that would require the subwoofer crossover to be increased above 80 Hz, which may result in some bass from the sub being directional and so impact imaging.
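In code form, the proposed rule might look something like this minimal Python sketch (the function name and arguments are mine, not part of Olive's formula; the 80 Hz default is just the crossover choice above):

```python
import math

def lfx_for_rating(f_minus6db_hz: float, assume_sub: bool, crossover_hz: float = 80.0) -> float:
    """Return the LFX term (log10 of the effective -6 dB point in Hz)."""
    if assume_sub and f_minus6db_hz <= crossover_hz:
        # An ideal sub handles everything below the crossover, so treat
        # extension as perfect: log10(1 Hz) = 0, i.e. no score deduction.
        return 0.0
    # Otherwise the speaker's own -6 dB point counts (which also penalises
    # speakers that would force the crossover above 80 Hz).
    return math.log10(f_minus6db_hz)

# 2.0 vs proposed 2.1 inputs for a speaker flat down to 60 Hz:
print(lfx_for_rating(60.0, assume_sub=False))  # ~1.78 (normal LFX)
print(lfx_for_rating(60.0, assume_sub=True))   # 0.0 (sub handles the bass)
```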

Very few speakers, even full-range floor-standers, provide full extension down to 20 Hz. A separate subwoofer would be needed in most situations to produce true full-range extension, and so it would be these 2.1 systems that would likely score the highest in real listening tests. So I think this set-up needs to be fairly represented by these preference ratings, especially as @amirm has said he will also be measuring subwoofers in the future.
 
Last edited:

carlosmante

Active Member
Joined
Apr 15, 2018
Messages
210
Likes
161
I thought I'd create a thread where suggestions/comments can be made on how Amir should rank speakers based on measurements, if he were to do so.

As of right now, since Amir wants his rating to be based on listening tests and not just opinions, Sean Olive's Predicted Preference Rating is likely what will be used for now, as it is decently accurate (correlation of 0.86). I think before suggesting alterations/alternatives, we should first understand Olive's algorithm; he has filed a patent for it, which is available as a searchable PDF (warning: may download instead of view).

Please bear with me.
His algorithm is as follows:

Preference Rating = 12.69 − 2.49*NBD_ON − 2.99*NBD_PIR − 4.31*LFX + 2.32*SM_PIR

Here it is as a percentage:
[Attachment 45322: the same formula rescaled to a percentage score]


NBD (Narrow Band Deviation): Average narrow-band deviation (dB) in each 1/2-octave band from 100 Hz-12 kHz
NBD_ON: On-axis
NBD_PIR: Predicted In-room Response
NBD = (1/N) * Σ_n ( mean over b of |y_b − ȳ_n| )
  • y-bar is the average amplitude value within the 1/2-octave band n
  • y-sub(b) is the amplitude value of band b within the 1/2-octave band n
  • N is the total number of 1/2-octave bands between 100Hz-12kHz
  • NBD can be a good metric for detecting medium and low Q resonances.
LFX (Low Frequency Extension): Log10 of the -6dB point (below 300Hz) in the Sound Power curve, relative to average Listening Window (300Hz-10kHz) SPL.
Easy to understand; turning this into a formula: LFX = log10(x_SP−6dB), with x_SP−6dB the −6 dB frequency in Hz.
As noted, for rear-firing speakers, making the −6 dB point in the Sound Power curve relative to the average Sound Power SPL may be better than using the average Listening Window SPL.

SM_PIR (Smoothness of Predicted In-room Response): Smoothness in SPL based on a linear regression line (least-squares fit) through 100 Hz-16 kHz.
This is the only aspect that is complex; SM is the square of the correlation coefficient from the regression:
SM = [ n*Σ(XY) − (ΣX)(ΣY) ]² / ( [ n*ΣX² − (ΣX)² ] * [ n*ΣY² − (ΣY)² ] )

  • n is the number of data points used to estimate the regression curve
  • X and Y represent the measured versus estimated amplitude values of the regression line.
  • A natural log transformation is applied to the measured frequency values so that they are linearly spaced. Smoothness (SM) values can range from 0 to 1, with larger values representing smoother frequency response curves.
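For anyone who wants to experiment with their own measurement data, here's a rough Python sketch of the metrics above and the final formula (the function names and the band-edge handling are my own approximations, not Harman's exact implementation; inputs are matching arrays of frequency in Hz and SPL in dB):

```python
import numpy as np

def nbd(freqs, spl, f_lo=100.0, f_hi=12_000.0):
    """Average narrow-band deviation (dB): mean absolute deviation from the
    band average, within approximately 1/2-octave bands from f_lo to f_hi."""
    steps = np.arange(0.0, np.log2(f_hi / f_lo) + 0.5, 0.5)
    edges = f_lo * 2.0 ** steps
    devs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spl[(freqs >= lo) & (freqs < hi)]
        if band.size:
            devs.append(np.mean(np.abs(band - band.mean())))
    return float(np.mean(devs))

def sm(freqs, spl, f_lo=100.0, f_hi=16_000.0):
    """Smoothness: r^2 of a least-squares line fit to SPL vs ln(frequency)."""
    sel = (freqs >= f_lo) & (freqs <= f_hi)
    x, y = np.log(freqs[sel]), spl[sel]
    r = np.corrcoef(x, y)[0, 1]
    return float(r ** 2)

def preference_rating(nbd_on, nbd_pir, lfx, sm_pir):
    """Olive's predicted preference rating from the four metrics."""
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir
```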

___________________________

Observations & Critiques

  1. THD is not included. Rating THD performance is hard to do, as we would have to agree on what is audible. I'll start with a suggestion of a downward slope where the audibility threshold is set to 10% THD @ 20Hz and 0.01% THD @ 20kHz (see the sketch after this list); this is already considerably lower than what I consider audible, but I reduced it to please others.
  2. The preference rating as-is does not weight frequencies differently (to my knowledge), meaning a dip at 20kHz affects the score the same way an identical dip at 200Hz would, even though the latter would be orders of magnitude more audible.
  3. I think NBD_ON may have too much of an effect on the score; we almost never listen solely on the direct axis. I can't alter the algorithm and see if it improves, as I don't have Harman's data, but I think it should be changed to an NBD on the Listening Window, or maybe a +/-5° window so as not to deviate too much.
  4. The formula works on the assumption that on-axis is the intended axis, but some speakers are designed to have little or no toe-in, so that 15° to 30° off-axis is the reference axis. The same applies to the vertical reference axis.
  5. CSD: All these measurements, including the reflections, are taken at time 0, yet the perceived tonal balance may be altered if the speaker does not decay uniformly/smoothly.
  6. Group Delay: This algorithm doesn't factor in "loose" bass (i.e. group delay past 1 cycle).
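To make the slope suggested in point 1 concrete: if that downward slope is a straight line on a log-log plot between those two endpoints (my assumption of what was meant), the threshold falls exactly one decade of THD per decade of frequency, which simplifies to 200/f percent:

```python
import math

def thd_threshold_percent(freq_hz):
    """Suggested audibility threshold: 10% THD @ 20 Hz down to 0.01% @ 20 kHz,
    interpolated linearly on a log-log plot (which simplifies to 200 / f)."""
    slope = math.log10(0.01 / 10) / math.log10(20_000 / 20)  # = -1.0
    return 10 * (freq_hz / 20) ** slope

print(thd_threshold_percent(20))      # 10.0 (%)
print(thd_threshold_percent(1_000))   # 0.2  (%)
print(thd_threshold_percent(20_000))  # 0.01 (%)
```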


That's all I got for now.

Since Mr. Olive has all the data to check, and it is his work being critiqued (and possibly improved), I wonder if he would be willing to run any altered version to see if it obtains a higher correlation.
Looks like the Sigma is upside down.
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
The best HD tests I've seen rely on steady-state stepped sine signals, which of course become mingled with the room response.

The room won't add harmonics in a stepped sine test, but linear distortion (non-flat FR) would come into play. Instead of moving the mic closer, I believe precise room EQ at the mic location would be sufficient.

Btw, here is another GedLee paper where tests of B&C compression drivers were performed; it turned out that non-linear distortion was not a factor for the listeners, as only linear distortion was recognised.
 
Last edited:

Absolute

Major Contributor
Forum Donor
Joined
Feb 5, 2017
Messages
1,084
Likes
2,125
Personally, I'd look for something far simpler than a speaker version of SINAD, because no score/rating system will be relevant for anticipating expected sound quality; that's the spinorama chart's sole purpose. Stereophile's Class A, B, C etc. could be an example of a really simple rating system.

If numbers need to be presented, it could be something like this:

Linearity (defined by frequency response, cabinet and driver resonances): 7
Uniformity (defined by the evenness of dispersion/DI index): 8
Performance (defined by the above, THD, cmd of drivers, price, rivals): 9

Sum (average of the above): 8

The way I see it, the value of the Spinorama is that it's easy to understand, because it shows nearly all relevant things in one simple graph. If we aim to educate people on how to read it, it's pointless to dumb it down again with a simple number system that has no predictive quality for suitability to any given circumstance. Just my 2 cents :)
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,244
Likes
11,476
Location
Land O’ Lakes, FL
@dukanvadet raised a very important point about these preference ratings in the original speaker announcement thread that hasn't been addressed so far:



To take this into account, I suggested having two separate scores for speakers - one that encompasses the full 20 Hz to 20 kHz range, and another with a higher lower limit (the standard subwoofer crossover frequency of 80 Hz seems like a good choice), so their performance in both 2.0 and 2.1 systems can be accurately judged.

As the other three variables in Sean Olive's algorithm ignore data below 100 Hz, this can easily be done by doing a second calculation of the preference rating with a slight modification only to the LFX (Low Frequency Extension) variable as follows:

1. If the -6 dB point in the Sound Power Curve relative to the mean Sound Power SPL of the speaker being measured is 80 Hz or below, then use a value of 1 Hz for the LFX calculation i.e. assume it's paired with an ideal subwoofer. So in this case LFX = log10(1) = 0, and no deduction to the score in the formula for the speaker's preference rating will be made for this lack of extension below 80 Hz, as a sub would handle these frequencies.

2. If the -6 dB point in the Sound Power Curve relative to the mean Sound Power SPL is above 80 Hz, then use this actual -6 dB point for the LFX calculation, in order to penalise speakers that would require the subwoofer crossover to be increased above 80 Hz, which may result in some bass from the sub being directional and so impact imaging.

Very few speakers, even full-range floor-standers, provide full extension down to 20 Hz. A separate subwoofer would be needed in most situations to produce true full-range extension, and so it would be these 2.1 systems that would likely score the highest in real listening tests. So I think this set-up needs to be fairly represented by these preference ratings, especially as @amirm has said he will also be measuring subwoofers in the future.
Hardly any speakers have their -6dB point above 80Hz. The most popular budget speaker, the Micca MB42X (4” woofer), is rated down to 60Hz (the tolerance isn't given, but even if that's -10dB, the speaker likely isn't more than 6dB down at 80Hz). So, if doing what you suggest, Amir would have to use -3dB instead, which might give false results for more non-linear speakers.

I’ve seen tower siblings of bookshelf speakers that don’t go any deeper; they only use the second woofer to increase sensitivity. Thus, applying different formulas based on design type would also not work.

Another issue that I brought up previously: if Amir starts separating by bass, it won’t end there; people will then want him to start separating by use case, an example being putting much more emphasis on vertical directivity for near-field monitors.

I would suggest maybe having a single formula, but separating bookshelves vs towers into two charts when doing a SINAD equivalent chart.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
I would suggest maybe having a single formula, but separating bookshelves vs towers into two charts when doing a SINAD equivalent chart.

That won’t solve this issue. Even within a separate bookshelf-speaker ranking, a speaker with greater bass extension will be ranked higher than another with less extension (all other metrics being roughly equal), even if they’re both approximately flat down to 80 Hz. As both will most likely be paired with a subwoofer if the user is serious about maximising frequency-range fidelity (which a lot of us on here would be), this unfairly penalises speakers with less bass extension, when that extension will be made up for by the subwoofer anyway.

Worse, as the LFX variable has a large weighting in Olive’s algorithm (almost a third), we could very easily arrive at a situation in which one speaker performs better than another in all other metrics of Olive’s formula except bass extension (namely on-axis and predicted in-room narrow-band deviation, and in-room smoothness), yet scores lower in the ranking due to its lesser bass extension, which should carry no weight (as long as the speaker reaches down to at least ~80 Hz) in a 2.1 system where the sub can handle frequencies below this.
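(As a quick sanity check on that 'almost a third' figure, using the magnitudes of the coefficients in Olive's formula as a rough proxy for the weightings:)

```python
coeffs = {"NBD_ON": 2.49, "NBD_PIR": 2.99, "LFX": 4.31, "SM_PIR": 2.32}
print(f"LFX share: {coeffs['LFX'] / sum(coeffs.values()):.0%}")  # LFX share: 36%
```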

The same could conceivably also occur with the tower speaker rankings, as there is still quite a variation in their bass extension, with very few having full flat extension down to 20 Hz. So almost all speakers, regardless of type, would likely score higher in real listening tests when paired with a subwoofer, which Olive’s algorithm does not take into account.

2.1 systems are not a niche case, so this is not a minor issue, and it needs to be addressed. Fortunately, this would be very easy to do, with no real extra work from @amirm, by (as I said) posting two scores for each speaker – one for 2.0 set-ups using Olive’s full preference rating formula, and another for 2.1 set-ups that simply ignores the LFX variable in the formula if the -6 dB point is at or below 80 Hz (which, as you say, will be the case for the large majority of speakers, so in almost all cases the LFX won’t need to be re-computed for 2.1 set-ups, just ignored).
 
Last edited:

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
That won’t solve this issue. Even within a separate bookshelf speaker ranking, a speaker with greater bass extension will be ranked higher than another with less extension, even if they’re both approximately flat down to 80 Hz.

I don't think so. It is relatively easy to tell from these measurements if LF extension is the only problem a particular speaker has, so these would be recognised as serious candidates to provide high SQ when coupled with subwoofers. On the other hand, LF extension by itself doesn't mean much if other measurement parameters are off.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
I don't think so. It is relatively easy to tell from these measurements if LF extension is the only problem a particular speaker has, so these would be recognised as serious candidates to provide high SQ when coupled with subwoofers. On the other hand, LF extension by itself doesn't mean much if other measurement parameters are off.

I meant all other metrics being roughly equal (added to my original post), but even if they're not, Olive's algorithm may rank a speaker with worse metrics but much greater bass extension higher, as the LFX variable has almost a third of the total weighting in his formula, and that would be an inaccurate prediction of its preference rating when used with a subwoofer. I explain this in the rest of my post.

And yes, you may be able to tell by carefully looking at the graphs which speaker might provide higher sound quality, but the whole point of this thread is to create an unambiguous, objective score for ranking speakers, without inevitable biases creeping into interpretation of graphs.
 
Last edited:

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
I meant all other metrics being roughly equal (added to my original post), but even if they're not, Olive's algorithm may score a speaker with worse metrics, but much greater bass extension, higher in the rankings, as the LFX variable has almost a third of the total weighting in his formula. I explain this in the rest of my post. And yes, you may be able to tell by carefully looking at the graphs which speaker might provide higher sound quality, but the whole point of this thread is to create an unambiguous, objective score for ranking speakers, without the inevitable biases creeping into interpretation of graphs.

If you have 2 speakers with all other metrics being roughly equal except for LF extension, it is fair to give the higher score to the speaker with better LF extension. In that case, choosing between buying the speaker with better LF extension or the other speaker + SW is a personal decision, but as I said, IMHO the speaker with better LF extension, all other things being equal, should get the better score.
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,244
Likes
11,476
Location
Land O’ Lakes, FL
And yes, you may be able to tell by carefully looking at the graphs which speaker might provide higher sound quality, but the whole point of this thread is to create an unambiguous, objective score for ranking speakers, without inevitable biases creeping into interpretation of graphs.
Would you pick a DAC with 2dB better SINAD (THD+N @ 1kHz) if it doesn’t have a volume control, and has worse frequency response, an ESS hump, high jitter, and terrible HF distortion? There are always caveats.

If you want Amir to state the value of all 4 individual components in the current formula, that would be a good thing to do; but altering the formula because the end user will have subwoofer integration is not that agreeable, especially since many Hi-Fi 2.1 setups don’t even use a crossover (some, Paul McGowan for instance, insisting that using no crossover results in better fidelity).
 
Last edited:

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
If you have 2 speakers with all other metrics being roughly equal except for LF extension it is fair to give higher score to speaker with better LF extension. In that case choosing between buying a speaker with better LF extension or the other speaker + SW is a personal decision, but as I said, IMHO speaker with better LF extension and all other things being equal should get better score.

And it will get a better score, under the 2.0 use case. But you can also have a separate score for the 2.1 use case, in which I propose prioritising other sound quality metrics and ignoring the low frequency extension variable (as long as the speaker reaches down to 80 Hz or below), as the subwoofer will handle these frequencies (often better than a speaker, with less distortion etc.). I don't see why anyone would be against having these two different scores for qualitatively entirely different set-ups, which would keep everyone happy, especially as many people use a 2.1 system for music listening (usually as part of, or doubling as, a home theatre set-up), and a 2.1 system is actually the only way to achieve full extension down to 20 Hz with most speakers, even towers.
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,244
Likes
11,476
Location
Land O’ Lakes, FL
I don't see why anyone would be against having these two different scores for qualitatively entirely different set-ups
As I stated previously, it wouldn’t stop there; what about another version prioritizing vertical performance for near-field purposes?

Also, as I just commented, many Hi-Fi 2.1 setups don’t have a crossover; their integrated amps do no bass management.

Crossovers aren’t brick-wall cuts; they are gradual, so you still need SPL below the crossover point.

What about those that want 40Hz or 60Hz crossover points?
 
Last edited:

oivavoi

Major Contributor
Forum Donor
Joined
Jan 12, 2017
Messages
1,721
Likes
1,934
Location
Oslo, Norway
Here's an idea that popped into my mind just now: how about doing a kind of informal expert survey among the most prominent researchers and/or practitioners in the field, as to what dimensions they think are perceptually relevant in loudspeakers, and their suggestions on how to measure them? (I'm not sure if it's a good idea, but just mentioning it.)

So far we have the Toole/Olive/Harman formula, which - as I understand it - is mainly about flatness/smoothness of the frequency response in the listening window, and even dispersion/smooth off-axis response. That dimension has been validated in the experiments of Toole and Harman.

That's one dimension. Then there are other perceptual dimensions which have not received the same kind of experimental focus, but where there are some indications, from some studies at least, that they may matter. Non-linear distortion/noise: here we have the GedLee metric, along with some other metrics... Samsung researchers have also been working on this recently, I've seen. Klippel himself is among the most prominent experts on this, so I'm sure he has some thoughts.

Beyond this, what does Søren Bech think for example? He's among the most prominent researchers in the field, but I haven't seen him proposing any "metrics" in the same way as Harman. Or prominent speaker designers with access to good measurement and test facilities, like @Bruno Putzeys, @Martijn Mensink, Geoff Martin, and the people in charge at places like Harman, Samsung, Dynaudio, Hedd and Adam for example? I would guess that @KSTR has some good contacts in Germany.

Anyways, I'm not sure it's a good idea, but it could be worthwhile to get input from people who have been measuring speakers for a long time.
 

dwkdnvr

Senior Member
Joined
Nov 2, 2018
Messages
418
Likes
698
Well, for frequency response we are using the analyzer's advanced math and double scan to remove the room influence. In contrast, the distortion measurements are done as you would with just a single scan and no processing. So room modes and reflections get in the way and screw up the results. Imagine if you have a room mode at the second harmonic: it could add up to 20 dB to that amplitude.

Okay, now that I've seen the JBL review and how you're measuring, I see why distortion measurement is problematic.

My concern is that given that 'we' (and by 'we' I mean 'you') measure electronics down to -130dB distortion and amps down to -100 or better, simply ignoring the speaker question seems a bit unsatisfying. We know that the very best speakers are unlikely to have distortion much better than -70dB or so. I do realize that much literature suggests that HD itself is not audible until higher than that, but IMD etc still comes into play. It may well be the case that the conclusion is "there isn't enough variation in speaker distortion behavior to make it a differentiating factor compared to directivity", but I'm not sure that's obvious to start with.
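(For reference, converting those dB figures to distortion percentages with the usual 20·log10 amplitude convention:)

```python
def db_to_percent(db):
    # distortion amplitude relative to the fundamental, as a percentage
    return 10 ** (db / 20) * 100

print(db_to_percent(-130))  # ~0.0000316% (electronics)
print(db_to_percent(-100))  # 0.001%      (good amps)
print(db_to_percent(-70))   # ~0.0316%    (about the best speakers manage)
```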

Having said that, I have to spend more time on the JBL review - there is so much information there it'll take some time to digest. I hadn't seen a review with this advanced Klippel setup before, and I can see why you were so interested in it.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
Okay, now that I've seen the JBL review and how you're measuring, I see why distortion measurement is problematic.

My concern is that given that 'we' (and by 'we' I mean 'you') measure electronics down to -130dB distortion and amps down to -100 or better, simply ignoring the speaker question seems a bit unsatisfying. We know that the very best speakers are unlikely to have distortion much better than -70dB or so. I do realize that much literature suggests that HD itself is not audible until higher than that, but IMD etc still comes into play. It may well be the case that the conclusion is "there isn't enough variation in speaker distortion behavior to make it a differentiating factor compared to directivity", but I'm not sure that's obvious to start with.

Having said that, I have to spend more time on the JBL review - there is so much information there it'll take some time to digest. I hadn't seen a review with this advanced Klippel setup before, and I can see why you were so interested in it.
There is good reason to believe speaker distortion isn't a big variable, with of course the occasional unusual case. IMD is generally a factor around 10 dB lower than THD. Also, the overwhelming majority of speaker THD is 2nd and 3rd harmonic.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
Would you pick a DAC with 2dB better SINAD (THD+N @ 1kHz) if it doesn’t have a volume control, and has worse frequency response, an ESS hump, high jitter, and terrible HF distortion? There are always caveats.


If you want Amir to state the value of all 4 individual components in the current formula, that would be a good thing to do; but altering the formula because the end user will have subwoofer integration is not that agreeable, especially since many Hi-Fi 2.1 setups don’t even use a crossover (some, Paul McGowan for instance, insisting that using no crossover results in better fidelity).

Yes, there are always caveats, and I would never recommend making any purchasing decision based purely on a single metric like SINAD. But this is a thread specifically about the 'speaker equivalent of SINAD', as you call it (which Amir has stated will most likely be based on Sean Olive's Predicted Preference Rating), so let's limit our discussion to that alone. And it is undeniably obvious that this preference rating does not consider a major user base of speakers, who have 2.1 set-ups. If this use case is not taken into account, a large number of visitors to this site will be excluded from using these ratings, or worse, will misinterpret them to be representative of sound quality in their set-up, when the truth is, they only apply accurately to 2.0 systems.

I would be very happy for @amirm to state the exact values of all 4 variables in Olive’s formula for each speaker measured, which would provide a useful breakdown of their performance and allow us to create our own ranking for 2.1 systems. The more transparency and data posted for these measurements the better :)

I still think it would additionally be very useful for people new to this site or hobby who use a subwoofer with their speakers to have an easily presented ranking for their set-up; otherwise, as I say, they may falsely think Olive's default ratings apply accurately to their situation. Doing this would also open up these rankings for the whole of the home theatre community to benefit from, which can only be a good thing (there are already measurements and rankings of AV receivers on this site anyway – home theatre speaker set-ups are just a logical extension of that).

As I stated previously, it wouldn’t stop there, what about another version prioritizing vertical performance for near-field purposes?



Also as I just commented, many Hi-Fi 2.1 setups don’t have a crossover, their integrated amps do no bass management.



Crossovers aren’t just cuts, they are gradual, so you still need SPL below the crossover point.



What about those that want 40Hz or 60Hz crossover points?

It would stop there, as unlike the LFX, none of the other variables in Olive’s algorithm would be grossly inaccurate under different set-ups or use cases. The effective LFX value of a 2.1 system could be as low as, for example, 1.16 when, say, a bookshelf speaker is used with an ideal sub with a -6 dB point of 14.5 Hz (LFX = log10(14.5) = 1.16), but the same speaker used in a 2.0 system could have an LFX as high as 1.9 (log10(80) = 1.90). Plugging these values into Olive’s formula, using the LFX weighting coefficient of 4.31, we get a preference rating of 12.69 - 0 - 0 - (4.31*1.16) + 2.32 = 10 with the subwoofer (the maximum rating) and 12.69 - 0 - 0 - (4.31*1.9) + 2.32 = 6.8 without the sub (assuming all other metrics are ideal, i.e. NBD_ON = NBD_PIR = 0 and SM_PIR = 1). That's close to a 50% increase in the predicted preference rating of this speaker when you add a subwoofer, so someone with a 2.1 set-up looking at the 6.8 preference rating would completely underestimate the potential sound quality of the speaker in their situation. None of the other variables in Olive’s algorithm have such wild variations across use cases.
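For anyone who wants to verify that arithmetic, a throwaway Python check (the function name is mine; the coefficients are the ones from Olive's formula):

```python
import math

def rating(lfx, nbd_on=0.0, nbd_pir=0.0, sm_pir=1.0):
    # Olive's formula, with otherwise-ideal metric values by default
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

with_sub = rating(math.log10(14.5))   # ~10.0
without_sub = rating(math.log10(80))  # ~6.81
print(with_sub / without_sub - 1)     # ~0.47, i.e. close to a 50% increase
```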

Most people who have a subwoofer have access to bass management, either via an AV receiver or crossover controls built into the sub. If they don’t, then they should get one of these, if they’re serious about sound quality. (Paul McGowan of PS Audio can insist the contrary as much as he likes, but he rarely provides any evidence to back up his claims – this is a man who has said there can be differences in the signal transmission of cables that are not electrical…)

I’m aware crossovers are gradual – I suggested 80 Hz as the maximum -6 dB point for 'good' (read: not penalised for lack of bass extension in the preference ratings) speakers in 2.1 systems because most subs’ crossovers can go up to around 120 Hz, which would safely leave enough leeway to smoothly blend such a speaker’s bass into the sub’s. Speakers with a -6 dB point higher than 80 Hz might result in a hole in the bass integration, depending on the slope of the roll-off. Of course, the sub’s crossover can be set at any value to best match with the bass roll-off of any particular speaker, but that’s a distinct issue not relevant to Olive’s algorithm or this discussion.
 
Last edited:
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,244
Likes
11,476
Location
Land O’ Lakes, FL
Plugging these values into Olive’s formula, using the LFX weighting coefficient of 4.31, we get a preference rating of 12.69 - 0 - 0 - (4.31*1) + 1 = 9.38 with the subwoofer and 12.69 - 0 - 0 - (4.31*1.9) + 1 = 5.49 without the sub (assuming all other metrics are ideal i.e. NBD_ON = NBD_PIR = 0 and SM_PIR = 1).
Note: You forgot to multiply the SM by 2.32:

10Hz extension: 10.7 score
80Hz extension: 6.82 score

If it is supposed to be capped at 10.0, then a bit lower than 15Hz is the best possible.
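For reference, that break-even point comes from solving 12.69 − 4.31*log10(f) + 2.32 = 10 for f:

```python
f_cap = 10 ** ((12.69 + 2.32 - 10) / 4.31)
print(f"{f_cap:.1f} Hz")  # 14.5 Hz, i.e. a bit lower than 15 Hz
```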
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
Note: You forgot to multiply the SM by 2.32:

10Hz extension: 10.7 score
80Hz extension: 6.82 score

If it is supposed to be capped at 10.0, then a bit lower than 15Hz is the best possible.

Corrected. Doesn't really change any of my arguments though.
 