Maybe we should start a new thread entitled ‘How Thick is a Shagpile Rug?’ for people who want to discuss this further. Or we could merge it with the very technical discussion in the ‘How Long is a Piece of String?’ thread.
Anyway, back to speaker rankings. As you mentioned you will be measuring subwoofers, I was also thinking about how you could give a predicted preference rating for them as well. I think the inverse of the process I proposed for the maximum potential speaker rating could be used, giving a potential rating for the sub in a best-case scenario where it is paired with ideal speakers. This can be calculated by using the ideal values of NBD_ON = NBD_PIR = 0 and SM_PIR = 1 in Olive’s formula, and plugging in the actual LFX value of the particular subwoofer in question. However, LFX is defined as the frequency at “-6 dB relative to the mean level y_LW measured in the listening window (LW) between 300 Hz-10 kHz.” Obviously, that frequency range lies outside a subwoofer’s operating range, so I would change it to 60 Hz-80 Hz, which falls within the peak, flat response region of the vast majority of subwoofers.
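As a quick sketch, this best-case rating could be computed as follows. Note the coefficients here are the commonly cited ones from Olive's four-variable preference model; treat them as an assumption if your copy of the paper differs, and the function name is just illustrative.

```python
import math

# Best-case subwoofer rating: Olive's four-variable model
# (Pref = 12.69 - 2.49*NBD_ON - 2.73*NBD_PIR - 4.31*LFX + 2.32*SM_PIR,
# coefficients as commonly cited) with ideal speaker values
# NBD_ON = NBD_PIR = 0 and SM_PIR = 1, and LFX taken from the sub.
def best_case_sub_rating(f_minus6db_hz):
    """Predicted preference for a sub paired with ideal speakers."""
    lfx = math.log10(f_minus6db_hz)
    return 12.69 - 2.49 * 0.0 - 2.73 * 0.0 - 4.31 * lfx + 2.32 * 1.0

print(round(best_case_sub_rating(14.5), 2))  # ideal extension -> ~10.0
print(round(best_case_sub_rating(30.0), 2))  # a 30 Hz sub scores lower
```

With a 14.5 Hz −6 dB point (the ideal extension discussed later in this post) the result lands at a perfect 10, which is a useful sanity check on the idea.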
As this only uses a single variable (LFX) of the subwoofer, though, I suggest an alternative rating that also takes into account the Low Frequency Quality (LFQ) variable Olive used in his first preference rating formula, for the 13 speakers in the initial listening test. He states:
However, as this was originally defined for use with speakers, and we’re only characterizing a subwoofer here that is not meant to reach flat up to 300 Hz, I suggest a lower upper bound of 120 Hz be chosen in the LFQ formula. This would just overlap nicely with the lower bound of 100 Hz in the narrow-band deviation and smoothness variables used for the speaker ratings (all this in addition to using 60-80 Hz as the frequency range for the mean amplitude in the listening window, y_LW, as was done for subwoofer LFX). 120 Hz is also commonly the highest crossover frequency that can be set on a subwoofer, the limit at which bass localization can become an issue, and the maximum frequency sent via the ‘.1’ LFE (low-frequency effects) channel in movies, TV series and games. That makes it very important to home theatre users and gamers (a huge market for subwoofers), not to mention multichannel audio (music) listeners.
The question then is, what weightings should these two variables, LFX and LFQ, have in a subwoofer rating formula? I would suggest keeping the relative weightings from Olive's first preference rating formula (equation 9 in his paper) i.e. coefficients of -1.28 for LFX and -0.66 for LFQ. Olive states these correspond to a 'proportional contribution to the model' of 6.27% and 18.64% respectively. (This may seem at odds with their coefficients, but calculating percentage contribution of variables in a multiple linear regression model is apparently not straightforward, so let's take his word for these numbers.) So LFQ has approximately a three times greater contribution in predicting preference than LFX. This may seem imbalanced, but consider that LFQ, by the definition given in equation 8, is mathematically quite dependent on LFX (whereas the inverse seems less so).
This can be seen by considering that, all other metrics being equal, a sub with extension down to 15 Hz (low LFX) is mathematically likely to have a lower LFQ value (less mean deviation from the average listening window amplitude) than a sub that only extends to 30 Hz, merely due to having a greater proportion of its frequency response from its -6 dB point to 120 Hz (upper bound of the LFQ variable) closer to the listening window average. This is due to the bass roll-off being counted by the LFQ formula as deviation from the average listening window amplitude. So LFQ is effectively ‘weighted’ by LFX, meaning in practice the contribution low-frequency extension and bass amplitude deviation have to predicting preference is likely more equally weighted in this model than Olive’s stated percentages for LFX and LFQ contribution suggest. (The partial dependency of LFQ on LFX, or their ‘collinearity’, is evidenced by their close proximity in the ‘correlation circle’ in Figure 3 of Olive’s paper, in which variables that are closer are more strongly correlated with each other.)
We can now calculate the 'y-intercept' of the formula, by using the ideal scenario in which the -6 dB point is 14.5 Hz (as previously calculated by setting Olive's preference rating to a perfect 10 in his formula), and so LFX = log10(14.5) ≈ 1.16, and LFQ = 0, i.e. no deviations from the 60-80 Hz listening window average between the -6 dB point and 120 Hz. (Obviously, the latter is not physically possible, as the bass roll-off cannot be infinitely steep, but this is an ideal case after all.) Plugging these numbers into our formula for the case of a maximum score of 10, we get a y-intercept of 11.49.
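The intercept arithmetic is easy to re-derive: require the formula to output 10 at the ideal LFX with LFQ = 0 and solve for the constant.

```python
import math

# With the ideal LFX of log10(14.5) and LFQ = 0, the rating must come
# out to a perfect 10, so:
# 10 = b - 1.28*log10(14.5) - 0.66*0  =>  b = 10 + 1.28*log10(14.5)
intercept = 10 + 1.28 * math.log10(14.5)
print(round(intercept, 2))  # -> 11.49
```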
So finally, we arrive at a formula for subwoofer ratings using both LFX and LFQ variables of:
11.49 – 1.28*LFX(sub) – 0.66*LFQ(sub)
Where LFX(sub) is the log10 of the first frequency x_SP below 60 Hz in the sound power curve that is -6 dB relative to the mean level y_LW(sub) measured in the listening window LW(sub) between 60 Hz and 80 Hz, and LFQ(sub) is the mean absolute deviation of the sound power level from y_LW(sub), averaged across the N bands from the frequency defined by LFX(sub) up to 120 Hz.
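To make the two definitions concrete, here is a minimal sketch of the whole rating applied to a measured sound power curve. It simplifies band handling (LFQ(sub) is averaged per measurement point rather than per band, which is close for smooth data), and the function and variable names are my own.

```python
import math

def sub_rating(freqs_hz, spl_db):
    """Sketch of the proposed subwoofer rating. freqs_hz and spl_db are
    parallel lists describing the sub's sound power curve, ascending in
    frequency."""
    # Mean level y_LW(sub) in the 60-80 Hz listening window.
    window = [s for f, s in zip(freqs_hz, spl_db) if 60 <= f <= 80]
    y_lw = sum(window) / len(window)
    # LFX(sub): first frequency below 60 Hz (scanning downward) that is
    # -6 dB relative to y_LW; fall back to the lowest measured point.
    f_lfx = freqs_hz[0]
    for f, s in sorted(zip(freqs_hz, spl_db), reverse=True):
        if f < 60 and s <= y_lw - 6:
            f_lfx = f
            break
    lfx = math.log10(f_lfx)
    # LFQ(sub): mean absolute deviation from y_LW between f_lfx and 120 Hz.
    devs = [abs(s - y_lw) for f, s in zip(freqs_hz, spl_db)
            if f_lfx <= f <= 120]
    lfq = sum(devs) / len(devs)
    return 11.49 - 1.28 * lfx - 0.66 * lfq

# A perfectly flat 80 dB curve from 10-120 Hz never hits -6 dB, so
# LFX(sub) = log10(10) = 1 and LFQ(sub) = 0:
flat = sub_rating(list(range(10, 130, 10)), [80.0] * 12)
print(round(flat, 2))  # -> 10.21
```

A curve that rolls off below 40 Hz scores lower, as both its LFX(sub) and its LFQ(sub) are penalized, which is exactly the behaviour we want.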
I think this would be a good rating formula to use and present for each subwoofer measured, as it takes into account both low-frequency extension and quality (deviation from flat). However, I also think you should post the -6 dB lower frequency point and the individual calculated values of the LFX(sub) and LFQ(sub) variables for all subs, as well as the four variables in Olive's speaker formula (NBD_ON, NBD_PIR, SM_PIR and LFX) for all speakers measured, for two reasons. First, posting the values of all variables would give a nice comparative breakdown of a subwoofer's or speaker's performance, so individual areas of performance can be compared between different speakers / subs, in addition to a total score. Second, having the values of all variables available for everyone to see and use would allow them to create formulas for their specific set-up: for example, the ‘maximum potential rating’ for subwoofers used with ideal speakers as I explained at the beginning of this post, or combining the rating of any speaker you have measured with a sub's rating, by replacing the LFX value of the speaker in Olive's rating formula with the LFX(sub) value of the subwoofer. You could add all these variable values to the end of each review, in the form of a table or an Excel spreadsheet file, for example.