
Master Preference Ratings for Loudspeakers

tuga · Major Contributor · Joined Feb 5, 2020 · Oxford, England
Nothing to do with the preference rating, but I realized a few days ago that I have been calculating +/-dB windows wrong for frequency response.

I was simply taking the highest and lowest SPL in the spec’d range and subtracting them and dividing by 2.
However, +/-3dB doesn’t mean a 6dB window, it means a 7dB window (+/-2dB means 5dB, etc.). So now I have to go back through all of them and change the formula.

[(max SPL - min SPL) -1] /2

I feel for you. Good luck.
 

pierre · Addicted to Fun and Learning · Forum Donor · Joined Jul 1, 2017 · Switzerland
Nothing to do with the preference rating, but I realized a few days ago that I have been calculating +/-dB windows wrong for frequency response.

I was simply taking the highest and lowest SPL in the spec’d range and subtracting them and dividing by 2.
However, +/-3dB doesn’t mean a 6dB window, it means a 7dB window (+/-2dB means 5dB, etc.). So now I have to go back through all of them and change the formula.

[(max SPL - min SPL) -1] /2

I am confused. Where does that come from?
 

LTig · Master Contributor · Forum Donor · Joined Feb 27, 2019 · Europe
Nothing to do with the preference rating, but I realized a few days ago that I have been calculating +/-dB windows wrong for frequency response.

I was simply taking the highest and lowest SPL in the spec’d range and subtracting them and dividing by 2.
However, +/-3dB doesn’t mean a 6dB window, it means a 7dB window (+/-2dB means 5dB, etc.).
No, it's still a 6dB range, but you need 7 markers. It's called the "ladder problem".
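The distinction is easy to check in code; a minimal sketch (the SPL values are made up):

```python
# Fence-post check: a +/-3dB spec means a 6dB max-min range,
# which crosses 7 integer tick marks on a graph (the "ladder problem").
def window_stats(spl_values):
    """Return the +/- half-window and the integer tick marks spanned."""
    rng = max(spl_values) - min(spl_values)  # 6 dB for a +/-3 dB spec
    half_window = rng / 2                    # the "+/-" figure: 3 dB
    ticks = int(rng) + 1                     # marks spanned: 7, not 6
    return half_window, ticks

print(window_stats([84, 87, 90, 86, 88]))  # (3.0, 7)
```

So (max SPL - min SPL) / 2 was the right formula all along; only the tick count is 7.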
 
MZKM (OP) · Major Contributor · Forum Donor · Joined Dec 1, 2018 · Land O' Lakes, FL
i am confused. Where does that come from?
Dammit, never mind. The calculation is correct.

I was thinking of something else; @LTig got it right.

+/-3dB is a window covering 7 tick marks on a graph, but subtracting the max & min still gives 6. So I was right about the window, but wrong about needing to change the formula.
Saves me a lot of trouble.
 

LTig · Master Contributor · Forum Donor · Joined Feb 27, 2019 · Europe
Dammit, never mind. The calculation is correct.

I was thinking of something else; @LTig got it right.

+/-3dB is a window covering 7 tick marks on a graph, but subtracting the max & min still gives 6. So I was right about the window, but wrong about needing to change the formula.
Saves me a lot of trouble.
Very good. Don't blame yourself too much, though. You're neither the first to be hit by the ladder problem nor the last; every coder gets hit by it at least once.
 
MZKM (OP) · Major Contributor · Forum Donor · Joined Dec 1, 2018 · Land O' Lakes, FL
So, after finding out that Google Sheets has the LINEST function to derive variable weights, I'm much more motivated to try to make a new formula. I did test it out, and on the current scores it generates the same formula as Olive's.
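For anyone wanting to reproduce this outside Sheets: LINEST is ordinary least squares, which numpy does directly. The component values and ratings below are invented purely to show the mechanics:

```python
import numpy as np

# Hypothetical component scores (columns) for four speakers (rows),
# e.g. NBD_ON, NBD_PIR, LFX -- the numbers are made up.
X = np.array([
    [0.4, 0.3, 1.6],
    [0.6, 0.5, 1.9],
    [0.3, 0.1, 1.4],
    [0.8, 0.7, 2.1],
])
y = np.array([6.2, 4.8, 7.0, 3.9])  # listener preference ratings

# Append an intercept column and solve for the weights, as LINEST does.
A = np.column_stack([X, np.ones(len(X))])
weights, *_ = np.linalg.lstsq(A, y, rcond=None)
print(weights)  # one weight per component, plus the intercept
```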

What I will be doing is "normalizing" the curves other than the on-axis so that they have a 0 slope, as this is the only explanation I can come up with (besides bass) for why the formula for all 70 speakers (which includes 3-way towers) is so different from the one for the 13 bookshelves (all 2-way).

As for what range to calculate slope over, I've been using 100Hz-16kHz, which is the range SM & AAD care about.
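A sketch of the normalization step, assuming it means fitting a straight line to the curve over 100Hz-16kHz in log-frequency and subtracting it:

```python
import numpy as np

def normalize_slope(freqs, spl, lo=100.0, hi=16000.0):
    """Remove the best-fit slope (fitted over lo..hi Hz in log-frequency)
    so the curve has 0 slope on average, leaving only the deviations."""
    freqs = np.asarray(freqs, dtype=float)
    spl = np.asarray(spl, dtype=float)
    band = (freqs >= lo) & (freqs <= hi)
    slope, intercept = np.polyfit(np.log10(freqs[band]), spl[band], 1)
    return spl - (slope * np.log10(freqs) + intercept)

# Sanity check: a perfectly straight tilted line normalizes to all zeros.
f = np.array([100.0, 1000.0, 10000.0, 16000.0])
tilted = 90.0 - 1.0 * np.log10(f)  # a -1 dB/decade tilt
flat = normalize_slope(f, tilted)
print(np.allclose(flat, 0.0))  # True
```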

The question remains, though: how many components should I calculate? Looking at the correlation for all components on all curves for the original 13 bookshelves that Olive already calculated:
[screenshot: correlation of all components on all curves for the original 13 bookshelves]


AAD for the curves with little/no slope (on-axis & LW) is better than NBD, so I'm thinking of dropping NBD for now and just doing the AAD on the ON, normalized LW, and normalized PIR.

So: AAD_ON, AAD_LW (normalized), AAD_PIR (normalized), LFX, and LFQ.

No clue why LFQ was disregarded for the final formula, as it analyzes the bass roll-off; if two speakers have identical Spins from 100Hz up and the same -6dB point in the bass, they get the same score, regardless of whether one has a shallower roll-off and thus more bass before the -6dB point.
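A sketch of the two bass components as I read Olive's definitions (LFX: log10 of the frequency where the sound power falls 6dB below the mean listening-window level over 300Hz-10kHz; LFQ: mean absolute deviation of the sound power between that frequency and 300Hz). Treat the details as assumptions; the data below is invented:

```python
import numpy as np

def lfx_lfq(freqs, sound_power, listening_window):
    """LFX/LFQ sketch; band edges and reference per my reading of Olive."""
    freqs = np.asarray(freqs, dtype=float)
    sp = np.asarray(sound_power, dtype=float)
    lw = np.asarray(listening_window, dtype=float)
    ref = lw[(freqs >= 300) & (freqs <= 10000)].mean()
    below = freqs[sp <= ref - 6.0]              # where SP is 6 dB or more down
    f_6db = below.max() if below.size else freqs.min()
    lfx = np.log10(f_6db)
    bass = (freqs >= f_6db) & (freqs <= 300)
    lfq = np.abs(sp[bass] - ref).mean()         # deviation over the roll-off
    return lfx, lfq

# Invented example: flat LW at 90 dB, SP rolling off below 100 Hz.
freqs = np.array([20, 40, 60, 80, 100, 300, 1000, 10000], dtype=float)
lw = np.full(freqs.size, 90.0)
sp = np.array([78, 82, 84, 88, 90, 90, 90, 90], dtype=float)
lfx, lfq = lfx_lfq(freqs, sp, lw)
print(lfx, lfq)  # the -6 dB point lands at 60 Hz here
```

This illustrates the point above: with the same -6dB point, a shallower roll-off produces smaller deviations and thus a better LFQ.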

Should Smoothness be scrapped for now? The only saving grace of a normalized curve is that it is more forgiving of deviations the higher in frequency they appear; so, even with log-spaced data, should the 10kHz-16kHz region be weighted more lightly than the 1kHz-1.6kHz region?

The only psychoacoustic aspect I can't figure out how to deal with is the claim that dips are not as bad as peaks.

____________________
Thoughts?
 

andreasmaaan · Master Contributor · Forum Donor · Joined Jun 19, 2018
So, after finding out that Google Sheets has the LINEST function to derive variable weights, I'm much more motivated to try to make a new formula. I did test it out, and on the current scores it generates the same formula as Olive's.

What I will be doing is "normalizing" the curves other than the on-axis so that they have a 0 slope, as this is the only explanation I can come up with (besides bass) for why the formula for all 70 speakers is so different from the one for the 13 bookshelves.

As for what range to calculate slope over, I've been using 100Hz-16kHz, which is the range SM & AAD care about.

The question remains, though: how many components should I calculate? Looking at the correlation for all components on all curves for the original 13 bookshelves that Olive already calculated:

AAD for the curves with little/no slope (on-axis & LW) is better than NBD, so I'm thinking of dropping NBD for now and just doing the AAD on the ON, normalized LW, and normalized PIR.

So: AAD_ON, AAD_LW (normalized), AAD_PIR (normalized), LFX, and LFQ.

Should Smoothness be scrapped for now? The only saving grace of a normalized curve is that it is more forgiving of deviations the higher in frequency they appear; so, even with log-spaced data, should the 10kHz-16kHz region be weighted more lightly than the 1kHz-1.6kHz region?

The only psychoacoustic aspect I can't figure out how to deal with is the claim that dips are not as bad as peaks.

Thoughts?

Great that you're looking into this :)

Is there a higher "cost", e.g. in terms of processing power, to simply using all the factors and weighting them in terms of the absolute value of their correlation coefficient? Would seem to be the most elegant way to handle it IMO.
Penalize positive deviations twice as hard as negative deviations using the average level of 200-400 Hz as the reference band.
The thing is, though, we don't know how much worse peaks are than dips.

Bückheim investigated this in some detail. His findings suggest that a weighting factor of more than two would be in order, I think. Do you have AES access? If not, I can summarise the findings if that would help.
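The asymmetric penalty suggested above is straightforward to sketch; the factor of two and the 200-400Hz reference band come from the suggestion in this thread, not from any published formula:

```python
import numpy as np

def asymmetric_aad(freqs, spl, peak_weight=2.0):
    """AAD variant that penalizes peaks harder than dips: deviations are
    measured from the mean of the 200-400 Hz band, and positive deviations
    (peaks) are scaled by peak_weight before averaging."""
    freqs = np.asarray(freqs, dtype=float)
    spl = np.asarray(spl, dtype=float)
    ref = spl[(freqs >= 200) & (freqs <= 400)].mean()
    dev = spl - ref
    penalty = np.where(dev > 0, peak_weight * dev, -dev)
    return float(penalty.mean())

# A +3 dB peak now costs twice as much as a -3 dB dip:
print(asymmetric_aad([200, 300, 400, 1000, 2000], [90, 90, 90, 93, 87]))  # 1.8
```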
 
MZKM (OP) · Major Contributor · Forum Donor · Joined Dec 1, 2018 · Land O' Lakes, FL
Great that you're looking into this :)

Is there a higher "cost", e.g. in terms of processing power, to simply using all the factors and weighting them in terms of the absolute value of their correlation coefficient? Would seem to be the most elegant way to handle it IMO.

With or without "normalizing" the LW/PIR?

Using the data as-is, Olive no doubt tried every combination to find which collection of variables had the highest correlation. Which, again, becomes less correlated once you expand the sample to include towers, especially 3-ways.

With normalizing, the only tedious part is the normalization calculation. I have already done the LFQ for all 13 bookshelves and I have the calculation for AAD for the On-axis done, just need to apply it to all 13 sheets.

That's the issue on my end: all the data is isolated, so if I change anything, I have to make that change in each individual data file. If I achieve what I want to do, I'm going to dread going through the >100 speakers we have measured and applying this change.
 

andreasmaaan · Master Contributor · Forum Donor · Joined Jun 19, 2018
With or without "normalizing" the LW/PIR?

With normalisation, I was thinking. Not necessarily of the LW, as its slope won't tend to differ significantly from the ON, but certainly of the PIR. Would that be a lot more work? :/
 
MZKM (OP) · Major Contributor · Forum Donor · Joined Dec 1, 2018 · Land O' Lakes, FL
With normalisation, I was thinking. Not necessarily of the LW, as its slope won't tend to differ significantly from the ON, but certainly of the PIR. Would that be a lot more work? :/
I'd just have to normalize it, which isn't terrible if it's just for AAD; it's really not fun for NBD.


In the Olive paper, even with all 70 speakers, they found the ideal slope of the LW to be -0.2 (same as with the 13 bookshelves), so there is a slope, albeit a very small one.
 

andreasmaaan · Master Contributor · Forum Donor · Joined Jun 19, 2018
I'd just have to normalize it, which isn't terrible if it's just for AAD; it's really not fun for NBD.

In the Olive paper, even with all 70 speakers, they found the ideal slope of the LW to be -0.2 (same as with the 13 bookshelves), so there is a slope, albeit a very small one.

Ok, that's clear.

Yeh, I wouldn't necessarily worry about the LW personally.

Re: AAD vs NBD, it seems that NBD is much more strongly correlated than AAD. Indeed it's only in terms of SP that AAD is really correlated in any significant way, and even in that case its correlation coefficient (absolute value) is lower than that of NBD and SM.

Not that I'm suggesting you have to do all the extra work :) But my feeling is that, if you're going to normalise these data for the sake of increasing the validity of the model, ignoring SM and NBD might be just as great a step backwards as normalisation is a step forwards.
 
MZKM (OP) · Major Contributor · Forum Donor · Joined Dec 1, 2018 · Land O' Lakes, FL
Yes, but it’s only in respect of ER, PIR and SP that normalisation is likely to be of any appreciable benefit.
The only reason NBD is better correlated for those is the slope: it is basically the same as AAD, but instead of covering the whole frequency range at once, it splits it into portions and then averages those portions.
So, if the data is sloped, of course looking at the whole range at once is not going to fare well.

Intuitively, NBD should be better, but the fact that AAD is better for ON & LW says otherwise. So, if we normalize the other curves (I’m just doing PIR for now), then the normalized AAD should prove more useful than the normalized NBD.
 

andreasmaaan · Master Contributor · Forum Donor · Joined Jun 19, 2018
@MZKM yep, that makes sense to me.

(I had misunderstood how NBD worked. I had thought it looked at deviation within each band, but it seems you’re saying it looks at deviation between each band.)
 
MZKM (OP) · Major Contributor · Forum Donor · Joined Dec 1, 2018 · Land O' Lakes, FL
@MZKM yep, that makes sense to me.

(I had misunderstood how NBD worked. I had thought it looked at deviation within each band, but it seems you’re saying it looks at deviation between each band.)
It separates the curve into bands, calculates the absolute average deviation from each band's own average within that band, then averages those deviations across all bands used.

Since it’s using bands, the lowest and highest frequency in each band should not be too far apart for a good speaker; but if using AAD on the PIR, 16kHz for sure should be a good deal lower than 100Hz, thus more deviation. Ideally, once normalized, PIR should look identical to ON/LW above 100Hz.
 

andreasmaaan · Master Contributor · Forum Donor · Joined Jun 19, 2018
It separates the curve into bands, calculates the absolute average deviation from each band's own average within that band, then averages those deviations across all bands used.

Ah, that is basically how I’d thought it worked. But anyway, you’ve convinced me that AAD would do an adequate job :)

I’d still like to see heavier penalties for peaks than for dips ideally, but can understand if that’s a bridge too far at this stage.
 