
Speaker Equivalent SINAD Discussion

Rick Sykora

Major Contributor
Forum Donor
Joined
Jan 14, 2020
Messages
3,613
Likes
7,347
Location
Stow, Ohio USA
I don't think there were any powered speakers in there. Or Pro/nearfield monitors. There was only one dipole (Martin Logan).

My post was envisioning a future speaker (ASR) database with a lot more entries. ;)
 
Last edited:

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
I don't think there were any powered speakers in there. Or Pro/nearfield monitors.

If you're talking about Sean Olive's study his preference formula is based on, he did say in his AES paper that the speaker sample included active pro near-field monitors.
 

pierre

Addicted to Fun and Learning
Forum Donor
Joined
Jul 1, 2017
Messages
964
Likes
3,058
Location
Switzerland
Great! Not only does this confirm the claim about the Spinorama being "all you need", it also makes it especially interesting to digitize all the existing Harman Spinoramas using, for example, https://automeris.io/WebPlotDigitizer/. I will probably try it out a bit later. Right now I am itching to look into making Spinoramas from the https://www.princeton.edu/3D3A/Directivity.html data using VituixCAD + Octave :)

Thank you for the excellent Excel work! :)

3D3A does distribute the data, so you don't need to use WebPlotDigitizer. They give 2 IR files per speaker. After an FFT they are back in the frequency domain, and then you can compute the spin from them. Python code is here for the spin part and here for the FR part. Hope that helps.
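That IR-to-frequency-domain step can be sketched in a few lines of Python (NumPy assumed; `ir_to_spl` and its arguments are illustrative names, not part of the 3D3A distribution):

```python
import numpy as np

def ir_to_spl(ir, fs):
    """Turn an impulse response into a magnitude response in dB.

    ir: 1-D array of impulse-response samples; fs: sample rate in Hz.
    Returns (freqs, spl_db) up to the Nyquist frequency.
    """
    spectrum = np.fft.rfft(ir)
    freqs = np.fft.rfftfreq(len(ir), d=1.0 / fs)
    # magnitude in dB; the tiny floor avoids log10(0)
    spl_db = 20 * np.log10(np.abs(spectrum) + 1e-12)
    return freqs, spl_db

# sanity check: a unit impulse has a flat (0 dB) magnitude response
fs = 48000
ir = np.zeros(1024)
ir[0] = 1.0
freqs, spl = ir_to_spl(ir, fs)
```

From there, the on-axis and off-axis responses feed the usual spin computation.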
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
So I'm currently in the process of implementing Olive score calculation in Loudspeaker Explorer and cross-checking with @MZKM's source spreadsheets to make sure I get it right. After re-reading the debates around how to interpret the paper, I'm still confused about the interpretation of NBD:

[Attachment: the NBD formula from Olive's paper]


To me, a strict implementation of the formula shown above is the following:
  1. For each measurement point, compute the absolute deviation between the point SPL and the mean SPL of the ½-octave band it's in.
  2. Sum all these absolute deviations together.
  3. Divide by the number of ½-octave bands (N).

However, looking at @MZKM's source spreadsheets it doesn't look like that's what he implemented. Instead, what he implemented is:
  1. For each measurement point, compute the absolute deviation between the point SPL and the mean SPL of the ½-octave band it's in.
  2. Compute the mean of these absolute deviations within each ½-octave band.
  3. Sum all these means together.
  4. Divide by the number of ½-octave bands (N).

There's an additional mean in there that doesn't appear at all in Olive's formula.

@bobbooo seems to agree with @MZKM:

So y-bar minus y-sub(b) is the amplitude deviation of each of these 10 equally log-spaced (in frequency) points from their average amplitude in a particular band. This is then averaged to arrive at a mean deviation of these points from the average in that band. This is done for each band in the 100 Hz to 12 kHz range, then finally the mean of all these average deviations is taken, to arrive at the Narrow Band Deviation metric for the speaker as a whole.
NBD is an average of the mean deviation within each 1/2-octave band

What makes this even more confusing is that, in the explanation of the formula, the paper does mention the concept of "mean absolute deviation within each ½-octave band" (see above), but a strict interpretation of the formula doesn't use that concept. Is this why you guys decided to deviate from the formula? Or are there additional hints that I missed?

(By the way, another way to interpret the formula is to start from my interpretation but then assume the definition of N is wrong and it's actually the total number of points, not the number of ½-octave bands. This interpretation would be practically identical to @MZKM's, because in practice each ½-octave band has an equal number of points.)
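To make the two readings concrete, here is a small sketch on toy data (the band contents and the 10-points-per-band layout are made up for illustration); with an equal point count per band, the strict reading is exactly the per-band point count times the other one:

```python
import numpy as np

def nbd_strict(bands):
    # strict reading: sum every absolute deviation, divide by the band count N
    total = sum(float(np.sum(np.abs(b - np.mean(b)))) for b in bands)
    return total / len(bands)

def nbd_per_band_mean(bands):
    # MZKM's reading: mean absolute deviation per band, averaged over bands
    return sum(float(np.mean(np.abs(b - np.mean(b)))) for b in bands) / len(bands)

# toy data: 3 half-octave bands of 10 SPL points each
rng = np.random.default_rng(0)
bands = [rng.normal(80.0, 1.0, 10) for _ in range(3)]
```

So the two interpretations differ only by a constant factor of 10 as long as every band holds 10 points, which is why the alternative "N = total number of points" reading collapses into MZKM's.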
 
Last edited:
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,556
Location
Land O’ Lakes, FL
edechamps said:
[quoted in full above: the post on interpreting the NBD formula]
If finding the average, you need to divide by the cardinality, and that makes sense with the score. And as you point out, the paper even mentions the mean absolute deviation within each ½-octave band. However, if we change it to N = total points, it won't make a difference.

Dividing by 10 and then dividing by 14 is the same as simply dividing by 140 (well, provided the software has a precision of at least 10 decimal places).
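The equivalence is easy to check numerically; the 14 bands × 10 points layout below mirrors the 100 Hz to 12 kHz range, with made-up deviations:

```python
import numpy as np

# 14 half-octave bands of 10 points each, filled with made-up absolute deviations
rng = np.random.default_rng(1)
devs = np.abs(rng.normal(0.0, 1.0, (14, 10)))

# mean within each band, then divide the sum of those means by 14...
per_band_then_bands = devs.mean(axis=1).sum() / 14
# ...is the same as a single division of the grand total by 140
all_points_at_once = devs.sum() / 140
```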
 
Last edited:

pierre

Addicted to Fun and Learning
Forum Donor
Joined
Jul 1, 2017
Messages
964
Likes
3,058
Location
Switzerland
edechamps said:
[quoted in full above: the post on interpreting the NBD formula]

Hi Etienne,
here is how I implemented it; I still do not understand the sampling part. I average over all points in the interval, not over 10 equally spaced ones.

def nbd(dfu):
    # dfu has Freq (Hz) and dB columns; octave(2) yields (min, max) ½-octave bands
    total = 0.0
    n = 0
    for (omin, omax) in octave(2):
        # keep only the 100 Hz to 12 kHz range
        if omin < 100:
            continue
        if omax > 12000:
            break
        y = dfu.loc[(dfu.Freq >= omin) & (dfu.Freq < omax)].dB
        y_avg = np.mean(y)
        # don't sample, take all points in this ½-octave band
        total += np.mean(np.abs(y_avg - y))
        n += 1
    return total / n
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
I still do not understand the sample part. I average on all points in the interval and not 10 equally spaced.

Data published in most @amirm datasets is already in the right format: it's 20 points/octave, equally spaced in log frequency. This is why your code is correct without having to do any resampling. You do "average on 10 equally spaced" points (per ½-octave band) already because that's the shape of the input data.

However, if you're processing data from other datasets, they might require some resampling to get them into the proper format. One could argue that, since everything gets divided by the total number of points in the end anyway, the result will still be correct, but in practice I don't think that's entirely true because fewer points basically means smoothing which likely means smaller deviations which means better score (and vice-versa in the other direction). We have this problem for example with @amirm's early JBL LSR 305P MkII data which is 10 points/octave, not 20, which means NBD could be underestimated. There's also the problem that if points are linearly spaced instead of log spaced, it will bias towards the top end of each ½-octave band. Ideally, everything should be computed at 20 points/octave equally spaced in log frequency to ensure we stick as close as possible to the Olive model.
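A minimal resampling sketch, assuming linear interpolation in (log frequency, dB) space; the function name and defaults are illustrative, and other interpolation choices will give slightly different scores:

```python
import numpy as np

def resample_log(freqs, spl, f_min=20.0, f_max=20000.0, ppo=20):
    """Resample a response onto a log-spaced grid with ppo points per octave."""
    n = int(round(np.log2(f_max / f_min) * ppo)) + 1
    new_freqs = f_min * 2 ** (np.arange(n) / ppo)
    # interpolate linearly in (log2 f, dB) space
    new_spl = np.interp(np.log2(new_freqs), np.log2(freqs), spl)
    return new_freqs, new_spl

# sanity check on a flat, linearly spaced input
freqs = np.linspace(20.0, 20000.0, 5000)
spl = np.full_like(freqs, 85.0)
nf, ns = resample_log(freqs, spl)
```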
 
Last edited:

pierre

Addicted to Fun and Learning
Forum Donor
Joined
Jul 1, 2017
Messages
964
Likes
3,058
Location
Switzerland
edechamps said:
[quoted in full above: the post on 20 points/octave and resampling]

Resampling has its own issues: depending on your interpolation, the result will also change.

Why would fewer points smooth the results? It is data dependent. If you pick only 1 point and that point is where you have the maximum divergence, then you emphasize it in the result.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
resampling has its own issues: depending on your interpolation, the result will also change. Why would fewer points smooth the results? It is data dependent. If you pick only 1 point and that point is where you have the maximum divergence, then you emphasize it in the result.

When I said "fewer points basically means smoothing", what I meant was, lower-resolution data typically comes from higher-resolution data that has been smoothed. For example, @amirm makes measurements that have a linear resolution of 2.7 Hz. At high frequencies that's way above 20 points/octave. Evidently the 2.7 Hz linear-spaced data (presumably the result of an FFT) went through smoothing to obtain the 20 pts/octave data.

I agree it's not necessarily true that lower-resolution data is obtained by smoothing higher-resolution data (one could use decimation instead, for example), but in practice I doubt anyone does it any differently. It's an implicit assumption everyone makes.

The problem is that calculating NBD on datasets that have different smoothing strengths is unfair, because the point of NBD is to penalize local variations, but the smoothing process removes local variations.

The Olive model expects data that is smoothed to 20 points/octave, no more, no less. It says so very explicitly in section 3.2 of Part II of the AES paper:

All the anechoic measurements have high frequency resolution (2 Hz) from 2 Hz to 20 kHz with a 1/20-octave smoothing filter applied to the raw data.
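For intuition, a naive fractional-octave smoother can be written as below; this is a sketch of the concept (a flat moving average over a ±1/(2N)-octave window), not Klippel's or Olive's actual filter:

```python
import numpy as np

def octave_smooth(freqs, spl, fraction=20):
    """Smooth spl with a 1/fraction-octave flat moving average."""
    half_bw = 2 ** (1.0 / (2 * fraction))  # half-bandwidth as a frequency ratio
    out = np.empty_like(spl)
    for i, f in enumerate(freqs):
        # average every point within +/- 1/(2*fraction) octave of f
        mask = (freqs >= f / half_bw) & (freqs <= f * half_bw)
        out[i] = np.mean(spl[mask])
    return out

# a flat response is unchanged by smoothing
freqs = np.logspace(np.log10(20.0), np.log10(20000.0), 600)
smoothed = octave_smooth(freqs, np.full_like(freqs, 85.0))
```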
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,556
Location
Land O’ Lakes, FL
Well, I just noticed that manually calculating the PIR gives different results compared to the data the NFS is spitting out. I let Amir know and he has contacted Klippel.

KEF R3 example:
[Attachment: chart comparing the manually calculated PIR with the NFS output]

The result was an ~0.1 boost in score (including when ignoring LFX).
I really am dreading having to go back through the ~50 speakers measured thus far and make corrections, as well as edit my posts in the reviews.
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,996
Likes
6,864
Location
UK
MZKM said:
[quoted in full above: the post on the PIR discrepancy found in the NFS output]
I only just found this thread, interesting! So what was the end result from the contacts, etc, that was early May?
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,663
Likes
240,999
Location
Seattle Area
I only just found this thread, interesting! So what was the end result from the contacts, etc, that was early May?
Klippel misunderstood the issue and gave me an answer unrelated to the problem at hand. I pointed that out, but then did not hear anything back. I have not pestered them, since it is a small issue and I try to be careful with what I bring to their attention.
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,556
Location
Land O’ Lakes, FL
I only just found this thread, interesting! So what was the end result from the contacts, etc, that was early May?

The Early Reflection curve is not computed correctly, and since the PIR uses the ER curve, it too is not computed correctly. Amir has a manual correction for the ER, but he states it's a pain so he doesn't use it; I'm not sure if it would also fix the PIR, or if it fixes things after everything has been computed.
Klippel misunderstood the issue and gave me an answer unrelated to the problem at hand. I pointed that out, but then did not hear anything back. I have not pestered them, since it is a small issue and I try to be careful with what I bring to their attention.
And I now manually calculate all the curves.

To show how much a pain it is, here is the formula for the PIR:
[Attachment: screenshot of the PIR formula spreadsheet]
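In code the combination itself is short — the tedium is in assembling the LW, ER, and SP curves from all the individual angle measurements first. A sketch, assuming the commonly cited 12% / 44% / 44% weighting combined in the energy domain (treat both assumptions as things to verify against the patent):

```python
import numpy as np

def pir(lw_db, er_db, sp_db):
    """Predicted In-Room response from Listening Window, Early Reflections
    and Sound Power curves (all in dB, on the same frequency grid)."""
    lw = 10 ** (lw_db / 10)  # back to an energy-like quantity
    er = 10 ** (er_db / 10)
    sp = 10 ** (sp_db / 10)
    return 10 * np.log10(0.12 * lw + 0.44 * er + 0.44 * sp)

# the weights sum to 1, so identical flat inputs come back unchanged
flat = np.full(4, 85.0)
pir_flat = pir(flat, flat, flat)
```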
 
Last edited:

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,767
Likes
37,626
MZKM said:
[quoted in full above: the post on the Early Reflection curve and the PIR formula]
Looks pretty simple to me. So easy, in fact, that there's no reason I should do it. You'd hardly break a sweat on something color-coded and all.

:p
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,556
Location
Land O’ Lakes, FL
Looks pretty simple to me. So easy, in fact, that there's no reason I should do it. You'd hardly break a sweat on something color-coded and all.

:p
It's not difficult at all, but it is tedious as hell, especially as I don't have dual monitors, so I have to keep scrolling and changing windows/tabs to make sure I'm getting everything right (avoiding simple mistakes like grabbing the frequencies instead of the SPLs for a selected degree).

I hate tedious things, which is why I still haven't updated all the previously measured speakers yet; I think I still have ~30 to go.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
It's not difficult at all, but it is tedious as hell, especially as I don't have dual monitors, so I have to keep scrolling and changing windows/tabs to make sure I'm getting everything right (avoiding simple mistakes like grabbing the frequencies instead of the SPLs for a selected degree).

Well, you're clearly reaching the scalability limits of spreadsheets. Actually most people would argue you went way past these limits a long time ago! I'm impressed by your work, but if I were you I would consider writing some actual code… it's only going to get worse, especially given Amir's extreme review rate.

Personally, the thing that scares me most about spreadsheets is not so much the tediousness and poor readability of the formulas (though that's a big turn-off); it's that it's very easy to introduce subtle bugs. For example, a range that covers the wrong column, or misses a row, etc. And it's almost impossible to spot them visually because the formula is hidden until you click on the cell. At least when I'm coding I have the whole code right in front of me, and it's easy to inspect and check.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,767
Likes
37,626
MZKM said:
[quoted in full above: the post on the tedium of manual calculation]
Trust me in case it didn't come across, I'm very appreciative of what you have done for us on this. I know how tedious it would be.

Speaking of monitors, my sister has been allowed to work from home in the last few weeks. Her company purchased 12-inch Surface Pros for her and a few others to use from home. I can't understand why: she works with 5 to 9 open spreadsheets at any given time and has three monitors at work. At the very least they could have gotten a 15-inch laptop. I did give up one of my monitors for her to use in the interim.
 

6speed

Active Member
Joined
Nov 7, 2018
Messages
128
Likes
84
Location
Virginia, USA
I have some off axis measurements of my own speakers and would like to see if I can calculate preference ratings for them. Does anyone have any advice on how to get started using the existing ASR process? It seems like I can use VituixCAD to generate the various responses, but the spreadsheets are built assuming a certain number of data points (and VituixCAD saves >2x as many data points to FRD). Is that the only hurdle?
 

Maiky76

Senior Member
Joined
May 28, 2020
Messages
446
Likes
3,754
Location
French, living in China
Hi,

I have been playing with implementing the PPR with Matlab for the Kali IN8:
https://www.audiosciencereview.com/...udio-in-8-studio-monitor-review.10897/page-51

I have tried to reproduce the score from @MZKM (big thanks to him for his help!).
Using the sheet from the thread, a couple of comments/questions:
  1. The score from the attached NHT sheet is wrong, but the score on the Master Preference Ratings for Loudspeakers looks correct, so it's probably an older version.
    The issue is at least with the NBD values, which are wrong.
  2. For the IN8, using the sheet that @MZKM shared with me, I can more or less reproduce his score:
@MZKM NBD_ON = 0.7159592857 vs 0.7160
@MZKM NBD_PIR = 0.3775553044 vs 0.3941
@MZKM NBD_LW = 0.4736059742 vs 0.4736
@MZKM SM_PIR = 0.880 vs 0.8752
@MZKM LFX = 1.55 vs 1.572 (probably first vs closest point lower/higher)
@MZKM Score = 5.12 vs 5.06, or 1.2% lower
I just can't see why the NBD_PIR deviates beyond rounding error, not sure…

For the NHT, @MZKM Score = 2.70 vs 2.70 too.

  3. For the SM_PIR, the target slope is given in paragraph 0077 of the patent and should be -1.75.

However, this means the slope target is a linear decrease of the PIR of 12.1 dB from 20 to 20000 Hz.
It seems that was later revised to 9-10 dB after additional experiments on room correction, so should we use that instead?
-1.30/-1.38/-1.45 are the slopes for 9.0/9.5/10 dB.
It does not seem to affect the score, though?
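The slope question can be checked directly with a small sketch that fits the regression in (log10 f, dB) space and returns both the r² (which is what SM measures) and the slope; the 100 Hz to 16 kHz band follows the paper, and the function name and slope units (dB per decade) are illustrative:

```python
import numpy as np

def sm_and_slope(freqs, spl, f_lo=100.0, f_hi=16000.0):
    """Return (r_squared, slope_per_decade) of a line fit in (log10 f, dB)."""
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    x = np.log10(freqs[mask])
    y = spl[mask]
    slope, intercept = np.polyfit(x, y, 1)
    y_hat = slope * x + intercept
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot, slope

# a perfectly straight tilt of -2 dB/decade fits with r^2 = 1
freqs = np.logspace(np.log10(100.0), np.log10(16000.0), 200)
spl = 90.0 - 2.0 * np.log10(freqs)
r2, slope = sm_and_slope(freqs, spl)
```

Swapping in different target slopes only shifts the fitted slope, not the r², which would explain why the score is insensitive to the choice.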

Here is the data for the raw IN8:

20200602 Kali IN8 Raw Score.png

And with the EQ I designed:

20200602 Kali IN8 EQed Score.png

The score of 6.26 would put it more or less on par with the KEF R3 and above the Revel F208.

Next step:
include the predicted rating in the GA optimizer.

Cheers
M
 
Last edited: