I don't think there were any powered speakers in there. Or Pro/nearfield monitors. There was only one dipole (Martin Logan).
My post was envisioning a future speaker (ASR) database with a lot more entries.
Great! Not only does this confirm the claim about Spinorama being "all you need", it also makes it especially interesting to digitize all the existing Harman Spinoramas using, for example, https://automeris.io/WebPlotDigitizer/. I will probably try it out a bit later. Right now I am itching to look into making Spinoramas from the https://www.princeton.edu/3D3A/Directivity.html data using VituixCAD + Octave.
Thank you for the excellent Excel work!
So ȳ − y_b is the amplitude deviation of each of these 10 equally log-spaced (in frequency) points from their average amplitude in a particular band. This is then averaged to arrive at a mean deviation of those points from the band average. This is done for each band in the 100 Hz to 12 kHz range; finally, the mean of all these average deviations is taken to arrive at the Narrow Band Deviation metric for the speaker as a whole.
NBD is an average of the mean deviation within each 1/2-octave band
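A minimal sketch of the calculation described above, assuming the input is already sampled at equally log-spaced frequencies (the function name and the exact band-edge handling are my own, not from the paper):

```python
import numpy as np

def nbd(freqs, spl):
    """Narrow Band Deviation as described above: for each 1/2-octave
    band between 100 Hz and 12 kHz, average the absolute deviations of
    the points in the band from the band's mean SPL, then average those
    band deviations. Assumes freqs/spl are NumPy arrays sampled at
    equally log-spaced frequencies (e.g. 20 points/octave)."""
    band_devs = []
    f_lo = 100.0
    while f_lo < 12000.0:
        f_hi = f_lo * 2 ** 0.5  # 1/2-octave band edge
        band = spl[(freqs >= f_lo) & (freqs < f_hi)]
        if band.size:
            band_devs.append(np.mean(np.abs(band - band.mean())))
        f_lo = f_hi
    return float(np.mean(band_devs))
```

A perfectly flat response gives NBD = 0; any ripple within a band increases it.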
To find the average you need to divide by the cardinality, and that makes sense with the score. And as you point out, the paper even mentions the mean absolute value within each ½-octave band. However, if we change it to N = total points, it won't make a difference.

So I'm currently in the process of implementing Olive score calculation in Loudspeaker Explorer and cross-checking with @MZKM's source spreadsheets to make sure I get it right. After re-reading the debates around how to interpret the paper, I'm still confused about the interpretation of NBD:
[Attachment 59224: Olive's NBD formula, NBD = (1/N) · Σₙ (|ȳ_b − y_b|)ₙ, with N defined as the number of ½-octave bands between 100 Hz and 12 kHz]
To me, a strict implementation of the formula shown above is the following:
- For each measurement point, compute the absolute deviation between the point SPL and the mean SPL of the ½-octave band it's in.
- Sum all these absolute deviations together.
- Divide by the number of ½-octave bands (N).
However, looking at @MZKM's source spreadsheets it doesn't look like that's what he implemented. Instead, what he implemented is:
- For each measurement point, compute the absolute deviation between the point SPL and the mean SPL of the ½-octave band it's in.
- Compute the mean of these absolute deviations within each ½-octave band.
- Sum all these means together.
- Divide by the number of ½-octave bands (N).
There's an additional mean in there that doesn't appear at all in Olive's formula.
@bobbooo seems to agree with @MZKM:
What makes this even more confusing is that, in the explanation of the formula, the paper does mention the concept of "mean absolute deviation within each ½-octave band" (see above), but a strict interpretation of the formula doesn't use that concept. Is this why you guys decided to deviate from the formula? Or are there additional hints that I missed?
(By the way, another way to interpret the formula is to start from my interpretation but then assume the definition of N is wrong and it's actually the total number of points, not the number of ½-octave bands. This interpretation would be practically identical to @MZKM's, because in practice each ½-octave band has an equal number of points.)
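The difference between these readings can be made concrete with a toy example (hypothetical SPL values, not real speaker data), using two "bands" with unequal point counts so the readings actually diverge:

```python
import numpy as np

# Hypothetical bands with unequal point counts.
bands = [np.array([85.0, 86.0]),              # band 1: 2 points
         np.array([80.0, 81.0, 82.0, 83.0])]  # band 2: 4 points

# Per-point absolute deviation from each band's mean SPL.
devs = [np.abs(b - b.mean()) for b in bands]
N = len(bands)

# Strict reading: sum every point's |deviation|, divide by N bands.
strict = sum(d.sum() for d in devs) / N

# MZKM's reading: mean |deviation| per band, averaged across bands.
per_band = sum(d.mean() for d in devs) / N

# Alternate reading: point-wise sum divided by the total point count.
total_points = sum(b.size for b in bands)
per_point = sum(d.sum() for d in devs) / total_points

print(strict, per_band, per_point)
```

With equal point counts per band (as in the real 20 points/octave data), `per_band` and `per_point` become identical, which is why the last two interpretations agree in practice.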
I still do not understand the sampling part. I average over all the points in the interval, not 10 equally spaced ones.
Data published in most of @amirm's datasets is already in the right format: 20 points/octave, equally spaced in log frequency. This is why your code is correct without having to do any resampling. You already "average over 10 equally spaced" points (per ½-octave band) because that's the shape of the input data.
However, if you're processing data from other datasets, they might require some resampling to get them into the proper format. One could argue that, since everything gets divided by the total number of points in the end anyway, the result will still be correct, but in practice I don't think that's entirely true: fewer points basically means smoothing, which likely means smaller deviations, which means a better score (and vice-versa with more points). We have this problem for example with @amirm's early JBL LSR 305P MkII data, which is 10 points/octave, not 20, meaning NBD could be underestimated. There's also the problem that if points are linearly spaced instead of log-spaced, the metric will be biased towards the top end of each ½-octave band. Ideally, everything should be computed at 20 points/octave equally spaced in log frequency to stick as close as possible to the Olive model.
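One possible way to do that resampling, sketched in Python with NumPy (linear interpolation of SPL against log frequency is an assumption on my part; the interpolation scheme is itself a choice that affects the result):

```python
import numpy as np

def resample_log(freqs, spl, f_min=100.0, f_max=12000.0, ppo=20):
    """Resample a response onto a grid equally spaced in log frequency,
    at ppo points per octave, via linear interpolation of SPL vs log(f).
    freqs/spl are NumPy arrays covering at least [f_min, f_max]."""
    n_oct = np.log2(f_max / f_min)               # octaves to cover
    grid = f_min * 2 ** (np.arange(int(n_oct * ppo) + 1) / ppo)
    return grid, np.interp(np.log(grid), np.log(freqs), spl)
```

A response that is exactly linear in log frequency is reproduced exactly, since the interpolation is performed in the log-frequency domain.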
Resampling has its own issues: depending on your interpolation, the result will also change. Why would fewer points smooth the results? It is data-dependent. If you pick only one point, and that point is where you have the maximum divergence, then don't you emphasize the result?
All the anechoic measurements have high frequency resolution (2 Hz) from 2 Hz to 20 kHz with a 1/20-octave smoothing filter applied to the raw data.
I only just found this thread, interesting! So what was the end result from the contacts, etc., that was early May?

Well, I just noticed that manually calculating the PIR gives different results compared to the data the NFS is spitting out. I let Amir know and he has contacted Klippel.
KEF R3 Example:
[Attachment 61747: KEF R3, NFS-reported PIR vs. manually calculated PIR]
The result was a ~0.1 boost in score (including ignoring LFX).
I really am dreading having to go back through the ~50 speakers measured thus far to make corrections, as well as edit my posts in the reviews.
Klippel misunderstood the issue and gave me an answer unrelated to the problem at hand. I pointed that out but then did not hear anything back. I have not pestered them since it is a small issue and I try to be careful with what I bring to their attention.
And I now manually calculate all the curves.
Looks pretty simple to me. So easy, in fact, there's no reason I should do it. You'd hardly break a sweat on something color-coded and all.

The Early Reflection curve is not computed correctly, and since the PIR uses the ER curve, it too is not computed correctly. Amir has a manual correction for the ER, but he states it's a pain so he doesn't use it. I'm not sure if it would also fix the PIR, or if it fixes things after everything has been computed.
To show how much a pain it is, here is the formula for the PIR:
[Attachment 64836: the PIR formula, a weighted average of the Listening Window, Early Reflections and Sound Power curves]
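For reference, the PIR is commonly described (e.g. in CTA-2034 discussions) as a 12% / 44% / 44% weighting of the Listening Window, Early Reflections and Sound Power curves, applied in the energy (squared-pressure) domain rather than directly in dB. A sketch under that assumption (not a reproduction of the full formula in the attachment, which also derives the ER and SP curves themselves from the individual angle measurements):

```python
import numpy as np

def pir(lw_db, er_db, sp_db):
    """Predicted In-Room response from the Listening Window, Early
    Reflections and Sound Power curves (all in dB at one frequency, or
    as NumPy arrays over frequency). Assumes the commonly cited
    12% / 44% / 44% energy-domain weighting."""
    return 10 * np.log10(0.12 * 10 ** (lw_db / 10)
                         + 0.44 * 10 ** (er_db / 10)
                         + 0.44 * 10 ** (sp_db / 10))
```

Since the weights sum to 1, three identical curves yield an identical PIR, and the PIR always lies between the lowest and highest of the three inputs.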
It's not difficult at all, but tedious as hell, especially as I don't have dual monitors so I have to keep scrolling and changing windows/tabs to make sure I'm getting everything right (simple mistakes like grabbing the frequencies and not the SPLs for a selected degree).
Trust me, in case it didn't come across: I'm very appreciative of what you have done for us on this. I know how tedious it would be.
I hate tedious things, which is why I still haven't updated all the previously measured speakers yet; I think I still have ~30 to go.