
Speaker Equivalent SINAD Discussion

OP
MZKM · Major Contributor · Forum Donor · Joined Dec 1, 2018 · Messages 4,240 · Likes 11,463 · Land O’ Lakes, FL
Hi,

Here is the formula with 13 speakers, as these are the only ones we have data for.
I am trying to verify whether or not the score is correct.

If you want to give it a try with your tool I have compiled all the data scanned by @napilopez from here:
https://www.audiosciencereview.com/...-way-speaker-review.13562/page-16#post-412137
https://www.audiosciencereview.com/...gs-for-loudspeakers.11091/page-21#post-412375
in an xlsx file.
These are in the correct format (frequency vector identical to the NFS) with my calculation for the PIR.
"Just" copy and paste should work; I never managed to get the example you sent me to work properly.
I think it's a compatibility issue with Excel.

Just change the extension from .zip to .xlsx

Cheers
M
I have calculated the scores for those as well, I am not at my computer so I will report back later.
 

edechamps · Addicted to Fun and Learning · Forum Donor · Joined Nov 21, 2018 · Messages 910 · Likes 3,620 · London, United Kingdom
@Maiky76 I can see a number of reasons why the numbers wouldn't match exactly even if your calculations are correct:

- @napilopez's digitization is likely not perfect. I'm not sure how @napilopez did the digitization; I suspect he did it by hand. You might want to redigitize them using something like WebPlotDigitizer (which might provide better accuracy) and double-check the results.

- Even if the digitization is accurate, we don't know where the actual 1/20-octave points lie. Perhaps one way to find out would be to "slide" the points and recompute until the resulting SM/slope/AAD best fit the various figures.

- For NBD, keep in mind the exact definition of the 1/2-octave bands is arbitrary because we don't know the precise boundaries of the bands Olive used (the text is ambiguous in that regard). Again, perhaps one way to resolve the ambiguity could be to slide the 1/2-octave bands until the NBD results are the best fit for the various figures.

That's interesting work, by the way; it didn't occur to me to go back to the Test 1 raw data and recover the data presented in the various tables and figures to validate our understanding of Olive's calculation. That's clever reverse engineering work right there!
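To make the band-boundary ambiguity concrete, here is a minimal NBD sketch (Python/NumPy for illustration; the thread's own code is Matlab). The band-edge progression, starting at 100 Hz and multiplying by √2, is an assumption, and it is exactly the parameter one would slide to best-fit Olive's figures.

```python
import numpy as np

def nbd(freq, spl, f_lo=100.0, f_hi=12000.0):
    """NBD sketch: mean over 1/2-octave bands of the average absolute
    deviation (dB) of the points in each band from that band's mean
    level. The band edges (start at f_lo, multiply by sqrt(2)) are an
    assumption -- exactly the ambiguity discussed above."""
    edges = [f_lo]
    while edges[-1] < f_hi:
        edges.append(edges[-1] * np.sqrt(2.0))
    devs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spl[(freq >= lo) & (freq < hi)]
        if band.size:
            devs.append(np.abs(band - band.mean()).mean())
    return float(np.mean(devs))

f = np.logspace(np.log10(20), np.log10(20000), 600)
assert nbd(f, np.full_like(f, 85.0)) == 0.0   # a flat response deviates by 0 dB
```

Sliding `f_lo` and recomputing, as suggested above, would then be a one-line grid search over this function.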
 
MZKM (OP)
I did not convert to 1/20, so it is 1/24 I believe, but here are the 13 scores I got (using the generalized model):

-0.75
0.37
0.81
1.73
1.85
2.67
2.96
3.14
3.59
3.99
5.17
5.64
5.93


For PIR, I did this:

20*log10(sqrt(sum(
0.12*2^(LW/10-8)*5^(LW/10-10),
0.44*2^(ER/10-8)*5^(ER/10-10),
0.44*2^(SP/10-8)*5^(SP/10-10)))
/0.00002)
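For reference, the 2^/5^ building block used in this thread is just the squared pressure in pascals. A quick check (Python for illustration) against the standard conversion p = p_ref·10^(SPL/20), with p_ref = 20 µPa:

```python
import math

P_REF = 20e-6   # 20 micropascals, the SPL reference pressure

def pa_squared(spl_db):
    """Squared pressure via the 2^/5^ form used in the thread."""
    return 2**(spl_db/10 - 8) * 5**(spl_db/10 - 10)

def pa_squared_std(spl_db):
    """Standard conversion p = p_ref * 10^(SPL/20), then squared."""
    return (P_REF * 10**(spl_db/20))**2

# The two forms agree to machine precision, since 2^8 * 5^10 = 2.5e9 = 1/P_REF^2:
for spl in (60.0, 85.0, 94.0):
    assert math.isclose(pa_squared(spl), pa_squared_std(spl), rel_tol=1e-12)
```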
 
MZKM (OP)
Here is the large test:
[attached: Screen Shot 2020-06-08 at 7.14.14 AM.png]


Here is me doing my best to match 11 of my scores to the results (discarding the lowest two):
[attached: Screen Shot 2020-06-08 at 7.14.14 AM.png]


The scores in the 5-6 range were a total guess as to which bubble was which; too clustered, not enough resolution.

EDIT: The 1.73 seems too large for that bubble; conforming it to ~20 PPO gives me 1.68, and it looks to be around 1.55. I think it can be chalked up to slight errors in data extraction, and of course which data points I deleted to get to 20 PPO is also a factor.
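As an alternative to deleting points from the 1/24-octave grid, the data can be resampled onto an exact 20-points-per-octave grid by log-frequency interpolation. A sketch (Python for illustration; `to_ppo` is my own hypothetical helper, not anything from the thread's tools):

```python
import numpy as np

def to_ppo(freq, db, ppo=20):
    """Resample a response onto an exact points-per-octave grid by
    interpolating in log-frequency (hypothetical helper; an alternative
    to deleting points from a denser grid)."""
    n_oct = np.log2(freq.max() / freq.min())
    grid = freq.min() * 2.0**(np.arange(int(round(n_oct*ppo)) + 1) / ppo)
    return grid, np.interp(np.log10(grid), np.log10(freq), db)

f = np.logspace(np.log10(20), np.log10(20480), 500)   # 10 octaves at ~50 PPO
grid, vals = to_ppo(f, np.full_like(f, 85.0))
assert len(grid) == 201           # 10 octaves * 20 PPO + 1 endpoint
assert np.allclose(vals, 85.0)    # flat data stays flat
```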
 

amirm · Founder/Admin · Staff Member · CFO (Chief Fun Officer) · Joined Feb 13, 2016 · Messages 44,368 · Likes 234,390 · Seattle Area
It only has 0..+180° horizontal (other speakers have -180°..+180°).

More importantly, it only has -50..+50° vertical (other speakers have -180°..+180°).
Ah, I see now. How is this one?
 

Attachments

  • NHT M00 Spinorama.zip
    89.3 KB

Maiky76 · Senior Member · Joined May 28, 2020 · Messages 440 · Likes 3,705 · French, living in China
(quoting MZKM's bubble-matching post above)


Hi,

First, I made a mistake in my PIR calculation.
Here is the corrected formula:
Code:
PIR_calc1      = 10*log10( 0.12 * 10 .^ (Lwin/10) + ...
                           0.44 * 10 .^ (Erfx/10) + ...
                           0.44 * 10 .^ (Spow/10) ) ;
This gives IDENTICAL results to the NFS PIR.
I used the Ocean Way HR5 data from the NFS, which I assumed is correct for the PIR calculation (?).
In any case, even if the ER is not correctly calculated, the formula for the PIR seems fine.
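The correction matters because an energy (power) average and a plain dB average of the same three curves genuinely differ. A small numeric check (Python rather than Matlab, purely for illustration), using the 0.12/0.44/0.44 weights from the formula above:

```python
import numpy as np

def pir_db(lw, er, sp):
    """Weighted energy sum of LW, ER and SP (all in dB), i.e. the
    corrected PIR_calc1 pattern transcribed to Python."""
    return 10*np.log10(0.12*10**(lw/10) + 0.44*10**(er/10) + 0.44*10**(sp/10))

lw, er, sp = np.array([86.0]), np.array([84.0]), np.array([82.0])
energy = pir_db(lw, er, sp)[0]
plain_db = 0.12*lw[0] + 0.44*er[0] + 0.44*sp[0]  # the mistake: averaging dB directly
assert 82.0 < energy < 86.0       # energy average stays between the curves
assert abs(energy - plain_db) > 0.1   # but it is not the dB-domain average
```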

I just cannot understand the formula used by @MZKM, could you explain it?

I am using figure 10 of this paper, which should (?) show the Olive metric for the 13 speakers of test one.
A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part II - Development of the Model
Here is the figure:
[attached: FIGURE 10(A).png]

Here are the scores I scanned from it:
Data10a = [0.8486, 1.198, 1.223, 1.772, 1.6722, 2.5957, 3.6439, 3.7687, 4.0682, 4.792, 5.6406, 5.6406, 6.1398];

Here are the results with the corrected PIR for my calculation;
the difference between my calculation and @MZKM's could be explained by the PIR.
[attached: 20200609 Fig10a.png]


I am not expecting the data from the scanned Spinoramas to produce identical results, but we should still be in the same ballpark, and we are obviously not.
Some points seem similar, but we are not even sure these scores correspond to the same speakers, as I only ranked the unlabelled scores from low to high. What is more worrisome is the negative score.

IF all my assumptions are correct, the score that we are calculating is not the Olive metric and is therefore of little merit.

@edechamps, would you try speaker #12 in the attached data (change .zip to .xlsx)?

Edit:
I believe that the graph you used corresponds to this one in the original paper:
[attached: Figure 5.png]


If we can get a similar score to that of this graph:
[attached: Figure 4.png]

EQ9:
Pref.Rating = 6.04 − 0.67*AAD_ON − 1.28*LFX − 0.66*LFQ + 4.02*SM_ON + 3.58*SM_SP;
Scanned data:
data04 = [0.9141, 1.1514, 1.2888, 1.5144, 1.7349, 2.4452, 3.7617, 3.7781, 4.2299, 4.6261, 5.7719, 5.7995, 5.8983];
-> This gives access to SM_x and LFX

then
get the same results for figure 13a.
[attached: Figure 13.png]


The equation is:
Pref.Rating = 2.63 − 2.86*NBD_SP + 5.15*SM_SP + 0.417*SL_SP; (EQ12)
Scanned data:
data13 = [0.8232, 1.4264, 1.6589, 2.0359, 2.5889, 2.7208, 2.9596, 3.368, 3.8142, 4.9892, 5.0081, 5.737, 5.7873];

-> this gives access to SM_x and NBD, with SL_x easy to calculate (? invert the two lines from the table):

Then we should be fine to calculate a more accurate score.
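The two quoted regression equations can be transcribed directly for this kind of cross-checking (Python here rather than Matlab, purely for illustration; the function names are mine, the coefficients are verbatim from EQ9 and EQ12 above):

```python
def pref_rating_eq9(aad_on, lfx, lfq, sm_on, sm_sp):
    """EQ9 (Test One model), coefficients as quoted above."""
    return 6.04 - 0.67*aad_on - 1.28*lfx - 0.66*lfq + 4.02*sm_on + 3.58*sm_sp

def pref_rating_eq12(nbd_sp, sm_sp, sl_sp):
    """EQ12, coefficients as quoted above."""
    return 2.63 - 2.86*nbd_sp + 5.15*sm_sp + 0.417*sl_sp

# With every metric at zero, each model returns its intercept:
assert pref_rating_eq9(0, 0, 0, 0, 0) == 6.04
assert pref_rating_eq12(0, 0, 0) == 2.63
```

Candidate metric values recovered from the graphs can then be fed through these and compared against the scanned score vectors.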

To me, if SM is r², both the manual and the direct Matlab calculation give the same results down to 13 decimals, so it should be fine; "only" LFX, LFQ, NBD, AAD and SL need to be verified.

Cheers
M
 

Attachments

  • 20200609 Olive test 1 - 13 Speakers Spinorama copy.zip
    268.7 KB
MZKM (OP)
Here are the results with the corrected PIR for my calculation :
the difference between my calculation and @MZKM could be explained by the PIR.


I am not expecting the data from the scanned Spinoramas to produce identical results but we should still be in the same ball park and we are obviously not.
Those are the scores for the original formula, are they not? Thus my scores using the generalized model should of course not be very close.

And no, the ER/PIR curves from the Ocean Way are not correct (I haven’t checked, but I doubt Amir started doing the manual fix), so I still manually calculate all the curves.

Converting between decibels and pascals requires 20•log10, so I don’t know why you are doing 10•log10.

Are you suggesting we try and match the original formula results, thus allowing us to move forward to get the correct generalized results?

I don’t believe the original formula used the 1/2 octave bands, so that wouldn’t work.

I shared my worksheet with Matthew Poes, and he said he is supposed to have a conference with Olive to discuss AES matters, and that he would bring up my concerns about the 1/2 octave bands and how exactly they are designed.
 

edechamps
I don’t believe the original formula used the 1/2 octave bands, so that wouldn’t work.

Right, I missed that… the Test One formula doesn't use NBD… :( The only metrics common to the Test One and generalized formulas are LFX and SM.
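For what it's worth, those two common metrics can be sketched from the paper's definitions as I read them. Everything here is an assumption rather than settled fact: the 300 Hz to 10 kHz reference band and 6 dB threshold for LFX, and the 100 Hz to 16 kHz regression range for SM (Python for illustration, not the thread's Matlab):

```python
import numpy as np

def lfx(freq, lw_db, sp_db):
    """LFX sketch: log10 of the first frequency below 300 Hz where Sound
    Power falls 6 dB below the mean Listening Window level over
    300 Hz - 10 kHz (my reading; the band limits are assumptions)."""
    ref = lw_db[(freq >= 300) & (freq <= 10000)].mean()
    below = freq[(freq < 300) & (sp_db <= ref - 6.0)]
    return float(np.log10(below.max() if below.size else freq.min()))

def sm(freq, db):
    """SM sketch: r^2 of a linear regression of level versus
    log-frequency over 100 Hz - 16 kHz (again, my reading)."""
    mask = (freq >= 100) & (freq <= 16000)
    r = np.corrcoef(np.log10(freq[mask]), db[mask])[0, 1]
    return float(r*r)

f = np.logspace(np.log10(20), np.log10(20000), 600)
flat = np.full_like(f, 85.0)
assert sm(f, 85.0 - 3.0*np.log10(f)) > 0.999       # a straight tilt is "smooth"
assert abs(lfx(f, flat, flat) - np.log10(20)) < 1e-6  # flat SP never drops 6 dB
```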
 

Maiky76
Hi,

Those are the scores for the original formula, are they not? Thus my scores using the generalized model should of course not be very close.

I am not quite sure that graph 10a uses the original Predicted Preference score, hence my question.

Are you suggesting we try and match the original formula results, thus allowing us to move forward to get the correct generalized results?

I don’t believe the original formula used the 1/2 octave bands, so that wouldn’t work.

This is exactly what I am suggesting; whether or not the original score uses 1/2-octave bands is not important, and I am also suggesting using several other formulas.
If we replicate the correct scores from the different formulas with the (only) calibrated results we have from the corresponding graphs, then step by step
LFX, SM and NBD will be validated, but that requires calculating LFQ, SL and AAD (used in the first experiment, and which also uses 1/2-octave bands, BTW).
Is it easier? Maybe, maybe not, but at least that would be trying something...

I shared my worksheet with Matthew Poes, and he said he is supposed to have a conference with Olive to discuss AES matters, and that he would bring up my concerns about the 1/2 octave bands and how exactly they are designed.

That would be great, although I would not hold my breath in the short term.
There are several Northridge dwellers around, and so far they have not shed light on the matter (I apologize if I am wrong).

Converting between decibels and pascals requires 20•log10, so I don’t know why you are doing 10•log10.

I use that because I believe it to be correct:
Code:
% SPL = 20*log10((p/Pref)) = 10*log10( (p/Pref)^2)
% therefore 10. ^ (SPL/10) = (p/Pref)^2;
% hence p^2 = 10. ^ (SPL/10) * Pref^2;
% We need to do the energy average not the SPL average (that was my original silly mistake):

PIR_calc_NFS  = 10*log10(( 0.12 * 10 .^ (LW/10) * Pref^2 + ...
                           0.44 * 10 .^ (ER/10) * Pref^2 + ...
                           0.44 * 10 .^ (SP/10) * Pref^2 ) / Pref^2 ) ;

% up and down Pref^2 simplifies to the correct formula:

PIR(:,k)          = 10*log10( 0.12 * 10 .^ (LW(:,k)/10) + ...
                                  0.44 * 10 .^ (ER(:,k)/10) + ...
                                  0.44 * 10 .^ (SP(:,k)/10) ) ;

This is the correct way to also calculate the ER/SP and LW, which I also did.
All but the ER are identical to the NFS data, which seems consistent with what you also discovered.
I can't understand your formula.

Cheers
M
 

edechamps
AAD (used in the first experiment and which is also using 1/2 octave bands BTW).

I don't think so. The paper describes AAD as a mean over 1/20-octave bands, which are basically the original measurement points, since the input data is supposed to come at a resolution of 20 points per octave. The definition of AAD makes no mention of 1/2-octave bands.
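Under that reading, AAD reduces to a mean absolute deviation of the raw 1/20-octave points from a reference level. A sketch (Python for illustration); the 200 to 400 Hz reference band and the 100 Hz to 16 kHz span are my assumptions from the paper, not stated in this thread:

```python
import numpy as np

def aad(freq, db):
    """AAD sketch: mean absolute deviation (dB) of the points between
    100 Hz and 16 kHz from a reference level, taken here as the mean
    over 200-400 Hz (reference band is my assumption; only the
    1/20-octave resolution is explicit above)."""
    ref = db[(freq >= 200) & (freq <= 400)].mean()
    band = db[(freq >= 100) & (freq <= 16000)]
    return float(np.abs(band - ref).mean())

f = np.logspace(np.log10(20), np.log10(20000), 600)
assert aad(f, np.full_like(f, 80.0)) == 0.0   # flat response: zero deviation
```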
 
MZKM (OP)
(quoting Maiky76's PIR code above)
SPL=
Code:
20*log10(
    Pascal/0.00002
    )

Pascal =
Code:
e^(
  SPL*(ln(2)+ln(5))/20
  )
/50000
Or,
Code:
2^(
   SPL/20-4
  )
*
5^(
   SPL/20-5
  )
Verification

Power average =
Code:
sqrt(
     average(
             summation of (curves)^2
            )
    )
Technically the abs, but that's not an issue here.

However, we must work in Pascal, so it's:
Code:
20
*
log10(
       sqrt(
            average(
                       summation of (curves)^2
                       )
              )
      /0.00002
      )
Or,
Code:
10
*
log10(
       average(
               summation of (curves)^2
               )
       /0.0000000004
       )

Now, I typically calculate PIR not from the already-calculated curves but from all the individual measurements. The latter cannot be done here, as only the Spinorama curves exist.

The SPL of these curves cannot be used, as we must use Pascal, so first we have to convert to Pascal.

However, the PIR curve is not a weighted sum of the power averages; you do the power average after doing the weighted sum.

I also changed
Code:
2^(
   SPL/20 - 4
  )
*
5^(
    SPL/20 - 5
   )
to
Code:
2^(
   SPL/10 - 8
  )
*
5^(
   SPL/10 - 10
  )
This is because the power average works on squared pressures; the sqrt comes afterwards.
Verification

Therefore, PIR when only given the Spinorama curves in SPL is thus:
Code:
20*log10(
  sqrt(
    sum(
      0.12 * 2^(LW/10-8) * 5^(LW/10-10),
      0.44 * 2^(ER/10-8) * 5^(ER/10-10),
      0.44 * 2^(SP/10-8) * 5^(SP/10-10)
      )
    )
  /0.00002)
I checked with the PIR I calculate manually using the measurement points and it's identical.
 
MZKM (OP)
(quoting my PIR derivation above)

EDIT: I just checked, ours are the same. Yours is certainly easier to write, and indeed easier to follow the development of; the fact that log(x²) = 2·log(x) didn't cross my mind, and the fact that you are both multiplying and dividing by 0.00002² also works out nicely.
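The agreement of the two routes can be checked numerically as well: the 20·log10-of-square-rooted-pascals form and the 10·log10 energy-sum form are algebraically identical, since the p_ref² factors cancel. A sketch (Python for illustration):

```python
import math

P_REF = 20e-6   # 20 micropascals

def pir_mzkm(lw, er, sp):
    """SPL -> squared pascals (2^/5^ form), weighted sum, square root
    back to pascals, then 20*log10 (the route described above)."""
    pa2 = lambda s: 2**(s/10 - 8) * 5**(s/10 - 10)
    total = 0.12*pa2(lw) + 0.44*pa2(er) + 0.44*pa2(sp)
    return 20*math.log10(math.sqrt(total) / P_REF)

def pir_maiky(lw, er, sp):
    """Weighted energy sum directly on 10^(dB/10) (the other route)."""
    return 10*math.log10(0.12*10**(lw/10) + 0.44*10**(er/10) + 0.44*10**(sp/10))

assert math.isclose(pir_mzkm(86.0, 84.0, 82.0), pir_maiky(86.0, 84.0, 82.0),
                    rel_tol=1e-12)
```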
 

Maiky76
(quoting MZKM's PIR derivation above)

Hi,

Thanks a lot for your explanation. A bit convoluted, but OK:
20*log10(x^n) = n*20*log10(x), etc.

Here is my Matlab code.
Code:
load f_NFS.mat
Pref = 0.00002;

Data            = importdata('SPL Horizontal.txt');
freq            = Data.data(:,1);
ON              = interp1(freq,Data.data(:,2),f);
ONp2            = 10 .^ (ON/10)*Pref^2;
H10deg          = interp1(freq,Data.data(:,4),f);
H10degp2        = 10 .^ (H10deg/10)*Pref^2;
Hneg10deg       = interp1(freq,Data.data(:,6),f);
Hneg10degp2     = 10 .^ (Hneg10deg/10)*Pref^2;
% and so on...

% -------------------------------------------------------------------------
% Listening Window
% LW  0,  ±10h, ±20h, ±30h, ±10v do energy average of the 9 curves
% -------------------------------------------------------------------------
LWin_calc     = 10*log10(((1 * ONp2 + ...
                           1 * H10degp2 + ...
                           1 * Hneg10degp2 + ...
                           1 * H20degp2 + ...
                           1 * Hneg20degp2 + ...
                           1 * H30degp2 + ...
                           1 * Hneg30degp2 + ...
                           1 * V10degp2 + ...
                           1 * Vneg10degp2) / 9 ) / Pref^2);
                   
% -------------------------------------------------------------------------
% ER
% Floor:     -20v, -30v, -40v do energy average of the 3 curves                                        
% Ceiling:   +40v, +50v, +60v do energy average of the 3 curves
% Front Wall:  0,  ±10h, ±20h, ±30h do energy average of the 7 curves
% Side Wall: ±40h,  ±50h,  ±60h,  ±70h,  ±80h do energy average  of the 10 curves
% Rear Wall: ±90h   ±100h, ±110h, ±120h, ±130h, ±140h, ±150h, ±160h, ±170h, 180 do energy average of the 19 curves
% -------------------------------------------------------------------------
Floor_bounce      = 10*log10( (Vneg20degp2 + Vneg30degp2  + Vneg40degp2 ) / 3 / Pref^2);
Ceiling_bounce    = 10*log10( (V40degp2    + V50degp2     + V60degp2    ) / 3 / Pref^2);
Front_wall_bounce = 10*log10( (ONp2        + H10degp2     + Hneg10degp2 + Hneg20degp2  + H20degp2  + Hneg30degp2  + H30degp2 ) / 7 / Pref^2);
Side_wall_bounce  = 10*log10( (H40degp2    + Hneg40degp2  + H50degp2    + Hneg50degp2  + H60degp2  + Hneg60degp2  + H70degp2 + Hneg70degp2 + ...
                               H80degp2    + Hneg80degp2 ) / 10 / Pref^2);
Rear_wall_bounce  = 10*log10( (H90degp2    + Hneg90degp2  + H100degp2   + Hneg100degp2 + H110degp2 + Hneg110degp2 + ...
                               H120degp2   + Hneg120degp2 + H130degp2   + Hneg130degp2 + H140degp2 + Hneg140degp2 + ...
                               H150degp2   + Hneg150degp2 + H160degp2   + Hneg160degp2 + H170degp2 + Hneg170degp2 + ...
                               H180degp2 ) / 19 / Pref^2);
Rear_wall_bounce1 = 10*log10( (H90degp2    + Hneg90degp2  + H180degp2 ) / 3 / Pref^2);
                                                   
ER_calc   =  10*log10( (10 .^ (Floor_bounce/10)*Pref^2     + 10 .^ (Ceiling_bounce/10)*Pref^2 + 10 .^ (Front_wall_bounce/10)*Pref^2 + ...
                        10 .^ (Side_wall_bounce/10)*Pref^2 + 10 .^ (Rear_wall_bounce/10)*Pref^2 ) / 5 / Pref^2);
ER_calc1  =  10*log10( (10 .^ (Floor_bounce/10)*Pref^2     + 10 .^ (Ceiling_bounce/10)*Pref^2 + 10 .^ (Front_wall_bounce/10)*Pref^2 + ...
                        10 .^ (Side_wall_bounce/10)*Pref^2 + 10 .^ (Rear_wall_bounce1/10)*Pref^2 ) / 5 / Pref^2);
                 
% -------------------------------------------------------------------------
% SP
% Energy average of the sum(Weight*(P/Pref)^2/)sum(w)
% weight are applied to V and H
% -------------------------------------------------------------------------
W = [
0.0006044860 % 00 & 180deg           - 2
0.0047301890 % ±10 & ±170deg V and H - 8
0.0089550270 % ±20 & ±160deg V and H - 8
0.0123873540 % ±30 & ±150deg V and H - 8
0.0149896110 % ±40 & ±140deg V and H - 8
0.0168681540 % ±50 & ±130deg V and H - 8
0.0181659620 % ±60 & ±120deg V and H - 8
0.0190067440 % ±70 & ±110deg V and H - 8
0.0194777870 % ±80 & ±100deg V and H - 8
0.0196293730 % ±90deg        V and H - 4
];

SP_calc = 10*log10( (W(1)*ONp2 + ... % Axis (only once!)
          W(2)*H10degp2  + W(2)*Hneg10degp2  + W(3)*H20degp2  + W(3)*Hneg20degp2  + W(4)*H30degp2  + W(4)*Hneg30degp2  + ...
          W(5)*H40degp2  + W(5)*Hneg40degp2  + W(6)*H50degp2  + W(6)*Hneg50degp2  + W(7)*H60degp2  + W(7)*Hneg60degp2  + ...
          W(8)*H70degp2  + W(8)*Hneg70degp2  + W(9)*H80degp2  + W(9)*Hneg80degp2  + W(10)*H90degp2 + W(10)*Hneg90degp2 + ...
          W(9)*H100degp2 + W(9)*Hneg100degp2 + W(8)*H110degp2 + W(8)*Hneg110degp2 + W(7)*H120degp2 + W(7)*Hneg120degp2 + ...
          W(6)*H130degp2 + W(6)*Hneg130degp2 + W(5)*H140degp2 + W(5)*Hneg140degp2 + W(4)*H150degp2 + W(4)*Hneg150degp2 + ...
          W(3)*H160degp2 + W(3)*Hneg160degp2 + W(2)*H170degp2 + W(2)*Hneg170degp2 + ... % Horizontal
          W(1)*H180degp2 + ...  % Back (only once!)
          W(2)*V10degp2  + W(2)*Vneg10degp2  + W(3)*V20degp2  + W(3)*Vneg20degp2  + W(4)*V30degp2  + W(4)*Vneg30degp2  + ...
          W(5)*V40degp2  + W(5)*Vneg40degp2  + W(6)*V50degp2  + W(6)*Vneg50degp2  + W(7)*V60degp2  + W(7)*Vneg60degp2  + ...
          W(8)*V70degp2  + W(8)*Vneg70degp2  + W(9)*V80degp2  + W(9)*Vneg80degp2  + W(10)*V90degp2 + W(10)*Vneg90degp2 + ...
          W(9)*V100degp2 + W(9)*Vneg100degp2 + W(8)*V110degp2 + W(8)*Vneg110degp2 + W(7)*V120degp2 + W(7)*Vneg120degp2 + ...
          W(6)*V130degp2 + W(6)*Vneg130degp2 + W(5)*V140degp2 + W(5)*Vneg140degp2 + W(4)*V150degp2 + W(4)*Vneg150degp2 + ...
          W(3)*V160degp2 + W(3)*Vneg160degp2 + W(2)*V170degp2 + W(2)*Vneg170degp2) / ... % Vertical
          ( 2*W(1) + 8*W(2) + 8*W(3) + 8*W(4) + 8*W(5) + 8*W(6) + 8*W(7) + 8*W(8) + 8*W(9) + 4*W(10) ) / Pref^2); % Sum of the weights
   
% -------------------------------------------------------------------------
% PIR: 0.12LW + 0.44ER + 0.44SP do energy average of the 3 curves
% -------------------------------------------------------------------------
% PIR_MZKM = 20*log10(sqrt(sum(
% 0.12*10^(LW/10-8)*5^(LW4/10-10),
% 0.44*10^(ER/10-8)*5^(ER/10-10),
% 0.44*10^(SP/10-8)*5^(SP/10-10)))
% /0.00002)


% PIR from the NFS data
PIR_calc_NFS  = 10*log10(( 0.12 * 10 .^ (Lwin/10) * Pref^2 + ... % LW
                           0.44 * 10 .^ (Erfx/10) * Pref^2 + ... % ER
                           0.44 * 10 .^ (Spow/10) * Pref^2 ) / Pref^2 ) ; % SP

% PIR from the direct calculation data
PIR_calc_calc = 10*log10(( 0.12 * 10 .^ (LWin_calc/10) * Pref^2 + ...
                           0.44 * 10 .^ (ER_calc/10)   * Pref^2 + ...
                           0.44 * 10 .^ (SP_calc/10)   * Pref^2 ) / Pref^2 ) ;

Once completed and verified, I'll publish a GNU Octave version for everyone to play with (in a nutshell, an open-source Matlab-compatible software which now has a GUI and is much more user-friendly...). The difficult (i.e. time-consuming) part will be writing the routine that loads the data from the NFS.

"My calculation" (classic energy average) vs NFS and @MZKM's

EDIT: I am glad we are on the same page
EDIT: I kept the *Pref^2 and /Pref^2 in the average equations for notation consistency's sake:
H10degp2 = 10 .^ (H10deg/10)*Pref^2; it is easy to understand as the squared pressure of the horizontal 10° angle response.
I did not have a good name for the 10 .^ (H10deg/10) quantity alone... and it makes the calculation clearer.
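The LW, ER and SP blocks in the Matlab code above all repeat the same pattern: convert to squared pressure, (weighted) average across curves, convert back to dB. That pattern can be factored into one generic helper; a sketch in Python/NumPy (the helper name is mine, the p_ref convention is the one used above):

```python
import numpy as np

P_REF = 20e-6

def energy_average_db(curves_db, weights=None):
    """Generic energy average of SPL curves: dB -> squared pascals,
    (weighted) mean across curves, back to dB. This is the pattern
    the LW/ER/SP blocks above repeat (sketch; name is mine)."""
    p2 = 10**(np.asarray(curves_db, dtype=float)/10) * P_REF**2
    return 10*np.log10(np.average(p2, axis=0, weights=weights) / P_REF**2)

# Nine identical curves (the Listening Window case) average to themselves:
nine = [np.array([80.0, 83.0, 85.0])]*9
assert np.allclose(energy_average_db(nine), [80.0, 83.0, 85.0])
```

The SP calculation is the same call with `weights` set to the W table, each entry repeated for the angles it covers.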
 

Attachments

  • SP_verification.png (85.2 KB)
  • ER_corrected.png (93.5 KB)
  • LW_verification.png (88.7 KB)
  • PIR_correct.png (129.2 KB)
  • f(x).png (25.2 KB)

Maiky76
I don't think so. The paper describes AAD as a mean of 1/20 octave bands - which are basically the original measurement points, since the input data is supposed to come with a resolution of 20 points per octave. The definition of AAD makes no mention of 1/2 octave bands.

My bad, indeed AAD does not use the 1/2-octave bands; I should know, I published the code I implemented a few posts ago.
It's actually one more good reason to try to reproduce the other ratings, as they will not depend on what seems to be unclear!

Talking more generally about the rating, I am still of the opinion that the Test One score is of interest, as:
- the correlation is higher than for the generalized score when it comes to bookshelf/compact speakers, which represent a large portion of the current tests
- it is no more of a stretch than assuming a score with a "perfect subwoofer", which also assumes a "perfect setup" (that part is not trivial for me)
- two scores, one in the grand scheme of things (generalized score) and one in the particular category of small speakers, make sense
- Stereophile, another source of objective measurements, also provides two categories, "full range" and "bass limited", which makes sense to me

Cheers
M
 

Maiky76
(quoting MZKM's bubble-matching post above)

Hi,

I tried to do the same thing here, but a bit differently.
First I used several graphs to get the measured preference for the 13 speakers of test One.

[attached: 13 Loudspeaker Measured Preferrence.png]


Then I scanned the data from figure 5 a few times to get the predicted scores from EQ10, i.e. THE score.
There are 75 data points, not 70, and it seems that at least one data point (the 1.15 measured from the 13 speakers) does not show up.

Finally, after trying to match the measured scores of the 13 speakers of test One, I got a list of measured vs predicted scores.
I could not determine the predicted score for all 13 speakers, as several measured scores could fit the same measurement values, and each is associated with a different predicted score:
Predicted_scn_fig5_rank_meas_pred = [
1.00 0.76 1.529
% 2.00 1.13
3.00 1.22 1.710
% 4.00 1.68 0.770 4.250 1.340
% 5.00 1.69 0.770 4.250 1.340
6.00 2.62 3.170
7.00 3.64 3.920
8.00 3.79 3.910
% 9.00 4.00 3.590 2.970
% 10.0 4.80 4.820 4.240
11.0 5.66 5.230
% 12.0 5.70 6.400 5.490
13.0 6.16 6.240];

[attached: predicted no assumption.png]

Even after trying to bend the data for a best match, there are some significant dissimilarities:
[attached: predicted best.png]


Cheers
M
 