
Master Preference Ratings for Loudspeakers

edechamps

Quoting an earlier post:
True, but having a theoretical max of 10 is nice to have :)

Honestly, I'm sceptical of the idea of a 10 maximum score. The model likely doesn't "know" what the maximum score would be, because it's calibrated based on actual ratings, and listeners seldom use extreme ratings. Besides, you can't actually get a score of 10 with a perfect speaker with an LFX of 14.5 Hz, because it's mathematically impossible for a speaker to have NBD_PIR=0 and SM_PIR=1 at the same time, something that was not well understood back when @bobbooo came up with the idea of using that 14.5 Hz figure.
 
MZKM

edechamps said:
Honestly, I'm sceptical of the idea of a 10 maximum score. […] it's mathematically impossible for a speaker to have NBD_PIR=0 and SM_PIR=1 at the same time […]
Ah, forgot that NBD_PIR can’t be 0 while SM is 1.

Oh well, I’m too lazy to fix it.
 

bobbooo

edechamps said:
Honestly, I'm sceptical of the idea of a 10 maximum score. […] you can't actually get a score of 10 with a perfect speaker with an LFX of 14.5 Hz […]

My main motivation for wanting to use a value below around 15 Hz was that it allows for a possible flat extension all the way down to 20 Hz, taking any roll-off into account, meaning the 'w/ sub' score is actually a 'w/ ideal sub' score (ideal for audible sound at least; visceral sound can of course be felt at lower frequencies). This allows the w/ sub score to act as an indication of the maximum auditory potential of any speaker when used with an 'ideal' subwoofer. A -6 dB point of 20 Hz, however, would obviously not allow for an ideal flat response across the entire audible range like this, and seems like a bit of an arbitrary choice.

14.5 Hz giving a possible perfect 10 score (as we thought at the time) was a bonus that gave the scores an intuitive, comprehensible scale. If the LFX frequency needed for a score of 10 could be recalculated knowing what we now know about NBD_PIR and SM_PIR, maybe that would be a better value to use.
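
For context: in Olive's model (going from memory of the paper, so treat the exact band as an assumption), LFX is the log of the frequency where the sound power curve first falls 6 dB below the mean listening window level between 300 Hz and 10 kHz:

$$\mathrm{LFX} = \log_{10}(x_{SP}),\qquad SP(x_{SP}) = \overline{LW}_{300\,\mathrm{Hz}-10\,\mathrm{kHz}} - 6\,\mathrm{dB}$$

so the 14.5 Hz figure enters the score as $\log_{10}(14.5) \approx 1.16$.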
 
MZKM

bobbooo said:
My main motivation for wanting to use a value below around 15 Hz was that it allows for a possible flat extension all the way down to 20 Hz […] If the LFX frequency needed for a score of 10 could be recalculated knowing what we now know about NBD_PIR and SM_PIR, maybe that would be a better value to use.
I mean, I guess a simulated perfect speaker (using whatever slope gets a perfect 1 for Smoothness) could be made and we could see what the max value for NBD_PIR would be.
 

bobbooo

MZKM said:
I mean, I guess a simulated perfect speaker (using whatever slope gets a perfect 1 for Smoothness) could be made and we could see what the max value for NBD_PIR would be.

Yeah sounds like that would work. Are you volunteering? ;)
 

pozz

A member asked:
Apologies up front if this has been covered or if I'm coming across as not appreciative of the great work here; that's certainly not my intention. In fact, I love the efforts to collect and present this data, and it's absolutely effective! And if this is the wrong thread, I will not be offended by removal :facepalm:

A couple of questions:
1. Is there any inclination to include the # of drivers, size, material of drivers, etc. (more attributes) in this data table to draw more inferences?
2. The visualization of the data is always appreciated, but I also love to sort and filter data quickly in tabular form. The way this is presented (https://www.audiosciencereview.com/forum/index.php?pages/Audio_DAC_Performance_Index/) makes it somewhat difficult to navigate the data quickly, as the filters take time to set up and the bar chart doesn't necessarily convey as much information to the user as the space it's using would indicate. A classic table with sorts and filters, by contrast, can be manipulated quickly and is fairly efficient with screen real estate. Has presenting it in tabular form with basic sort/filter controls been considered? I do see the Google Sheets list; however, there doesn't appear to be sort/filter that I can see.

One offer:
If the ideas have come up but effort isn't available, I would be glad to throw my time in to help out standing up a flexible schema/db, table, or otherwise.
-- I've used the following page for ideas around tables in the past: https://medium.com/nextux/design-better-data-tables-4ecc99d23356

Thanks for all the great work!
Regarding 2, have you seen this? https://www.audiosciencereview.com/forum/index.php?pages/SpeakerTestData/
 

edechamps

bobbooo said:
My main motivation for wanting to use a value below around 15 Hz was that it allows for a possible flat extension all the way down to 20 Hz […] A -6 dB point of 20 Hz, however, would obviously not allow for an ideal flat response across the entire audible range like this, and seems like a bit of an arbitrary choice.

I understand your point. I'll just point out that 15 Hz is also arbitrary if you don't know the roll-off slope. For example, if your "ideal" (20 Hz extension) speaker rolls off at 6 dB/octave, then its -6 dB point is 10 Hz, not 15. Meanwhile, a speaker that uses some advanced DSP to brickwall at 20 Hz will have its -6 dB point at 20 Hz.
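
To make the slope dependence concrete: for a response that is flat down to a knee at 20 Hz and then rolls off at $S$ dB/octave, the -6 dB point sits $6/S$ octaves below the knee:

$$f_{-6\,\mathrm{dB}} = 20\,\mathrm{Hz} \times 2^{-6/S}$$

That gives 10 Hz for $S = 6$, as above, and tends to 20 Hz as the roll-off approaches a brickwall.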

Since most FR data stops at 20 Hz, and 20 Hz is undoubtedly closer to the LFX of speakers used to calibrate the model (whereas the model almost surely never "saw" an LFX of 15 Hz), I still think 20 Hz makes the most sense.

bobbooo said:
14.5 Hz giving a possible perfect 10 score (as we thought at the time) was a bonus that gave the scores an intuitive, comprehensible scale.

I don't buy into this fixation on the number 10. Yes, it's probably the end of the rating scale that was given to the listeners. But it's quite conceivable that the listeners themselves don't have a clear idea as to what would constitute a "10". It might even be that a "10" speaker is merely an abstract concept in the listener's mind, and that no real stimulus could prompt such a rating. I suspect the number 10 was only used to anchor the scale, and is meaningless otherwise.

MZKM said:
I mean, I guess a simulated perfect speaker (using whatever slope gets a perfect 1 for Smoothness) could be made and we could see what the max value for NBD_PIR would be.

Mmm, thinking about it some more, I may have spoken too quickly when I said you can't get both NBD=0 and SM=1. That's technically true, but you can get infinitely close to it (i.e. it's an asymptote). To do that, you can use a curve that is a perfectly straight line of infinitesimal slope. (You can't use a zero slope because then r² is undefined.) The resulting NBD will be infinitesimal, and r², i.e. SM, will be 1 (because the slope completely explains the infinitesimal deviation). The problem, of course, is that this is an incredibly contrived, unrealistic example that is very obviously exploiting defects in the model and is not representative of an "ideal" speaker at all. (Ah, that bonkers SM definition is really the gift that keeps on giving, isn't it.)
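
To see the asymptote numerically, here's a quick sketch using simplified forms of the two metrics - NBD as the mean absolute deviation from each ½-octave band's mean, averaged over bands, and SM as the r² of a regression on log frequency. The band edges and the 100 Hz-16 kHz range are my assumptions, not necessarily Olive's exact implementation:

Code:
import numpy as np
from scipy import stats

# Log-spaced frequency grid over the range the metrics are computed on.
f = np.geomspace(100, 16e3, 1000)

def nbd(spl, freqs):
    # Mean absolute deviation from each half-octave band's mean, averaged over bands.
    edges = 100 * 2 ** (0.5 * np.arange(16))  # half-octave edges, 100 Hz to ~18 kHz
    devs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spl[(freqs >= lo) & (freqs < hi)]
        devs.append(np.mean(np.abs(band - band.mean())))
    return np.mean(devs)

def sm(spl, freqs):
    # Smoothness: r^2 of a linear regression of level on log10(frequency).
    return stats.linregress(np.log10(freqs), spl).rvalue ** 2

for slope in (1.0, 0.1, 0.001):          # dB per decade
    line = slope * np.log10(f)           # a perfectly straight, slightly tilted line
    print(f"slope={slope}: NBD={nbd(line, f):.6f}, SM={sm(line, f):.6f}")
# NBD shrinks in proportion to the slope while SM stays pinned at 1, so
# (NBD, SM) approaches (0, 1) without ever reaching it. (A zero slope
# would make r^2 undefined, as noted above.)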
 
bobbooo

edechamps said:
I understand your point. I'll just point out that 15 Hz is also arbitrary if you don't know the roll-off slope. […] Since most FR data stops at 20 Hz, and 20 Hz is undoubtedly closer to the LFX of speakers used to calibrate the model (whereas the model almost surely never "saw" an LFX of 15 Hz), I still think 20 Hz makes the most sense. […]

As I understand it, ported speakers/subs naturally roll off at ~24 dB per octave, and sealed ones at ~12 dB per octave. So it looks like the 'worst' ideal case (!) is an LFX frequency of around 14 Hz, only just below the 14.5 Hz figure. But yeah, it's all pretty arbitrary, although fixing it to give 10 as the maximum score makes it less so in my eyes. I suspect a score of 10 probably subconsciously acted as an 'asymptotic' maximum for most of the listeners due to contraction bias, but I'm OK with that. A 10 could be seen as actually listening to the music live, which obviously no (current) speaker design can fully replicate. But who knows, in 100 years maybe we'll have holographic speakers that can recreate the full soundfield of a live orchestra indistinguishable from the real thing ;)
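
Putting numbers on that (assuming a knee at 20 Hz and a -6 dB point $6/S$ octaves below it):

$$20\,\mathrm{Hz} \times 2^{-6/12} \approx 14.1\,\mathrm{Hz}\ \text{(sealed, 12 dB/oct)},\qquad 20\,\mathrm{Hz} \times 2^{-6/24} \approx 16.8\,\mathrm{Hz}\ \text{(ported, 24 dB/oct)}$$

so the ~14 Hz figure comes from the shallower sealed slope, the case that demands the deepest -6 dB point for a response flat to 20 Hz.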

I don't think the model necessarily has to have 'seen' a particular value in order to predict it, though; it's necessarily an extrapolation from the limited data set it was derived from, so I don't think the fact that there were no speakers with LFX frequencies below 20 Hz has much relevance. But maybe it's good to have both yours and @MZKM's different subwoofer scores anyway - his for an ideal sub, and yours for a more average (and cheaper) one :)
 
amirm

edechamps said:
I don't buy into this fixation on the number 10. […] I suspect the number 10 was only used to anchor the scale, and is meaningless otherwise.
I took the test at Harman and was given the score sheet. Here is how I voted (sorry the picture is blurry, the room was almost totally dark):

[Image: Harman Voting.jpg - photo of the listening-test score sheet]


As you see the highest score I gave was 6. When I heard the first speaker it was a real puzzle as to how to score it from 1 to 10. It became more clear as I heard more samples. But still, I didn't hear anything that made my jaw drop with realism to give a score of 10.
 
MZKM

amirm said:
I took the test at Harman and was given the score sheet. […] As you see the highest score I gave was 6. […]
What do you think you would rate your Salon2’s?
 

amirm

MZKM said:
What do you think you would rate your Salon2’s?
The content they play is not the best recordings in the world, so the impression was not jaw-dropping. In that regard, I don't know that I would have gone to 10 with it.
 

edechamps

@MZKM @pierre Heads-up: while implementing some data consistency checks in Loudspeaker Explorer, I noticed a weird discrepancy in @amirm's data export, and I don't remember it being mentioned before. And no, I'm not referring to the well-known ER/PIR weighting issue.

As people dealing with this data know, @amirm's zipfiles contain the following three files: "Horizontal Reflections.txt", "Vertical Reflections.txt", and "Early Reflections.txt".

At first glance it looks like the data in "Early Reflections.txt" is simply a copy of the data in the other two files (in other words, they're redundant with each other).

In practice, I have observed that the data matches exactly for most of the columns, but there is one notable exception: the Rear reflection data is inconsistent between "Early Reflections.txt" and "Horizontal Reflections.txt". And the difference is far from negligible - here's an example:

[Image: Loudspeaker Explorer chart(24).png]


To figure out which one is correct, I have recomputed the spatial average following CTA-2034A from the raw angle data:

Code:
def rear_wall_reflection(speaker_fr):
    # Rear wall bounce as defined in CTA-2034A §5.2: a power average of the
    # horizontal SPL curves from ±90° around to 180°. "Sound Pessure Level"
    # (sic) is the literal column header in the Klippel export. Note that
    # ±150°, ±160° and ±170° are listed twice below; it's this exact angle
    # list that reproduces "Horizontal Reflections.txt".
    return speaker_fr.loc[:, ('Sound Pessure Level [dB]', 'SPL Horizontal', [
        '-90°', '90°', '-100°', '100°', '-110°', '110°', '-120°', '120°',
        '-130°', '130°', '-140°', '140°', '-150°', '150°', '-160°', '160°',
        '-170°', '170°', '-150°', '150°', '-160°', '160°', '-170°', '170°',
        '180°',
    # Power-average the selected angle columns (dB -> power -> mean -> dB).
    ])].pipe(lsx.fr.db_power_mean, axis='columns')

This generated data that is identical to "Horizontal Reflections.txt".
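
(For readers without the Loudspeaker Explorer source handy: `lsx.fr.db_power_mean` is just a power-domain average - roughly the following sketch, though the real helper may differ in detail.)

Code:
import numpy as np

def db_power_mean(df, axis='columns'):
    # Average dB values in the power domain:
    # dB -> power, arithmetic mean, power -> dB.
    return 10 * np.log10((10 ** (df / 10)).mean(axis=axis))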

This led me to conclude that the correct curve is the "Rear" column in "Horizontal Reflections.txt". The "Rear Wall Bounce" column in "Early Reflections.txt" is wrong and should not be used.

This can even be seen directly in @amirm's reviews, for example on the same speaker. Notice how the curves match exactly… except "Rear":

[Image: Ascend Acoustics CMT-340 SE Center CEA-2034 Spinorama - Horizontal]
[Image: Ascend Acoustics CMT-340 SE Center CEA-2034 Spinorama - Early Window]


@amirm: if this wasn't already reported, you might want to tell Klippel about this issue?
 
MZKM

edechamps said:
@MZKM @pierre Heads-up: while implementing some data consistency checks in Loudspeaker Explorer, I noticed a weird discrepancy in @amirm's data export […] The "Rear Wall Bounce" column in "Early Reflections.txt" is wrong and should not be used. […]
Like how some companies give monetary rewards for finding bugs, Klippel should give Amir a free module :)

I have been computing my Early Reflections graph from the horizontal & vertical measurements because I was worried about an issue like this.
 

pierre

edechamps said:
@MZKM @pierre Heads-up: while implementing some data consistency checks in Loudspeaker Explorer, I noticed a weird discrepancy in @amirm's data export […]

For score-related computations I use only the horizontal & vertical SPL data, but I do display the other curves as-is. Good eyes.
 

edechamps

@MZKM @pierre @Maiky76 FYI, my consistency checks on the various CTA2034 spatial averages reveal that, aside from previously known issues, the average curves in @amirm's zipfiles are accurate to around ±0.001 dB, with the largest error (across all published data thus far) on the Floor Reflection of PreSonus Eris E5 XT at 17753.2 Hz, where "Vertical Reflections.txt" says 105.163 dB, but I find 105.1639 dB. It's accurate down to rounding error, basically.

There are some interesting subtleties with the Sound Power calculation. At first I did the following (for each frequency point):
  1. Convert from dB to Pascals
  2. Square
  3. Multiply by the weights in CTA-2034A Appendix C
  4. Sum
  5. Square root
  6. Convert from Pascals back to dB
Even getting to that point was a bit tricky, because it wasn't entirely clear when to multiply by the weights (before or after squaring?), and of course there's the subtle trap of counting 0° and 180° twice (they appear in both orbits, but each must only be counted once, hence 70 points).

However, I was a bit confused because, with the above calculation, I would sometimes get discrepancies with the data in CEA2034.txt that, while still very small, couldn't be explained by rounding error alone. For example, on PreSonus Eris E5 XT at 4594.48 Hz, CEA2034.txt says 100.895 dB, but I find 100.879 dB - a ~0.016 dB difference.

It turns out the reason is that the 70 weights, when copied from CTA-2034A Appendix C, almost add up to 1, but not exactly: they add up to ~0.996. Guess what -0.016 dB corresponds to as a power factor? 0.996. Mmm…

So I added an additional step between #4 and #5 where I divide by the sum of the weights - in other words I scaled the weights so their sum is normalized to 1. That did the trick and we're back to an error within ±0.001 dB. So whatever mistake the authors of CTA-2034A made, at least Klippel, to their credit, didn't fall for that one!
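
In code, the whole recipe including that fix is only a few lines. A sketch (my naming: `spl_db` holds the 70 curves' values at one frequency point, `weights` the 70 Appendix C values):

Code:
import numpy as np

def sound_power_db(spl_db, weights):
    # spl_db:  70 SPL values in dB at one frequency point (both orbits,
    #          with 0° and 180° each counted only once).
    # weights: the 70 CTA-2034A Appendix C weights (which sum to ~0.996).
    spl_db, weights = np.asarray(spl_db), np.asarray(weights)
    p2 = (10 ** (spl_db / 20)) ** 2      # steps 1-2: dB -> pressure, squared
    s = np.sum(weights * p2)             # steps 3-4: weight and sum
    s /= np.sum(weights)                 # the added step: weighted *mean*
    return 20 * np.log10(np.sqrt(s))     # steps 5-6: root, back to dB

Dropping the normalization line reproduces the ~-0.016 dB offset described above.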
 
MZKM

edechamps said:
@MZKM @pierre @Maiky76 FYI, my consistency checks on the various CTA2034 spatial averages reveal that, aside from previously known issues, the average curves in @amirm's zipfiles are accurate to around ±0.001 dB […] the 70 weights, when copied from CTA-2034A Appendix C, almost add up to 1, but not exactly. They add up to ~0.996. […]
Or it's the issue I pointed out in post #337: there's another list of weights, exactly the same as the 2034 one except that the 0° & 180° axes are different, and that one adds up to 1.000000004.
 

edechamps

MZKM said:
Or it's the issue I pointed out in post #337: there's another list of weights, exactly the same as the 2034 one except that the 0° & 180° axes are different, and that one adds up to 1.000000004.

Interesting, thanks for reminding me of your post. I just tried it with the 0°=180° weight set to 0.002417944 instead of 0.000604486. (Normalizing the weights doesn't seem to make a difference in that case, since it's so close to 1 already.)

Using that new weight, the discrepancy with Klippel became way worse: now the worst error is on the Emotiva Airmotiv 6s at 19999.9 Hz, where CEA2034.txt says 81.618 dB, but the calculation using the new weight says 82.033 dB - a whopping 0.415 dB difference. Clearly Klippel is not using this alternative value for 0°/180°.

[Image: Loudspeaker Explorer chart(25).png]


So it does look like the way to match the Klippel data is to use the CTA-2034A weights, but normalize them first so that the sum of the 70 weights equals 1. (In other words, a true weighted mean, not a weighted sum.)

It would be nice if someone could double-check what the mathematically correct weights are. This doc that @pierre found is supposed to describe it, but I'm not that good at math :oops:
 
MZKM

edechamps said:
Interesting, thanks for reminding me of your post. I just tried it with the 0°=180° weight set to 0.002417944 instead of 0.000604486. […] Clearly Klippel is not using this alternative value for 0°/180°. […]
Ah, I just noticed that those values come from multiplying the 0°/180° weight by 4 (0.000604486 × 4 = 0.002417944), I guess counting the extra passes the measurement mic makes.

Who is correct, though?
 
MZKM

edechamps said:
It would be nice if someone could double-check what the mathematically correct weights are. This doc that @pierre found is supposed to describe it, but I'm not that good at math :oops:

If the measurement points essentially covered a sphere, we really wouldn't need to weight them, but that's not what the measurement does (and even if it did, we are only dealing with 70 measurement points for the Spinorama).
[Image: quadarea.png - a sphere divided into quadrangles]

Along the Zone (horizontal measurements), the shape, and thus the area, of each quadrangle stays the same, so no weighting is needed if you only care about that; you would only need weighting along the Lune (vertical measurements). The Spinorama is just one 10° Zone and one 10° Lune; all the unshaded portions in the image are not included, but they are exactly what Sound Power is meant to describe (SPL emitted in all directions).

Since each patch of the sphere should be treated equally, you have to find the area of each quadrangle (and of each spherical triangle at the north/south poles).

In order to calculate this on your own, you would need to simulate a sphere with 8 quadrangles & 2 triangles in a 10° Lune. So yeah, not fun at all. Finding the area is simple - the formulas are known - I just don't know how to arrive at the coordinates of the quadrangles.
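
The area formula itself is the easy part, though: on a unit sphere, a quadrangle bounded by polar angles θ1..θ2 and spanning Δφ of azimuth has solid angle Δφ·(cos θ1 − cos θ2), and the polar triangles are just the θ1 = 0 case. A quick sketch of that building block (mapping the 70 spin angles onto actual patches is the part I'm stuck on):

Code:
import numpy as np

def patch_area(theta1_deg, theta2_deg, dphi_deg):
    # Solid angle of a patch on the unit sphere bounded by polar angles
    # theta1..theta2 (0° = pole) and spanning dphi degrees of azimuth.
    # theta1 = 0 gives the spherical triangle at the pole.
    t1, t2 = np.radians(theta1_deg), np.radians(theta2_deg)
    return np.radians(dphi_deg) * (np.cos(t1) - np.cos(t2))

# Sanity check: 18 polar bands x 36 azimuthal slices of 10° tile the sphere.
total = 36 * sum(patch_area(t, t + 10, 10) for t in range(0, 180, 10))
print(total, 4 * np.pi)  # both ≈ 12.566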
 
Last edited: