Man, I tell you guys one thing... this -B spec is plain weird. They apply weighting to each frequency's output. I have no idea why. But they do. The -A spec does not do this.
Klippel replied to me with a sheet on how they take the data and process it for final reporting per the -B spec. What I mean is, you have the actual measured max SPL (Peak SPL in the second column below); that's where -A would stop. But then -B takes those values and applies weighting to them per Paragraph 7.3 of the spec. Below is a screenshot example of how you get from the actual measured peak to the CTA-2010B reported values.
View attachment 83062
Even though I am perplexed as to why they apply the weighting, I won't be changing anything in my processing because the purpose of a spec is to be just that. So, I will be providing my results in the format prescribed by CTA-2010B. But for those of you interested in how it works, there you go.
The reason for the weighting might be to factor in the equal-loudness curves. I have no idea, but here is a re-statement of what I read in the file you attached:
(1.) Each 1/3 octave band shall be weighted per the table.
(2.) The "Average Weighted SPL" is the average of the 1/3 octave bands from 40 to 80 Hz, inclusive. Since this is a single octave, there are three weighted bands that are averaged together to produce the Average Weighted SPL. * See note below.
(3.) If any of the weighted 1/3 octave bands exceeds the Average Weighted SPL by more than 3 dB, it shall be replaced by the Average Weighted SPL plus 3 dB. In other words, no weighted 1/3 octave band will be more than 3 dB greater than the Average Weighted SPL. Upon performing this step, the individual 1/3 octave band values are both weighted and limited.
(4.) The individual weighted-and-limited values are then each converted to a squared-pressure value using the formula p^2 = 10^(SPL/10). In other words, divide each of the weighted-and-limited values by 10, then take the base-10 antilog (10^x) of each quotient. The resulting values are the squared-pressure values for the individual 1/3 octave bands.
(5.) & (6.) & (7.) Take the sum of the squared-pressure values, then take the base-10 log of the sum. Multiply this by 10, then subtract 10 dB. This is the Broadband Peak SPL.
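For anyone who wants to play with the math, here is a rough Python sketch of steps (2) through (7). The SPL values are made up for illustration; the real inputs are your measured peaks after applying the weighting table from Paragraph 7.3, and I've used the endpoint-averaging interpretation of the band average described in the footnote below.

```python
import math

# Made-up weighted 1/3-octave SPL values (dB). The real numbers come from
# the measured peaks after applying the Paragraph 7.3 weighting table.
spl = {40: 113.0, 50: 114.5, 63: 114.0, 80: 112.0}

# Step (2): Average Weighted SPL over the three bands 40-50, 50-63 and
# 63-80 Hz, each band taken as the average of its edge values, which
# collapses to the 1, 2, 2, 1 weighting discussed in the footnote.
avg_weighted = (spl[40] + 2 * spl[50] + 2 * spl[63] + spl[80]) / 6

# Step (3): limiting -- no band may exceed the average by more than 3 dB.
limited = {f: min(v, avg_weighted + 3.0) for f, v in spl.items()}

# Step (4): squared pressure for each band, p^2 = 10^(SPL/10).
p_squared = {f: 10 ** (v / 10) for f, v in limited.items()}

# Steps (5)-(7): sum the squared pressures, convert back to dB, subtract 10 dB.
broadband_peak = 10 * math.log10(sum(p_squared.values())) - 10.0

print(f"Average Weighted SPL: {avg_weighted:.1f} dB")    # 113.7 dB
print(f"Broadband Peak SPL:   {broadband_peak:.1f} dB")  # 109.5 dB
```

With these particular made-up values none of the bands exceed the average by more than 3 dB, so the limiting in step (3) changes nothing, same as in the posted data.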
*
This is splitting a fine hair, but the way this is specified implies that there are average values for the three 1/3 octave bands 40 - 50, 50 - 63, and 63 - 80. If you have values for the frequencies 40, 50, 63 and 80, the value for the 1st band would be the average of the values for 40 Hz and 50 Hz, the value for the 2nd band would be the average of the values for 50 Hz and 63 Hz, and the value for the last band would be the average of the values for 63 Hz and 80 Hz. When you take the average of these three average values, the formula will be:
[ (SPL40 + SPL50)/2 + (SPL50 + SPL63)/2 + (SPL63 + SPL80)/2 ] / 3. This is algebraically equivalent to:
[ SPL40 + 2·SPL50 + 2·SPL63 + SPL80 ] / 6
This may seem odd, since the values for 50 Hz and 63 Hz are weighted twice as strongly as the values for 40 Hz and 80 Hz. But even though it may seem odd, it is mathematically correct. If you had taken average SPL values for each of the three 1/3 octave bands and then taken their average, the result would likewise be more heavily influenced by the values at 50 Hz and at 63 Hz than by the values at 40 Hz and 80 Hz. The "Average Weighted SPL" for your data should be 113.9 dB. This is splitting hairs, especially given that it has no effect on the results: none of the weighted values were modified by applying the limiting value. All the "Average Weighted SPL" does is establish the limiting value for the individual weighted values.
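If you want to convince yourself of the equivalence, it takes a few lines of Python (the SPL values here are made up, only the algebra matters):

```python
spl40, spl50, spl63, spl80 = 113.0, 114.5, 114.0, 112.0  # made-up dB values

# Average of the three band averages, each band being the mean of its edges...
avg_of_band_averages = ((spl40 + spl50) / 2
                        + (spl50 + spl63) / 2
                        + (spl63 + spl80) / 2) / 3

# ...versus the collapsed form with the 1, 2, 2, 1 weighting.
collapsed = (spl40 + 2 * spl50 + 2 * spl63 + spl80) / 6

print(f"{avg_of_band_averages:.4f} vs {collapsed:.4f}")  # identical
```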
(This is reminiscent of the way the function values are weighted in the composite trapezoidal rule for numerical integration, where the weighting pattern goes 1, 2, 2, ..., 2, 2, 1. That pattern also puzzles many people, because it just doesn't seem to make sense that the interior function values would be weighted twice as strongly as the endpoint values, especially since the interval spacing is constant. But it is nevertheless correct, and for the same reason as above: each interior point sits on the boundary of two intervals, so it gets counted once for each. If you divide up the portion of the domain over which you wish to approximate the integral into equal intervals of width h, evaluate the function at the interval boundaries, apply the 1, 2, 2, ..., 2, 1 weighting, and multiply the sum by h/2, what you get is a numerical approximation for the integral, notwithstanding that the weighting pattern seems peculiar. Simpson's rule does something similar with an even stranger-looking pattern, 1, 4, 2, 4, ..., 2, 4, 1.)
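And for the curious, that trapezoidal weighting is easy to sanity-check by integrating something with a known answer (this is just an illustration, nothing to do with the spec itself):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: 1, 2, 2, ..., 2, 1 weights, times h/2."""
    h = (b - a) / n
    weights = [1] + [2] * (n - 1) + [1]
    total = sum(w * f(a + i * h) for i, w in enumerate(weights))
    return total * h / 2

# The integral of sin(x) from 0 to pi is exactly 2.
approx = trapezoid(math.sin, 0.0, math.pi, 1000)
print(f"{approx:.6f}")  # very close to 2
```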