
Speaker Equivalent SINAD Discussion

-Matt-

Addicted to Fun and Learning
Joined
Nov 21, 2021
Messages
679
Likes
569
Sorry if this has already been suggested...

I was trying to think of a simple test for speaker distortion. Could you simply do an in-room measurement (e.g. mic at 1 m) at many different volume levels?

Rather than a frequency sweep, just play a standard test piece of music (10 s long).

By subtracting the expected waveform from the measured one you could quantify the errors. You could also take an FFT of the errors to get frequency information.

Importantly, by repeating at multiple volumes you can then make a second subtraction: subtract the lowest-volume recorded waveform from each higher-volume recorded waveform (normalised to peak level). By doing this you will be able to plot how the errors increase with volume.

I suggest that this second subtraction will remove errors introduced by the room response. (This assumes that the room response remains linear while the speaker response becomes non-linear with increasing volume.)

Any thoughts?
Could something like this be feasible?
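A minimal sketch of the idea in Python (hypothetical helper, synthetic usage; it assumes the recordings are already time- and sample-aligned, which a real test would have to arrange):

```python
# Sketch of the proposed "double subtraction" test. Assumes all recordings
# are aligned captures of the same test signal at increasing volume.
import numpy as np

def error_vs_volume(recordings, levels_db):
    """recordings: list of 1-D arrays, quietest first.
    Returns {level_dB: RMS error %} of each louder recording relative to
    the quietest one, after peak normalisation."""
    ref = recordings[0] / np.max(np.abs(recordings[0]))
    errors = {}
    for level, rec in zip(levels_db[1:], recordings[1:]):
        norm = rec / np.max(np.abs(rec))   # normalise to peak level
        residual = norm - ref              # linear room response cancels here
        errors[level] = 100 * np.sqrt(np.mean(residual**2) / np.mean(ref**2))
    return errors
```

An FFT of `residual` would then give the frequency breakdown of the errors, as described above.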

Edit:
1) You would need to ensure that the microphone remains linear for all volumes, perhaps it would need variable gain?

2) I guess a frequency sweep could work just as well as a standard test piece of music.
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,505
Likes
25,335
Location
Alfred, NY
-Matt- said:
… I guess a frequency sweep could work just as well as a standard test piece of music.
We do this with Farina chirps.
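For reference, the Farina chirp is an exponential sine sweep; a minimal generator (a sketch of the general technique, not what REW or Klippel actually run) looks like:

```python
# Exponential (Farina) sine sweep: frequency rises exponentially from f1 to
# f2, which lets deconvolution separate harmonic distortion orders in time.
import numpy as np

def farina_sweep(f1, f2, duration, fs):
    t = np.arange(int(duration * fs)) / fs
    L = duration / np.log(f2 / f1)          # sweep rate constant
    return np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1))

sweep = farina_sweep(20, 20000, 10, 48000)  # 10 s sweep, 20 Hz to 20 kHz
```

Deconvolving the recording with the time-reversed, amplitude-compensated sweep puts each harmonic order at a different delay, which is how distortion can be read out of a single sweep.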
 

-Matt-
In the above, the nice, easily comparable metric for consumers could be a 2-D image plot of error magnitude (colour) vs frequency (horizontal) vs volume (vertical), with a standardised colourmap.

You could then add a horizontal line to show where a certain error threshold is exceeded, e.g. "for speaker A, errors exceed 0.5% at 75 dB @ 1 m". (This would become the headline figure for speaker comparison.)

...optionally also vertical lines to show the frequency range with acceptably low errors.
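The headline-figure extraction could be sketched like this (synthetic numbers only; a real test would fill the error matrix from measurements, and plotting the same matrix with `imshow()` would give the 2-D colour map described above):

```python
# Given error magnitudes on a grid (rows = playback SPL, columns = frequency
# bins), the "headline figure" is the lowest SPL whose row crosses the
# threshold anywhere.
import numpy as np

def headline_spl(err_percent, spls, threshold=0.5):
    for spl, row in zip(spls, err_percent):
        if np.any(row > threshold):
            return spl
    return None  # speaker stays under the threshold everywhere tested

spls = [70, 75, 80, 85, 90]                            # dB at 1 m
err = np.array([[0.1] * 4, [0.2] * 4, [0.4] * 4,
                [0.6] * 4, [1.2] * 4])                 # % error per bin
print(headline_spl(err, spls))  # 85 with these synthetic numbers
```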
 

-Matt-
Search failed me for a while but in the end I found this...
[attachment: Screenshot_20220128-161547_Adobe Acrobat.jpg (distortion line plot)]

From here.

But this isn't equivalent to what I described above: it doesn't capture the change in performance with volume (the "how loud can it go" factor)!

You would need a sequence of these graphs to do that. Each horizontal row of pixels in the 2-D plot I'm suggesting would be equivalent to one such line plot.
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,556
Location
Land O’ Lakes, FL
The thing is, how could this be a meaningful metric for anything other than max SPL? There is basically no way to make it a preference metric.

I think what Anthony Grimani is suggesting for his CEDIA specifications standard is fine: simply report the SPL at which the frequency response compresses/enhances. For his standard he stated a 3 dB threshold; I'm not sure if that's too lax. I know Erin usually does this testing, and even at 102 dB many speakers don't reach 3 dB of compression, many not even 2 dB or 1 dB.

Or, report the SPL at which THD exceeds 100%, which typically happens in the bass.

And the SPL reported would be the IEC 300 Hz-3 kHz average.
 

-Matt-

Addicted to Fun and Learning
Joined
Nov 21, 2021
Messages
679
Likes
569
MZKM said:
The thing is, how could this be a meaningful metric for anything other than max SPL? There is basically no way to make it a preference metric.

The easy/simple metric would be the horizontal line on the plot (and associated spl value); which shows the spl at which the errors/distortion exceed some threshold. I think this should be roughly equivalent to this...

MZKM said:
I think what Anthony Grimani is suggesting for his CEDIA specifications standard is fine: simply report the SPL at which the frequency response compresses/enhances. …

More complex preference metrics could be calculated from the data. The important idea here is the collection of data at different volume levels to capture the non-linearity of the speaker response.

By the way, I'm of the view (perhaps naively) that the preference rating is calculated from the frequency response, which, in large part, can be addressed by EQ. (So the existence of EQ sort of makes current preference ratings a bit pointless.) A measure of the speaker's actual performance (accuracy in recreating a waveform at the listening position, at different volumes) then seems more useful to me.
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,505
Likes
25,335
Location
Alfred, NY
-Matt- said:
The important idea here is the collection of data at different volume levels to capture the non-linearity of the speaker response.
If the speaker response is non-linear, then the distortion vs frequency captures it. One should note that distortion vs volume is invariably monotonic, so the curves will be an upper bound for distortion at lower volume levels.
 

-Matt-
So, I tried it out...

From Toole's book "Sound Reproduction…" there are only two pages about non-linear distortion; here is the summary (page 453):

"In loudspeakers it is fortunate that distortion is something that normally does not become obvious until devices are driven close to or into some limiting condition. In large-venue professional devices, this is a situation that can occur frequently. In the general population of consumer loudspeakers, it has been very rare for distortion to be identified as a factor in the overall subjective ratings. This is not because distortion is not there or is not measurable, but it is low enough that it is not an obvious factor in judgments of sound quality at normal foreground listening levels."

Based on the above, I was expecting to see distortion increase with volume. I was therefore expecting to see something like this:
[attachment: expected.png (mock-up of the expected result)]

The red line shows the point where the speaker is "driven close to or into some limiting condition" (set at about 5% THD here), and the black line shows the low-frequency extension. (Note: this is just a mock-up made by inverting the real measurement below and doubling the numbers on the volume scale.)

What I actually measured surprised me a bit:
[attachment: actual.png (the actual measurement)]

Distortion (as reported by REW in THD%) just keeps decreasing as volume is increased.

I couldn't go to higher volumes without running out of headroom on my UMIK-1, positioned close to a desktop speaker connected to my pc. Perhaps if I could drive the speaker to much higher volumes without clipping the microphone I would eventually see something like the first plot?

Is the fact that I see more distortion at lower volumes (in the actual plot) what you meant by "the curves will be an upper bound for distortion at lower volume levels"?

At low volumes (in my noisy environment), the measurement is dominated by the noise floor.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,754
Likes
37,597
-Matt- said:
… Distortion (as reported by REW in THD%) just keeps decreasing as volume is increased. … At low volumes (in my noisy environment), the measurement is dominated by the noise floor.
Part of the reason for that is noise: low-frequency noise in particular gets counted as distortion. As you raise the speaker volume above the noise, you get a lower distortion readout, even though what you were reading before was really only noise.
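That behaviour falls out of a toy model (illustrative numbers only, not a fit to any measurement): with a fixed acoustic noise floor, measured THD+N falls roughly as 1/level until real distortion takes over.

```python
# Measured THD+N with a constant noise floor n and true distortion fraction d:
# THD+N(s) = sqrt((d*s)^2 + n^2) / s, which decreases with signal level s
# until the d*s term dominates, then flattens out at d.
import numpy as np

def thd_n_percent(signal_rms, true_thd=0.005, noise_rms=0.01):
    return 100 * np.sqrt((true_thd * signal_rms) ** 2 + noise_rms ** 2) / signal_rms

for s in [0.1, 0.3, 1.0, 3.0]:
    print(f"level {s}: {thd_n_percent(s):.2f}% THD+N")  # falls with level
```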
 

SIY
-Matt- said:
… Distortion (as reported by REW in THD%) just keeps decreasing as volume is increased. … At low volumes (in my noisy environment), the measurement is dominated by the noise floor.
The +N part of THD+N is key here.
 
MZKM
-Matt- said:
… A measure of the speaker's actual performance (accuracy of recreating a waveform at the listening position, and at different volumes) then seems more useful to me.
You can’t just EQ any speaker into perfection. That's why the people who post the auto-PEQ settings for the speakers measured here also calculate the preference score; they even code the optimiser to maximise that score.
 

-Matt-
MZKM said:
You can’t just EQ any speaker into perfection.
Yes, I thought that statement might be a bit controversial.

I was trying to draw a distinction between:

1) "Preference", which is dominated by frequency response, can at least to some extent be addressed with EQ, and is to some extent a matter of personal taste.

and

2) Absolute, measured speaker performance (in terms of a number more like SINAD).
 
MZKM
-Matt- said:
Yes, I thought that statement might be a bit controversial. … I was trying to draw a distinction between 1) "preference", which is dominated by frequency response, and 2) absolute, measured speaker performance (in terms of a number more like SINAD).
Keep in mind the preference formula isn’t just on-axis frequency response linearity, it includes bass extension (which you can’t EQ better without sacrificing max SPL) as well as off-axis linearity, and only if the directivity index (listening window vs early reflections) is linear can EQ work well.
 

-Matt-
MZKM said:
Keep in mind the preference formula isn't just on-axis frequency response linearity; it includes bass extension as well as off-axis linearity …
Yes indeed. I'm not knocking the preference formula; I was just trying to suggest an alternative way of measuring and presenting raw speaker performance data. (I'm not from this field, so it is likely that I'm reinventing the wheel!)

Note: I wasn't able to test the exact approach that I proposed (i.e. calculating error magnitudes directly from the measured waveform, then doing an FFT to get frequency resolution, then using subtraction to show the relative change in distortion with volume). Instead I just used the distortion data produced by REW; I'm not sure whether that is calculated in a similar way.
 

Sancus

Major Contributor
Forum Donor
Joined
Nov 30, 2018
Messages
2,926
Likes
7,643
Location
Canada
It's hard for me to see how you could ever get accurate distortion numbers with naive indoor measurements. Reflections are always going to cause a material impact. You can raise the SPL to subtract noise, but that doesn't work on reflections since they also scale up. You can gate your measurements, but that imposes all the typical limitations of that approach.

Of course we already have CEA2010 as a methodology for measuring max SPL/impact of distortion, and it's performed outdoors as a rule. And the Klippel-based measurements that Erin and Amir use can rely on that tool to remove the impact of reflections from any of their tests.
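For completeness, the time gating mentioned above is simple to sketch (a generic illustration, not how Klippel or CEA-2010 implement it); the typical limitation is that a gate of length T restricts frequency resolution to roughly 1/T Hz:

```python
# Zero out the impulse response after the gate (with a short cosine taper)
# so room reflections arriving later than the gate are excluded.
import numpy as np

def gate_impulse_response(ir, fs, gate_ms, taper_ms=1.0):
    n_gate = int(fs * gate_ms / 1000)
    n_taper = int(fs * taper_ms / 1000)
    window = np.zeros(len(ir))
    window[:n_gate - n_taper] = 1.0
    window[n_gate - n_taper:n_gate] = 0.5 * (1 + np.cos(
        np.pi * np.arange(n_taper) / n_taper))
    return ir * window

# A 5 ms gate at 48 kHz keeps ~240 samples, so the gated response is only
# trustworthy above roughly 200 Hz.
```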
 

Blumlein 88
Sancus said:
It's hard for me to see how you could ever get accurate distortion numbers with naive indoor measurements. …
Listen to what Sancus said. You'll save yourself lots of trouble.
 