
High-resolution Speaker Frequency Response & Distortion Measurements and Why We Need Them (video)

I think it was part of my algebra in Maths class decades ago, so I can't remember it.

I had a thought, though, re the interpretation of the graph; correct me if you know I'm wrong (assuming you understand it):
View attachment 216624
Is the harmonic distortion occurring at negative numbers on the x-axis there because the microphone is capturing the harmonic distortion, which is obviously at a higher frequency than the excitation frequency that caused it, and therefore the microphone registers a reading at a frequency that hasn't yet been played in the chirp? If that's the case, then I can understand why the harmonic distortion products appear in that graph at ever-decreasing negative values on the x-axis: as you go up through the 2nd, 3rd, 4th (etc.) harmonics, the microphone is picking up a response at frequencies that have yet to be played during the chirp. So the negative time represents how much earlier the microphone picks up a significant reading for a given frequency than expected from the fundamental chirp, and that's how the method recognises harmonic distortion as distinct from the fundamental chirp.

"Complex Numbers" red-herring aside, I think you've kinda got it!

Obviously the HD signal is simultaneous with the excitation signal, so the delays are calculation/model artifacts.

Personally, I find it clearer to reason about the above in the frequency domain view (Fig 5 below, from the paper @NTK linked), where the time and frequency relationships are more obvious.

Maybe it would've helped to clarify in the time-domain image that "2nd Harmonic" is 2nd harmonic distortion across all frequencies? YMMV of course.

[Screenshot: Fig 5 from the linked paper]
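
For anyone who wants to poke at this mechanism directly, here is a minimal Python sketch of the log-sweep deconvolution idea (the Farina method). The "speaker" nonlinearity is faked with an arbitrary polynomial (the 0.05 and 0.02 coefficients are made up), but it shows the 2nd and 3rd harmonic impulse responses landing at negative times, ahead of the linear response, exactly as in the attachment:

import numpy as np
from scipy.signal import fftconvolve

fs = 48000                       # sample rate, Hz
T = 2.0                          # sweep length, s
f1, f2 = 20.0, 20000.0           # sweep range, Hz
t = np.arange(int(T * fs)) / fs
L = T / np.log(f2 / f1)          # "rate" constant of the exponential sweep

# Exponential (log) sine sweep and its amplitude-compensated inverse filter
sweep = np.sin(2 * np.pi * f1 * L * (np.exp(t / L) - 1.0))
inverse = sweep[::-1] * np.exp(-t / L)

# Fake a weakly nonlinear speaker: fundamental plus 2nd and 3rd harmonic distortion
measured = sweep + 0.05 * sweep**2 + 0.02 * sweep**3

# Deconvolve: the linear impulse response lands at lag 0, and the k-th
# harmonic's impulse response lands L*ln(k) seconds EARLIER (negative time)
h = fftconvolve(measured, inverse)
h /= np.max(np.abs(h))
i0 = len(sweep) - 1              # index corresponding to lag 0

for k in (2, 3):
    lag = int(round(fs * L * np.log(k)))
    peak = np.max(np.abs(h[i0 - lag - 100 : i0 - lag + 100]))
    print(f"H{k}: impulse at {-L * np.log(k) * 1e3:7.1f} ms, "
          f"level {20 * np.log10(peak):5.1f} dB re fundamental")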
 
Hello All,

Like most threads here, most of the posts are way off topic.

I also note that the measurements need to be high resolution to resolve differences in speaker performance.

I like to run the Log chirp several seconds long. My wife calls it whooping it up in the lab. The longer chirp provides more information and greater resolution.
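
As a back-of-the-envelope illustration of why the longer chirp buys resolution: an FFT-based analysis has a bin spacing of roughly 1/T, so a few seconds of sweep gives sub-hertz bins. A quick sketch (not tied to any particular analyzer):

# FFT bin spacing is ~1/T, so longer captures resolve finer detail
for T in (0.2, 1.0, 5.0):                  # chirp lengths, seconds
    df = 1.0 / T
    bins_below_200 = int(200 / df)         # bins available under 200 Hz
    print(f"{T:4.1f} s sweep -> {df:5.2f} Hz bins ({bins_below_200} bins below 200 Hz)")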

I like to look at the plots with no smoothing of the raw data to get a feel for the results. Sometimes it takes a little smoothing, or more, to get out of the weeds and see the overall trends.

I have posted raw compression-driver impedance plots from the APx555, with no processing or smoothing, and some people believed something had been done incorrectly. Most people expect to see some degree of processing or smoothing.

An added note: just for clarity, I think a noise-floor baseline without a test signal and without speaker output should be posted, to illustrate what the instrument noise floor is all by itself. Perhaps even bury the test microphone in a clean box of kitty litter to illustrate what quiet is, with no added noise or distortion.
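
For that baseline idea, something as simple as this would do; the recording here is a stand-in random array, since the point is just reporting the RMS level of a capture with no stimulus:

import numpy as np

def rms_dbfs(x: np.ndarray) -> float:
    """RMS level in dB relative to digital full scale (+/-1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(x**2)) + 1e-12)

# Stand-in for a few seconds captured with no test signal (or with the
# microphone buried in the kitty litter): replace with a real recording.
silence = 1e-4 * np.random.randn(5 * 48000)
print(f"instrument + room noise floor: {rms_dbfs(silence):.1f} dBFS")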

Thanks DT
 
"Complex Numbers" red-herring aside, I think you've kinda got it!

Obviously the HD signal is simultaneous with the excitation signal, so the delays are calculation/model artifacts.

Personally, I find it clearer to reason about the above in the frequency domain view (Fig 5 below, from the paper @NTK linked), where the time and frequency relationships are more obvious.

Maybe it would've helped to clarify in the time-domain image that "2nd Harmonic" is 2nd harmonic distortion across all frequencies? YMMV of course.

View attachment 216628
Yes, it makes total sense to me that the HD signal runs in parallel with the fundamental chirp. That second graph in your post makes a lot of sense: you can clearly see the different HD signals happening at the same time, and you can also see the HD signals appearing at a higher frequency "sooner than expected" relative to the fundamental chirp (i.e., before the fundamental chirp has actually played that frequency). You can also see the strength of the signal in how dark the line is; it took me a while to work out that the right axis is not really an axis but a legend showing relative amplitude in negative dB, so the darker lines are at higher amplitude.

A question, though, about the relative amplitude: is that relative to the dB of the fundamental chirp? And how come the relative amplitude only tops out at -10 dB (rather than 0 dB)? You'd think the fundamental chirp would be the reference at 0 dB. And since the fundamental chirp is displayed on those graphs (the top graph, and the bottom line of the second graph), you'd think those two lines would need to be referenced at 0 dB. I don't get why it tops out at -10 dB.

I probably don't understand the clever math (formulas and terminology) the software uses to separate it all out from the noise floor, but I think I've visualised the general theory and mechanism of how the chirp works and how harmonic distortion is detected by the microphone whilst the chirp is being played.
 

One would think relative to the stimulus, right (dBr)?
Without knowing exactly what the reference is here (didn't see it in the paper), it's hard to say.

I'm not sure, is my answer. Sorry.
OTOH, not an important mystery!

It could be something as silly as that is just how the colormap legend was rendered.
It's a bad colormap anyway, for picking out details.
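
If anyone wants to render that kind of time-frequency plot with an unambiguous reference, here is one way in Python: normalize the spectrogram to its strongest bin so 0 dBr is the fundamental by construction, and use a perceptually ordered colormap. The sweep parameters are arbitrary stand-ins for whatever the paper used:

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import chirp, spectrogram

fs = 48000
t = np.arange(2 * fs) / fs
x = chirp(t, f0=20, t1=2.0, f1=20000, method="logarithmic")  # stand-in stimulus

f, tt, S = spectrogram(x, fs=fs, nperseg=2048, noverlap=1536)
f, S = f[1:], S[1:]                           # drop the DC row so a log axis works
S_dbr = 10 * np.log10(S / S.max() + 1e-15)    # 0 dBr == strongest bin (the fundamental)

plt.pcolormesh(tt, f, S_dbr, vmin=-90, vmax=0, cmap="magma", shading="auto")
plt.colorbar(label="level (dBr, re strongest bin)")
plt.yscale("log")
plt.xlabel("time (s)")
plt.ylabel("frequency (Hz)")
plt.show()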
 
I just thought it was strange that the original stimulus wasn't pictured at 0 dBr, instead topping out at -10 dBr according to the legend. Yeah, it probably doesn't matter to our overall understanding of the graph. Thanks for posting those graphs; they solidified my understanding of the chirp and how harmonic distortion is captured during its playback.
 
In terms of audio: an oscillating signal that contains all the odd harmonics up to the limit of hearing, the Nyquist frequency, or the frequency response of the electronics producing it. In other terms:
$$ f(x) = \frac{4}{\pi} \sum_{n=1,3,5,\ldots}^{\infty} \frac{1}{n}\,\sin\!\left(\frac{n\pi x}{L}\right) $$
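
To put numbers on that sum, here is a quick sketch of the partial sums at x = L/2 (where the ideal square wave equals 1); the (4/π)(1 − 1/3 + 1/5 − …) series crawls toward 1 as more odd harmonics are included:

import numpy as np

L = 1.0
x = L / 2.0                                  # middle of the positive half-cycle

def square_partial(x, n_max):
    """Partial sum of (4/pi) * sum over odd n of sin(n*pi*x/L)/n."""
    return (4 / np.pi) * sum(np.sin(n * np.pi * x / L) / n
                             for n in range(1, n_max + 1, 2))

for n_max in (1, 5, 51, 501):
    print(f"odd harmonics up to n={n_max:>3}: f(L/2) = {square_partial(x, n_max):.4f}")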
Yes, but can you state that with arithmetic? :)
 
Sorry if we disagree; a speaker that doesn't move produces no sound. I'll stop the conversation here.

What you are missing here is that sound = pressure. SPL = sound pressure level.

In a square wave the speaker moves and creates a pressure wave. Even if the cone doesn't move at the top of the wave it still holds the pressure by pushing forward. When the cone retracts it will create an opposite change in pressure.

Of course, in a normal environment without an ideal room and an ideal speaker there will be bleeding, and the pressure will normalize over time. This is why you don't get a square wave with the battery example. I have no idea how fast this happens in practice with a signal switching much, much faster than connecting/disconnecting a battery. Apparently not that fast, based on those measurements.
 
I think it's signal theory. The top of the rectangle (the DC part) is built out of endless harmonics, if you define the rectangle like he did:

https://www.audiosciencereview.com/...and-why-we-need-them-video.35398/post-1238056

If you then restrict the n's to the audible region, it's something like a rectangle. It's not a rectangle, but the cone moves, and with that it can give a signal to the mic. I don't want to be holier than the pope, so I was somewhat wrong.

But because the n's (the harmonics) will not start at 0 and end at infinity, it will be a deformed rectangle.

Maybe @PeteL could show what it should look like with only the harmonics above 20 Hz and below 20 kHz?
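
In the spirit of that request, a sketch of what the band-limited "rectangle" looks like: keep only the odd harmonics whose frequencies fall between 20 Hz and 20 kHz (the 100 Hz fundamental is an arbitrary choice) and plot the result. You get flat-ish tops, rounded by the missing ultrasonics and rippled by Gibbs overshoot:

import numpy as np
import matplotlib.pyplot as plt

fs = 192000                                   # high rate so 20 kHz terms aren't aliased
f0 = 100.0                                    # arbitrary fundamental, Hz
t = np.arange(int(0.02 * fs)) / fs            # two periods at 100 Hz

# Keep only odd harmonics that land inside the 20 Hz - 20 kHz band
ns = [n for n in range(1, 1001, 2) if 20.0 <= n * f0 <= 20000.0]
y = (4 / np.pi) * sum(np.sin(2 * np.pi * n * f0 * t) / n for n in ns)

print(f"harmonics kept: n = {ns[0]} ... {ns[-1]}")
plt.plot(t * 1e3, y)
plt.xlabel("time (ms)")
plt.ylabel("amplitude")
plt.title("Square wave, odd harmonics limited to 20 Hz - 20 kHz")
plt.show()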
 

It's signal theory in that you can deconstruct it as a sum of sine waves (with a Fourier transform), but you can do that with any wave that is not a pure sine wave. Any instrument's sound is similarly just a collection of sine waves with different phases, frequencies, and amplitudes.

But when we hear sounds we do not hear just a collection of different sine waves. At least, not directly.

The question is whether a sound going from pressure a to pressure b and then from pressure b back to pressure a sounds different from a sound going from pressure a to pressure b and then from pressure a to pressure c. And does it even matter? It can be measured with a microphone regardless of how our brain interprets it.

(a = normal pressure, b= normal pressure + delta, c = normal pressure - delta)
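
One way to make that question concrete with made-up numbers: build a pulse train that only goes a→b→a and one that alternates a→b then a→c, and compare spectra. The two are measurably different (the unipolar train has even harmonics the bipolar one lacks), whatever the ear makes of it:

import numpy as np

fs = 48000
t = np.arange(fs) / fs                        # 1 second
carrier = np.sin(2 * np.pi * 100 * t)

# a -> b -> a: narrow positive pressure pulses only
unipolar = (carrier > 0.99).astype(float)
# a -> b then a -> c: alternating positive and negative pulses
bipolar = (carrier > 0.99).astype(float) - (carrier < -0.99).astype(float)

for name, sig in (("unipolar", unipolar), ("bipolar ", bipolar)):
    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    strongest = np.sort(freqs[np.argsort(spec)[-3:]])
    print(f"{name}: strongest components at {strongest} Hz")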
 
Recently a company rep claimed that we don't need high-resolution measurements of frequency response, and that measuring speaker distortion without an anechoic chamber is useless. I thought I'd address this with a deeper dive into how we fundamentally measure frequency response and distortion, covering the three common methods. I then show the application of high-resolution measurements to diagnosing deficiencies in the design and performance of speakers (and headphones).


Sorry, I don't have a text version of this so you have to suffer through the video. :) You can speed it up and get through it in 20 minutes.
Thank you for this information. Beyond the research paper mentioned, if I wanted to learn more, are there books you would recommend to fill in general knowledge on sound reproduction, audio processing, etc.? (I've got the math and general physics background, if a bit rusty.)
 
What you are missing here is that sound = pressure. SPL = sound pressure level.

In a square wave the speaker moves and creates a pressure wave. Even if the cone doesn't move at the top of the wave it still holds the pressure by pushing forward.

No.
Sound is a change in pressure, a perturbation about the semi-static offset (atmospheric pressure).

No acceleration of cone == no sound.
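
A toy demonstration of that claim, using the standard result that a simple source's far-field pressure follows the cone's (volume) acceleration: take an idealized square-wave displacement and differentiate twice. The acceleration, and hence the radiated pressure, is zero everywhere except at the transitions:

import numpy as np

fs = 48000
t = np.arange(int(0.02 * fs)) / fs            # two periods at 100 Hz
x = np.sign(np.sin(2 * np.pi * 100 * t))      # idealized cone displacement

v = np.gradient(x, t)                         # cone velocity
a = np.gradient(v, t)                         # cone acceleration ~ far-field pressure

moving = np.abs(a) > 1e-6
print(f"samples with nonzero acceleration: {moving.mean():.1%}")
print("-> the flat tops, where the cone holds still, radiate nothing")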
 