
Master Complaint Thread About Headphone Measurements

Deleted member 16543 (Guest):
xnor said:
> Spectrum means frequency domain, so mixing that with the term "time domain" doesn't work.
> You can transform from one domain into the other and both contain the same information, though in audio FR plots we usually just plot the smoothed magnitude and omit the phase completely... because that is the easiest to understand and FR magnitude is the most important aspect in how a headphone sounds.
>
> "Burst responses" (technically called impulse responses or IRs) are rarely measured directly because of SNR issues: an impulse contains very little energy, especially at low frequencies.
> But that's not an issue; as I said, you can transform from the frequency response (measured e.g. with a sweep or noise) to the time domain and plot the IR.
>
> But you need to understand what you're looking at. In other words, interpreting IRs is where it gets tough and where we need to be careful. For example, a lot of people don't know that any deviation from flat in the FR necessarily means "ringing" in the IR.
> But this does not mean there is a resonance. Even an overdamped low-pass ("slow and smooth bass roll-off") results in "ringing".
>
> Another thing is that this "ringing" usually is not audible as such. You don't hear the ringing at the cutoff frequency of the low-pass which you'd see in the IR; instead you hear the attenuation of lower frequencies, i.e. lack of sub-bass.
>
> So I'd say that for most people such a visualization is not helpful. Even worse, I'd consider it harmful if people started to read magical audio properties out of IRs they don't understand.

Even though this part is usually hidden from the user of a measurement rig/software, one derives the FR from the IR (after appropriate gating), not the other way around. So the raw data is in the time domain, not the frequency one.
To go from raw IR to FR there are assumptions and simplifications to be made (expressed as the gating parameters, sometimes available to the end user for tweaking, sometimes not) that have to do with the specifics of the rig used and, ideally (though rarely with most measurement rigs), with psychoacoustics. So the FR (especially without the phase response) is a limited and quite distorted/approximated depiction of how a headphone (or speaker, for that matter) behaves.
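To make the gating step concrete, here is a minimal sketch in Python (NumPy only; the stand-in IR, sample rate, and gate length are arbitrary illustrative choices, not anything prescribed above):

```python
import numpy as np

fs = 48_000                                   # assumed sample rate, Hz
t = np.arange(fs) / fs
ir = np.random.randn(fs) * np.exp(-t * 500)   # synthetic stand-in for a raw IR

# "Gating": keep only the first few ms (e.g. mostly direct sound),
# with a half-Hann fade-out instead of a hard truncation.
n = int(fs * 0.005)                           # 5 ms gate
gated = ir[:n] * np.hanning(2 * n)[n:]        # decaying half of a Hann window

# FR magnitude of the gated IR (zero-padded for ~1 Hz resolution)
freqs = np.fft.rfftfreq(fs, d=1 / fs)
mag_db = 20 * np.log10(np.abs(np.fft.rfft(gated, n=fs)) + 1e-12)
```

Different gate lengths and window shapes yield visibly different FRs from the same raw IR, which is the point being made here.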

Also, "ringing" in the IR doesn't come from a deviation from flat in the FR. So much so that a flat FR from 0 to 20 kHz (with linear phase) is generated by an IR that looks like a sinc function (lots of "ringing", without any problem whatsoever of it being there. A sinc function is actually a "perfect" IR, EQ and voicing considerations aside. The frequency brickwalled version of a Dirac delta. It's what you want the kernel of a DAC to look like, for example).
What I think he was asking for is not wiggly looking things in the IR, but the waterfall graphic, where you can see resonances in the way the FR decays in time. I personally think in the case of headphones those are generally short lived (compared to speakers, for example), but double checking never hurts.

Where the IR can be useful is with multi-driver headphones, to check their alignment in time and get an idea of transient reproduction accuracy.
 

xnor:
Deleted member 16543 said:
> Even though this part is usually hidden from the user of a measurement rig/software, one derives the FR from the IR (after appropriate gating), not the other way around. So the raw data is in the time domain, not the frequency one.
No. As mentioned, to maximize SNR with speakers/headphones the measurements are typically done by recording a sweep or noise.
And typically, the transfer function is then calculated in the frequency domain. In the most naive way you divide output by input spectrum and the result will be the frequency response. To get the (time domain) impulse response you have to do an inverse Fourier transform.

Now you could also directly record an impulse (and directly get a time domain result) but this is not commonly done when measuring speakers/headphones.
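In code, the naive version described above might look like this sketch (NumPy; the function name and the regularization term `eps` are my own additions, not a standard API):

```python
import numpy as np

def transfer_function(stimulus, recording, eps=1e-9):
    """Estimate a device's FR and IR from a known test signal."""
    n = len(stimulus) + len(recording)            # pad to avoid circular wrap-around
    X = np.fft.rfft(stimulus, n=n)
    Y = np.fft.rfft(recording, n=n)
    H = Y * np.conj(X) / (np.abs(X) ** 2 + eps)   # "divide output by input spectrum"
    ir = np.fft.irfft(H, n=n)                     # inverse transform -> impulse response
    return H, ir
```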


> To go from raw IR to FR there are assumptions and simplifications to be made (expressed as the gating parameters, sometimes available to the end user for tweaking, sometimes not) that have to do with the specifics of the rig used and, ideally (though rarely with most measurement rigs), with psychoacoustics. So the FR (especially without the phase response) is a limited and quite distorted/approximated depiction of how a headphone (or speaker, for that matter) behaves.
No, it's an invertible transformation so there's no loss of information as long as we deal with signals with finite energy and duration.
But of course you can truncate the IR, window it... or do smoothing in the frequency domain, which is what you'd do for easier visualization in a graph.

It's funny that you mention that the FR is a "distorted" depiction of how a speaker behaves because FR is based on the assumption that it is a linear system without distortion. So, of course, you will not see non-linear distortion in an FR. Still, (magnitude of) the FR is the most important metric when it comes to how a headphone sounds.

Also, "ringing" in the IR doesn't come from a deviation from flat in the FR.
Except it does and provably so.
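This is easy to check numerically. A sketch (SciPy; the peaking filter and its parameters are arbitrary stand-ins): give an otherwise flat system a single narrow peak in the FR and the IR stops looking like a delta and starts "ringing":

```python
import numpy as np
from scipy.signal import iirpeak, lfilter

fs = 48_000
b, a = iirpeak(w0=3_000, Q=8, fs=fs)    # narrow resonant peak at 3 kHz in the FR

impulse = np.zeros(1024)
impulse[0] = 1.0
ir = lfilter(b, a, impulse)             # the IR now oscillates ("rings") at ~3 kHz

print(np.abs(ir[200:]).max())           # energy well after t=0, unlike a delta
```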

> So much so that a flat FR from 0 to 20 kHz (with linear phase) is generated by an IR that looks like a sinc function: lots of "ringing", without any problem whatsoever in it being there.
You're literally describing a brickwall low-pass filter, which is probably one of the most blatant examples for "deviation from flat".
Which results in lots of "ringing" in the IR.
Note: as should be obvious from my previous response, with "ringing" I refer to what people see when they look at the IR and not what they hear.

Also, you're mixing multiple things here / bringing up new topics. First of all, headphones are not linear-phase but approximately minimum-phase systems, so they don't produce "pre-ringing" in the IR, but the filter you described will.
Pre-ringing would in fact be audible, because it is not as easily masked by our hearing, but it is not at 44.1 kHz sampling rates, purely because the cutoff frequency is above 20 kHz, where the sensitivity of our hearing is abysmal.

So ringing is there and lots of it but it's a different kind of ringing and it's not audible for different reasons.

> A sinc function is actually a "perfect" IR, EQ and voicing considerations aside; it is the frequency-brickwalled version of a Dirac delta, and it's what you want the kernel of a DAC to look like, for example.
Sure, it's called an ideal low-pass filter. Results in infinite "ringing" in the IR btw, so not achievable in reality but we can get arbitrarily close.
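For reference, a finite excerpt of that kernel is one line of NumPy (sample rate and cutoff assumed; the true kernel extends to infinity, which is the "arbitrarily close" part):

```python
import numpy as np

fs, fc = 44_100, 20_000                             # sample rate and cutoff, Hz
n = np.arange(-512, 513)                            # finite excerpt of the kernel
kernel = (2 * fc / fs) * np.sinc(2 * fc * n / fs)   # ideal brickwall low-pass IR
```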

> What I think he was asking for is not wiggly looking things in the IR, but the waterfall graphic, where you can see resonances in the way the FR decays in time. I personally think in the case of headphones those are generally short lived (compared to speakers, for example), but double checking never hurts.
Yes! Cumulative spectral decay (CSD) waterfall plots.
If the parameters are chosen appropriately then this is definitely a more useful visualization of the IR than looking directly at the IR.
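One common way to compute such a waterfall from an IR, sketched in Python (window type, step, and FFT size are exactly the "parameters chosen appropriately", and real tools use fancier one-sided windows):

```python
import numpy as np

def csd_waterfall(ir, n_fft=4096, n_slices=40, step=8):
    """Slide the analysis start time forward and re-transform what's left;
    resonances persist across slices and show up as ridges."""
    slices = []
    for k in range(n_slices):
        seg = ir[k * step : k * step + n_fft].copy()
        seg *= np.hanning(len(seg))                 # simple taper (a simplification)
        mag = np.abs(np.fft.rfft(seg, n=n_fft))
        slices.append(20 * np.log10(mag + 1e-12))
    return np.array(slices)                         # n_slices x frequency bins
```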
 
Deleted member 16543 (Guest):
xnor said:
> No. As mentioned, to maximize SNR with speakers/headphones the measurements are typically done by recording a sweep or noise.
> And typically, the transfer function is then calculated in the frequency domain. In the most naive way you divide output by input spectrum and the result will be the frequency response. To get the (time domain) impulse response you have to do an inverse Fourier transform.
>
> Now you could also directly record an impulse (and directly get a time domain result) but this is not commonly done when measuring speakers/headphones.

First of all, it doesn't matter if you use a sweep or a spark (approximation of a Dirac delta) in terms of domain. They are ALL signals in the time domain, and they are ALL recorded in the time domain, in the form of pressure over time sensed by the measurement microphone. So you generate signals in the time domain (headphones/speakers produce pressure over time) and you record the results in the time domain as well. ALWAYS.
It takes some mathematical trickery to go from the signal recorded under a sine sweep type of excitation to the IR of the system being measured, but that is the only 1:1 (back AND forth) conversion you can do that doesn't cause any loss of info. Starting from a random noise doesn't allow you to do that, by the way. Random noise is used for steady state FR and doesn't allow you to derive phase response (necessary to go from FR to IR).

However, those different ways to measure the IR do result in different SNRs... which has nothing to do with time vs frequency domain.

> No, it's an invertible transformation so there's no loss of information as long as we deal with signals with finite energy and duration.
> But of course you can truncate the IR, window it... or do smoothing in the frequency domain, which is what you'd do for easier visualization in a graph.
>
> It's funny that you mention that the FR is a "distorted" depiction of how a speaker behaves because FR is based on the assumption that it is a linear system without distortion. So, of course, you will not see non-linear distortion in an FR. Still, (magnitude of) the FR is the most important metric when it comes to how a headphone sounds.

Second, you might be surprised to know that you can derive MANY FRs from a single IR. More than that, you should strive to derive one FR representing the device in question by modifying the measured IR in acoustically meaningful ways. A blatant example (coarse and imprecise as it may be) is the various amounts of smoothing speaker/headphone vendors usually apply, so that potential customers don't run away screaming after seeing the raw FR (let alone the raw IR!). It's not that they try to cheat (well, not all the time at least). There is a reason they smooth it out beyond not losing potential customers: luckily, we are not that sensitive to narrow dips in the FR (thank god!).
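For the curious, fractional-octave smoothing of a magnitude response is simple to sketch (naive O(n^2) version in Python; real software is cleverer, and this is only one of many smoothing schemes):

```python
import numpy as np

def octave_smooth(freqs, mag_db, n=3):
    """1/n-octave smoothing: average the power within +/- 1/(2n) octave."""
    power = 10 ** (mag_db / 10)
    half = 2 ** (1 / (2 * n))            # half-bandwidth as a frequency ratio
    out = np.empty_like(mag_db)
    for i, f in enumerate(freqs):
        band = (freqs >= f / half) & (freqs <= f * half)
        out[i] = 10 * np.log10(power[band].mean() + 1e-20)
    return out
```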

Now transpose a similar reasoning to the time domain. The ability to find one FR that correlates to how humans perceive sound passes through filtering (gating) the IR in different ways, some more, some less meaningful.
For speakers, for example, it's important to gate the IR so that we separate direct wave vs close reflections vs far reflections (as in close and far in time of arrival). Those need to be weighted differently, to correlate to how they psychoacoustically generate a perceived sound pressure. This weighting depends on the frequency of the sound, so that's where a sine sweep is very handy as a type of system excitation (on top of providing better SNR, of course).
You don't seem to have put much thought into the fact that we perceive sound from a source as a series of direct/reflected waves staggered in time, due to the nature of reflections (although this is probably not as important with headphones as it is with speakers).
In other words, we get many waves one after the other, each with its own FR. Extracting one individual FR out of these many FRs is half science and half art... And the other half taste ;)

I said FR is distorted in the sense that it gives you a distorted (incomplete) picture of what's going on (especially if one doesn't include the phase response), because it is the Fourier transform of a response in the time domain that has to go through some gating to make sense in its corresponding frequency-domain form.
If you don't believe me, try to measure the same speakers in an anechoic room or measure their steady state response in the listening room.
The anechoic FR is also valid psychoacoustically (if you were to listen in an anechoic room). The listening room steady state isn't. Because we don't perceive sound the same way a mic registers sound pressure in a steady state condition.
What I meant by "distortion" has nothing to do with the non-linearities of the headphones being measured.

> Except it does and provably so.

It definitely CAN come from that, but NOT ONLY from that. The only IR that has no "ringing" is a Dirac delta. And you don't want that to go through a headphone/speaker.
> You're literally describing a brickwall low-pass filter, which is probably one of the most blatant examples for "deviation from flat".
> Which results in lots of "ringing" in the IR.
> Note: as should be obvious from my previous response, with "ringing" I refer to what people see when they look at the IR and not what they hear.
A brickwall filter up to 20 kHz is ideal and flat up to that frequency. It is also acoustically equivalent to a flat up to infinity FR (again, with linear phase response), so I would not call it a blatant deviation from flat.
It is obvious what you meant. You meant wiggly looking things in the IR, and I agree that they are not necessarily audible. So much so that you actually want them, for example in a DAC kernel.
What wasn't obvious to you is that he was asking about ringing in the waterfall plot, not in the IR.
> Also, you're mixing multiple things here / bringing up new topics. First of all, headphones are not linear-phase but approximately minimum-phase systems, so they don't produce "pre-ringing" in the IR, but the filter you described will.
> Pre-ringing would in fact be audible, because it is not as easily masked by our hearing, but it is not at 44.1 kHz sampling rates, purely because the cutoff frequency is above 20 kHz, where the sensitivity of our hearing is abysmal.
I was simply explaining the concept that not all wiggly looking things in an IR come from FR deviations from flat.

> So ringing is there and lots of it but it's a different kind of ringing and it's not audible for different reasons.


> Sure, it's called an ideal low-pass filter. Results in infinite "ringing" in the IR btw, so not achievable in reality but we can get arbitrarily close.
Yup. Infinite ringing AND pre-ringing (imagine that! I can see the horrified look on the face of some audiophools as if I were in the room with them), causing no problem whatsoever (if the cutoff frequency is 20 kHz+).

> Yes! Cumulative spectral decay (CSD) waterfall plots.
> If the parameters are chosen appropriately then this is definitely a more useful visualization of the IR than looking directly at the IR.
I don't think he meant you can see the CSD ringing from the IR. He asked for the CSD on top of the IR.
 

xnor:
Deleted member 16543 said:
> First of all, it doesn't matter if you use a sweep or a spark (approximation of a Dirac delta) in terms of domain. They are ALL signals in the time domain, and they are ALL recorded in the time domain, in the form of pressure over time sensed by the measurement microphone. So you generate signals in the time domain (headphones/speakers produce pressure over time) and you record the results in the time domain as well. ALWAYS.
Sure, that's completely obvious. That wasn't my point though, which was about SNR when doing speaker measurements (and there are some other properties that make some test signals advantageous over impulsive excitation) being a main reason against directly recording the IR.
So again, I never said that we record in anything other than in time domain. But that's obvious and irrelevant to the points I actually made so I'm not sure why this is brought up at all.

You said that we derive the FR from the IR, but as I pointed out this is typically not the case because an impulse is a poor choice of excitation for measuring speakers. Again, we typically calculate the FR in the frequency domain from the recorded test signal (which is typically not an IR itself) and then transform this back into an IR if needed.
I really hope I don't need to repeat this again. :(

> Starting from a random noise doesn't allow you to do that, by the way. Random noise is used for steady state FR and doesn't allow you to derive phase response (necessary to go from FR to IR).
This is also incorrect, or another misunderstanding of terms. While noise as a measurement signal brings its own problems, you can definitely recover the FR of an LTI system through which you put e.g. some white noise. If you know what the signal looked like before and after then you can calculate the transfer function even if the signal was random noise.
I think you're again confusing things here, as phase is part of the FR regardless.
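For the record, this is standard practice; the H1 estimator recovers the complex FR (magnitude and phase) from a noise excitation, as in this SciPy sketch (the "device" is an arbitrary filter standing in for a real DUT):

```python
import numpy as np
from scipy.signal import butter, csd, lfilter, welch

fs = 48_000
x = np.random.randn(30 * fs)                  # known (recorded) noise excitation
b, a = butter(2, 200, btype="high", fs=fs)    # stand-in DUT: 2nd-order high-pass
y = lfilter(b, a, x)                          # "measured" response

f, Pxy = csd(x, y, fs=fs, nperseg=8192)       # averaged cross-spectrum of x and y
_, Pxx = welch(x, fs=fs, nperseg=8192)        # averaged auto-spectrum of x
H = Pxy / Pxx                                 # H1 estimate: magnitude AND phase
phase = np.unwrap(np.angle(H))
```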

> Second, you might be surprised to know that you can derive MANY FRs from a single IR. More than that, you should strive to derive one FR representing the device in question by modifying the measured IR in acoustically meaningful ways. A blatant example (coarse and imprecise as it may be) is the various amounts of smoothing speaker/headphone vendors usually apply, so that potential customers don't run away screaming after seeing the raw FR (let alone the raw IR!).
As I had already explained when I mentioned smoothing in my previous reply.
But again, those are not invertible transformations. It's altering the data for visualization (and also as explained before, the phase data is usually thrown away completely).

> You don't seem to have put much thought into the fact that we perceive sound from a source as a series of direct/reflected waves staggered in time, due to the nature of reflections (although this is probably not as important with headphones as it is with speakers).
Huh, where would you get that idea from? Of course I know about reflections.
Reflections are reflected (sorry) in the FR though... but not in smoothed FR plots with the phase thrown away. But hey, amirm plots what seems to be raw group delay, which in several aspects is a better choice for visualization than showing the raw phase. ;)

> In other words, we get many waves one after the other, each with its own FR. Extracting one individual FR out of these many FRs is half science and half art... And the other half taste ;)
I don't see the art. There is only one FR and one IR and those are transformations of each other.
As has already been mentioned as well, you can do CSD waterfall plots to help visualize things better especially since audio "FR plots" are typically just plots of the smoothed magnitude which can hide e.g. effects from reflections, timing issues etc.

To clarify, I distinguish between the FR that describes the system and the many plots and graphs and other kinds of visualizations that one can derive from it.

> I said FR is distorted in the sense that it gives you a distorted (incomplete) picture of what's going on (especially if one doesn't include the phase response), because it is the Fourier transform of a response in the time domain that has to go through some gating to make sense in its corresponding frequency-domain form.
Again regarding deriving the IR, this is usually not how it works as I've tried to explain three times now so I won't go into this again.

> What I meant by "distortion" has nothing to do with the non-linearities of the headphones being measured.
I know. I should have included a smiley. :p

> It definitely CAN come from that, but NOT ONLY from that. The only IR that has no "ringing" is a Dirac delta. And you don't want that to go through a headphone/speaker.
In an LTI system it can only come from that and that includes "reflections" since those necessarily change the FR! Regarding non-linear distortion see my previous comment.
There are also other signals that have no ringing. You can also design anti-aliasing filters or low-pass filters in general without ringing.

> It is obvious what you meant. You meant wiggly looking things in the IR, and I agree that they are not necessarily audible. So much so that you actually want them, for example in a DAC kernel.
> What wasn't obvious to you is that he was asking about ringing in the waterfall plot, not in the IR.
> I don't think he meant you can see the CSD ringing from the IR. He asked for the CSD on top of the IR.
I'm not sure what he was asking. He said: "time domain spectrum plot or measure the burst response", which was confusing at least to me so I tried to clarify the terms and CSD wasn't really mentioned by him. Maybe that's what he meant.

Anyway, to reiterate my initial point: I would not show the IR because then people will read magical properties out of all the "ringing" they see... like some already do with the group delay plots.
 
Deleted member 16543 (Guest):
xnor said:
> (the full reply above, quoted in its entirety)

I guess I would then have to reiterate for a 5th time that the IR (or its convolution with a sine sweep, if you start from that kind of excitation) is the starting point, from where you derive the FR (one of the possible ones, depending on IR gating).
I have no idea why you brought SNR into a conversation regarding time vs. frequency domain. It looks to me like you're trying to flex some muscles and are at times all over the place. Consider me not that impressed by your brain muscle flexing, incidentally.

Please read again what I wrote and if you have questions let me know (especially in regards to the many FRs one can derive from the same IR, the ability of extracting the one that's psychoacoustically relevant is indeed half art. This concept seems to be above your naive conception of audio measurements and what they entail).

Also, as a side note, if the excitation is noise you don't really exactly know, instant after instant, what that is. And so you can only do a FR of a recorded signal coming from a not completely known excitation. You can't derive the phase response from it, unless you record the excitation as it happens, in parallel with the measurement mic (but then you also need to account for the amplitude and phase response of this other excitation signal recorder). I know of no system that operates that way, even though in theory it could work.
It's just a stupid design that no serious engineer would consider implementing. Maybe that's why you think it might work...
 

xnor:
Deleted member 16543 said:
> I guess I would then have to reiterate for a 5th time that the IR (or its convolution with a sine sweep, if you start from that kind of excitation) is the starting point, from where you derive the FR (one of the possible ones, depending on IR gating).
But as I've explained, it's not: in speaker/headphone measurements you usually don't measure the IR directly, so it's not the starting point, no matter how often you repeat it.
And as I've also explained, even for measurements where you only need to do (de)convolution in the time domain, that is still done in the frequency domain and the result transformed back into time domain.

But it is ridiculous to get hung up on such a trivial point anyway because - as I've also explained before - both contain the same information. (You made some confusing points where you seemed to disagree with this, but it's mathematically proven.)

> I have no idea why you brought SNR into a conversation regarding time vs. frequency domain. It looks to me like you're trying to flex some muscles and are at times all over the place. Consider me not that impressed by your brain muscle flexing, incidentally.
Because you don't measure the IR of speakers directly: the SNR would be too low. I've explained this several times now.
Otherwise you could simply record an impulsive noise and directly get an IR. Only then would the IR be your starting point, as you keep on saying.

I think I am quite consistent in my usage of the terms so if you honestly think it's me that's all over the place then please be more specific so I can try to clarify.

> Please read again what I wrote and if you have questions let me know (especially in regards to the many FRs one can derive from the same IR, the ability of extracting the one that's psychoacoustically relevant is indeed half art. This concept seems to be above your naive conception of audio measurements and what they entail).
Sure, just get personal and attack me. But why? Because I've pointed out several things that you said were wrong?
I'm not here to flex or brag, but I certainly know a thing or two about signal processing and psychoacoustics having developed products that only exist because of those fields.

> Also, as a side note, if the excitation is noise you don't really exactly know, instant after instant, what that is. And so you can only do a FR of a recorded signal coming from a not completely known excitation. You can't derive the phase response from it, unless you record the excitation as it happens, in parallel with the measurement mic (but then you also need to account for the amplitude and phase response of this other excitation signal recorder). I know of no system that operates that way, even though in theory it could work.
> It's just a stupid design that no serious engineer would consider implementing. Maybe that's why you think it might work...
Again, cool down dude, what's with the personal attacks?
It's good that you bring up the FR of the recording components, because even though I granted that the IR would be the starting point if you recorded an impulsive noise, this is not exactly true. Why? Because it includes the FR of the recording components. So if we're pedantic then you'd also need to do some compensation processing of the raw IR. The resulting FR would be the actual thing that describes your DUT.

You also still make confusing statements like "you can only do a FR" ... but an FR contains all the information. It's an invertible transformation of the IR, remember?

Flat time delays (e.g. a few ms from sound traveling through the air, converter filters, etc.) are a non-issue. There are other issues with noise signals though, as I've also mentioned before.

Lastly, this has been implemented in several speaker measurement systems. Apparently all of their designers and developers were "non-serious engineers". You must be an ultra-serious engineer to be able to judge them like that. :D
 

solderdude:
> Hi Amirm!
>
> Can you give a time domain spectrum plot of the headphones? It would be good to know about driver behaviour and how long lived the resonances are. It would be an important addition that tells a lot about the headphones. Thanks in advance.

I assume you mean a plot like this (HE400SE)?
[attached image: spectr-l-1.png]


I believe only Marv (SBAF) measures burst response of headphones at a few frequencies.
 
Deleted member 16543 (Guest):
xnor said:
> But as I've explained, it's not: in speaker/headphone measurements you usually don't measure the IR directly, so it's not the starting point, no matter how often you repeat it.
> And as I've also explained, even for measurements where you only need to do (de)convolution in the time domain, that is still done in the frequency domain and the result transformed back into time domain.

You're confusing the DFT/IDFT used as a means to deconvolve the measured response when using a sine sweep with the DFT that represents the 'meaningful' (psychoacoustically speaking) FR. The first two are indeed capable of giving you that 1:1 conversion between the time/frequency domains of the raw IR and raw FR. But I promise you, the IR and FR that are psychoacoustically meaningful look quite different from the raw ones. You first gate the IR, then find the equivalent FR (which doesn't look like the DFT used to deconvolve the IR because it's generated from a gated IR). Now, for headphones, I think we can safely assume the difference between these two FRs is not as big as with speaker measurements, but they're still different (or at least they should be... I can't swear that every measurement software does things correctly, after all).
> But it is ridiculous to get hung up on such a trivial point anyway because - as I've also explained before - both contain the same information. (You made some confusing points where you seemed to disagree with this, but it's mathematically proven.)
I know that mathematically the time and frequency domains contain the same information. However, given how we hear and the psychoacoustics involved (which I'm sure you would find interesting if you dug into them a little), you always end up (or at least you should) deriving the FR after gating and applying some changes to the raw IR. That's just how things are done (or should be done).

> Because you don't measure the IR of speakers directly: the SNR would be too low. I've explained this several times now.
> Otherwise you could simply record an impulsive noise and directly get an IR. Only then would the IR be your starting point, as you keep on saying.

Maybe I should have been clearer in my language. When I say starting point I mean that the IR always comes first, then the FR (the psychoacoustically meaningful IR and FR, that is). It's true that for the raw IR you may (or may not) pass through a deconvolution that is done in the frequency domain, but that's a different thing, as I explained above. They're just means to an end. You don't see that FR as a result... and if you do, you might as well tune by ear, because it's an inaccurate depiction of how a device sounds.
Now, if we were talking DACs and amps, they would be the same thing, because there's no gating and smoothing involved. But we're talking electro-mechanical devices that create a lot of reflected signals, so they are NOT the same.
> I think I am quite consistent in my usage of the terms so if you honestly think it's me that's all over the place then please be more specific so I can try to clarify.

I think I did clarify myself. We're talking about different FRs.
> Sure, just get personal and attack me. But why? Because I've pointed out several things that you said were wrong?

No. Because you don't know what you're talking about. And you come at me with an attitude. Nothing I said was wrong. You're spreading confusion and misinformation. I think I have clarified where you are wrong clearly enough now, though.
But you'll probably still reply I am the one who's wrong. So this is my last reply to you.
> I'm not here to flex or brag, but I certainly know a thing or two about signal processing and psychoacoustics having developed products that only exist because of those fields.

Well, consider me surprised. And if you created measurement software please let me know what it is, so that I can avoid buying it, given that you seem to not grasp the difference between raw and psychoacoustic response.
> Again, cool down dude, what's with the personal attacks?

Not an attack. Just a statement. Also, you are the one starting with an attitude. Don't complain if you can dish it out but can't take it... dude ;)
> It's good that you bring up the FR of the recording components, because even though I granted that the IR would be the starting point if you recorded an impulsive noise, this is not exactly true. Why? Because it includes the FR of the recording components. So if we're pedantic then you'd also need to do some compensation processing of the raw IR. The resulting FR would be the actual thing that describes your DUT.

I'm talking about compensating for the extra measurement device you would need to figure out what the excitation signal actually is, if you want to use random noise for it. Again.. stupid design, but theoretically doable.
> You also still make confusing statements like "you can only do a FR" ... but an FR contains all the information. It's an invertible transformation of the IR, remember?
Yes, but if you don't know the FR of the excitation signal you can't find the FR of the device you're trying to measure now, can you?
You can only do (meaning derive) the FR of the final result (the measured signal), but you have no way of knowing which part of it is due to the source and which one is due to the device.
> Flat time delays (e.g. a few ms from sound traveling through the air, converter filters, etc.) are a non-issue. There are other issues with noise signals though, as I've also mentioned before.
Who's talking about latency of time of arrival? I'm talking about not completely knowing the excitation signal, when you use random noise. I guess you could use a pre-recorded sample of white noise for which you have calculated the FR in advance, and then you'd be able to derive the FR of the device you're measuring. Basically using the same conceptual steps as with a sweep signal, but with a source signal that's much more difficult to extract the required info from. The question is... why would you do it? That's probably why nobody uses random noise unless they're interested in the steady state amplitude response (which is only half the FR, and not good at telling the sound balance of how a device sounds to our ears anyway).
> Lastly, this has been implemented in several speaker measurement systems. Apparently all of their designers and developers were "non-serious engineers". You must be an ultra-serious engineer to be able to judge them like that. :D
No, I'm just up to speed with more recent and more accurate measurement software. Are you?
 

xnor:
Deleted member 16543 said:
> You're confusing the DFT/IDFT used as a means to deconvolve the measured response when using a sine sweep with the DFT that represents the 'meaningful' (psychoacoustically speaking) FR. The first two are indeed capable of giving you that 1:1 conversion between the time/frequency domains of the raw IR and raw FR. But I promise you, the IR and FR that are psychoacoustically meaningful look quite different from the raw ones. You first gate the IR, then find the equivalent FR (which doesn't look like the DFT used to deconvolve the IR because it's generated from a gated IR). Now, for headphones, I think we can safely assume the difference between these two FRs is not as big as with speaker measurements, but they're still different (or at least they should be... I can't swear that every measurement software does things correctly, after all).
No, I absolutely don't. I even deliberately avoided introducing these terms because they're details that are irrelevant to what we discussed. And confuse with what? The sentence just ends. It's you who's again all over the place, introducing new terms, mixing topics ... so please stop saying that I am the one confusing things.

Let's clarify... again. I (we?) were talking about what the result of a speaker/headphone measurement is (and I explained why we don't typically record IRs directly), FR vs. IR, transformation between the two.
Doing different kinds of visualizations, filtering or "massaging the data" to be more psychoacoustically meaningful are additional processing steps that happen afterwards, most of which is done in the frequency domain.

You don't "find the equivalent FR" for a gated IR. Expressing it like that makes no sense and means you don't understand what is going on.
You truncate and window the IR (aka "gating"), e.g. to analyze only the direct-sound portion of the IR, and then transform this into the FR. This transform is the same as with the full FR. You don't "find equivalent FRs".

What is "DFT used to devonvolve the IR" supposed to mean? This again makes no sense in this context. What you actually mean is the FR of the (untruncated) IR.
Of course the FR of the gated IR is different because reflections alter the FR ... which is what I had already pointed out to you several times! (You know, when you made another one of your disparaging and false remarks.)

> Maybe I should have been clearer in my language. When I say starting point I mean that the IR always comes first, then the FR (the psychoacoustically meaningful IR and FR, that is). It's true that for the raw IR you may (or may not) pass through a deconvolution that is done in the frequency domain, but that's a different thing, as I explained above. They're just means to an end. You don't see that FR as a result... and if you do, you might as well tune by ear, because it's an inaccurate depiction of how a device sounds.
> Now, if we were talking DACs and amps, they would be the same thing, because there's no gating and smoothing involved. But we're talking electro-mechanical devices that create a lot of reflected signals, so they are NOT the same.
You say you should have been clearer but then simply repeat the same statement?! Always comes first in what way? Why again mix this with psychoacoustics?
And again, you seem to be confused about what's going on. Why wouldn't I get the FR as a result? What has that got to do with tuning by ear? Why are you again mixing this with how a device sounds?

At least in the way you're expressing yourself here your thoughts are too erratic to follow.

> No. Because you don't know what you're talking about. And you come at me with an attitude. Nothing I said was wrong. You're spreading confusion and misinformation. I think I have clarified where you are wrong clearly enough now, though.
> But you'll probably still reply I am the one who's wrong. So this is my last reply to you.
Sure, whatever makes you feel better and prevents complete escalation from your side.

> Well, consider me surprised. And if you created measurement software please let me know what it is, so that I can avoid buying it, given that you seem to not grasp the difference between raw and psychoacoustic response.
Of course I do. It's YOU who keeps on mixing these topics.
You understand that we don't measure a "psychoacoustic response", right? That we do post-processing of measurement data e.g. to be more psychoacoustically meaningful, to create fancy graphs, to create more robust EQ curves, etc.
I'm not sure why you keep attacking me like that with obviously false statements.

> Not an attack. Just a statement. Also, you are the one starting with an attitude. Don't complain if you can dish it out but can't take it... dude ;)
Ok, so "your naive conception of audio measurements" and "a stupid design that no serious engineer would consider [...] that's why you think it might work..." are not personal attacks. Got it.
Isn't this embarrassing to you?

And what attitude? Clarifying your confused statements, mixing of terms and correcting what is wrong is considered bad attitude but calling others stupid and naive is not a personal attack? Right, I think I'm done except for one more clarification:

You seem to have gotten hung up on the term "random", but as I explained this does not mean that the signal is unknown. I even explicitly pointed this out (!), which you seem to have ignored so you can contradict more things I didn't even say...
Just reading and understanding this would have saved you 3 paragraphs.

As for your last and sadly yet again nasty remark, if you knew what you're talking about then you'd know that measurement software can support multiple different kinds of excitation signals, because not all signals are applicable in all situations, and each has pros and cons that make it better suited to certain use cases. Peace~
 
Deleted member 16543 (Guest):
xnor said:
> (the reply above, quoted at length)
I was going to reply point by point, but I'll just report here the one thing you said that I think is the crucial point of our contention.
You said:

> Doing different kinds of visualizations, filtering or "massaging the data" to be more psychoacoustically meaningful are additional processing steps that happen afterwards, most of which is done in the frequency domain.

That may be. Yes, smoothing of the amplitude response does happen in the frequency domain (how else could it be?), but it's not the first thing that happens. The very first manipulation, and what it is applied to, determines what comes first. So is there any manipulation before frequency smoothing? And what is that manipulation applied to?
Yes, there's time domain gating and weighting, applied to the raw IR. And that's all I have been trying to explain to you. The raw IR comes before (it doesn't matter if the raw IR itself comes from a DFT/IDFT process used to deconvolve the measured signal generated by a sine-sweep type of excitation. The raw FR from which you may - or may not - end up calculating the raw IR would indeed come before the IR, but going through a pass in the frequency domain to calculate the raw IR is not a necessary step, conceptually speaking).

What matters is the psychoacoustic FR, I think we both agree on that. Personally, when I say FR, I do mean FR (both amplitude and phase response), but that's another story...
Everything that has any semblance of correlation to how we actually perceive sounds (psychoacoustics) starts from the first manipulation, which is done to the raw IR, not to the raw FR.
So, if the first manipulation to get to a meaningful FR is done to the raw IR, it goes without saying that the raw IR comes first (even though practically speaking you may have gone through the raw FR to get to the raw IR in the first place... Emphasis on MAY).
I really don't know how much more clearly I could explain this.

Only after you derive a first version of the FR from the gated raw IR do you then smooth it in the frequency domain, and that's your psychoacoustic FR... and if you kept track of the phase response too (instead of throwing it away as you seem inclined to do, or at least not too bothered by it) you may even recalculate another IR (this time yes, from meaningful FR to meaningful IR; the FR would come first in this final step). Incidentally, at this stage, rather than the IR, a step response is actually more telling (because it's easier to interpret).
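Deriving that step response is a one-liner once you have the IR, since a unit step is the running sum of a unit impulse; a minimal sketch:

```python
import numpy as np

def step_response(ir):
    return np.cumsum(ir)    # response to a unit step = cumulative sum of the IR
```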

So there's a lot of back and forth between the two domains, but sometimes that's just for simplicity of calculation. Don't confuse the process with the concept. If this was the famous chicken or egg kind of question, the raw IR would be the answer.
The raw IR comes (or should come.. again, not every measurement software is up to speed with psychoacoustics, unfortunately) before any final response graph you end up seeing at the end (both in the time or frequency domain) that has anything to do with how we actually perceive sound.
And yes, so does the raw FR (it being just a different representation of the raw IR), but you need raw IR to apply the very first psychoacoustic manipulation. You do not need the raw FR for anything. If it's there before the raw IR (emphasis on IF), it's there only because it helps getting to what's really the important thing, from which everything else follows: the raw IR.

I really hope this clears things up.

> As for your last and sadly yet again nasty remark, if you knew what you're talking about then you'd know that measurement software can support multiple different kinds of excitation signals, because not all signals are applicable in all situations, and each has pros and cons that make it better suited to certain use cases. Peace~

Sure, but find me a software (if it even exists) that uses white noise as excitation and outputs both amplitude and phase response and I'll show you a software that took an unnecessarily complex way to achieve those results. I'm not aware of any software that goes that route.
As I stated a few times by now, white noise as excitation signal is indeed useful, if you are only interested in the steady state amplitude response and nothing else.
The steady state amplitude response is better than nothing, I guess, but technology is way past the point when you would use that to measure speakers and claim it has any truly meaningful psychoacoustic value, wouldn't you say? It wasn't intended as a nasty remark (persecution complex much?). What measurement software do you use? Are you sure you're up to speed with what is technically possible to do, in the field of headphone/speaker measurements? It sounds to me from your replies that you aren't, in all honesty.

As a side note, I do remember reading something about a manufacturer that used very short interval excitations generated by controlled arc flashes to directly measure the IR of some device they manufactured. I don't really remember what the device or the company was, but that seems like a very stupid way of doing it (I do remember the paper did have some catchy audiophool writing, though. It's always the case with stupid audio products. I'm sure they sold a few units just because of that). This is to point out that just because somebody does something, it doesn't mean there's necessarily an intelligent reason behind it. Sometimes it's just about selling more units to people that have very little understanding of the tools, and what happens (or should happen) inside these "magical" black boxes they're about to buy.
 

NDRQ:
solderdude said:
> I assume you mean a plot like this (HE400SE)?
> [attached image: spectr-l-1.png]
>
> I believe only Marv (SBAF) measures burst response of headphones at a few frequencies.

Yes, I think this could be a good addition.
Is that burst response hard to measure? Because it also shows useful information about the sound.
For example, the HD800's bass usually seems weaker than it should be based on the frequency response. The burst response showed the driver having some problem reaching the desired levels, so probably that's why. In the frequency response graph we only see that it can produce the signal; it doesn't show the actual details. It also shows how fast the driver stops and how clean the background is, I guess.

[attached image: burst game 2.png]


The HD800 is the last one, the first is the LCD2; I don't remember what the middle one was.
The blue curve is the signal, the yellow is what the headphone is actually doing.
 

solderdude:
This is not a standard test signal but I believe REW can do this.
Then comes the matter of correctly measuring, displaying and interpreting the signal, and looking at it on a log scale rather than the linear scale here.

I know Marv has been looking into this and tried several methods for displaying so conclusions can be drawn.

Note that a burst is not a highly realistic test signal (equally suspect as a square wave) because in music a signal never comes to a dead stop as happens in a burst. There is always a decay, which is lacking in the test signal.
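For anyone wanting to experiment, a dead-stop tone burst like the ones above is trivial to synthesize, and with a measured IR in hand the burst response can be simulated by convolution rather than measured separately (sketch; frequency and length are arbitrary):

```python
import numpy as np

fs = 48_000
f0, n_cycles = 50, 5                          # 50 Hz, 5-cycle burst
t = np.arange(int(fs * n_cycles / f0)) / fs
burst = np.sin(2 * np.pi * f0 * t)            # comes to a dead stop, as noted above

# Given a measured impulse response `ir` (not defined here):
# simulated_response = np.convolve(burst, ir)
```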
 

ADU:
Does "psychoacoustic" just mean that it's plotted on a semi-log FR graph, sax512?
 
Deleted member 16543 (Guest):
Does "psychoacoustic" just mean that it's plotted on a semi-log FR graph, sax512?

No. It means that instead of looking at the actual raw response, you manipulate it so that it's easier to extract information that is useful to correlate to how we really perceive sound, instead of how a mic registered it.
For example, smoothing out the FR (in a certain way, not the fraction of octave band way typically used) reflects the fact that we are not that sensitive to narrow band dips.
For the time domain manipulation it's a little more complicated, but in short it's about associating very early reflections to the total perceived power, and late reflections to "ambient" info.
This early/late differentiation is frequency dependent (think of the boundary as roughly fixed, but counted in full cycles rather than in milliseconds) and the part that I call art is adjusting the parameters so that you arrive at a final FR that makes sense.
Like for example if you had 2 psychoacoustic FRs that differ in a certain frequency range, you are more likely to actually also hear that difference.
With raw FR that's not always the case.
Many different looking raw FRs can actually sound equivalent to our ears. Those would all produce psychoacoustic FRs that look very similar to each other (if you were a good "artist" at deriving them), even though in their raw form you would expect even substantial sound character differences.
 

ADU:
Interesting to hear some different perspectives on this stuff.

Deleted member 16543 said:
> No. It means that instead of looking at the actual raw response, you manipulate it so that it's easier to extract information that is useful to correlate to how we really perceive sound, instead of how a mic registered it.

I'm not quite following all the business on IR.

If you are talking about the perception of FR on a pair of headphones though, then I'd think that would include things like smoothing over the inaudible details, putting the response into semi-log so octaves are more evenly spaced, and also applying some type of transfer function (aka a compensation or correction curve) to adjust the amplitudes at different frequencies.
 
Deleted member 16543 (Guest):
ADU said:
> If you are talking about the perception of FR on a pair of headphones though, then I'd think that would include things like smoothing over the inaudible details,
Yes
> putting the response into semi-log so octaves are more evenly spaced,
This is just visual data presentation, not data manipulation, but yes, it does make it easier to read an FR.
> and also applying some type of transfer function (aka a compensation or correction curve) to adjust the amplitudes at different frequencies.
If you're referring to adjusting for the mic's and electronics responses, yes. If you're referring to adjusting for the ear's sensitivity and what type of frequency response sounds balanced to our ears, that's usually taken into account by comparing to a specific target curve, rather than compensating for the target directly in the measured response.
 

ADU:
Deleted member 16543 said:
> This is just visual data presentation, not data manipulation, but yes, it does make it easier to read an FR.

Thanks for the reply, sax512.

I mention the semi-log plotting, because I think people are so used to it that they sometimes forget there are other ways that the FR data could potentially be presented.

The frequencies in a semi-log plot are spatially distributed in a manner similar to the way the keys are laid out on a keyboard though, or the way notes are displayed on a music staff, with octaves spaced at equal intervals or steps. And I believe the reason we do that may be because of something called "octave equivalence".

If you're referring to adjusting for the mic's and electronics' responses, yes. If you're referring to adjusting for the ear's sensitivity and for what type of frequency response sounds balanced to our ears, that's usually taken into account by comparing to a specific target curve, rather than by compensating for the target directly in the measured response.

I wasn't really thinking about the mic and other electronics. But I'm sure that is a very important component in getting a measurement rig correctly calibrated. I have always sort of assumed this was fairly transparent in measurement systems like the GRAS, HBK, and Head Acoustics rigs. I suppose this is not the case though with more DIY kinds of setups.

What I was mostly thinking about was compensation for the transfer characteristics of the ear, and other alterations of the audio signal's response within the headphones that might be intended to emulate or approximate the effects of listening to speakers in a room.

A target response or approximation curve like the Harman curve is designed to handle not only the transfer characteristics of the ears, but also to emulate some of the FR characteristics of sound in a room, as well as the interaction of that sound with the head and body, I believe. I think that was the general idea behind it anyway, in addition to looking at and analyzing listeners' subjective preferences.

Generally speaking, I don't think it was intended to fix or calibrate errors in the response of the measurement system's microphone... unless you're thinking about some parts of the ear as also being part of that microphonic system.

A diffuse-field transfer function takes into account the transfer characteristics of the ears, and also the head and body, but not the room... The room effects are missing from this type of compensation curve because its sound source is totally diffuse: theoretically equal in amplitude from every direction (which is not generally what happens when listening to speakers in a typical room).

Graphers and HP reviewers will provide different options for how these different forms of compensation are displayed. In many cases you'll be able to view it either as a raw (aka uncompensated) FR plot with one or more target compensation curves overlaid for comparison, or as a compensated FR plot with the inverse of the target compensation curve already applied to the raw frequency response data.

Most of Oratory1990's and Dr. Olive's GRAS plots, for example, will display the compensated FR curve (with the transfer function already applied) at the top of the graph, and the raw or uncompensated FR plot (along with a target curve or transfer function for comparison) below that. A recent example from Dr. Olive's Twitter...


Jaakko's AutoEQ graphs will show both smoothed and unsmoothed versions of the response curves, as well as suggested EQ curves, which are essentially the inverse of the compensated FR "error" curves.

Another ASR member also recently posted a few Sonarworks graphs in a different thread on the Beyerdynamic DT-770. Those graphs appear to show only the compensated FR curve, plus a recommended EQ curve for the headphones based on it (which is more or less the inverse of the compensated FR curve). Both the raw FR curve and the target compensation curve are undisclosed in this case.

One of the problems with showing only a compensated FR plot, imo, is that there's no real consensus on what a truly neutral FR should be for a pair of headphones. So there's always a bit of guesswork involved in translating the raw FR data into some sort of error or compensated curve that is perceptually accurate or reliable. I rarely use Harman-compensated plots for my own plotting/EQ-ing purposes for this reason (and some others), but I will use both raw and DF-compensated FR plots of headphones for a wide variety of jobs, as well as these plots with various EQ adjustment curves applied to them.
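
For what it's worth, the relationship between these compensated plots and the suggested EQ curves is just a sign flip in dB, something like the sketch below (hypothetical names; I believe tools like AutoEQ then fit parametric filters and cap the gains rather than applying the raw difference directly):

import numpy as np

def compensated_db(raw_db, target_db):
    """Compensated plot: a headphone that matches the target reads flat."""
    return np.asarray(raw_db) - np.asarray(target_db)

def suggested_eq_db(raw_db, target_db):
    """Inverse of the error curve: boost where the headphone falls
    below the target, cut where it sits above."""
    return -compensated_db(raw_db, target_db)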
 
D

Deleted member 16543

Guest
What I was mostly thinking about was compensation for the transfer characteristics of the ear, and other alterations of the audio signal's response within the headphones that might be intended to emulate or approximate the effects of listening to speakers in a room.

A target response or approximation curve like the Harman curve is designed to handle not only the transfer characteristics of the ears, but also to emulate some of the FR characteristics of sound in a room, as well as the interaction of that sound with the head and body, I believe. I think that was the general idea behind it anyway, in addition to looking at and analyzing listeners' subjective preferences.

Generally speaking, I don't think it was intended to fix or calibrate errors in the response of the measurement system's microphone... unless you're thinking about some parts of the ear as also being part of that microphonic system.

I don't think the ear's characteristics should be applied to the measured response at the data-manipulation stage. Even with my ear models, the function of the ear-and-canal replica is to put the mic capsule in the condition of recording the pressure as the eardrum would sense it. That's the reasoning behind the B&K 5128 as well. Once that pressure is measured, there should be no further manipulation related to the ear's sensitivity or behavior applied directly to the recorded signal. That is all taken into account by comparing to the correct target curve (if one can ever find it).
 

ADU

Major Contributor
I don't think the ear's characteristics should be applied to the measured response at the data-manipulation stage. Even with my ear models, the function of the ear-and-canal replica is to put the mic capsule in the condition of recording the pressure as the eardrum would sense it. That's the reasoning behind the B&K 5128 as well. Once that pressure is measured, there should be no further manipulation related to the ear's sensitivity or behavior applied directly to the recorded signal. That is all taken into account by comparing to the correct target curve (if one can ever find it).

If I understand what you're suggesting here, it sounds like you'd prefer to have the FR data left mostly as is, in its raw form. And then just be able to compare the raw response curve to various target response curves.

That's not a bad thought. However, if you want the data to be mapped in a way that is more perceptually meaningful, then it would probably make sense to also be able to compensate the data with various transfer functions or target response curves, so that you can more easily see and compare the differences between the response of the headphones and other sound sources of your choice. This is probably easier said than done, but it's more the direction I'd like to see reviewers and graphers going, eventually.

I think it'd be nice to be able to compare the targets and FR data in raw form as well though, as you've suggested.

Ora's graphing tool is one of the closest things I've seen so far to something like this, because it gives you a number of different display options, including three different targets (none of which is exactly ideal imo), the option of displaying the FR data either raw or compensated with the target of your choice (and seeing both at the same time), the ability to display multiple FR curves at once for comparison (raw or compensated), and the ability to normalize or center the curves at different frequencies.

Those are all extremely useful features to have imo.

Crin's graphing tool works a bit differently, but has many of these features as well... and then some. It is probably even more sophisticated in some ways. Some of the most fun features in Crin's tool are behind a paywall, though.

One feature that might enhance both tools is the ability to use your own target curves, or target curves from other sources. Crin's tool also allows you to compensate an FR curve using the frequency response of another headphone, so you can better see the differences in response between the two headphones, which is also pretty cool.
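
Both of those features (centering the curves at a chosen frequency, and using another headphone as the "target") are, as far as I can tell, simple operations on the dB curves; a rough sketch with hypothetical names:

import numpy as np

def normalize_at(freqs, mag_db, f0=1000.0):
    """Shift a curve so it reads 0 dB at f0 (e.g. centered at 1 kHz)."""
    freqs = np.asarray(freqs)
    mag_db = np.asarray(mag_db, dtype=float)
    return mag_db - mag_db[np.argmin(np.abs(freqs - f0))]

def relative_to_headphone(hp_a_db, hp_b_db):
    """Compensate headphone A by headphone B's response, so the plot
    shows only the difference between the two."""
    return np.asarray(hp_a_db) - np.asarray(hp_b_db)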

I can extend the features of these graphing tools with some of my own software, using the stacking function in Equalizer APO's Configuration Editor, for example. But it'd be nice to have some more powerful options like this built directly into the graphing tools, so you can try some different kinds of targets/compensation curves with the data.

I wish that Jude or somebody else would try to develop some tools like this for the HBK 5128 plots as well. Perhaps there is already some way to implement that in the headphone graphing tools that Ora is using though. (?)
 