
What is group delay?

But then, as I got to know more about phase relationships, I worked out that the things that were altering the frequency response were ALL minimum phase - and therefore, if I used minimum phase filters to correct the frequency response, due to the reciprocal relationships,
Uh, no, that's not how that works. A cascade of minimum phase stages compounds the damage. To correct it, you need the opposite response.
 
Yes - and using minimum phase properties in reverse (yin to yang) - reverses the minimum phase amplitude/frequency/time aspects...

As long as you know that the influences are sequential minimum phase - you can reverse them using minimum phase filters... easy peasy, and simple.

And as long as the system is all minimum phase, a simple pink noise or f/r sweep can be used to test and correct the frequency response... as a natural consequence of being minimum phase, adjusting amplitude/frequency will then adjust time/phase as well... and given the reciprocal and fixed relationships of minimum phase, it will match (in reverse) the effects that caused the F/R variance.
 
As noted, group delay is the rate of change of phase with frequency. Sometimes this rate of change is expected, sometimes not. Let's look at an example:

[attached: group delay measurement]


That exponential drop-off is expected. What is NOT expected are those massive spikes. At higher frequencies, this almost always indicates cancellations due to two waves interfering with each other. That causes the phase to go crazy, with no correlation to what came before it. This means whatever frequencies those spikes correspond to need to be left alone as far as EQ is concerned. The reason is that the EQ will correct both the direct and reflected sound, so it is a zero-sum game.

Same thing happens with respect to room reflections and speakers. We call this the "non-minimum-phase" response of the room/headphone.

With respect to other information in there: there can be resonances that mix with the direct sound at different phase, causing that fuzziness.

Understanding the above is about all that you need to know in the context of measurements you see from me.
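The interference mechanism can be sketched numerically. Below is a minimal Python illustration (my own, not from the measurement above), assuming a direct sound plus a single reflection with a made-up 1 ms delay and 0.8 relative amplitude; near the cancellation notches the group delay spikes wildly, exactly the behaviour described:

```python
import cmath, math

# Direct sound plus one reflection: H(f) = 1 + a*exp(-j*2*pi*f*tau).
# tau and a are assumed values for illustration only.
tau = 1e-3   # reflection delay, seconds
a = 0.8      # reflection amplitude relative to the direct sound

def H(f):
    return 1 + a * cmath.exp(-2j * math.pi * f * tau)

def phase(f):
    return cmath.phase(H(f))

def group_delay(f, df=0.1):
    # numerical -d(phase)/d(omega), with a guard for a 2*pi wrap
    dphi = phase(f + df) - phase(f - df)
    if dphi > math.pi:
        dphi -= 2 * math.pi
    elif dphi < -math.pi:
        dphi += 2 * math.pi
    return -dphi / (2 * math.pi * 2 * df)

# Cancellation notches sit at odd multiples of 1/(2*tau) = 500 Hz
print(abs(H(1000)))       # constructive region: |H| near 1.8
print(abs(H(500)))        # notch: |H| near 0.2
print(group_delay(1000))  # smooth region: modest group delay
print(group_delay(500))   # near the notch: a large group-delay spike
```

As the post says, EQ-ing such a notch is a zero-sum game: the spike comes from interference between two arrivals, not from a minimum-phase resonance.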
 
Same thing happens with respect to room reflections and speakers.

At the 10-foot listening position in my mostly untreated room, no smoothing:

Red, JBL LSR 308
Black, Martin Logan reQuest ESL

[attached: group delay plot]
 
I must admit I am confused by this. Can you offer a more detailed explanation?

Maybe you're overthinking things?

Please stop saying, “group delay is the delay between input and output”.

Group delay is only a "delay between input and output" when there are no other delays in the system.
I meant nothing more nefarious than that...
 
As was pointed out, this is a “delay” not “group delay”:

[Phase] delay = φ(ω)/ω,
Group delay = δφ(ω)/δω

I deal with group delays at RF - calculating and compensating for them - every day, so “you can safely sleep at night”. And so far no complaints. So, pardon me, but I do know what I am talking about... But it’s ok. :)
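A minimal numerical check of those two definitions (my own sketch, using an assumed first-order analog allpass with w0 = 1000 rad/s): well below w0 the two delays coincide, at w0 they clearly differ:

```python
import cmath

# First-order analog allpass: H(jw) = (w0 - j*w)/(w0 + j*w); w0 assumed.
w0 = 1000.0

def phase(w):
    return cmath.phase((w0 - 1j * w) / (w0 + 1j * w))

def phase_delay(w):
    return -phase(w) / w                                 # tau_p = -phi(w)/w

def group_delay(w, dw=1e-3):
    return -(phase(w + dw) - phase(w - dw)) / (2 * dw)   # tau_g = -dphi/dw

print(phase_delay(1.0), group_delay(1.0))        # nearly equal well below w0
print(phase_delay(1000.0), group_delay(1000.0))  # clearly different at w0
```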

Can someone explain why it is the group delay but not phase delay that is more important to human hearing?
 
Can someone explain why it is the group delay but not phase delay that is more important to human hearing?

Ok, let’s do it in a few steps. First, let’s look at sound traveling through the air. The “phase delay” is nothing but the fact that sound travels at a finite speed: there is a time delay between a sound wave leaving a source and reaching the destination. [And while we could look at it as an acoustic energy “phonon” (a “wave bundle” with its front and end) and measure the delay as the time for this phonon to reach the destination…] But it is often simpler to remember that sound is a moving wave [V = Vo cos(ωt - kx + φo)], therefore the time delay between the source and destination is simply the difference between this wave’s phase at the source and destination, φ, normalized to (divided by) the wave frequency ω. So again, the “phase delay” is just the propagation delay of a sound wave.

Now, the “group delay” represents a rather different phenomenon — the fact that waves of different frequency can have different velocities. Which means that such different-frequency waves — while leaving the source at the same time (and with the same phase) — would arrive at some destination with a slight delay with respect to each other. Or, with a slight phase variation between them. This is the “group delay” — waves (signals, tones) of different frequency arrive with a delay that depends on (“is a function of”) their frequency. Mathematically, in differential form, it’s δφ(ω)/δω. [And this is where the term came from — a delay between waves of different frequency traveling within a “group“ of waves of some bandwidth.]

So far this was sound waves traveling in some medium. However, the same principle — of (a) a delay of a signal/wave traversing some “component“ and (b) this delay being a function of frequency — applies to all physical systems. It applies to electromagnetic waves (representing sound) just as it does to acoustic waves. And this “group delay” phenomenon — the dependency of the speed (thus arrival phase) on frequency — is especially pronounced in physical resonators and filters (though it can be present in other elements — like amplifiers — as well).

Finally, the same effects — the delay, and the delay variation with frequency — also exist in the digital signal processing domain. Digital signal processing started by emulating physical/analog processes and circuits — resonators and filters — but grew into the development of more sophisticated systems with [digital] filtering properties far surpassing those of their physical equivalents. Yet these digital filters still have both phase and group delay characteristics — defined the same way as above (for physical systems). Only, as they are now calculated digitally, they can be rather exotic (e.g., "non-minimum-phase" [i.e., not "physical" or "causal"] — designed “in reverse” to compensate for group delays and other non-linearities of the physical domain).

As for “which of the two is more important for human hearing” — from the above, a simple phase delay — whether due to wave propagation or in a filter — simply delays the sound by a small fixed amount of time. Which most definitely is unnoticeable for a simple “point source and point listener“ topology. It can probably affect stereo (or multichannel) reproduction, but is easily compensated by a fixed signal-processing delay… Meanwhile, a strong group delay might lead to, say, an apparent spatial separation of a bass drum and cymbals in a single drum kit. Or, more probably, some loss of sound clarity… depending on how strong the delay is.
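The distinction drawn above fits in one toy calculation (coefficients made up): a hypothetical phase response phi(w) = -(T0*w + k*w^2) has a fixed propagation part T0 and a dispersive part whose group delay grows with frequency, so different "groups" arrive at different times:

```python
import math

# Hypothetical dispersive phase response, for illustration only.
T0 = 1e-3    # fixed propagation delay, seconds (assumed)
k = 1e-7     # dispersion coefficient (assumed)

def phi(w):
    return -(T0 * w + k * w * w)

def group_delay(w, dw=1e-3):
    # numerical tau_g = -d(phi)/dw; exact here since phi is quadratic
    return -(phi(w + dw) - phi(w - dw)) / (2 * dw)

lo = 2 * math.pi * 50     # a "group" around 50 Hz
hi = 2 * math.pi * 5000   # a "group" around 5 kHz

print(group_delay(lo))    # close to T0
print(group_delay(hi))    # noticeably later: T0 plus the dispersive term
```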
 
Group delay is the derivative of the phase with respect to frequency. That is the technical definition, non-arguable.

What this means, and how it relates to time delay, is that it shows how one part of the signal is delayed compared to other parts of the frequency range.

Resonances, room reflections, any deviation from flat frequency response, all create group delay.

High group delay relates to sound quality as compromised transient response and reduced imaging accuracy.
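The resonance claim above is easy to verify. A sketch (my own; a second-order resonant low-pass with assumed f0 = 100 Hz and Q = 4) whose group delay peaks sharply at the resonance:

```python
import cmath, math

# Second-order low-pass H(s) = w0^2 / (s^2 + (w0/Q)*s + w0^2); values assumed.
w0 = 2 * math.pi * 100
Q = 4.0

def H(w):
    s = 1j * w
    return w0 * w0 / (s * s + (w0 / Q) * s + w0 * w0)

def group_delay(w, dw=1e-4):
    # numerical tau_g = -d(phase)/dw, with a guard for a 2*pi wrap
    dphi = cmath.phase(H(w + dw)) - cmath.phase(H(w - dw))
    if dphi > math.pi:
        dphi -= 2 * math.pi
    elif dphi < -math.pi:
        dphi += 2 * math.pi
    return -dphi / (2 * dw)

print(group_delay(2 * math.pi * 20))   # well below resonance: small
print(group_delay(w0))                 # at resonance: 2*Q/w0, much larger
```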
 
Bottom line, guys. Can you explain it for the end-user customer? How does it affect my headphones?

Let's consider a stick hitting a drum or cowbell.
Ideally that might be shown in the time domain as a pure impulse, 1 sample wide.
In the frequency domain, using an FFT, we see frequencies smeared from DC to daylight.
(I suppose the FR of it might be interesting to someone, but that FR stuff is better suited to oboes and violins than to impulsive percussion instruments.)

So in a system with high variation of group delay, we find that the impulse is smeared out: the higher frequencies arrive, and then later the lower frequencies.
In the time domain we see a wriggly thing where we used to have only the 1-sample-wide (pure) impulse.

So if we try correlating that click sound, we now find that it correlates in a few places, and the image is smeared out spatially, because it is smeared out temporally.

If one were a bat flying through a cave, the group-delayed signal might make the cave wall look as if it went from solid granite or a glass sheet to rough brick, with half the bricks not flush with each other.
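That smearing is easy to reproduce. A sketch (my own) pushing a 1-sample "drum hit" through a first-order digital allpass with an arbitrarily chosen coefficient of 0.9: the magnitude response is flat and the energy is preserved, but the impulse comes out as exactly that wriggly thing:

```python
# First-order digital allpass: y[n] = -a*x[n] + x[n-1] + a*y[n-1].
# a = 0.9 is an assumed value for illustration.
a = 0.9
N = 64
x = [1.0] + [0.0] * (N - 1)   # the "drum hit": a pure 1-sample impulse

y = [0.0] * N
x_prev = y_prev = 0.0
for n in range(N):
    y[n] = -a * x[n] + x_prev + a * y_prev
    x_prev, y_prev = x[n], y[n]

print(y[:5])                   # energy no longer sits in a single sample
print(sum(v * v for v in y))   # but the total energy is still ~1 (allpass)
```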
 
So how is it that the constant group delay in the passband of the FIR bandpass filter does not constitute a time delay?
But it does, at least to my understanding. It's the reason there are even tricks in DSP to get rid of that delay of the signal in the time domain (by filtering the signal again in reverse; not something you can do for realtime audio), as can be seen here, for example, where the second plot shows the original signal being delayed when just applying the FIR: https://nl.mathworks.com/help/signal/ref/filtfilt.html
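The filtfilt idea can be sketched with plain Python (taps made up; any symmetric FIR will do). A symmetric N-tap FIR has a constant group delay of (N-1)/2 samples, so a single pass shifts an impulse by 2 samples here; running the filter forward and then backward cancels the shift, at the cost of not being causal/realtime:

```python
# A symmetric 5-tap FIR: constant group delay of (5-1)/2 = 2 samples.
h = [0.1, 0.2, 0.4, 0.2, 0.1]    # assumed example taps

def fir(x):
    # direct-form convolution, zero initial state
    return [sum(h[k] * (x[n - k] if n - k >= 0 else 0.0)
                for k in range(len(h)))
            for n in range(len(x))]

x = [0.0] * 20
x[8] = 1.0                       # impulse at sample 8

y = fir(x)                       # one forward pass: peak lands at sample 10
z = fir(y[::-1])[::-1]           # forward-backward ("filtfilt"): peak at 8

print(max(range(len(y)), key=lambda n: y[n]))
print(max(range(len(z)), key=lambda n: z[n]))
```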
 
 
^This^ is only true if we want to ignore the time domain.
Since the 70s people have had graphic equalisers with sliders to alter the frequency response.

There are a whole other set of equalisers that affect the time domain and the impulse response.

And there are a whole set of speakers with good time and phase performance. And a whole bunch of people that say that it doesn’t matter.

If one wants the signal coming out of the speaker to look like the sound pressure measured from the original microphone… then time and phase and amplitude all need to be considered.

One can design something and trade off group delay for cost or something else. And someone else can use an EQ to correct for it, or ameliorate it. They just cannot do it with a PEQ.

Things are not that simple with EQ-ing the GD. Luckily, in most cases GD simply isn't the problem, but when it is, EQ won't help much, or won't help at all.

Here's an example of the phase response of a 2-way speaker with an LR 24 dB/octave XO at 1800 Hz. Blue is the response with phase left uncorrected and orange is with the phase response corrected for the XO deviation:

[attached: phase response comparison]


Although you can see quite a difference in those 2 phase responses, you may be surprised how their corresponding GD looks:

[attached: group delay comparison]


Coming to the most important part: was I able to hear a difference between the two? Well, yes... But you really need to know what to look (or better say listen) for, as you can very easily miss it.

I remember somebody once said on this forum that phase correction is like icing on the cake, and I couldn't agree more. It's not that you won't notice the icing on the cake, it's just that it is not as important flavor-wise. ;)
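For reference, the textbook behaviour behind those plots can be reproduced analytically. A sketch (my own) of an idealized LR4 (24 dB/octave) crossover summed at the post's 1800 Hz: the sum is allpass, so the magnitude stays flat while the phase rotates, leaving a group-delay bump around the crossover:

```python
import cmath, math

# Idealized LR4 crossover: each branch is a squared 2nd-order Butterworth.
# Crossover frequency taken from the post; everything else is idealized.
w0 = 2 * math.pi * 1800

def lr4_sum(w):
    s = 1j * w
    b2 = s * s + math.sqrt(2) * w0 * s + w0 * w0   # Butterworth-2 denominator
    lp = (w0 * w0) ** 2 / (b2 * b2)                # low-pass branch
    hp = s ** 4 / (b2 * b2)                        # high-pass branch
    return lp + hp                                 # acoustic sum

def group_delay(w, dw=1e-2):
    # numerical tau_g = -d(phase)/dw, with a guard for a 2*pi wrap
    dphi = cmath.phase(lr4_sum(w + dw)) - cmath.phase(lr4_sum(w - dw))
    if dphi > math.pi:
        dphi -= 2 * math.pi
    elif dphi < -math.pi:
        dphi += 2 * math.pi
    return -dphi / (2 * dw)

print(abs(lr4_sum(w0)))            # ~1.0: flat magnitude (allpass sum)
print(group_delay(w0) * 1e3)       # group delay at the crossover, in ms
print(group_delay(10 * w0) * 1e3)  # a decade above: much smaller
```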
 
If I read that correctly, you can change the system traversal time!
Better than general relativity.
 
Things are not that simple with EQ-ing the GD. Luckily, in most cases GD simply isn't the problem, but when it is, EQ won't help much, or won't help at all. [...] Although you can see quite a difference in those 2 phase responses you may be surprised how their corresponding GD looks.

Why not just zoom-in the time scale or use the wavelet view?

Yeah, that kind of "fix" is really not all that crucial. But when there is little to nothing left to optimize -- and where the cost of the delay adjustment is acceptable -- might as well.
 
Things are not that simple with EQ-ing the GD. Luckily, in most cases GD simply isn't the problem, but when it is, EQ won't help much, or won't help at all. [...] Although you can see quite a difference in those 2 phase responses you may be surprised how their corresponding GD looks.

Comparing the group delay at 1 kHz or beyond is a bit of a “who cares”… as the lower time-domain plot shows.

However, the upper plot shows the phase, which is easier to see, and maybe hear, than the group delay.
But Toole and others claim phase is not important, whereas others believe that it is.

So usually we talk about group delay in subwoofers, because the time becomes pretty large.
And we talk about phase EQ in the higher frequencies, because the time is so close to zero as to not be visible on the graph, whereas the phase jumps out.

In the case of the OP’s headphones, if the group delay is in the milliseconds, then I would consider that a bit high for a headphone.

And that orange plot really should have group delay a lot closer to zero, unless a pretty small number of taps is being used.
Like in an HT system, where sometimes too much delay creates a lip-sync issue.
 
My main concern related to the topic of "group delay" is "time alignment" over multiple SP units/drivers, in other words "relative delays" between the multiple SP drivers.

I recently developed my own DIY measurement methods on this issue. If you would be interested, please refer to my posts on my project thread;

- Precision measurement and adjustment of time alignment for speaker (SP) units: Part-1_ Precision pulse wave matching method: #493
- Precision measurement and adjustment of time alignment for speaker (SP) units: Part-2_ Energy peak matching method: #494
- Precision measurement and adjustment of time alignment for speaker (SP) units: Part-3_ Precision single sine wave matching method in 0.1 msec accuracy: #504, #507
 
Yes - and using minimum phase properties in reverse (yin to yang) - reverses the minimum phase amplitude/frequency/time aspects... [...] And as long as the system is all minimum phase, a simple pink noise or f/r sweep can be used to test and correct frequency.
Still wrong. The cumulative effect of multiple minimum phase filters is still minimum phase. To undo it, you need the opposite phase response. You can't add a bunch of positive numbers and get back to zero.
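The accumulation is easy to demonstrate. A sketch with two made-up first-order minimum-phase sections (zeros and poles all in the left half-plane): cascading multiplies the responses, so their phases add, and undoing the cascade takes the inverse, whose phase is the exact negative (and whose magnitude is the reciprocal):

```python
import cmath, math

# Two simple minimum-phase sections H(s) = (s + z)/(s + p); values assumed.
def sec(z, p):
    return lambda w: (1j * w + z) / (1j * w + p)

H1 = sec(200.0, 800.0)
H2 = sec(500.0, 100.0)

w = 2 * math.pi * 150          # evaluate at 150 Hz (arbitrary choice)
casc = H1(w) * H2(w)           # cascade: responses multiply, phases add
inv = 1.0 / casc               # the "undo" filter

print(cmath.phase(casc))       # sum of the two section phases
print(cmath.phase(inv))        # exactly the opposite phase
print(abs(casc) * abs(inv))    # reciprocal magnitude: the product is 1
```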
 
Having skimmed this thread, I can confidently say that nobody knows what it is and that what it does is confuse audiophiles.
Post #11 nails exactly what it is and what it does.

Group delay is best explained with wavelets (shaped sine bursts), which represent what is meant by "group": a group of adjacent frequencies. It delays the envelope, the "center of gravity". A continuous sine wave has no visible group delay (as it has a constant envelope).
Phase (after un-delaying the envelope) may or may not be different; that is, the exact waveshape can differ for a number of equal group delays (the dropped constant in the derivative).

And vice versa, phase offset may or may not have an additional group delay.
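The "group" idea, and that dropped constant, can be shown with just two close tones (my own numbers). The beat envelope of the pair moves by the phase difference over the frequency difference, i.e., the group delay; a constant phase offset drops out of that difference and shifts only the carrier:

```python
import math

# A "group" of two close tones and a hypothetical phase response.
w1 = 2 * math.pi * 1000
w2 = 2 * math.pi * 1020

def phi(w):
    return -1e-3 * w - 0.5    # 1 ms of linear delay plus a fixed phase offset

# Envelope (group) delay: the constant offset cancels in the difference.
gd = -(phi(w2) - phi(w1)) / (w2 - w1)
# Carrier (phase) delay at the center frequency: the offset does not cancel.
wc = (w1 + w2) / 2
pd = -phi(wc) / wc

print(gd)   # 1 ms: the fixed offset drops out
print(pd)   # slightly more than 1 ms, because of the fixed offset
```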
 
Still wrong. The cumulative effect of multiple minimum phase filters is still minimum phase. To undo it, you need the opposite phase response. You can't add a bunch of positive numbers and get back to zero.
Can't we?
Phase gets back to zero when the cumulative effect is equal to 2π·N.
 