
Group Delay 101

OP
DonH56

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
6,923
Likes
13,676
Location
Monument, CO
Great explanation. Thanks. Frequency-dependent damping of ultrasonic waves entering concrete behind steel would be another application, alongside radar.
To repeat someone else's question: how can these group delays be corrected? Does Dirac or rePhase do that? Or would one want to introduce certain group delays for a warm sound, like the harmonics of vinyl? Is a frequency sweep enough to characterize group delay, or would one need to superpose, for example, a fixed frequency on a sweep?
You are welcome.

There are a number of areas where this is important, including materials study as you mention, lidar/radar, broadband communications signals like spread-spectrum, and so forth. Not really my field so I do not claim competency.

As for correction, I do not know how (or if) it is done in audio programs. Several claim to adjust phase, not too difficult with a DSP, but I do not know the details. I believe Dirac Live does, as I had a Dirac Live AVP and am pretty sure it improved the group delay, and I know my SDP-75 (Trinnov) AVP improves group delay in my system. I do not know anything about RePhase. I would guess you could use something like a miniDSP to generate compensatory filters, again not something I have played with.

Nor do I have a feel for how bad it must be to be noticeable (audible); I suspect this is a non-issue for the vast majority of components, and speakers. IME a host of things we can easily measure are basically inaudible when the music plays unless they are horribly wrong (large, whatever).

Since you need to know phase shift over frequency, a simple magnitude frequency sweep will not provide this information. A magnitude FFT of my two examples would look identical, for example. You need a reference to capture changes in phase, or to look at an impulse (or step) response, to determine group delay. I believe REW can (help) do it, by measuring group delay and calculating filters, but I have not tried recently.
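As a rough sketch of the idea (assuming numpy/scipy, with a textbook filter standing in for a measured device): the group delay falls out of the slope of the unwrapped phase, and agrees with what scipy computes directly from the filter coefficients.

```python
import numpy as np
from scipy import signal

# A 5th-order Butterworth low-pass stands in for a measured device.
b, a = signal.butter(5, 0.2)

# Complex response on a frequency grid (rad/sample), staying away
# from Nyquist where this filter's response drops to zero.
w = np.linspace(0.01 * np.pi, 0.8 * np.pi, 1024)
_, h = signal.freqz(b, a, worN=w)

# Group delay is the negative slope of the unwrapped phase.
phase = np.unwrap(np.angle(h))
gd_manual = -np.gradient(phase, w)      # in samples

# scipy computes the same quantity directly from the coefficients.
_, gd_ref = signal.group_delay((b, a), w=w)

print(np.max(np.abs(gd_manual - gd_ref)))  # the two agree closely
```

A real measurement would substitute the measured complex transfer function for the freqz output; the gradient step is the same, which is why a magnitude-only sweep is not enough.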

HTH - Don
 

René - Acculution.com

Active Member
Technical Expert
Joined
May 1, 2021
Messages
218
Likes
728
I think you should plot the phase associated with the operation that shifted those sinusoidal components of the square signal, and then the phase delay and the group delay. Pretty sure you will see that for this steady-state case each component has been shifted by its respective phase delay, not its group delay (unless the two are the same) ;-). Steady-state does not tell the best story when it comes to delay. I go into some of that detail in the attached sheet (https://www.audiosciencereview.com/.../analytical-analysis-polarity-vs-phase.29331/), and much more in a video.
 

voodooless

Master Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
6,132
Likes
10,179
Location
Netherlands
I get you, but my question is, for example:
with a Qtc alignment of 1.2, it is peaking by 0.5 dB at 85 Hz with a group delay of 4.5 ms.
If each 3 ms of delay equals a meter of perceived distance, and each meter equals -3 dB in sound pressure, isn't it right to say that at this frequency I would want to add +4 dB of EQ for time alignment?
You'd like to add a boost for time alignment? A boost would add its own group delay once again. Your room alters the frequency response, which changes the group delay as well. Making sure you have a flat in-room frequency response will go a long way toward making the group delay flat-ish as well.
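As a rough illustration of that point, here is a hypothetical numpy/scipy sketch using the standard RBJ audio-EQ-cookbook peaking biquad (the 85 Hz / +4 dB / Q = 2 numbers are made-up stand-ins for the example above, not anyone's actual alignment):

```python
import numpy as np
from scipy import signal

fs = 48000.0
f0, gain_db, Q = 85.0, 4.0, 2.0   # illustrative numbers only

# Standard RBJ audio-EQ-cookbook peaking biquad.
A = 10 ** (gain_db / 40)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
a0 = 1 + alpha / A
b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]) / a0
a = np.array([1.0, -2 * np.cos(w0) / a0, (1 - alpha / A) / a0])

# Group delay added by the boost, converted from samples to milliseconds.
f, gd = signal.group_delay((b, a), w=4096, fs=fs)
gd_ms = 1000 * gd / fs
print(f"extra delay near {f0:.0f} Hz: {gd_ms[np.argmin(np.abs(f - f0))]:.1f} ms")
```

A narrow low-frequency boost like this contributes a few extra milliseconds of group delay of its own near the center frequency, which is why boosting "for time alignment" tends to chase its own tail.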
 

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
2,393
Likes
1,740
Location
Canada
Does Dirac or Re-Phase do that?

My experience with the latter is you can reduce GD and improve the phase but there are limits. Try to correct too much and you can cause other anomalies such as pre-ringing.

...

Suppose we flattened only the visible excess phase of this speaker "perfectly":
1669125394049.png 1669125397088.png


With a filter tailor-made from a single measurement, you're still not going to get completely flat GD post-correction in this setup:

1669125529539.png 1669125537219.png

In this speaker, the GD should be low enough anyway not to matter much. Manual phase adjustments created with rePhase (highly dependent on the design) can be audible under the right test conditions. That doesn't necessarily make it "worth it" to apply on every occasion just because you can.
 

Multicore

Senior Member
Joined
Dec 6, 2021
Messages
493
Likes
474
I have put this example in another thread; maybe it will help. Constant or non-constant group delay depends on the phase response, and this depends on the amplitude response. Please see below the amplitude response, phase response, and group delay of a 5th-order low-pass filter. GD tells you the signal's time delay in the pass band, in this case. It is a different mathematical representation of the same thing.

View attachment 245067

View attachment 245068
Seeing this makes me want to understand units.

Amplitude is clear enough, e.g. scalar volts or displacement or such.

Phase p is angle, so [p] is e.g. scalar degrees.

Frequency f is per-time, t^-1, so [f] is e.g. per scalar second.

Group delay dp/df would therefore have units [p][t] e.g. degree seconds.

Did I get that so far, boss?

Angles, being ratios of lengths, are dimensionless. So group delay ends up being denominated in time. (Great, hence delay!) But does the unit of angle you use cancel out or does it scale the result? Like, do people who work in radians see smaller group delay than those who work in degrees?
 

danadam

Addicted to Fun and Learning
Joined
Jan 20, 2017
Messages
717
Likes
1,103
Angles, being ratios of lengths, are dimensionless. So group delay ends up being denominated in time. (Great, hence delay!) But does the unit of angle you use cancel out or does it scale the result? Like, do people who work in radians see smaller group delay than those who work in degrees?
Don't quote me on this, but I think one uses:
  • either phase in radians and the angular frequency ω, then: gd = - dΦ/dω,
  • or phase in degrees and the ordinary frequency f, then: gd = - dΦ/(df * 360).
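Don't quote me on this either, but a quick numerical check with a pure delay (a small hypothetical numpy sketch) shows the two conventions agree:

```python
import numpy as np

tau = 0.004                         # a pure 4 ms delay (illustrative)
f = np.linspace(10.0, 1000.0, 991)  # ordinary frequency, Hz
omega = 2 * np.pi * f               # angular frequency, rad/s

phi_rad = -omega * tau              # phase of a pure delay, in radians
Phi_deg = -360.0 * f * tau          # the same phase, in degrees

gd_radians = -np.gradient(phi_rad, omega)       # gd = -dPhi/domega
gd_degrees = -np.gradient(Phi_deg, f) / 360.0   # gd = -dPhi/(df * 360)

# Both conventions recover the same 4 ms delay.
print(gd_radians[0], gd_degrees[0])
```

Because the phase of a pure delay is linear in frequency, both difference quotients land on the same constant delay in seconds; the angle unit cancels as long as the matching frequency convention is used.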
 

MarcosCh

Major Contributor
Joined
Apr 10, 2021
Messages
1,697
Likes
1,341
Seeing this makes me want to understand units.

Amplitude is clear enough, e.g. scalar volts or displacement or such.

Phase p is angle, so [p] is e.g. scalar degrees.

Frequency f is per-time, t^-1, so [f] is e.g. per scalar second.

Group delay dp/df would therefore have units [p][t] e.g. degree seconds.

Did I get that so far, boss?

Angles, being ratios of lengths, are dimensionless. So group delay ends up being denominated in time. (Great, hence delay!) But does the unit of angle you use cancel out or does it scale the result? Like, do people who work in radians see smaller group delay than those who work in degrees?
Agree that looking at the units is the best (only?) way to really understand these concepts. Remember that group delay is the derivative of the phase: it tells you how the phase changes with frequency (regardless of the phase's actual value), hence it should be the same whether you are thinking in degrees or radians.
 

Multicore

Senior Member
Joined
Dec 6, 2021
Messages
493
Likes
474
Agree that looking at the units is the best (only?) way to really understand these concepts. Remember that group delay is the derivative of the phase: it tells you how the phase changes with frequency (regardless of the phase's actual value), hence it should be the same whether you are thinking in degrees or radians.
Yes, I think the angle unit doesn't even appear until we try to represent a measurement, and in group delay the angle has disappeared in the math, so it doesn't come into it. It's just a ratio.
 

JoachimStrobel

Senior Member
Forum Donor
Joined
Jul 27, 2019
Messages
472
Likes
262
Location
Germany
You are welcome.

There are a number of areas where this is important, including materials study as you mention, lidar/radar, broadband communications signals like spread-spectrum, and so forth. Not really my field so I do not claim competency.

As for correction, I do not know how (or if) it is done in audio programs. Several claim to adjust phase, not too difficult with a DSP, but I do not know the details. I believe Dirac Live does, as I had a Dirac Live AVP and am pretty sure it improved the group delay, and I know my SDP-75 (Trinnov) AVP improves group delay in my system. I do not know anything about RePhase. I would guess you could use something like a miniDSP to generate compensatory filters, again not something I have played with.

Nor do I have a feel for how bad it must be to be noticeable (audible); I suspect this is a non-issue for the vast majority of components, and speakers. IME a host of things we can easily measure are basically inaudible when the music plays unless they are horribly wrong (large, whatever).

Since you need to know phase shift over frequency, a simple magnitude frequency sweep will not provide this information. A magnitude FFT of my two examples would look identical, for example. You need a reference to capture changes in phase, or to look at an impulse (or step) response, to determine group delay. I believe REW can (help) do it, by measuring group delay and calculating filters, but I have not tried recently.

HTH - Don
That is an interesting recurring question: if you cannot hear it, should you pay more to correct it? It is easier with cars: a Dacia gets you from A to B, but some prefer an EQS. A burger might be good enough, but a three-star Michelin meal is preferred by some. Both bring some benefit. Choosing a 36 MP camera for HDTV-grade viewing brings nothing to anybody, unless you zoom and correct colors… I fail to see what group delay correction might bring to audio, except that it is a nice technological concept. That might suffice.
 

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
2,393
Likes
1,740
Location
Canada
Unless you zoom and correct colors… I fail to see what group delay correction might bring to audio, except that it is a nice technological concept. That might suffice.

I would think that in a mixing and mastering studio setting the concept is not entirely disregarded. It surely is one of the parameters an acoustician might want to observe.
 
OP
DonH56

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
6,923
Likes
13,676
Location
Monument, CO
That is an interesting recurring question: if you cannot hear it, should you pay more to correct it? It is easier with cars: a Dacia gets you from A to B, but some prefer an EQS. A burger might be good enough, but a three-star Michelin meal is preferred by some. Both bring some benefit. Choosing a 36 MP camera for HDTV-grade viewing brings nothing to anybody, unless you zoom and correct colors… I fail to see what group delay correction might bring to audio, except that it is a nice technological concept. That might suffice.
I suspect the biggest audible benefit comes in system integration, getting all the drivers aligned. For many of us that means getting the subwoofer(s) and main speakers playing happily together.

I have paid more for build quality and features, higher quality materials and so forth, so am by no means immune. And of course the anal design engineer in me forever seeks perfection whether it really matters or not... :)
 

JimWeir

Member
Joined
Jun 10, 2020
Messages
31
Likes
50
For phase p and frequency f, the group delay GD is the negative of the derivative of phase with respect to frequency: GD = -dp/df. In general phase is a function of frequency, p(f); that is, phase changes with frequency. That's the math behind it.

A change in phase is equivalent to a time shift; for example, a 180-degree phase shift is like shifting the time by one-half cycle of the signal. The derivative is a fancy expression for the slope of a line, so group delay is a measure of phase linearity. A straight (linear) line has the form y = mx + b, where every point x is multiplied by the slope m and added to the offset b to produce a y value. For a straight line, m is constant (just a number, not a function of something else); hopefully we remember this formula from school. Now replace m with GD: for the phase line to be straight, the change in phase divided by the change in frequency must be constant, meaning the group delay is constant and every frequency is delayed by the same amount of time. What goes in comes out again, exactly as it was, just a little later in time.

Now, a pulse, or musical signal, includes many frequencies. If we send the signal through a component like an amplifier or speaker with constant group delay, then every frequency is delayed the same amount, and the output is just like the input except delayed in time. If the group delay is not constant, that means different frequencies have different delays through the component, so at the other end the signal will be "smeared" in time with different frequencies arriving at different times. Things like transient attacks from drums or instruments will not be as clean. There are various studies discussing just how far off the delay can be at different frequencies before we notice it, some referenced in the Wikipedia article mentioned previously (https://en.wikipedia.org/wiki/Group_delay_and_phase_delay).

A straight line y = mx with m = 1 looks like the blue line below. If I make the slope (m) depend upon the input (x), we get the orange line, which is no longer perfectly straight. The next plot shows the error. This is what happens when group delay is not constant: the (phase) line is no longer straight.
View attachment 244718
View attachment 244719

Now try it with some signals, in this case the first five frequencies in a square wave. The amplitudes decrease as frequency goes up and only odd harmonics are used (see https://www.audiosciencereview.com/.../composition-of-a-square-wave-important.1921/). Notice how all five signals line up in the middle and again at the end.
View attachment 244720

If all the signals arrive at the same time, that is group delay is constant, then this is what you see when you add them all up:
View attachment 244721

A perfect square wave has frequencies out to infinity, and this is just five, so it does not have perfectly sharp (straight) edges and the top and bottom are not perfectly flat. It is perfectly symmetric, however, with edges and top and bottom the same across the entire period.

Now adjust the group delay so it is not constant but instead each successively higher frequency is shifted just a little bit more. Notice how the signals no longer align perfectly in the middle and at the end but are spread out a bit:
View attachment 244722

The output when we sum them all looks different and is no longer symmetric:
View attachment 244723

We can see the difference more clearly by showing them both on the same plot:
View attachment 244724

Hopefully this helps visualize how group delay can impact the signal, and why constant group delay is a typical design goal. - Don
For simple and contrived signals, it has been shown that the human brain can deconvolve monophonic signals and notice differences.
With music recorded from spaced microphones and engineered stereophonic reproduction, the variables of maintaining a precise group delay are likely not as critical.
On the other hand, anything that gives us more confidence in our gear helps us render our own stereophonic illusions.
 

JoachimStrobel

Senior Member
Forum Donor
Joined
Jul 27, 2019
Messages
472
Likes
262
Location
Germany
I suspect the biggest audible benefit comes in system integration, getting all the drivers aligned. For many of us that means getting the subwoofer(s) and main speakers playing happily together.

I have paid more for build quality and features, higher quality materials and so forth, so am by no means immune. And of course the anal design engineer in me forever seeks perfection whether it really matters or not... :)
But that is a lot of effort for what? Low frequencies in music are best handled by the main speakers, and adding an extra speaker just for that, with a frequency cut, is looking for trouble no one needs (except for that 16 Hz organ pipe sound, if one is into Bach). And for hearing the LFE channels in movies, anything will do. A motor hammer triggered with a <20 Hz trigger might work too.
 
OP
DonH56

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
6,923
Likes
13,676
Location
Monument, CO
But that is a lot of effort for what? Low frequencies in music are best handled by the main speakers, and adding an extra speaker just for that, with a frequency cut, is looking for trouble no one needs (except for that 16 Hz organ pipe sound, if one is into Bach). And for hearing the LFE channels in movies, anything will do. A motor hammer triggered with a <20 Hz trigger might work too.
I'll just leave it at my experience differs. I have had a subwoofer in my systems since around 1980 or so for all the reasons I've posted before. No need to hash it all out in this thread.
 

milezone

Active Member
Forum Donor
Joined
May 27, 2019
Messages
126
Likes
80
Location
Seattle
I appreciate your explanation, Don, as it is pretty straightforward; this has been a somewhat difficult concept for me to grasp. Is the variable m the negative derivative of the slope?

In addition, something that is curious -- a speaker like the Meyer Sound Bluehorn touts having a flat frequency response by virtue of being phase linear. I understand the speaker applies advanced and rapid DSP processing to the input signal to achieve this result. I feel that this approach is impressive and logical when dealing with test tones; however, I wonder if a music signal is too complex to achieve accurate phase linearity. Furthermore, would the benefits of phase linearity in this instance be audible to a great degree compared with an otherwise properly designed speaker without time correction? I also wonder if some degree of bit reduction is necessary to simplify the signal to achieve said phase linearity. At what point does phase linearity become audible, and at what point does flat frequency response of a music signal become audible, to the point of one instance being unanimously and logically preferable to another? I am of the belief that if the frequency responses of a given set of drivers in given enclosures are collectively smooth, with no specific points of resonance as affected by the enclosure or distortion breakup modes, and with properly implemented crossovers, there is a decent amount of room, maybe 3 to 5 dB per driver, before one would observe what they might deem an artificial reproduction of sound. And within that amplitude window, I see the differences as a matter of preference.

With that said, if one has the intelligence, tools, and time to achieve more accurate sound reproduction, it is obviously worthwhile to do so -- though not at the expense of overlooking design fundamentals or reducing the fidelity of the input signal through over-reliance on digital processes. Though the latter is a temporary issue, if one at all, which will be resolved as CPU processing power increases.

Reiterating: the analog signal path, including crossovers, amplification, and the characteristics of each driver, imparts different degrees of delay or phase offset relative to the original source signal. The most straightforward solution, which presumably is what Meyer Sound did with the Bluehorn, is to measure the analog output of the speaker in various states and come up with a variable algorithm that modifies the input signal to a very precise degree, applying time offsets to a vast number of frequency bands within the operating range of each driver, thus anticipating the time delays imparted by the combination of crossover bands, amplification circuitry, and driver characteristics.

I am not sure if this is possible, but if a driver could be designed to, in a sense, print a digital signal, through the use of servomotors allowing precise axial positioning of the driver as it relates to a digital signal, I would think this would yield a greater degree of phase accuracy than the first model, which relies on analog amplification of a voice coil and is an approximation of driver position as it relates to a digital signal. This model seems more useful in low-frequency transducer design and perhaps less beneficial, maybe even undesirable, for mid- and high-frequency transducers, given that it benefits from a strictly digital signal.
 
OP
DonH56

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
6,923
Likes
13,676
Location
Monument, CO
I appreciate your explanation, Don, as it is pretty straightforward; this has been a somewhat difficult concept for me to grasp. Is the variable m the negative derivative of the slope?

In addition, something that is curious -- a speaker like the Meyer Sound Bluehorn touts having a "flat" frequency response by virtue of being phase linear. I understand the speaker applies advanced and rapid DSP processing to the input signal to achieve this result. I feel that this approach is impressive and logical when dealing with test tones; however, a complex music signal is an entirely different thing. Furthermore, I believe a musical signal is complex in its irregularity to the point that the effect and benefits of phase linearity in this instance would be hardly audible in an otherwise properly designed speaker. I also wonder if some degree of bit reduction is necessary to simplify the signal to achieve said phase linearity. At what point does phase linearity become audible, and at what point does flat frequency response of a music signal become audible, to the point of one instance being unanimously and logically preferable to another? I am of the belief that if the frequency responses of a given set of drivers in given enclosures are collectively smooth, with no specific points of resonance as affected by the enclosure or distortion breakup modes, and with properly implemented crossovers, there is a decent amount of room, maybe 3 to 5 dB per driver, before one would observe what they might deem an artificial reproduction of sound. And within that amplitude window, I see the differences as a matter of preference.

With all that said, if one has the intelligence, tools, and time to achieve more accurate reproduction, it is obviously worthwhile to do so -- though not at the expense of overlooking design fundamentals or reducing the fidelity of the input signal through over-reliance on digital processes. The latter is a temporary issue, if one at all, which will be resolved as processing power increases.

As a final thought, the analog signal path including crossovers, amplification and the characteristics of each driver, impart different varieties of delay or phase offset compared with an original source signal. The easy solution, which presumably is what Meyersound did with the Bluehorn, is to measure the analog output of the speaker in various states and come up with a variable algorithm that modifies a given signal to a very precise degree, by applying time offsets to a vast number of frequency bands, anticipating time delays imparted by the correlation of filters, amplification circuitry and drivers.

I am not sure if this is possible, but if a driver could be designed to "print", in essence, a digital signal, through the use of servomotors and precise positioning of the driver as it relates to a digital signal, I would think this would yield a far greater degree of phase accuracy than the first model, which relies on analog amplification of a voice coil.
In the first equation, m is the slope, which for group delay is the negative of the change in phase divided by the change in frequency. The line example is just to show linearity; it is not specific to group delay (or anything else).

The Wikipedia article discusses detection thresholds; I am not competent to comment on the audibility of this. DSP provides the means to correct the phase without having to alter physical structures.
 
OP
DonH56

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
6,923
Likes
13,676
Location
Monument, CO
I think you should plot the phase associated with the operation that shifted those sinusoidal components of the square signal, and then the phase delay and the group delay. Pretty sure you will see that for this steady-state case each component has been shifted by its respective phase delay, not its group delay (unless the two are the same) ;-). Steady-state does not tell the best story when it comes to delay. I go into some of that detail in the attached sheet (https://www.audiosciencereview.com/.../analytical-analysis-polarity-vs-phase.29331/), and much more in a video.
Did that; it took waaay too long and mainly proved (a) I am not a Python expert and (b) the group delay was not doing what I thought it was. Good catch, thanks!

What I had in my code was a slightly more subtle bug (to me): I introduced a discontinuity in the group delay, with constant but different delays before and after the discontinuity. So it shows what I wanted to show, more or less, but with a step change in group delay rather than a smooth curve. It does not invalidate the previous posts (whew!) but shows the effect of a step function in group delay instead of it just being non-constant. Some speakers actually do that, or come pretty close to it, at least in my small set of measurements.

Here are plots showing what I originally intended to show. These don't really add anything but now I have clean code to play with. :)
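For anyone who wants to play along, a minimal numpy sketch in the same spirit (a hypothetical reconstruction, not the actual script behind these plots; the 1 Hz fundamental and 10 ms delays are arbitrary):

```python
import numpy as np

# Two periods of a 1 Hz fundamental, and the first five odd harmonics
# of its square-wave series (amplitudes fall off as 1/n).
t = np.linspace(0, 2, 4000, endpoint=False)
harmonics = [1, 3, 5, 7, 9]

def summed_square(delays):
    """Sum the odd harmonics, delaying each one by its own amount (seconds)."""
    return sum(np.sin(2 * np.pi * n * (t - d)) / n
               for n, d in zip(harmonics, delays))

flat = summed_square([0.01] * 5)                     # constant delay: 10 ms for all
vary = summed_square([0.01 * n for n in harmonics])  # delay grows with frequency

# With constant delay the waveform is the same square-ish shape, just shifted;
# with frequency-dependent delay the edges smear and the symmetry is lost.
print(np.max(np.abs(flat - vary)))
```

Plotting flat against vary reproduces the kind of asymmetry in the attachments: the constant-delay sum is just a shifted square-wave approximation, while the frequency-dependent delays smear the edges.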

The same input signals and their sum, without group delay (same as before):
Vin_gd_0.png

Vsum_gd_0.png

Now the same signals but with non-constant and smoothly-varying group delay (I used very large group delay for illustrative purposes):
Vin_gd_vary.png

Vsum_gd_vary.png

GD_vary.png
 

René - Acculution.com

Active Member
Technical Expert
Joined
May 1, 2021
Messages
218
Likes
728
Did that; it took waaay too long and mainly proved (a) I am not a Python expert and (b) the group delay was not doing what I thought it was. Good catch, thanks!

What I had in my code was a slightly more subtle bug (to me): I introduced a discontinuity in the group delay, with constant but different delays before and after the discontinuity. So it shows what I wanted to show, more or less, but with a step change in group delay rather than a smooth curve. It does not invalidate the previous posts (whew!) but shows the effect of a step function in group delay instead of it just being non-constant. Some speakers actually do that, or come pretty close to it, at least in my small set of measurements.

View attachment 245569
Thanks, Don. I will add to this thread, perhaps over the weekend, to show phase delay vs. group delay for a similar case, just to clear up any confusion.
 