
Does Phase Distortion/Shift Matter in Audio? (no*)

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,725
Likes
38,922
Location
Gold Coast, Queensland, Australia
Awesome video. I bet Paul's response would be completely out of phase with this video.

Resulting in full cancellation of Amir and Paul? Cancel culture at its worst, or best, depending on your view. ;)

Imagine if Paul and Amir said the same thing at the same time, but coming from different positions resulting in a massive superposition. It may break the internet.
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,781
Likes
6,223
Location
Berlin, Germany
What is the definition of the phase shift being discussed?
A room reflection arriving delayed and attenuated?

An electrical shift like the one a filter (or any reactive load) induces, i.e., the difference between V and I? For R it is 0°; for C or L it is +90° or -90°, using I as the reference.

Or the phasing of, say, a sub and mains?

1) Natural phase shift (minimum-phase response): strictly following from the frequency magnitude response, the so-called minimum-phase response. Applies to electronics and amps in general, and to single speaker drivers.

2) Excess phase shift (allpass response): The typical case is the multiway speaker crossover. The individual drivers are minimum phase (with or without corrective EQ applied) -- assuming that their centers of emission are aligned -- but the sum, the total speaker, is non-minimum phase. It has more phase shift than a single driver with the same frequency response, which is why it is called excess phase. When the centers of emission are not aligned, an additional delay comes into play that also adds a specific frequency-dependent phase shift (a constant slope of phase in a frequency response plot with a linear frequency axis is, by definition, a group delay for that frequency range; if the constant slope spans the whole available bandwidth, it is a pure time delay).
This is also called an allpass response because an allpass does not alter the magnitude frequency response, which stays at unity (all frequencies are passed unaffected). Only the phase is altered.
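The "flat magnitude, phase only" property of an allpass is easy to verify numerically. A minimal sketch of my own (not from the thread), using a first-order digital allpass section:

```python
import cmath, math

def allpass_response(a, w):
    """First-order digital allpass H(z) = (a + z^-1) / (1 + a*z^-1),
    evaluated on the unit circle z = e^{jw}, w in radians/sample."""
    z_inv = cmath.exp(-1j * w)
    return (a + z_inv) / (1 + a * z_inv)

a = 0.5
for w in (0.1, 0.5, 1.0, 2.0, 3.0):
    h = allpass_response(a, w)
    # magnitude stays at exactly 1.0; only the phase varies with frequency
    print(f"w={w:.1f}  |H|={abs(h):.6f}  phase={math.degrees(cmath.phase(h)):+7.2f} deg")
```

The coefficient `a` and the frequencies are arbitrary illustration values; any `|a| < 1` gives a stable allpass with unity magnitude at every frequency.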

3) Reflections and modes: This is something different, though related. Assume a simple floor reflection: a copy of the direct sound arrives a few ms later. This gives the typical comb-filter pattern (cancellation notches). When the source is minimum phase, the overlaid response is also minimum phase (except for one special case: when the reflection is a 100% exact copy in shape and level). This is not intuitive, but it is correct. Assume you have the comb-filter pattern with non-infinite nulls; you can set up an analog notch-filter bank with the same response and, guess what, now the impulse response is the matching doublet, just as we would measure it. And if you apply the inverse EQ (a bunch of peaking filters), the single original impulse shows up again.

The general rule: a transfer function is minimum phase when it can be inverted so that the product is flat FR and zero phase. You will see a lot of different definitions of what is minimum phase and what is not, notably that reflections and modes are not minimum phase... they are (except for the single uninvertible case of a perfect reflection).
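The invertibility rule can be illustrated with a one-reflection model (my own sketch, not from the post): as long as the reflection is weaker than the direct sound (g < 1), H(w) = 1 + g·e^(-jwτ) never reaches zero, so its inverse exists at every frequency.

```python
import cmath, math

def reflection_tf(g, tau, w):
    """Direct sound plus an attenuated copy delayed by tau seconds:
    H(w) = 1 + g * exp(-j*w*tau), with w the angular frequency in rad/s."""
    return 1 + g * cmath.exp(-1j * w * tau)

g, tau = 0.7, 0.001  # 70% reflection arriving 1 ms late -> notches every 1 kHz
for f_hz in (250, 500, 750, 1000):
    w = 2 * math.pi * f_hz
    print(f"{f_hz:4d} Hz  |H| = {abs(reflection_tf(g, tau, w)):.3f}")

# The comb notches are finite (|H| >= 1 - g > 0), so 1/H exists everywhere
# and the product H * (1/H) is flat and zero phase, i.e. minimum phase:
w_notch = 2 * math.pi * 500  # half-cycle delay -> deepest notch
assert abs(reflection_tf(g, tau, w_notch) * (1 / reflection_tf(g, tau, w_notch)) - 1) < 1e-12
```

The g = 0.7 and 1 ms values are arbitrary; only g = 1 (the perfect copy) makes a true null and breaks invertibility.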
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,674
Likes
241,077
Location
Seattle Area
This is a reference Sean Olive pointed me to when I asked him about audibility of group delays:
https://asa.scitation.org/doi/abs/10.1121/1.381841

From Blauert's study on group delay: "...Psychoacoustical tests show further that the measured distortions can approach the magnitude of the threshold of perceptibility, but in most cases will be well below this value..."
Ah, I had forgotten about that paper. I should have included it in my video! Here is the conclusion of it:


[Attached image: conclusion of the paper]


And from earlier in the paper:

[Attached image: excerpt from earlier in the paper]


This is exactly what Dr. Toole had summarized: that these effects are audible only with specialized signals, and mostly under anechoic conditions. It simply is not a problem for audiophiles.
 

KSTR
I may be misunderstanding, but in what sense is a single driver (headphones or not) linear phase?
It is linear phase over most of the passband. Only at the frequency extremes, where the response rolls off, do we see the corresponding minimum-phase shift creeping in.
 

KSTR
that these effects are audible only with specialized signals, and mostly under anechoic conditions.
No. Just listen to the signals I showed in post #39. This is audible to anyone in almost any listening environment and it is a very non-special signal (bass guitar, organ).
 

Francis Vaughan

Addicted to Fun and Learning
Forum Donor
Joined
Dec 6, 2018
Messages
933
Likes
4,697
Location
Adelaide Australia
Arrgh. Lordy, that video from PS Audio was just appalling. It was an exercise in counting how many things were wrong, misunderstood, or just plain false. Those are three minutes of my life someone owes me.

My usual complaint about discussions of phase is that almost invariably the conversation starts to confuse time and phase. Paul does this constantly, and even Amir is guilty of one tiny slip in his video. The two are linked, but they cannot be used interchangeably.

At any frequency ω and time t, the signal = A·sin(ωt + φ), where φ is the phase and A is the amplitude. This is the definition of phase.
Phase is not a delay. A delay creates a phase difference, but the phase change depends upon the frequency.
Phase is measured as an angle, not as a time. If anyone is discussing phase and they mention a time delay without a specific frequency, there is a problem.

Amplifiers with negative feedback must manage phase as part of their design. Here we do work with time and phase, because we are looking for the frequency where the inherent delay (due to things like slew rate limiting) results in the phase of the output swinging by 180 degrees. The amplifier's gain must be less than one at this frequency, otherwise it will oscillate. This is simply a way of stating the Nyquist stability criterion. This is what (nearly) every audio amplifier is bound by, and what determines its bandwidth.

A lot is made of the ear/brain's ability to locate with time information. This is really a very remarkable thing, as it requires the brain, and not the ear, to manage the offset. The signal from each ear to the brain must allow this time offset to be detected when the brain integrates the sound. The ear can convey useful timing information down to about one millisecond of resolution, which corresponds to 1 kHz. But it isn't steady-state phase information within the signal. The idea that phase information in any part of the signal above 1 kHz adds to time-based localisation is just plain wrong. This limit is not far off the offset of the ears in space when the speed of sound is taken into account, which is hardly a surprise. As Amir notes, this is a relative offset between channels, so any offset in the reproduction chain is cancelled out anyway.
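The "offset of the ears in space" remark is a quick back-of-envelope calculation. A sketch with an assumed straight-line ear spacing (the 0.21 m figure is my assumption, not from the post):

```python
# Rough maximum interaural time difference (ITD) for sound arriving from the
# side, ignoring head diffraction, which lengthens the real acoustic path.
speed_of_sound = 343.0   # m/s, air at ~20 degrees C
ear_spacing = 0.21       # m, assumed straight-line distance between the ears

max_itd_ms = ear_spacing / speed_of_sound * 1000
print(f"max ITD ~ {max_itd_ms:.2f} ms")  # on the order of the ~1 ms figure above
```

The result lands just over half a millisecond, consistent with the roughly one-millisecond resolution discussed above.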

Inevitably someone will talk about speaker phase, and then absolute phase. It would be so much more helpful if such discussions used the term "polarity". Yes, a polarity inversion is a 180° phase shift. One can usefully regard this as a neat coincidence rather than anything profound.

sin(ωt + 180°) = -sin(ωt). That is all.

But it is really unhelpful to confuse this into discussions of phase.

Absolute phase gets many people riled up. There is almost no evidence it matters, unless something is being driven into some interesting non-linearity. At high enough levels even your ears become non-linear enough that absolute phase can change the way they distort, so you can hear a difference. But unless you do something silly, like connecting one channel with reversed polarity, it doesn't matter.
 

KSTR
At any frequency ω and time t, the signal = A·sin(ωt + φ), where φ is the phase and A is the amplitude. This is the definition of phase.
Not quite, in our context. We are talking about the frequency response of the phase of a transfer function, not sine oscillators.

A typical allpass response of a crossover:
[Attached image: measured allpass response of a crossover]

Magnitude is flat, as it should be, but phase goes from 0° to -360°. The second harmonic of a 422 Hz fundamental gets shifted by 90° by this allpass, and that can readily be audible, as it turns the first waveform in my post #39 into the second.
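The effect being described, same magnitude spectrum but a different waveform once the 2nd harmonic is rotated 90°, can be sketched with generic amplitudes (the 0.5 harmonic ratio is my choice, not taken from post #39):

```python
import math

def waveform(phase2_deg, n=10000):
    """One cycle of a fundamental plus a half-amplitude 2nd harmonic,
    with the 2nd harmonic shifted by phase2_deg degrees."""
    p2 = math.radians(phase2_deg)
    return [math.sin(x) + 0.5 * math.sin(2 * x + p2)
            for x in (2 * math.pi * i / n for i in range(n))]

original = waveform(0)
shifted = waveform(90)   # identical magnitude spectrum, different shape

# The peak value changes noticeably between the two versions...
print(f"peak, 0 deg:  {max(original):.3f}")
print(f"peak, 90 deg: {max(shifted):.3f}")

# ...while the power (mean square) is identical, since phase does not
# change the magnitude spectrum:
ms = lambda s: sum(v * v for v in s) / len(s)
print(f"mean-square: {ms(original):.4f} vs {ms(shifted):.4f}")
```

The 0° version peaks well above the 90° version even though both contain exactly the same two frequency components at the same levels.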
 

KSTR
Absolute phase gets many people riled up. There is almost no evidence it matters,
Once again, please listen to the simple real-world-like test signal.
In many cases a polarity flip does not sound that different when the speaker has significant phase shift of its own.
In my example, when the source had the 2nd harmonic on the top of the waveform (where a polarity flip is audible), the 90° shift turns it into the second waveform, which is immune to polarity flips. And vice versa: a polarity-immune signal can become polarity-sensitive through the crossover's phase response.
On top of that, another (subtle) effect of polarity flip and phase distortion is that soundstage changes, phantom images change apparent size, reverb tails change in 3D-spaciousness, things like that.
 

Francis Vaughan
Not quite, in our context. We are talking about the frequency response of the phase of a transfer function, not sine oscillators.
Yeah, true. But the discussion veered quickly away from that. It underlines the problem of how messy the conversation can get.
 

KSTR
^ Full ack.
Another thing is group delay; what it really means is not straightforward.
I usually explain its effects as follows:
Assume a so-called frequency "blip": something like a shaped sine burst of ~5 cycles with a soft amplitude envelope (like a raised-cosine window). This represents a frequency group, and it also has a clear time-domain character, a very finite and rather music-like signal (claves, anyone?).

A group delay / phase response now does two things:

- It shifts the "center of gravity" of the blip. You have to look at the overlaid amplitude envelopes of a number of such bursts with different start phases of the oscillator (the "other" phase ;-)). This is the time aspect: blips at various frequencies don't align their centers of gravity, some come late, and that alone can be audible, notably in the bass: the proverbial time smear. Another example is a crossover where the phases of the ways are apparently aligned (equal) but the designer missed that there was a neglected phase wrap or a polarity inversion. A sine at the XO frequency has the same phase on both ways, but may be emitted with any multiple of a half-cycle delay, which only shows up with finite signals like those blips. With steep slopes (4th order or higher), the associated systematic ripple in the frequency response is not easy to spot in a generally wiggly response.

- It changes the phases of the harmonic sine components in a signal, as explained, and that gives rise to a different and additional type of perception change, the timbral aspect, even with an infinite signal carrying no time information.
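The "center of gravity" point can be made concrete: build a raised-cosine sine burst, then give the same burst a constant group delay; the energy centroid moves by exactly that delay. All parameters here are my own illustration values:

```python
import math

def blip(f, n_cycles=5, fs=48000, delay=0.0):
    """Raised-cosine-windowed sine burst (a frequency 'blip') at f Hz,
    optionally shifted in time by 'delay' seconds (zeros prepended)."""
    n = int(n_cycles / f * fs)
    burst = [0.5 * (1 - math.cos(2 * math.pi * i / n))
             * math.sin(2 * math.pi * f * i / fs)
             for i in range(n)]
    return [0.0] * int(delay * fs) + burst

def energy_centroid_ms(sig, fs=48000):
    """'Center of gravity' of the squared signal, in milliseconds."""
    e = [v * v for v in sig]
    return sum(i * v for i, v in enumerate(e)) / sum(e) / fs * 1000

a = blip(500)                 # 5-cycle blip at 500 Hz
b = blip(500, delay=0.004)    # same blip with 4 ms of constant group delay
print(f"centroid a: {energy_centroid_ms(a):.2f} ms")
print(f"centroid b: {energy_centroid_ms(b):.2f} ms")  # ~4 ms later
```

With a frequency-dependent group delay, blips at different frequencies would shift their centroids by different amounts, which is the time-smear aspect described above.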
 

KSTR
C'mon, you're an expert; this signal is generated in 10 seconds with Adobe Audition. Use a 100 Hz fundamental and its second harmonic, with the second harmonic about 0.5 Hz too low or too high. This slowly rotates the phase of the 2nd harmonic over a full cycle. If you can hear a periodic timbre change with a 2 s cycle time, you know you are not phase-deaf ;-)
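For anyone without Audition, the same test signal can be generated with a few lines of Python. The amplitudes, duration, and filename are my choices; only the 100 Hz / 200.5 Hz recipe comes from the post:

```python
import math, wave, struct

# 100 Hz fundamental plus a "2nd harmonic" detuned to 200.5 Hz, so their
# relative phase rotates through a full cycle every 2 seconds.
fs, dur = 48000, 6.0
samples = []
for i in range(int(fs * dur)):
    t = i / fs
    v = 0.5 * math.sin(2 * math.pi * 100 * t) + 0.25 * math.sin(2 * math.pi * 200.5 * t)
    samples.append(int(v * 32767))

# Write a mono 16-bit WAV for listening:
with wave.open("phase_rotation_test.wav", "w") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(fs)
    w.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```

Listening for a timbre change that repeats every 2 seconds is the test; the waveform's crest changes as the harmonic's phase sweeps, while the spectrum stays fixed.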
 

Thomas_A

Major Contributor
Forum Donor
Joined
Jun 20, 2019
Messages
3,469
Likes
2,466
Location
Sweden
I can't find anything in post 39 to listen to.

You can also try the one in #58. I guess there are many variants of asymmetric signals that can be used.
 
OP
amirm
C'mon, you're an expert; this signal is generated in 10 seconds with Adobe Audition. Use a 100 Hz fundamental and its second harmonic, with the second harmonic about 0.5 Hz too low or too high. This slowly rotates the phase of the 2nd harmonic over a full cycle. If you can hear a periodic timbre change with a 2 s cycle time, you know you are not phase-deaf ;-)
What? I thought you said it was a guitar string or something. I am not interested in test tones. Those are already in the literature.
 
OP
amirm
You can also try the one in #58. I guess there are many variants of asymmetric signals that can be used.
That's a test tone too. That is not in question. I mentioned that in the video and references are provided. If there is music, let's see that.
 

mohragk

Member
Joined
May 5, 2021
Messages
54
Likes
51
Location
The Netherlands
I have already tried playing with phase. I created filters in Rephase and loaded them into Convolver in Foobar.
To hear a difference on my speakers, I have to rotate the phase between 20 Hz and 20 kHz by several multiples of 360°, perhaps ten. Then there is a weird wobble sound to the drums.
I haven't tried with headphones though.

That does not make sense. Shifting the phase of the entire audible spectrum equally would only cause a delay; you would simply have introduced latency. Unless it was applied to only part of the signal; then it would make sense. The attack of the drum (high-frequency content) could be delayed, making it sound like a double hit.
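The "phase proportional to frequency equals pure latency" point can be checked directly: give every sinusoidal component a phase shift of -2πfτ and you get the original signal delayed by τ. A two-component sketch (frequencies, amplitudes, and τ are arbitrary):

```python
import math

def signal(t):
    """An arbitrary two-component test signal."""
    return math.sin(2 * math.pi * 100 * t) + 0.3 * math.sin(2 * math.pi * 300 * t)

def signal_linear_phase(t, tau):
    """Same signal with each component phase-shifted by -2*pi*f*tau,
    i.e. a phase response that is a straight line through the origin."""
    return (math.sin(2 * math.pi * 100 * t - 2 * math.pi * 100 * tau)
            + 0.3 * math.sin(2 * math.pi * 300 * t - 2 * math.pi * 300 * tau))

tau = 0.0013  # 1.3 ms
for t in (0.001, 0.005, 0.0123):
    # the linear-phase version equals the time-shifted original exactly
    assert abs(signal_linear_phase(t, tau) - signal(t - tau)) < 1e-9
print("linear phase across all frequencies == pure delay (latency)")
```

Only when the phase-vs-frequency curve deviates from a straight line (as in the Rephase experiment described above) do different parts of the signal get delayed by different amounts.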
 

milosz

Addicted to Fun and Learning
Forum Donor
Joined
Mar 27, 2019
Messages
589
Likes
1,659
Location
Chicago
"Phase Accuracy" - hahahaha- think of what using a multi-microphone setup to record a symphony does to the phase of the sounds in the original room......

But if you want a speaker (or headphones) that can reproduce a square wave, then you need reasonable consistency of phase vs. frequency. With a square wave, if the upper harmonics are not in the proper phase relationship to the fundamental, the waveform won't be square.
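This square-wave point is easy to demonstrate: take the Fourier partial sum of a square wave with its textbook phases, then rotate two of the harmonics by 90°. The power is identical, the shape is not. Harmonic count and phase offsets are my choices for illustration:

```python
import math

def square_partial(t, phases):
    """Sum of the first five odd harmonics of a 1 Hz square wave (Fourier
    series), with an extra phase offset (radians) applied per harmonic."""
    return sum(4 / (math.pi * n) * math.sin(2 * math.pi * n * t + p)
               for n, p in zip((1, 3, 5, 7, 9), phases))

grid = [i / 1000 for i in range(1000)]  # one full period
in_phase = [square_partial(t, (0, 0, 0, 0, 0)) for t in grid]
scrambled = [square_partial(t, (0, math.pi / 2, 0, math.pi / 2, 0)) for t in grid]

# Same magnitude spectrum, hence same power -- but only the in-phase
# version approximates a square wave:
ms = lambda s: sum(v * v for v in s) / len(s)
print(f"power: {ms(in_phase):.4f} vs {ms(scrambled):.4f}")
print(f"value mid-plateau (t=0.25): {in_phase[250]:.3f} vs {scrambled[250]:.3f}")
```

The in-phase sum sits near +1 across the positive half-cycle; the scrambled version, with the same five components at the same levels, does not.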

There actually are speakers that can do a reasonable job with a 500 Hz-1 kHz square wave. My Quad ESLs do a reasonable job if you take some pains to eliminate room reflections. I remember demos of the old Ohm F showing it doing a decent job with a square wave.

So if you like to listen to square waves and look at them on your 'scope, well then.....
 

daftcombo

Major Contributor
Forum Donor
Joined
Feb 5, 2019
Messages
3,688
Likes
4,070
That does not make sense. Shifting the phase of the entire audible spectrum equally would only cause a delay; you would simply have introduced latency. Unless it was applied to only part of the signal; then it would make sense. The attack of the drum (high-frequency content) could be delayed, making it sound like a double hit.
I said that I rotated the phase between 20 Hz and 20 kHz. So 20 Hz was at 0° and 20 kHz at +3600° or so.
 

mohragk

Member
Joined
May 5, 2021
Messages
54
Likes
51
Location
The Netherlands
I said that I rotated the phase between 20 Hz and 20 kHz. So 20 Hz was at 0° and 20 kHz at +3600° or so.

Oh, well, that's something else entirely and would cause the delay effect I described, but in reverse: the body of the drum would come first, and the attack would follow.
 
