Does Phase Distortion/Shift Matter in Audio? (no*)

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,961
Likes
2,625
Location
Massachusetts
I don't understand this video at all. I hear phase shifts all the time. It's a problem we are constantly dealing with when mixing music, because we are almost always introducing some amount of phase shift whenever we do almost anything to a sound (EQing, etc.). When you are doing a null-test you are literally hearing the results of a 180º phase shift. We also introduce phase shift on purpose to an extent when we are manipulating the stereo field.

I must not understand what you mean, because hearing phase shift is as common as a cup of tea in the world of sound engineering.

Absolute phase I understand, and room interaction may obscure it, as may phase changes from amplification.
Perhaps it is like distortion: it will be inaudible, with caveats for characterization.
If that is apt, there should be a point at which phase shift with frequency has audible effects.

- Rich
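
An aside on the null test mentioned in the quoted post: it amounts to inverting the polarity of one copy of a signal and summing. A minimal numpy sketch, with an illustrative 440 Hz tone:

```python
import numpy as np

fs = 48000                                # sample rate, Hz
t = np.arange(fs) / fs                    # one second
signal = np.sin(2 * np.pi * 440 * t)      # 440 Hz test tone

# Polarity inversion, often loosely called a "180-degree phase shift":
inverted = -signal

# Summing the original with the inverted copy nulls to silence;
# in practice, the residual of two mix versions reveals where they differ.
residual = signal + inverted
print(np.max(np.abs(residual)))           # 0.0 -> perfect null
```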
 

Longshan

Active Member
Joined
Feb 3, 2021
Messages
230
Likes
259
Absolute phase I understand, and room interaction may obscure it, as may phase changes from amplification.
Perhaps it is like distortion: it will be inaudible, with caveats for characterization.
If that is apt, there should be a point at which phase shift with frequency has audible effects.

- Rich

Yeah, I completely misunderstood the point of the video when I posted. I should have finished the video before posting, lol. Amir is discussing the absolute phase orientation of a sound, not phase variances.
 

Ingenieur

Addicted to Fun and Learning
Joined
Apr 23, 2021
Messages
938
Likes
747
Location
PA
Bass guitar note attached, straight into ADI-2 Pro recorded by yours truly a moment ago.
Attack phase:
View attachment 134503
Applying a 90° phase shift between the fundamental and the 2nd harmonic will make it an asymmetrical waveform.

After 500ms (still settling):
View attachment 134504
(similar)
The spectrum here:
View attachment 134508
Fundamental and 2nd at almost same level, 3rd++ further down.

After 2s (steady state region):
View attachment 134505
Phase relationships have settled by now and a nicely asymmetrical waveform is formed. This will sound different with flipped polarity (or a 180° phase shift between the components); with a 90° offset in either direction it will look like the previous section and thus is immune to inversion but not to 90° phase shifts.

And the spectrum at this point:
View attachment 134506
2nd has lost some level but 3rd++ are way down now.

That is an invalid model.

You used a filter to add phase to the harmonic signal before combining.

That is not how it works. When a sound made up of multiple frequencies is recorded, there is no phase in the resultant instantaneous voltage, only magnitude. In fact, there is no frequency; you need a time interval for that.

The signal would not be decomposed into Fourier components, each affected differently. The composite waveform would shift according to the filter characteristic, basically the phase angle of the complex load.

In the real world the signal's phase relationships would stay constant, not be applied per frequency.
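
For readers who want to check the claim either way, a minimal synthetic sketch (a 55 Hz fundamental is assumed purely for illustration): shifting the 2nd harmonic by 90° changes the waveform's peak symmetry while leaving the magnitude spectrum untouched.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                    # 1 s: an integer number of cycles for both tones
f0 = 55.0                                 # assumed fundamental, roughly a bass low A

# Fundamental plus 2nd harmonic at equal level:
a = np.sin(2 * np.pi * f0 * t) + np.sin(2 * np.pi * 2 * f0 * t)
# The same two components, with the 2nd harmonic shifted by 90 degrees:
b = np.sin(2 * np.pi * f0 * t) + np.sin(2 * np.pi * 2 * f0 * t + np.pi / 2)

# Identical magnitude spectra...
print(np.allclose(np.abs(np.fft.rfft(a)), np.abs(np.fft.rfft(b)), atol=1e-6))
# ...but different waveform symmetry (positive vs. negative peaks):
print(a.max(), a.min())                   # ~ +1.76 / -1.76 : symmetric
print(b.max(), b.min())                   # ~ +1.13 / -2.00 : asymmetric
```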
 

Thomas_A

Major Contributor
Forum Donor
Joined
Jun 20, 2019
Messages
3,469
Likes
2,466
Location
Sweden
As Amir explained in his video, the effects of phase shift are audible under certain circumstances, e.g. with test signals. I provided a test signal as well as a music example from a Diana Krall record where an all-pass filter at 150 Hz causes audible effects. These phenomena are already known, and notably far from the claims made in the original video from Paul.

Using headphones, the shift in timbre is quite evident:

https://www.dropbox.com/s/qaeme17ovkved9a/all pass.wav?dl=0
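
For anyone who wants to try this on their own material, here is a minimal scipy sketch. The exact filter Thomas_A used is not specified, so a second-order RBJ-cookbook all-pass centred at 150 Hz is assumed; the file names are placeholders.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def allpass_biquad(f0, fs, Q=0.7):
    """Second-order all-pass (RBJ cookbook): flat magnitude response,
    with the phase rotating through 360 degrees around f0."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * Q)
    b = np.array([1 - alpha, -2 * np.cos(w0), 1 + alpha])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b / a[0], a / a[0]

fs, x = wavfile.read("music.wav")                  # placeholder input file
b, a = allpass_biquad(150.0, fs)
y = lfilter(b, a, x.astype(np.float64), axis=0)    # magnitude untouched, phase rotated
wavfile.write("music_allpass_150Hz.wav", fs, y.astype(np.int16))
```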
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,784
Likes
6,227
Location
Berlin, Germany
That is an invalid model.

You used a filter to add phase to the harmonic signal before combining.

[blah blah]
Did you actually read and understand what I posted there?
No model involved, no filter involved. It is the plain output of the bass guitar; I actually provided the sound snippet.
The blah blah section of your post suggests you are just a troll coughing up some buzzwords...
 

Newman

Major Contributor
Joined
Jan 6, 2017
Messages
3,530
Likes
4,367
Does Phase Distortion/Shift Matter in Audio? (YES)


So what phase distortions are clearly audible in a horrible untreated room with lots of reflections when the music starts playing?

Wiring one stereo speaker out of phase !

Listening to mono information on two or more speakers !

Wiring up a crossover section out of phase / mismatched crossover sections / mismatched drivers !

Listening to horribly recorded music with phase problems that cannot be corrected at your end !
----------------------------------------------------------------------------------------------------------

And what phase distortions can be heard in a normal room when running test tones?

Moving your head left to right relative to the speakers when playing tones at or above 1 kHz !

Left to Right to Left Combing effect when listening to monophonic sine sweeps through two or more speakers !

I’m assuming that the issue under discussion is the change in phase angle itself, independent of any consequent change in FR. Otherwise you can say “I’m hearing the phase change” but you are actually hearing the changed FR.

One has to control for the variables, which your examples do NOT do. In every instance you provide, the FR is changed, and THAT is what is audible.

The usual way to test for phase audibility is to apply an all-pass filter and do a DBT of before/after. Then one won’t make the mistake of hearing a changed FR and mistakenly claiming to hear phase changes.

cheers
 

PeteL

Major Contributor
Joined
Jun 1, 2020
Messages
3,303
Likes
3,846
I’m assuming that the issue under discussion is the change in phase angle itself, independent of any consequent change in FR. Otherwise you can say “I’m hearing the phase change” but you are actually hearing the changed FR.

One has to control for the variables, which your examples do NOT do. In every instance you provide, the FR is changed, and THAT is what is audible.

The usual way to test for phase audibility is to apply an all-pass filter and do a DBT of before/after. Then one won’t make the mistake of hearing a changed FR and mistakenly claiming to hear phase changes.

cheers
One did do this test, a few posts up, with files to compare: an all-pass filter at 150 Hz, very audible.
 

Thomas_A

Major Contributor
Forum Donor
Joined
Jun 20, 2019
Messages
3,469
Likes
2,466
Location
Sweden
This is the MAIN problem for High Fidelity.
We never know what it has high fidelity to... We do not know the original recording.
So you can only use measurements.

Well, we know the original recording. It's on the record. What we don't know is what the recording engineer heard.
 

Newman

Major Contributor
Joined
Jan 6, 2017
Messages
3,530
Likes
4,367
One did do this test, a few posts up, with files to compare: an all-pass filter at 150 Hz, very audible.
In your opinion, is McGowan right?
 

PeteL

Major Contributor
Joined
Jun 1, 2020
Messages
3,303
Likes
3,846
"Phase Accuracy" - hahahaha- think of what using a multi-microphone setup to record a symphony does to the phase of the sounds in the original room......

.
People who record symphonies do pay attention to phase in their microphone placement, and they do know the potential issues and how to minimize them.
 

Geert

Major Contributor
Joined
Mar 20, 2020
Messages
1,955
Likes
3,570
People who record symphonies do pay attention to phase in their microphone placement, and they do know the potential issues and how to minimize them.
They pay attention to how phase shifts affect how multiple microphones sum, which is a different scenario than an absolute phase shift on a finished mix.
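
To illustrate, a minimal sketch of how a path-length difference between two microphones turns into comb filtering when their signals are summed; the 0.34 m figure is an arbitrary example.

```python
import numpy as np

c = 343.0                       # speed of sound, m/s
extra_path = 0.34               # assumed extra distance to the second mic, m
delay = extra_path / c          # ~1 ms of arrival-time difference

# Two equal signals summed, one delayed: |1 + e^(-j*2*pi*f*delay)| is a comb.
f = np.linspace(20, 20000, 2000)
mag_db = 20 * np.log10(np.abs(1 + np.exp(-2j * np.pi * f * delay)) + 1e-12)

# Deep notches appear at odd multiples of 1 / (2 * delay):
print("first null near", 1 / (2 * delay), "Hz")   # ~504 Hz for these numbers
print("deepest notch:", mag_db.min(), "dB")
```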
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,784
Likes
6,227
Location
Berlin, Germany
I’m assuming that the issue under discussion is the change in phase angle itself, independent of any consequent change in FR. Otherwise you can say “I’m hearing the phase change” but you are actually hearing the changed FR.
Exactly.
Changing only one variable at a time is paramount.

In our case we have two options. When testing effects of excess phase (introduced by most speaker crossovers) we can remove the excess phase by "phase-unwrapping" the input signal so that the total system now is minimum phase, the natural condition. Nothing in the speaker itself changes.

But sometimes we want to assess minimum phase effects; the typical example would be the final bass roll-off of speakers. Here we can also apply phase unwrapping so that this roll-off becomes linear phase, which is not natural and has side effects (see below). When you have a small ported monitor with a "subsonic" filter, the 50-ish Hz roll-off is typically 6th or 8th order, and that introduces a lot of phase shift above the roll-off. With the linear-phase correction this type of speaker definitely sounds "faster" in the bass, the "body" of kick drums or picked upright bass not lagging.

There is no free lunch: the correction to linear phase stretches low bass transients in time differently than minimum phase does (but the "center of gravity" has moved to the correct position). This can be readily audible; it basically produces a sort of pre-ringing.

Example, using a short burst of a 30 Hz square wave in a 6th-order system; the top trace is the natural minimum phase, the bottom trace is the linear-phase version:
1623224728284.png

The input signal starts at the cursor position. The linear-phase version has a preceding section (which sounds more like noise rising in level), but it can be readily spotted that the center of gravity isn't shifted compared to the minimum-phase case.
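
A minimal scipy sketch of how one might reproduce this comparison; the 6th-order 50 Hz Butterworth high-pass and the signal details are assumptions standing in for the roll-off behaviour described above.

```python
import numpy as np
from scipy.signal import butter, lfilter, freqz, fftconvolve

fs = 48000
N = 1 << 15                               # FIR length for the linear-phase version

# 6th-order Butterworth high-pass at 50 Hz: the natural, minimum-phase roll-off.
b, a = butter(6, 50, btype="highpass", fs=fs)

# Linear-phase counterpart with the *same magnitude response*:
# sample |H(f)| on the rfft grid and inverse-transform with zero phase.
freqs = np.fft.rfftfreq(N, 1 / fs)
_, H = freqz(b, a, worN=freqs, fs=fs)
h_lin = np.roll(np.fft.irfft(np.abs(H), n=N), N // 2)   # centre the symmetric impulse

# A short burst of a 30 Hz square wave after half a second of silence:
t = np.arange(fs) / fs
x = np.zeros(2 * fs)
x[fs // 2 : fs // 2 + fs] = np.sign(np.sin(2 * np.pi * 30 * t))

y_min = lfilter(b, a, x)                                  # rings only *after* the onset
y_lin = fftconvolve(x, h_lin)[N // 2 : N // 2 + x.size]  # compensate the N/2 delay
# y_lin contains energy *before* the burst starts: the pre-ringing described above.
```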
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,784
Likes
6,227
Location
Berlin, Germany
People who record symphonies do pay attention to phase in their microphone placement, and they do know the potential issues and how to minimize them.
Well, actually they use delay on the spot mics today so that their signals don't start before the main mic. The main mic creates the stereo illusion and ambience, the spot mics create the tonal balance and/or are used for highlighting certain instruments in certain passages.
 

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
709
Likes
813
Much like @KSTR has said already. However, I would like to add one major difference when it comes to DSP/DRC and the use of FIR filtering, especially at low frequencies in rooms. Unlike "most" DSP/DRC software for rooms, Acourate, Audiolense and Focus Fidelity (plus Denis' open source DRC) provide excess phase correction at low frequencies. I contend this is audible, in the sense of the bass response otherwise not sounding clear, aside from regular room modes.

While room modes are mostly minimum phase, there are low-frequency room reflections that are not minimum phase. A good explanation is John Mulcahy's doc on minimum phase and its section on "a common cause of non-minimum phase behavior in rooms." Here is a practical example of that in my room, using a 3-way stereo tri-amped system and looking at the step response:

View attachment 134395

It is a little hard to read as we are seeing the responses of the individual drivers. First the treble arrives, then the bass/midrange driver, and finally the subs, with inverted phase to boot. See the part marked maximum phase? This is where low frequencies from the subs have added together in the room to produce a huge peak that is higher in amplitude than the direct sound. How do we hear that? It is heard as an unclear bass response: we get the cue from the direct sound and then a larger cue many milliseconds after the direct sound has arrived. Aside from large specialized bass traps (i.e. Helmholtz resonators), the "only" way to fix this is through excess phase correction, which is "only" available through FIR filtering. IIR filters do not have the capability of excess phase correction.

The large number of FIR taps required for excess phase correction at low frequencies (65,536 or even 131,072 taps) results in latency, typically around 3/4 of a second. This is one of the reasons why you see other DSP/DRC software use IIR filters at low frequencies: they don't have to deal with the latency issue, especially if the application of DRC is for movies, where lip sync is the issue. However, players like JRiver can account for that and delay the video by the calculated FIR delay. The other reason you don't see FIR filtering at low frequencies in hardware devices is that the DSP chips aboard have a real limitation in the number of FIR filter taps available: typically 1,024 or 2,048 taps per channel, which from a FIR filtering perspective translates to about 2 bands of EQ below 100 Hz. Not very effective, especially where you need it the most.
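
The latency and resolution figures quoted above follow directly from the FIR length; a quick back-of-the-envelope check (sample rates are assumed):

```python
fs = 48000                                  # assumed sample rate

def fir_latency_s(taps, fs):
    # A linear-phase FIR delays everything by half its length.
    return (taps - 1) / 2 / fs

print(fir_latency_s(65536, 44100))          # ~0.74 s: the "3/4 of a second" above

def bins_below(freq_hz, taps, fs):
    # The frequency resolution of an N-tap FIR is roughly fs / N.
    return int(freq_hz / (fs / taps))

print(bins_below(100, 2048, fs))            # ~4 bins: barely 2 usable EQ bands below 100 Hz
print(bins_below(100, 65536, fs))           # ~136 bins: fine-grained bass control
```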

Aside from the time alignment of the drivers from my example above, and using linear-phase digital XOs so they sum properly in both the frequency and time domains, we see a cleaned-up (textbook) step response. No more maximum phase peak:

View attachment 134400

The result is not only smooth bass, but bass that is crystal clear. That is the big difference between IIR DSP/DRC and the SOTA software I mentioned above. Not only is the proof in the listening, but also consider this measurement at the listening position of the same system:

View attachment 134401

This is using REW with the default 500 ms window, letting all of the reflections in, and no smoothing. I am using a FIR filter with 800 ms of excess phase correction at 10 Hz, which becomes progressively less as we move up in frequency, past Schroeder and into the diffusion zone, with no correction of room reflections past that. See how the bass phase response is flat up to about 4x my Schroeder frequency? That is because there are no low-frequency reflections messing up the clarity of the bass at the listening position, on top of the smooth magnitude response.

Based on my testing and listening of DSP/DRC products over the past 10 years, there are very few software products that actually do this right and can produce the result you see above. While FIR filtering may be too processor-intensive for AVRs and PrePros (aside from the DSP-chip limitation on how large a FIR filter can be hosted), a low-power PC with something like a 2 GHz i3 processor works just fine.

Pro tip: if you want the very best bass response from your system, use DSP/DRC software that is capable of providing excess phase correction at low frequencies.

While such LF correction is only feasible for non-video content due to the excessive latency, my question is how you prevent audible pre-ringing and how you account for response variations within the listening area?
 

Geert

Major Contributor
Joined
Mar 20, 2020
Messages
1,955
Likes
3,570
While such LF correction is only feasible for non-video content due to the excessive latency, my question is how you prevent audible pre-ringing and how you account for response variations within the listening area?
Don't go overboard with the correction and the pre-ringing won't be audible.

Referring to work on ‘temporal masking’ done at Stanford University: “All pre-ringing artefacts that fall into the interval of 20 ms prior to the onset of the masker (and most effectively from 5 ms) will not be audible due to the pre-masking effect”.

Stanford.png

And the work done by Elliott LL.:

Elliott LL.png
 

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
709
Likes
813
Don't go overboard with the correction and the pre-ringing won't be audible.

Referring to work on ‘temporal masking’ done at Stanford University: “All pre-ringing artefacts that fall into the interval of 20 ms prior to the onset of the masker (and most effectively from 5 ms) will not be audible due to the pre-masking effect”.

View attachment 134658

And the work done by Elliott LL.:

View attachment 134659

Exactly the problem. How do you effectively control for backward masking in your equalization procedure? If you dig deeper into the psychoacoustic literature you'll find that there's no single number like "20 ms".
 

Geert

Major Contributor
Joined
Mar 20, 2020
Messages
1,955
Likes
3,570
Exactly the problem. How do you effectively control for backward masking in your equalization procedure? If you dig deeper into the psychoacoustic literature you'll find that there's no single number like "20 ms".
You control the amount of pre-ringing by limiting the level of correction and the steepness of the filters (I don't know all the details by heart; that info is generally available).

There's indeed no single number; that's why I posted the graphs. The main message is that there's a hearing threshold that applies, as usual.
 

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
709
Likes
813
You control the amount of pre-ringing by limiting the level of correction and the steepness of the filters (I don't know all the details by heart; that info is generally available).

There's indeed no single number; that's why I posted the graphs. The main message is that there's a hearing threshold that applies, as usual.

The problem is that this is unfortunately not as straightforward as you suggest. I have yet to find those "generally available" details ;)
 