
Does Phase Distortion/Shift Matter in Audio? (no*)

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
3,113
Likes
2,330
Location
Canada
Only when I run REW sweeps to check out harmonic or modulation distortion. A 130dB log sweep can sound like some kind of dang warning alarm or something... lol

But transfer functions, which comprise at least 95% of measurement time, are no problem at all.
Smaart allows the use of music as the stimulus signal, so folks often don't even know I'm testing.
And even when I use straight pink noise, nobody pays it any attention. Its sound, akin to rushing water, is very gentle compared to most lake noise (boats, PWCs, etc.).
(Heck, the worst noise there, imo, is dang leaf blowers and such.)

So other than occasional air-raid swoops from REW, everybody just shakes their heads and says crazy ole Mark is at his speaker building again. :D

The dreaded leaf blower! I don't know if it's the same "leaf blower" machine they use in our neighbourhood for clearing out snow as well... but those sure are noisy as heck! Unfortunately, in the house I'm living in I can hear it all the way down in the basement, where my main listening room is situated.
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,282
Likes
4,790
Location
My kitchen or my listening room.
So other than occasional air-raid swoops from REW, everybody just shakes their heads and says crazy ole Mark is at his speaker building again. :D

I use allpass sequences to get better SNR and a more accurate impulse response, but I set up the DUT on the edge of my deck, which is 12' up. I put the mike 2' in front of it. 20 seconds of "flying saucer" noises and I have a first reflection at about 19 to 21 milliseconds, depending on how high the device is.
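A quick back-of-the-envelope check on those numbers (a minimal Python sketch, assuming the first reflection is the ground bounce and that the DUT and mic sit at roughly the same height; the heights below are just illustrative):

```python
import math

def ground_bounce_delay_ms(height_ft, mic_dist_ft, c_ft_per_s=1125.0):
    """Extra arrival time of the ground reflection relative to the direct path,
    using the mirror-image source model (source and mic at the same height)."""
    direct = mic_dist_ft
    reflected = math.hypot(mic_dist_ft, 2.0 * height_ft)  # path via the image source
    return (reflected - direct) / c_ft_per_s * 1000.0

for h in (12.0, 13.0):  # deck height, plus ~1 ft if the DUT sits a bit higher
    print(f"height {h:4.1f} ft -> first reflection ~{ground_bounce_delay_ms(h, 2.0):.1f} ms")
# ~19.6 ms at 12 ft and ~21.4 ms at 13 ft, consistent with the 19-21 ms quoted above
```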

My neighbors have only noticed my setup once, when I was checking a complaint that a speaker had a "buzz". (It didn't; the guy's CD player did!) So I played it loud for a while. Then my neighbors came out and started to grill in their back yard, so I turned it down.

Only to hear TURN THAT BACK ON! from the neighbor. :D
 

Tim Link

Addicted to Fun and Learning
Forum Donor
Joined
Apr 10, 2020
Messages
773
Likes
660
Location
Eugene, OR
I found a way to test myself on this phase audibility question. I set REW to correct to a target curve that was higher than my speakers' measurement, and then told it that no gain was allowed. That created an impulse (FIR filter) that corrects the phase of my system but doesn't EQ the volume at all.

The result: I can't hear any difference I'm sure of, and I'm definitely not sure which one I like better. The phase correction costs me 55 ms latency. Interesting that the C50 clarity is slightly higher with the phase corrected. I think this is explained by the filter trading echoes for pre-echoes, as can be seen on the spectrogram. Apparently pre-echo is about as audible as phase - not very.
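For anyone curious what such a "phase only, no gain" correction amounts to mathematically, here is a minimal numpy sketch; it is not what REW does internally, and measured_ir simply stands in for an exported impulse response:

```python
import numpy as np

def phase_only_correction(measured_ir, n_fft=65536):
    """FIR whose magnitude is 1 at every frequency, so it alters only phase.

    C(f) = conj(H(f)) / |H(f)| cancels the measured phase; cascaded with the
    system, the result keeps |H| but approaches linear phase.
    """
    H = np.fft.rfft(measured_ir, n_fft)
    eps = 1e-12 * np.max(np.abs(H))                    # guard deep nulls
    C = np.conj(H) / np.maximum(np.abs(H), eps)
    # shift by half the FFT length so the acausal part becomes plain latency
    k = np.arange(len(C))
    C *= np.exp(-2j * np.pi * k * (n_fft // 2) / n_fft)
    return np.fft.irfft(C, n_fft)

# usage sketch: corrected = np.convolve(signal, phase_only_correction(measured_ir))
# latency is roughly n_fft/2 samples, which is where the added delay comes from
```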

[Attachments: response comparison, phase comparison, group delay comparison, clarity comparison, and spectrograms of the phase-corrected and uncorrected signals]
 

Tim Link

Addicted to Fun and Learning
Forum Donor
Joined
Apr 10, 2020
Messages
773
Likes
660
Location
Eugene, OR
After an evening of listening I think the linear phase version is slightly degrading to the sound. Correcting to minimum phase rather than linear phase seems slightly better than uncorrected. When using a minimum phase target REW straightens things out as much as it can without creating pre-echoes. I need to read up on the trace arithmetic functions because I wasn't expecting that.
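For reference, a "minimum phase" target is the phase a system with the measured magnitude would have if it carried no allpass component at all. A minimal numpy sketch of the textbook cepstral construction (REW and similar tools do this with more care):

```python
import numpy as np

def minimum_phase_spectrum(magnitude, n_fft):
    """Minimum-phase spectrum that shares the given magnitude response.

    Standard real-cepstrum (homomorphic) construction: log-magnitude to cepstrum,
    fold the cepstrum onto positive quefrencies, exponentiate.
    """
    cep = np.fft.ifft(np.log(np.maximum(magnitude, 1e-12))).real
    folded = np.zeros(n_fft)
    folded[0] = cep[0]
    folded[1:n_fft // 2] = 2.0 * cep[1:n_fft // 2]
    folded[n_fft // 2] = cep[n_fft // 2]
    return np.exp(np.fft.fft(folded))

# e.g. the minimum-phase twin of a measured impulse response:
# H = np.fft.fft(measured_ir, n_fft)
# ir_min = np.real(np.fft.ifft(minimum_phase_spectrum(np.abs(H), n_fft)))
```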
 

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,716
Likes
6,007
Location
US East
After an evening of listening I think the linear phase version is slightly degrading to the sound. Correcting to minimum phase rather than linear phase seems slightly better than uncorrected. When using a minimum phase target REW straightens things out as much as it can without creating pre-echoes. I need to read up on the trace arithmetic functions because I wasn't expecting that.
Looks like your findings parallel those of this master's degree thesis project.
[Attachment: results figure from the cited thesis (schgor thesis.png)]
 

gnarly

Major Contributor
Joined
Jun 15, 2021
Messages
1,035
Likes
1,471
I've yet to see global phase correction of an already set-up speaker make a splash with anyone. Most folks revert back.
Which makes sense to me, because global phase correction is so dependent on the specific axis it was measured on.

Linear phase needs to be implemented at the anechoic speaker-design level, imso. Where it benefits wide polar responses. Then it rocks.
Otherwise, I think FIR global correction is a plain ole band-aid. Fixes a tiny tiny hurt spot :)
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,781
Likes
6,223
Location
Berlin, Germany
After an evening of listening I think the linear phase version is slightly degrading to the sound. Correcting to minimum phase rather than linear phase seems slightly better than uncorrected. When using a minimum phase target REW straightens things out as much as it can without creating pre-echoes. I need to read up on the trace arithmetic functions because I wasn't expecting that.
You might want to try a pure phase correction of the crossover allpass behavior first, that is, unwrapping only the phase rotation from the crossover with an analytical, artefact-free convolution kernel.

At the moment you have a "dirty" measurement-derived correction and notably you are overcorrecting the low bass to linear-phase, not following the natural phase rotation of the highpass function. That is guaranteed to give pre-ringing/pre-signal in the lowest bass. Also, you've corrected for room-induced phase errors, which tends to go wrong most of the time. There is a reason why professional DRC software like Acourate uses much more refined methods, which took years and years of experience and development before they were mature and consistent with respect to artifacts like smear and pre-ringing.
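For concreteness, this is roughly what such an analytical kernel looks like for one common case, an LR4 crossover. A minimal scipy sketch where the crossover frequency, sample rate and length are placeholders, and which real tools like rePhase implement with far more care:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lr4_allpass_ir(fc, fs, n=8192):
    """Impulse response of the allpass an LR4 crossover at fc imposes on the sum.

    LR4 low/high sections are squared 2nd-order Butterworths; their sum has flat
    magnitude but a rotating (2nd-order allpass) phase.
    """
    imp = np.zeros(n); imp[0] = 1.0
    lo = butter(2, fc, 'low', fs=fs, output='sos')
    hi = butter(2, fc, 'high', fs=fs, output='sos')
    lp = sosfilt(lo, sosfilt(lo, imp))
    hp = sosfilt(hi, sosfilt(hi, imp))
    return lp + hp

def linearizing_kernel(fc, fs, n=8192):
    """Time-reversed allpass: convolved with the crossover it cancels the phase
    rotation, leaving flat magnitude plus ~n samples of pure delay (FIR latency)."""
    return lr4_allpass_ir(fc, fs, n)[::-1]
```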

Further, your original phase measurement seems slightly wrong wrt reference point. The phase at the top end of the range must match the phase of the minimum-phase version of the IR... but yours bends upwards which isn't correct.
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,781
Likes
6,223
Location
Berlin, Germany
Linear phase needs to be implemented at the anechoic speaker-design level, imso.
Yep. For successful DRC it's best to make the speaker behave like a true minimum-phase system first (no allpass "excess" phase), then measure with that pre-correction in place, then correct for the room/setup influence where needed and appropriate.
 

Tim Link

Addicted to Fun and Learning
Forum Donor
Joined
Apr 10, 2020
Messages
773
Likes
660
Location
Eugene, OR
Further, your original phase measurement seems slightly wrong wrt reference point. The phase at the top end of the range must match the phase of the minimum-phase version of the IR... but yours bends upwards which isn't correct.
Thank you for your comments. I think what you're saying is important so I want to make sure I understand. Are you referring to the bending upwards that starts around 85 Hz? There's a cancellation in my horns at that frequency and I don't know how best to deal with it. They have about a 6' path length and a mouth that's too narrow, at least I think that's the explanation. Being in the corners, REW's room sim also suggests a hard dip in the room response at my listening position at that same frequency. Maybe that's a bonus. Only one dip that's shared instead of two at different frequencies. In any case, the phase does that rapid kink at about that frequency.
Yep. For successful DRC it's best to make the speaker behave like a true minimum-phase system first (no allpass "excess" phase), then measure with that pre-correction in place, then correct for the room/setup influence where needed and appropriate.
I've been working on this, but it's proving complex. I can get the microphone closer to the woofers, but being corner horns it's hard to determine where the horn ends and the room starts. I think below 85 Hz it's all united. Above that it's not. If I put the mic inside the horn mouth, the 85 Hz dip turns into an 85 Hz peak with a dip on either side. I think there's a standing wave stuck in there. So, I'm forced to deal with it the best I can. It sounds surprisingly good despite its issues. Ultimately I think I need to relegate these big woofer horns to subwoofer duty and get another module working in the range from about 60 to 300 Hz.

The midrange horn is also complex. If I measure it on axis about 40" from the mouth, it does what I'd expect from the driver. At the listening position it has a big cut at about 600 Hz, so I end up altering the EQ quite a bit before it measures and sounds about right. I think this horn also is too long for its mouth size, but not so severely.

And finally, the tweeters are far away, in the center of the room, in their own three-tweeter array for crosstalk cancellation. It's an odd system. So far it's sounded best when I make no attempts at FIR filtering and keep the IIR EQ as simple as possible. I think there's more potential, but there are a million adjustments that make it sound worse and a small handful that make it sound better. That said, I've heard some intriguing things from my efforts so far: sound that is detailed and easy to listen to, and very tight in the bass. But I'm not doing it right enough yet to make it a definite win. I keep my old simple settings for reference and they keep winning.
 

Tim Link

Addicted to Fun and Learning
Forum Donor
Joined
Apr 10, 2020
Messages
773
Likes
660
Location
Eugene, OR
At the moment you have a "dirty" measurement-derived correction and notably you are overcorrecting the low bass to linear-phase, not following the natural phase rotation of the highpass function.
Yes, I was aware that I was doing it "wrong" and that it would cause issues, which can be seen on the spectrogram. My comprehension is still weak on the details, but I basically get why this is not a good approach. I wanted to hear how bad it was and was surprised that it hardly did anything bad at all. After listening for a while it becomes apparent that it's a little bit degraded, maybe because of those artifacts. I'm honestly not confident to say I'd pass a blind test. Overall I was surprised at how little difference it made.
 

Tim Link

Addicted to Fun and Learning
Forum Donor
Joined
Apr 10, 2020
Messages
773
Likes
660
Location
Eugene, OR
Yep. For successful DRC it's best to make the speaker behave like a true minimum-phase system first (no allpass "excess" phase), then measure with that pre-correction in place, then correct for the room/setup influence where needed and appropriate.
I took a measurement of the horn array at 33" from its center. There's a very steep phase shift before and after the resonance. Interestingly, this largely goes away with parametric EQ. I extracted the excess phase and made an inverse of that, and used that as the FIR filter. Not bad for only 15 ms latency. If I do it the other way, extracting minimum phase before parametric EQ, it's 1,700 ms latency! The result looks the same.

There's still a deep notch in frequency response at the problem frequency, but at least everything is more or less lined up on either side. I don't see any pre-echoes.
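In code terms, the "extract the excess phase and invert it" step looks roughly like this (a numpy sketch assuming the measurement is available as an impulse-response array; the minimum-phase part is rebuilt with the same cepstral folding used in the earlier sketch):

```python
import numpy as np

def excess_phase_inverse(measured_ir, n_fft=65536):
    """FIR that undoes only the excess (allpass) phase of the measurement,
    leaving the minimum-phase magnitude/phase relationship untouched."""
    H = np.fft.fft(measured_ir, n_fft)
    # minimum-phase spectrum with the same magnitude (cepstral folding)
    cep = np.fft.ifft(np.log(np.maximum(np.abs(H), 1e-12))).real
    folded = np.zeros(n_fft)
    folded[0], folded[n_fft // 2] = cep[0], cep[n_fft // 2]
    folded[1:n_fft // 2] = 2.0 * cep[1:n_fft // 2]
    H_min = np.exp(np.fft.fft(folded))
    A = H / H_min                                  # excess phase, |A| ~ 1
    k = np.arange(n_fft)
    delay = np.exp(-2j * np.pi * k * (n_fft // 2) / n_fft)  # make the kernel causal
    return np.real(np.fft.ifft(np.conj(A) * delay))
```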
[Attachments: unfiltered bass horn measurement, filtered horn measurement]
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,781
Likes
6,223
Location
Berlin, Germany
Are you referring to the bending upwards that starts around 85 Hz?
Ahm, no. I was looking at the region > 10kHz in the phase plot.


The region of interest for allpass phase correction is the range from ~150Hz to ~6kHz.

rePhase is the tool of choice for generating an analytical phase correction.
The more you know about your speaker the easier the phase correction design process. Like bass rolloff and crossover points and types/slopes. We also assume that the acoustic centers of the individual ways line up as otherwise you're in trouble anyway. In other words, the existing crossover should be designed to be a true Linkwitz-Riley type where all ways are exactly in phase (unwrapped value!) at all frequencies as otherwise you already have some time smear that cannot be corrected afterwards.

What you would do is mimic the measured phase response first with rePhase. Of course the magnitude responses should be equal as well. You basically make a coarse (trendline) model of your speaker of low complexity. Then you take only the crossover allpasses and revert them in time (switch from "rotate" to "linearize") and you have your phase-correction kernel.

Maybe we should move further discussion on this to another thread as it's becoming off-topic here, perhaps to this thread, which you've already contributed to?
 

Tim Link

Addicted to Fun and Learning
Forum Donor
Joined
Apr 10, 2020
Messages
773
Likes
660
Location
Eugene, OR
Ahm, no. I was looking at the region > 10kHz in the phase plot.


The region of interest for allpass phase correction is the range from ~150Hz to ~6kHz.

rePhase is the tool of choice for generating an analytical phase correction.
The more you know about your speaker the easier the phase correction design process. Like bass rolloff and crossover points and types/slopes. We also assume that the acoustic centers of the individual ways line up as otherwise you're in trouble anyway. In other words, the existing crossover should be designed to be a true Linkwitz-Riley type where all ways are exactly in phase (unwrapped value!) at all frequencies as otherwise you already have some time smear that cannot be corrected afterwards.

What you would do is mimic the measured phase response first with rePhase. Of course the magnitude responses should be equal as well. You basically make a coarse (trendline) model of your speaker of low complexity. Then you take only the crossover allpasses and revert them in time (switch from "rotate" to "linearize") and you have your phase-correction kernel.

Maybe we should move further discussion on this to another thread as it's becoming off-topic here, perhaps to this thread, which you've already contributed to?
OK thanks. That description of how rephase works sounds very interesting.
Yes, I don't want to derail this thread. On the topic of the audibility/importance of phase, though: if we're going to compare, it is important to make sure we haven't done something damaging in the phase-correction process.
 

312elements

Active Member
Joined
Oct 29, 2020
Messages
234
Likes
236
Location
Chicago
Apologies in advance for my ignorance on this topic, but I’m honestly trying to wrap my head around this. I watched @amirm ’s video response to Paul and by the end I had come to believe that phase distortion doesn’t matter. Then I read 30+ pages about rephase, linear phase filters, minimum phase filters, etc and it seems that there are a lot of folks going to a lot of effort to correct phase without over correcting phase etc. If it’s inaudible why correct it? If it’s inaudible, why worry about over correcting it? Why are we using FIR filters at all? If it’s inaudible, why aren’t FIR filters considered a solution in search of a problem at the expense of additional processing power and potentially needing to manage significant latency? I realize I’m missing something. Can someone possibly shed some light on what I’m missing? Please and thank you.
 

pma

Major Contributor
Joined
Feb 23, 2019
Messages
4,603
Likes
10,773
Location
Prague
Because the topic is not trivial at all, you have a mixture of pointless and sensible opinions. Read what @j_j is writing; he is the real expert and one of the very few here.
 

312elements

Active Member
Joined
Oct 29, 2020
Messages
234
Likes
236
Location
Chicago
Because the topic is not trivial at all, you have a mixture of pointless and sensible opinions. Read what @j_j is writing; he is the real expert and one of the very few here.
Thank you for the suggestion. I’ll do some digging and try and filter out some of the noise.
 

Keith_W

Major Contributor
Joined
Jun 26, 2016
Messages
2,660
Likes
6,064
Location
Melbourne, Australia
Apologies in advance for my ignorance on this topic, but I’m honestly trying to wrap my head around this. I watched @amirm ’s video response to Paul and by the end I had come to believe that phase distortion doesn’t matter. Then I read 30+ pages about rephase, linear phase filters, minimum phase filters, etc and it seems that there are a lot of folks going to a lot of effort to correct phase without over correcting phase etc. If it’s inaudible why correct it? If it’s inaudible, why worry about over correcting it? Why are we using FIR filters at all? If it’s inaudible, why aren’t FIR filters considered a solution in search of a problem at the expense of additional processing power and potentially needing to manage significant latency? I realize I’m missing something. Can someone possibly shed some light on what I’m missing? Please and thank you.

Some types of phase are audible.

If the phase rotation is the same between left and right = not audible.

On the other hand, if there are phase differences between left and right, this is audible, depending on how much phase rotation there is, and at what frequency. If it is 180deg out of phase, it will cause cancellation. If it is misaligned, it will cause a difference in the ITD (interaural time difference), because phase = time. The law of the first wavefront says that the image will be pulled towards the side of the earlier arrival - "phantom centre image drift". This is where the phantom image seems to be pulled left or right depending on frequency.
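As a quick sense of scale for "phase = time" (plain arithmetic, not tied to any particular speaker): an interchannel phase error converts to a time offset as dt = dphi / (360 * f).

```python
# the same 36 degrees of interchannel phase error, expressed as a time offset
for f_hz in (100, 1000, 10000):
    dt_ms = 36.0 / (360.0 * f_hz) * 1000.0   # dt = dphi / (360 * f)
    print(f"36 deg at {f_hz:>5} Hz = {dt_ms:.3f} ms between the ears")
# 1.0 ms at 100 Hz, 0.1 ms at 1 kHz, 0.01 ms at 10 kHz; the same phase error
# maps to very different interaural time differences depending on frequency
```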

I think a lot of confusion arises from the fact that the terms sound so similar yet have different meanings depending on context. For example:

- Minimum phase (in the context of loudspeakers and rooms) - this is the only version of the measurement which is correctable by inversion, which corrects both magnitude and phase at the same time. Loudspeakers are generally considered min-phase systems, although there may be regions within a loudspeaker which are non-min-phase.
- Minimum phase (in the context of DSP) - this is the minimum amount of delay encountered by a signal when it passes through a filter.
- Linear phase (in the context of DSP) - every frequency encounters the same delay when passing through a filter. Suppose this is 1 ms (1/1000 s). This is 10 full sine waves for a 10 kHz signal, 1 sine wave for a 1 kHz signal, and 1/10 of a sine wave for a 100 Hz signal - i.e. there is frequency-dependent phase rotation. (The sketch after this list illustrates the constant delay.)
- Excess phase (in the context of loudspeakers and rooms) - non-minimum-phase behaviour introduced by the room, e.g. furnishings, room shape, boundaries, etc. The measured phase as captured by the microphone contains the loudspeaker's min-phase response + excess phase. The EP is normally removed and discarded, as it cannot be corrected (strictly speaking, not true - it can be partially corrected).
- Mixed phase (in the context of DSP) - a DSP product that contains both IIR and FIR filters, i.e. both lin-phase and min-phase filters.
- Mixed phase (in the context of rooms) - another term for measured phase, i.e. the loudspeaker's minimum-phase response + excess phase.
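A small scipy illustration of the "same delay at every frequency" point in the linear-phase item above; the filters here are arbitrary examples, not anyone's actual crossover:

```python
import numpy as np
from scipy.signal import firwin, butter, group_delay

fs = 48000
lin_fir = firwin(255, 2000, fs=fs)       # symmetric FIR lowpass -> linear phase
iir_b, iir_a = butter(4, 2000, fs=fs)    # Butterworth lowpass -> minimum phase

freqs = np.linspace(100, 1800, 200)      # stay inside the passband
for name, system in (("linear-phase FIR", (lin_fir, [1.0])),
                     ("minimum-phase IIR", (iir_b, iir_a))):
    _, gd = group_delay(system, w=freqs, fs=fs)
    print(f"{name}: group delay {gd.min():.1f} to {gd.max():.1f} samples")
# the linear-phase FIR delays every frequency by the same 127 samples;
# the minimum-phase IIR's delay is smaller overall but varies with frequency
```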

As alluded to earlier, there is also absolute phase and relative phase. Absolute phase = where phase integrity is maintained between left and right. E.g., if you flip the polarity of both speakers, you won't hear it. Relative phase = where the phase of left and right is misaligned; e.g., if you flip the polarity of one speaker you will definitely hear it. Sound seems to be coming from inside your head. If you do a sweep with both speakers playing together, you will see comb filtering. Having said that, phase and polarity are not the same thing, but this post is getting too long already.

It took me months to understand all this confusing terminology. It is misleading to say that all phase rotations are inaudible. You have to refer to exactly what type you are talking about.
 

Tim Link

Addicted to Fun and Learning
Forum Donor
Joined
Apr 10, 2020
Messages
773
Likes
660
Location
Eugene, OR
If the phase rotation is the same between left and right = not audible.
This is interesting in the case of the phantom center, where there are cancellations at each ear, and phase reversals between the cancellations. We don't notice the phase flips because they're the same in each ear, at least so long as we're centered between the speakers.
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,781
Likes
6,223
Location
Berlin, Germany
@Keith_W, I think you still mix up a few things.

Loudspeaker drivers and fullrange single-driver loudspeaker systems are generally minimum phase, whereas the bulk of multi-way loudspeakers are not; they have excess phase (aka allpass behavior) on top of the overall minimum-phase behavior. The excess phase is what we want to correct (remove) to get back to the natural minimum-phase response, because the difference is subtle but still audible (to most people, at least; proven many times).

This correction can be done by "unwrapping" the excess phase globally, or by using linear-phase crossovers (resulting in an acoustically linear-phase response of the crossover function, thus *not* introducing excess phase).

Linear phase means that the phase frequency response does not naturally follow the magnitude frequency response as it would for a minimum-phase system; rather, it is constant at zero (after removing any pure delay, which just means a linear law -- constant slope -- of phase vs frequency, hence the name).

Mixed phase usually refers to a combination, a transfer function that is neither minimum phase nor linear phase like the typical multi-way loudspeaker mentioned above.

All those meanings do *not* change when we are talking about DSP room corrections etc as these terms all refer to general transfer functions.

Absolute phase normally refers to polarity but is considered sloppy terminology.
 