
Minimum Phase vs Linear Phase

Abuot raedibaltiy: I awlyas touhgt tihs was fnuny:

It deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae. The rset can be a toatl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.

That just freaked me out- was shocked I was able to read that so easily.

This is why I can't spell at all.

I would look like a drooling idiot if I lived in a pre-spellcheck era...
 
Again, we have a confusion between the effects of tools used in the production mixing process and EQs used in the reproduction process.

What was shown in the video was the effect of combining two correlated signals, one filtered and one not. The analysis was given at about the mid-point of the video. A signal was duplicated into two streams. One stream was filtered, and the other was not. When there was any phase shift in one of the streams, as when one stream went through a nonlinear phase filter, there would be interference cancellations (and therefore alterations of the frequency response) at certain frequencies when the signals were combined later. The later part of the video showed that effect with multi-miked drum tracks. The signals in the various tracks are correlated, as they come from the same sound source but are recorded at different locations.
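To make the combining effect concrete, here is a minimal sketch (mine, not from the video) that sums an unfiltered stream with a copy passed through an ordinary minimum-phase filter; the 4th-order Butterworth high-pass at 1 kHz is just an arbitrary example.

```python
# Minimal sketch: sum a signal with a minimum-phase-filtered copy of itself
# and observe the frequency-dependent cancellation described above.
import numpy as np
from scipy import signal

fs = 48000.0
# Arbitrary example filter: 4th-order Butterworth high-pass at 1 kHz (minimum phase).
b, a = signal.butter(4, 1000.0, btype="highpass", fs=fs)

w, h = signal.freqz(b, a, worN=4096, fs=fs)

# Summing "dry + filtered" streams is the same as adding transfer functions: 1 + H(f).
combined = 1.0 + h

i = np.argmin(np.abs(combined))
print("deepest cancellation: %.1f dB at about %.0f Hz (filter phase there: %.0f deg)"
      % (20 * np.log10(np.abs(combined[i])), w[i], np.degrees(np.angle(h[i]))))
# A delay-compensated linear-phase EQ with the same magnitude response would keep
# the two streams phase-aligned, so this dip would not appear.
```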

That doesn't happen in the reproduction EQ or RIAA equalizations we are concerned about. There is no combining of two separately filtered streams of correlated signals. Something like that will potentially be significant if we perform cross-feed or up- or down-mixing, but then we are intentionally altering the original signals.
 
This is, obviously, a marketing video, designed to show off how the versatility offered by fabfilter can be used to sculpt the sound in different ways depending on how you want to mix different sources together. So while you want to stick with minimum phase for most of your EQ, there are some circumstances in which linear phase produces a more desirable result. You can get away with this because the ringing is generally only audible when you have very steep transition slopes, as in his initial example, which uses a radical notch.

(As a general observation, all digital filters operate by splitting the signal into a number of delayed, scaled copies and then recombining them, but this is happening at a more fundamental level.)
 
Thanks!

Another thing. I've often heard the pro-min phase people say that speakers and headphones are minimum phase devices, so it doesn't make sense to use linear phase filters.

If we look at headphones (easier because it's a single driver) and linear phase filters, is the driver attempting to play these pre-echo signals coming out of the DAC?

Sometimes people don't understand what a 'linear phase' filter really is...
A 'linear phase' filter delays all frequencies equally. It is just a filter with a constant time delay, and it is a wonderful thing because if you add/subtract outputs from multiple linear phase filters with the same delay -- then the signals add cleanly.

A not-linear phase filter (a generalization of a minimum phase filter) has a variable delay vs. frequency. This means that with a 'minimum phase filter' various parts of a signal will be temporally separated as they pass through the filter. This doesn't make 'not-linear phase' filters a bad thing, because wonderful, intricate things can be done with such filters -- but for audio, it screws with the temporal relationships in a signal.

On the other hand, in realtime situations, where the long delay of a linear phase filter can cause trouble, the headaches of a not-linear phase filter might be a good tradeoff.
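A small sketch (mine, with arbitrary example filters) of the two behaviours described above: a linear-phase FIR delays every frequency by the same number of samples, while a minimum-phase IIR's delay changes with frequency.

```python
import numpy as np
from scipy import signal

fs = 48000.0

# Linear-phase FIR low-pass: 201 symmetric taps, 2 kHz cutoff.
ntaps = 201
fir = signal.firwin(ntaps, 2000.0, fs=fs)

# Minimum-phase IIR low-pass with a comparable corner.
b, a = signal.butter(4, 2000.0, fs=fs)

freqs = np.array([100.0, 500.0, 1000.0, 1500.0, 2000.0])
_, gd_fir = signal.group_delay((fir, [1.0]), w=freqs, fs=fs)
_, gd_iir = signal.group_delay((b, a), w=freqs, fs=fs)

for f, d_fir, d_iir in zip(freqs, gd_fir, gd_iir):
    print("%6.0f Hz   linear-phase FIR: %6.1f samples   minimum-phase IIR: %5.1f samples"
          % (f, d_fir, d_iir))
# The FIR column is constant at (ntaps - 1) / 2 = 100 samples; the IIR column varies.
```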

On my very complex, sophisticated project, if I didn't have linear phase filters to use, the project would simply be mind-numbingly impossible to do.

And, yes, it is very possible for linear phase and minimum phase filters to sound different. A minimum phase filter can delay different frequencies by different amounts, perhaps smearing the sound. (Smear isn't necessarily a bad thing, that is, if you know about it.)
 
That's the central misunderstanding: there are no pre-echo signals coming out of the DAC with music. They are there with the test signals.

There are several things going on, with the Gibbs effect appearing to move around and also the varying delay vs. frequency of not-linear phase filters.

First -- that 'ringing' is NOT ringing, but instead shows what happens when there are missing frequency components. There is NO new energy in the Gibbs 'pseudo-ringing'; it is simply the effect of missing frequency components, so that they cannot add/subtract at the correct times to cancel and 'make nice'. When the Gibbs effect moves around with a not-linear phase filter relative to a linear phase one, that is all about the time delay not being constant vs. frequency in the not-linear phase filter.
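A quick numeric sketch of that point (mine, not the poster's): band-limit a square wave and the familiar Gibbs overshoot appears, yet the band-limited waveform contains less total energy than the original, not more.

```python
import numpy as np

n_periods = 8
N = 4096                                   # samples per period
t = np.arange(N * n_periods) / N
square = np.sign(np.sin(2 * np.pi * t))

def bandlimit(x, n_harmonics):
    """Keep only the first n_harmonics of the periodic signal (via FFT truncation)."""
    X = np.fft.rfft(x)
    keep = n_harmonics * n_periods         # bins per harmonic for this record length
    X[keep + 1:] = 0.0
    return np.fft.irfft(X, len(x))

bl = bandlimit(square, 15)                 # keep harmonics up to the 15th

print("peak of ideal square wave     : %.3f" % np.max(np.abs(square)))
print("peak of band-limited version  : %.3f   (Gibbs overshoot)" % np.max(np.abs(bl)))
print("energy of ideal square wave   : %.1f" % np.sum(square ** 2))
print("energy of band-limited version: %.1f   (less, not more)" % np.sum(bl ** 2))
```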

Also, there is the possibility, with long not-linear-phase filters, for certain aspects of a sound (especially transients) not to arrive at consistent times. This is not much of a problem with simple, smooth filters (e.g. RIAA EQ). In the 'minimum phase'/analog arena, the time delays usually start becoming important with high Q values, sharp filters, and higher-order filters. Sharp, higher-order, minimum phase filters can have a very strange time delay vs. frequency.

RIAA linear phase is probably not optimum, but the error isn't important with such smooth filters. (I am doing something similar with DolbyA band splitting, where I am using specially crafted FIR filters instead of Q=0.420 and Q=0.470 2nd-order IIR filters.) I compensate for time delays throughout the entire software system -- so long FIR filters are no biggie. When compensating for the delays, linear phase filters are easier. What IS the delay of a minimum phase filter? Answer: it depends on the frequency, the filter's EQ characteristic, etc... For linear phase, the EQ and the length (which sets the delay) are almost (not quite) independent.
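For illustration only (this is a toy sketch of the 'crafted FIR' idea, not the poster's DolbyA code; tap count, sample rate and gridding are arbitrary, and a real design would need more care at the bass end): fit a linear-phase FIR to the smooth RIAA playback magnitude, so the only phase effect is one constant, known delay.

```python
import numpy as np
from scipy import signal

fs = 48000.0
ntaps = 2047                                  # odd length: symmetric taps, integer-sample delay

# RIAA playback magnitude from its standard time constants.
t1, t2, t3 = 3180e-6, 318e-6, 75e-6
f = np.linspace(0.0, fs / 2, 512)
w = 2 * np.pi * f
mag = np.abs(1 + 1j * w * t2) / (np.abs(1 + 1j * w * t1) * np.abs(1 + 1j * w * t3))
mag /= np.interp(1000.0, f, mag)              # normalize to 0 dB at 1 kHz

fir = signal.firwin2(ntaps, f, mag, fs=fs)    # linear-phase FIR matching that magnitude

# The delay to compensate is known and constant at every frequency.
print("constant delay to compensate: %.2f ms" % (1000.0 * (ntaps - 1) / 2 / fs))
```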

For real-time applications, where delay can be a killer, minimum phase can have some real advantages.
 
With a Q of 0.4 the filter slopes will be very mild, and any ringing will be so low in level that it masks out and is inaudible. Of course, the same can be said of the phase distortion effects produced by a min-phase filter. So it's reasonable to use whichever design makes the overall calculations easier.
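As a quick check of that (my sketch; the 1 kHz corner is arbitrary): a 2nd-order section with Q around 0.42 or 0.47 has real poles, so it cannot ring at all; oscillatory ringing needs Q above 0.5.

```python
import numpy as np
from scipy import signal

def biquad_lowpass(f0, Q, fs):
    """Analog 2nd-order low-pass prototype, discretized with the bilinear transform."""
    w0 = 2.0 * np.pi * f0
    return signal.bilinear([w0 ** 2], [1.0, w0 / Q, w0 ** 2], fs)

fs = 48000.0
for Q in (0.42, 0.47, 2.0):
    b, a = biquad_lowpass(1000.0, Q, fs)
    poles = np.roots(a)
    print("Q = %.2f   complex poles (can ring)? %s" % (Q, bool(np.any(np.abs(poles.imag) > 1e-9))))
# Q <= 0.5 gives two real poles: a smooth, overdamped settling with no oscillation.
```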
 
I think where you'll see/hear actual audible differences between minimum and linear phase is with certain long, FIR-type filters. The opportunity for delay differentials is high when using certain filter shapes with a long potential delay (lots of taps). If you have more taps, then there is more potential for delay, which might or might not be fully realized. What if you have lots of delay where the signal is at -100 dB? Who really cares, right?

All of these variables can cause a bunch of confusion when comparing the filter types, or even when analyzing the character of the sound from a filter. Instead of a simple generalization, IF I were actively considering not-linear phase EQ in an audio application, I'd look at the delay vs. gain profile. If the delay differentials are high where the gain is also high, then there might be a larger likelihood of audible effects.
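A rough sketch of that 'delay vs. gain profile' check (my code; the 8th-order elliptic low-pass and the -20 dB cutoff are arbitrary example numbers):

```python
import numpy as np
from scipy import signal

fs = 48000.0
# Deliberately sharp minimum-phase example EQ: 8th-order elliptic low-pass at 3 kHz.
sos = signal.ellip(8, 0.5, 60.0, 3000.0, output="sos", fs=fs)

w, h = signal.sosfreqz(sos, worN=2048, fs=fs)

# Group delay of a cascade is the sum of the sections' group delays.
gd = np.zeros_like(w)
for sec in sos:
    _, gd_sec = signal.group_delay((sec[:3], sec[3:]), w=w, fs=fs)
    gd += gd_sec

mag_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))
delay_ms = 1000.0 * gd / fs

# Only look at the delay spread where the filter still passes signal (> -20 dB);
# big delay differentials far down in the stopband matter much less.
loud = mag_db > -20.0
print("delay range where gain > -20 dB: %.2f to %.2f ms"
      % (delay_ms[loud].min(), delay_ms[loud].max()))
```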

For technical applications, all bets are off. Whenever combining filter outputs, there is a myriad of complex considerations. In a way, that is what happens in audio/acoustic band-splitting applications. When the various filters have different delays and are combined after the speakers create the sound, the same kind of cancellation/reinforcement can happen. Most importantly, once you consider the speaker and electronics processing delays, it all ends up being complex. At that point, the filter type being used isn't the important thing; it is all about how the combination of the speakers, the electronics (filters, whether digital or analog), and even the acoustic time delay add together and sound.

For most Nyquist-type filtering and/or when doing lots of signal combining -- linear phase is just easier. A long not-linear phase digital filter 'worries' me, but might not be a problem. Linear phase gives the best representation of the original signal, but the time delay might not be acceptable.
(On my project, the total delay through the system for correcting consumer recordings is on the order of 5 seconds, mostly filter/transform delays -- that would be TOTALLY unacceptable in a PA application!!!) I use LOTS of very precise, wideband Hilbert transforms and sharp FIR filters that need to be summed -- all told, those can create LOTS of time delay. I'd go nuts trying to make my project work if someone gave me an edict that ALL processing must be minimum phase -- I'd likely have to just give up, dig my grave and jump in :-).
 
Minphase problems are to be corrected with minphase solutions, it's as simple as that, for me at least.

RIAA EQs etc. are all minphase problems, which is intuitive.

Less intuitive is the correction of, for example, a single post echo (a later Dirac pulse slightly lower in level than the main one in the IR). The FR is the typical comb-filter pattern with finite nulls. This, too, is a minphase problem. You could simply curve-fit a bunch of minphase notch filters to the comb-filter pattern, and that fully replicates the impulse doublet. Thus its inversion (1/minphase remains minphase) then corrects it.
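A tiny numeric sketch of that (mine; the 48-sample delay and 0.5 echo level are arbitrary): an impulse plus a lower-level post echo is minimum phase, so its exact reciprocal is a stable filter that removes the echo.

```python
import numpy as np
from scipy import signal

echo_delay = 48              # samples (arbitrary)
echo_level = 0.5             # must be < 1 for the doublet to be minimum phase

h = np.zeros(echo_delay + 1)
h[0], h[-1] = 1.0, echo_level                    # impulse + post echo (comb-filter FR)

rng = np.random.default_rng(0)
x = rng.standard_normal(2048)                    # stand-in for "the recording"

degraded = signal.lfilter(h, [1.0], x)           # add the echo
restored = signal.lfilter([1.0], h, degraded)    # invert it: 1/minphase is stable and minphase

print("max error after correction: %.2e" % np.max(np.abs(restored - x)))
```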

A DAC reconstruction filter is a task with no exact spec of phase gender; anything from linphase to minphase can be used, though -minphase ("maximum phase", pre-ringing only) is of course not useful.
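And to illustrate the 'anything from linphase to minphase' point (my sketch; orders and corner frequencies are arbitrary stand-ins for a real reconstruction filter): a linear-phase FIR low-pass has a symmetric impulse response, with as much response before its peak as after it, while a minimum-phase design of similar bandwidth puts essentially everything at or after the peak.

```python
import numpy as np
from scipy import signal

fs = 44100.0
ntaps = 255

# Linear-phase FIR low-pass near the top of the band (symmetric taps, peak in the middle).
h_lin = signal.firwin(ntaps, 20000.0, fs=fs)
c = np.argmax(np.abs(h_lin))
print("linear phase  : energy before peak %.3e, after peak %.3e (symmetric)"
      % (np.sum(h_lin[:c] ** 2), np.sum(h_lin[c + 1:] ** 2)))

# Minimum-phase IIR low-pass with a similar corner.
sos = signal.ellip(8, 0.1, 80.0, 20000.0, output="sos", fs=fs)
h_min = signal.sosfilt(sos, np.r_[1.0, np.zeros(ntaps - 1)])
p = np.argmax(np.abs(h_min))
print("minimum phase : energy before peak %.3e, after peak %.3e (peak at sample %d)"
      % (np.sum(h_min[:p] ** 2), np.sum(h_min[p + 1:] ** 2), p))
```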
 

First, I agree that, ideally, matching an analog filter with an IIR would be nice. That doesn't always work as a direct conversion, even in the case of my DolbyA band splitting with nice, wide skirts.

So, when considering the realities of bilinear conversion (or similar options), and then the alternatives -- which might include higher-order filters, non-matching frequency responses, even stranger time delays, etc. -- it is best to make an engineering decision instead of a cookbook decision, if possible. Trying to follow the cookbook can put one into an impossible situation.

To 'correctly' solve the band-splitting problem with a straight 2nd-order IIR filter on a 44.1k sample rate signal, there would have to be an upconversion to a higher rate so that the IIR filter could be kept intact as 2nd order, but then the upconversion has issues also. I *have* been considering some complete 'engineering' design solutions where the up/down conversion has shortcuts instead of following the cookbook, but there be dragons. Right now, I have a beautiful solution that works -- and any impairment associated with not being linear phase is FAR outweighed by the vastly superior DolbyA decoding abilities.
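A small sketch of the near-Nyquist issue (my numbers; the 9 kHz, Q = 0.47 high-pass is only a stand-in for a real band-split filter): a plain bilinear-transformed 2nd-order section at 44.1 kHz drifts away from its analog prototype in the top octave, while the same design run at four times the rate stays much closer. (No prewarping here; prewarping fixes one matching frequency but not the overall cramping toward Nyquist.)

```python
import numpy as np
from scipy import signal

f0, Q = 9000.0, 0.47                      # hypothetical band-split corner
w0 = 2.0 * np.pi * f0
b_s = [1.0, 0.0, 0.0]                     # analog 2nd-order high-pass: s^2 / (s^2 + (w0/Q) s + w0^2)
a_s = [1.0, w0 / Q, w0 ** 2]

f_test = np.array([10000.0, 15000.0, 20000.0])
_, h_analog = signal.freqs(b_s, a_s, worN=2.0 * np.pi * f_test)

for fs in (44100.0, 176400.0):
    b_z, a_z = signal.bilinear(b_s, a_s, fs)
    _, h_dig = signal.freqz(b_z, a_z, worN=f_test, fs=fs)
    err_db = 20 * np.log10(np.abs(h_dig) / np.abs(h_analog))
    print("fs = %6.0f Hz   error vs. analog at 10/15/20 kHz: %s dB"
          % (fs, np.array2string(err_db, precision=2)))
```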

I DO have an upconversion design idea that eliminates the need for a complete upconversion/downconversion infrastructure without needing Nyquist filtering -- but for now, the nice emulation of the IIR front ends by simple FIR filters, with a crafted frequency response that matches the desired analog response, works MUCH better than a direct IIR equivalent. (I have studied the Sony DolbyA patent very carefully; it misses a lot of practical aspects of the design, even though the practical aspects are at the essence of the patent. Using the DolbyA patent would theoretically have eliminated some of the 'close to Nyquist' matters, but actually doesn't. It was best to totally avoid any aspect of the patent anyway.) When REALLY doing something that pushes the bounds, and doing it accurately -- sometimes the cookbook gets thrown away.


I think what bothers me the most is basing a choice on the idea of 'pre-ringing'. (Gibbs isn't ringing, even though the term is used in common parlance.) Basing a filter type on essentially non-existent 'pre-ringing' is basically a strange decision. However, if one is worried about the time delay through the filter, or time skews vs. frequency (smearing of the transients), or whatever -- those are decisions about real things.

So -- it is nice to think that 'min phase for min phase' is a good cookbook truism, but a min-phase FIR or IIR application might not come directly from an analog filter prototype, because there are so many aspects of the decision that might point to a linear-phase solution for an analog (minimum phase) filter.

Understanding the problem being solved and the currently known solution space, and then making a true engineering decision, is your best bet. Sometimes the known solution space isn't enough, so it is important for the engineer to expand their solution space. (I have to 'expand' my own solution space from time to time -- it is called 'learning'.)
 
I think what bothers me the most is basing a choice on the idea of 'pre-ringing'. (Gibbs isn't ringing, even though the term is used in common parlance.) Basing a filter type on essentially non-existent 'pre-ringing' is basically a strange decision.

Ah well, so much in the audiophile world is like that - a supposed solution to a non-existent problem.
 
For an audiophile, a decision that is based on what 'sounds best' is the most honest criteria. I try not to get into the 'sounds best' arena when talking about technical matters. Sometimes, as you might be implying, the technical and audiophile worlds aren't totally orthogonal, but they are not fully correlated either.
 

It might be the best criteria for that audiophile, but it definitely isn't "honest". It is actually based on self-deception that then easily turns into false claims.
 
I agree about the deceptiveness of the human sensory apparatus. I have had HORRIBLE times trying to use my subjective judgment to create a reliable result. The only reason why I sometimes must listen is a lack of measurement capability. (Very complex, sophisticated state issues.) Normally, I limit subjective listening to review alone, and not use it as a primary measurement.

When I have needed to depend on my general hearing abilities, I have become more and more distrustful. This distrust also extends to mistaken, but well-intended, subjective claims from others. It is so easy to let ego (in the best sense) get in the way of good judgement on these matters.

When trying to depend on my hearing, I have become very accustomed to games like 'whack-a-mole' and 'chasing rabbits into rabbit holes'. In some people (including me), there is nothing 'accurate' about the auditory system!!!

The ONLY thing that saves me -- when I must do a subjective evaluation, I am HUNTING FOR FAILURE, not focusing on making something sound good. When I must tweak (YUCK -- I hate tweaking!!!), I listen for 'tells' and not for 'sounds good'. Optimizing something for 'sounding good' is mostly just a waste of time, and will be yet another visit into the rabbit hole.

OTOH, if the result is for the sole enjoyment of the audiophile alone, then there is NOTHING wrong with a subjective evaluation and being a 'tweakibilly'. It is a big mistake to extrapolate personal enjoyment to others with subjectively tuned/tweaked results. Best not to let 'ego' get in the way of reality on 'tweaking' and 'subjective measurement'.
 

Ramen!
 
How do you figure? If the phase shift from a minimum phase filter is problematic, wouldn't adding another minimum phase filter compound the problem rather than correct it?

Clearly you haven't read the latest MQA marketing brochure. :)

Seriously, though, yes, this was my question as well (and it is a question, as I am not an expert): is it even practical to try to exactly counteract/undo a particular phase shift like that?
 
How do you figure? If the phase shift from a minimum phase filter is problematic, wouldn't adding another minimum phase filter compound the problem rather than correct it?
Of course, but eliminating excess phase from a transfer function (like in a linphase crossover) is not a minphase problem. The time-inverse of the extracted excess-phase IR (an allpass function) is the required correction (convolution kernel), and obviously it can't be minphase (see the sketch after the examples below).

Examples:
- correct a driver (or phono cartridge or whatever) response so it fits the (minphase) target --> minphase correction.
- correct the excess phase of a standard crossover --> non-minphase correction.
- "speed up" the bass at a speaker's lower roll-off to less (or even zero) phase rotation --> non-minphase correction
 
Clearly you haven't read the latest MQA marketing brochure. :)

Seriously, though, yes, this was my question as well (and it is a question, as I am not an expert): is it even practical to try to exactly counteract/undo a particular phase shift like that?
As long as we're dealing with linear systems, any phase shift can be undone by applying a filter with the opposite shift.
 
(Novice reviving this thread, I hope this resurrection is OK.)
Could someone please confirm whether the following is correct or incorrect: if a hundred different EQs (all 64-bit plugins, minimum phase) are in a chain, with the last EQ undoing the cumulative changes of the previous 99, then the resulting signal will not be "worse" than the signal that entered the chain. (Or, in a similar scenario: if you put 100 EQs in a chain, the result will not be "worse" than 1 EQ that represents the same filter effect as the sum of those 100 EQs.) Thank you. I'm asking because of the usage of multiple Goodhertz plugins plus Headphone EQ.
Does "stacking EQs" and "putting in a chain" have identical meaning?
 