
Does Phase Distortion/Shift Matter in Audio? (no*)

My problem with the "old" approaches to phase coherency was that they inevitably sacrificed parameters we can actually hear, such as frequency response and/or the polar pattern. DSP offers a cleaner approach, but from my (limited) experiments, phase coherency doesn't offer anything I can hear, except in the bass.
 
Also, in electronics you can achieve better phase behaviour without as many sacrifices in other areas; in some cases it comes along for the ride, as in a hi-res DAC.

What is the phase issue in a DAC?

A ±180-degree rotation is about as big as it gets before one starts talking about time alignment rather than phase.
Since a speaker can have 180-degree flips, I am confused about the magnitude of the DAC phase issues and where they are happening.
Can you elaborate on that, or show an example or a link?
 
... I just bought some Neumann KH 150 monitors. They advertise phase linearity, yet in this thread people seem to say it's not audible. My question is: is there any practical reason for them to advertise it? Thank you.
By advertising the phase linearity of their monitors, Neumann is flagging to potential customers that it has the technical expertise to both consider and control the KH 150's phase linearity, at least to some degree. Considering that phase linearity is routinely achieved in microphones and electronic equipment, producing phase-linear monitors seems to be something of a "final frontier".
 
When a speaker such as the KH-150 already possesses the necessary DSP hardware to perform phase correction, there is really no reason not to do it, as it is "free" (in the sense that it can be done at no extra cost).
The DSP processing required to perform phase (and other) corrections introduces latency, which can be a concern in real-time monitoring situations if it gets too large. Does anyone have information on the amount of DSP latency exhibited by different monitors and/or subwoofers? Although this latency is linear in phase (a pure time delay), it can cause problems for musicians playing an instrument when their mechanical input produces a significantly delayed acoustical output that they hear.
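As a rough illustration (not data for any particular monitor), the latency of a linear-phase FIR correction stage can be estimated from its tap count, since a symmetric FIR delays the signal by (N - 1)/2 samples. A minimal Python sketch, with tap counts chosen purely as assumptions:

```python
# Back-of-envelope latency of a linear-phase FIR correction filter.
# A symmetric (linear-phase) FIR of N taps delays the signal by (N - 1) / 2
# samples; the tap counts below are illustrative assumptions only.
def fir_latency_ms(n_taps: int, fs: float) -> float:
    return (n_taps - 1) / (2 * fs) * 1000.0

for n_taps in (512, 2048, 8192):
    print(f"{n_taps:5d} taps @ 48 kHz -> {fir_latency_ms(n_taps, 48_000):.2f} ms")
# ~5.3 ms, ~21.3 ms and ~85.3 ms respectively: the lower in frequency the
# phase correction reaches, the longer the filter and the larger the delay
# a musician hears while monitoring themselves.
```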
 
As Amir has said in the title of this thread, our ears are not at all sensitive to "phase distortion". Below is an example of phase distortion I borrowed from miniDSP. The top graph shows the waveform of a 2 kHz square wave reproduced by an uncorrected speaker. The bottom one is the phase corrected version. The phase corrected version definitely looks "more right" to the eyes, but our ears usually can't differentiate between the two.

When a speaker such as the KH-150 already possesses the necessary DSP hardware to perform phase correction, there is really no reason not to do it, as it is "free" (in the sense that it can be done at no extra cost). And Marketing will do what Marketing does.

[Attachment 312385: miniDSP graphs of the uncorrected vs. phase-corrected square wave]
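For anyone who wants to reproduce the effect without the miniDSP graphs, here is a minimal Python/SciPy sketch (all the numbers are arbitrary assumptions): it passes a band-limited square wave through a second-order all-pass, which leaves the magnitude spectrum essentially untouched but rotates the phase, visibly reshaping the waveform.

```python
# Phase distortion demo: a flat-magnitude all-pass applied to a square wave.
import numpy as np
from scipy import signal

fs = 48_000
t = np.arange(fs) / fs
f0 = 200.0
# Band-limited square wave built from its odd harmonics up to ~10 kHz
square = sum(np.sin(2 * np.pi * k * f0 * t) / k for k in range(1, int(10_000 / f0), 2))

# 2nd-order analogue all-pass centred at 1 kHz, discretised with the bilinear transform:
# A(s) = (s^2 - (w0/Q)s + w0^2) / (s^2 + (w0/Q)s + w0^2)
w0, Q = 2 * np.pi * 1000.0, 0.7
b, a = signal.bilinear([1.0, -w0 / Q, w0**2], [1.0, w0 / Q, w0**2], fs=fs)
shifted = signal.lfilter(b, a, square)

print("peak before:", round(float(np.max(np.abs(square))), 3),
      "peak after:", round(float(np.max(np.abs(shifted))), 3))
# Same magnitude response, clearly different waveform (and usually a different
# peak level) -- the kind of difference the eye notices and the ear mostly doesn't.
```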
Yes, so I guess it's designed to make the integration with other speakers in their line-up flawless; I doubt this is just marketing.
 
The phase corrected version [of a square wave] definitely looks "more right" to the eyes, but our ears usually can't differentiate between the two.
Under what circumstances would a listener be able to differentiate between the two acoustical outputs generated by such corrected and uncorrected square waves?
 
I've designed a two-way speaker with a 2.3 kHz crossover where both drivers are in phase and there is less than ±15 degrees of phase shift over the audible frequency band. It took a long time to find drivers that would work seamlessly together and then to find the best crossover point.
A (theoretical) 2nd-order closed-box low-frequency loudspeaker produces +90° of phase shift at its –3dB point. A vented-box loudspeaker produces +180° of phase shift at its –3dB point. How did you manage to keep the phase shift to less than ±15° "over the audible frequency band"?
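Those +90°/+180° figures are easy to verify numerically, assuming idealised Butterworth high-pass behaviour (2nd order standing in for the closed box, 4th order for the vented box). A small SciPy check:

```python
# Phase at the -3 dB point of idealised box alignments (Butterworth assumed).
import numpy as np
from scipy import signal

for order, label in [(2, "closed box, 2nd-order"), (4, "vented box, 4th-order")]:
    # Analogue high-pass prototype with the cutoff normalised to 1 rad/s
    b, a = signal.butter(order, 1.0, btype="highpass", analog=True)
    _, h = signal.freqs(b, a, worN=[1.0])          # evaluate at the cutoff
    print(f"{label}: {20 * np.log10(abs(h[0])):+.1f} dB, "
          f"phase {np.degrees(np.angle(h[0])):+.0f} deg")
# Prints roughly -3.0 dB with +90 deg and (+/-)180 deg of phase shift respectively.
```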
 
Under what circumstances would a listener be able to differentiate between the two acoustical outputs generated by such corrected and uncorrected square waves?
Please see this post and the one following it. The file share links have expired. If you want me to generate my test signals again, just let me know.

 
Dunlavy: "And we find that in order to reproduce those sounds with a level of accuracy such that you cannot literally hear any difference between the live and the recorded sound, you have to have a speaker that exhibits almost perfect impulse and step responses. The only way to do that is to time-align the drivers very, very accurately, usually within a matter of a few microseconds, then use a minimum-phase, first-order crossover network and get everything right. And you have to have an on-axis response of better—well better—than ±2dB."

With respect to well-engineered multi-way loudspeaker systems, could it not simply be that an on-axis response better than ±2dB is the main determinant of being able to reproduce "sounds with a level of accuracy such that you cannot literally hear any difference between the live and the recorded sound"?
 
Dunlavy: "And we find that in order to reproduce those sounds with a level of accuracy such that you cannot literally hear any difference between the live and the recorded sound, you have to have a speaker that exhibits almost perfect impulse and step responses. The only way to do that is to time-align the drivers very, very accurately, usually within a matter of a few microseconds, then use a minimum-phase, first-order crossover network and get everything right. And you have to have an on-axis response of better—well better—than ±2dB."

With respect to well-engineered multi-way loudspeaker systems, could it not simply be that an on-axis response better than ±2dB is the main determinant of being able to reproduce "sounds with a level of accuracy such that you cannot literally hear any difference between the live and the recorded sound"?
The problems created by using first-order crossovers are worse than those from not time-aligning the drivers, the most serious being that the drivers have to cover much wider frequency ranges because of the slow roll-off of the crossover filters.
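To put a number on "slow roll-off", here is a quick comparison, assuming ideal Butterworth sections, of how much a tweeter is attenuated one octave below the crossover with a 1st-order versus a 4th-order high-pass:

```python
# Attenuation one octave below the crossover for 1st- vs 4th-order high-pass sections.
import numpy as np
from scipy import signal

for order in (1, 4):
    b, a = signal.butter(order, 1.0, btype="highpass", analog=True)  # fc normalised to 1 rad/s
    _, h = signal.freqs(b, a, worN=[0.5])                            # one octave below fc
    print(f"order {order}: {20 * np.log10(abs(h[0])):.1f} dB one octave below the crossover")
# Roughly -7 dB for 1st order vs about -24 dB for 4th order, so with a 1st-order
# network the tweeter must handle far more energy below the crossover point.
```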
 
Phase is at least important when looking at the difference between channels!
Very small deltas can shift parts of your stereo image.
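For a sense of scale, a rough sketch (not a measurement of any system, and using the commonly cited ~10 µs ballpark for the smallest audible interaural time difference purely as a reference point): an interchannel phase mismatch can be converted to a time offset with dt = (Δφ/360)/f.

```python
# Interchannel phase mismatch expressed as a time offset between channels.
def phase_delta_to_us(delta_deg: float, freq_hz: float) -> float:
    return delta_deg / 360.0 / freq_hz * 1e6

for f in (200, 500, 2000):
    print(f"5 deg of channel mismatch at {f:4d} Hz -> {phase_delta_to_us(5, f):5.1f} us offset")
# ~69 us at 200 Hz and ~28 us at 500 Hz -- comfortably above a ~10 us threshold,
# which is why even small interchannel phase differences can shift the image.
```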
 
What is the phase issue in a DAC?

A ±180-degree rotation is about as big as it gets before one starts talking about time alignment rather than phase.
Since a speaker can have 180-degree flips, I am confused about the magnitude of the DAC phase issues and where they are happening.
Can you elaborate on that, or show an example or a link?
My point was that there usually are no phase issues with a properly designed DAC; rather the opposite: if it's properly designed, you get good behaviour in this regard as well.

That's in contrast to passive speakers, where you have to trade away real performance in other areas to get it.
 
Yes, it does shift. I use two mics, one on each HF horn, with an audio mixer overlapping the two, feeding the REW RTA. When adjusting variable phase I can see the higher end shifting, and, carefully positioned between the two speakers (or three, or five), I can hear it; it's very faint, but it must exist, otherwise I wouldn't see it on the RTA. What I find with PS Audio's Paul is all that tech gear in that room and, sigh, it's just talk with no actual demonstration. You see my point.
 
These days I've come back to the topic of phase: playing with rePhase to make convolution filters for Roon, and watching the videos from this channel https://www.youtube.com/@ocaudiophile

Anyway ... my 2c ... my thoughts on the theory first. Phase shift in the bass must be a thing: 125°, say, works out to about 10 milliseconds at 35 Hz, or 3.5 metres, and that is actually a good speaker. At higher frequencies it shouldn't matter beyond some point, but bass ...

In practice ... I still haven't processed all the stuff from the tutorials and videos and don't yet have the filters for Roon, but I will build one filter with phase correction only, no EQ ... that should give some answers :)

But perhaps interesting ... with GLM I definitely hear the difference with this option:
[GLM screenshot: 2024-01-31-210344.jpg]
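A quick check of the arithmetic in the post above (343 m/s assumed for the speed of sound):

```python
# Convert a phase shift at a given frequency into a time offset and an
# equivalent path length in air.
def phase_to_time_and_distance(phase_deg: float, freq_hz: float, c: float = 343.0):
    t = phase_deg / 360.0 / freq_hz   # seconds
    return t, t * c                   # (seconds, metres)

t, d = phase_to_time_and_distance(125, 35)
print(f"125 deg at 35 Hz -> {t * 1000:.1f} ms, i.e. about {d:.1f} m of path length")
# -> roughly 9.9 ms and 3.4 m, matching the ~10 ms / 3.5 m figures quoted above.
```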
 
These days I've come back to the topic of phase: playing with rePhase to make convolution filters for Roon, and watching the videos from this channel https://www.youtube.com/@ocaudiophile

Anyway ... my 2c ... my thoughts on the theory first. Phase shift in the bass must be a thing: 125°, say, works out to about 10 milliseconds at 35 Hz, or 3.5 metres, and that is actually a good speaker. At higher frequencies it shouldn't matter beyond some point, but bass ...

In practice ... I still haven't processed all the stuff from the tutorials and videos and don't yet have the filters for Roon, but I will build one filter with phase correction only, no EQ ... that should give some answers :)

But perhaps interesting ... with GLM I definitely hear the difference with this option:
[GLM screenshot: 2024-01-31-210344.jpg]
When I switch phase linearity off and on in my Kiis, I hear no difference. Amir does have documentation of several empirical studies that show that it makes no audible difference. Go figure!
 
When I switch phase linearity off and on in my Kiis, I hear no difference. Amir does have documentation of several empirical studies that show that it makes no audible difference. Go figure!
Whether it is audible depends on what kind of sounds you use for testing.
My DAC has a switch to instantly switch from linear to minimum phase. With a square wave, I can definitely hear the difference. But that should be unsurprising, since I've measured its response and in minimum phase the waveform transition spikes have significantly higher amplitude.
But with actual music, not so much. I'm not sure I could discern them.
Now I normally don't listen to square waves or other test signals, so one could say there is no "practical" difference.
Put differently, one might say that the difference is musically transparent, but not perceptually transparent. Of course, this leads to the question of borderline cases of unusual music having some of the characteristics of test signals.
My point is that the question about audibility has a more nuanced answer.
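Here is a rough SciPy sketch of that kind of comparison (not the poster's actual DAC; the filter length and frequencies are assumptions): build a linear-phase low-pass FIR, derive a minimum-phase filter with approximately the same magnitude response, run a square wave through both, and compare the size of the transition spikes.

```python
# Linear-phase vs minimum-phase low-pass on a square wave: compare the overshoot.
import numpy as np
from scipy import signal

fs = 48_000
lin = signal.firwin(255, 20_000, fs=fs)      # linear-phase low-pass, odd tap count

# minimum_phase() of the *squared* filter yields a minimum-phase filter whose
# magnitude approximates that of `lin` (square root of the squared response).
minph = signal.minimum_phase(np.convolve(lin, lin), method="homomorphic")

sq = signal.square(2 * np.pi * 1000 * np.arange(fs // 10) / fs)   # 1 kHz square wave
out_lin = signal.lfilter(lin, 1.0, sq)
out_min = signal.lfilter(minph, 1.0, sq)

print(f"overshoot, linear phase : {np.max(np.abs(out_lin)) - 1:+.3f}")
print(f"overshoot, minimum phase: {np.max(np.abs(out_min)) - 1:+.3f}")
# With the ringing pushed to one side of each transition, the minimum-phase
# version tends to show taller spikes, consistent with the measurement above.
```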
 
Dunlavy: "And we find that in order to reproduce those sounds with a level of accuracy such that you cannot literally hear any difference between the live and the recorded sound, you have to have a speaker that exhibits almost perfect impulse and step responses. The only way to do that is to time-align the drivers very, very accurately, usually within a matter of a few microseconds, then use a minimum-phase, first-order crossover network and get everything right. And you have to have an on-axis response of better—well better—than ±2dB."

A perfect impulse needs infinite bandwidth; a perfect square wave needs infinite bandwidth and the ability to reproduce DC.
Neither mics nor speakers have either of these. So what's "almost" perfect?
 
...
A perfect impulse needs infinite bandwidth; a perfect square wave needs infinite bandwidth and the ability to reproduce DC.
Neither mics nor speakers have either of these. So what's "almost" perfect?
In those terms, literally nothing in the real world is perfect. Nature itself does not have perfect impulse response. Air pressure cannot change instantaneously, that would require infinite power. The physical world in which sounds originate and travel is power and bandwidth limited.

Also, the music we listen to doesn't exercise the full power & bandwidth of the physical world. Though sometimes it may exceed the limits of the mics used to capture it.

So the audio system doesn't actually have to be perfect. It just has to be closer to perfect than these other limiting factors.
 
Dunlavy: "And we find that in order to reproduce those sounds with a level of accuracy such that you cannot literally hear any difference between the live and the recorded sound, you have to have a speaker that exhibits almost perfect impulse and step responses. The only way to do that is to time-align the drivers very, very accurately, usually within a matter of a few microseconds, then use a minimum-phase, first-order crossover network and get everything right. And you have to have an on-axis response of better—well better—than ±2dB."

A perfect impulse needs infinite bandwidth; a perfect square wave needs infinite bandwidth and the ability to reproduce DC.
Neither mics nor speakers have either of these. So what's "almost" perfect?

"What is almost perfect", in a band-limited sense, is the impulse response.
If one is listening to music with impulses, then it is important.

Tones, and organ music, probably not so much…
 
... Amir does have documentation of several empirical studies that show that it makes no audible difference. Go figure!

Other people have also purchased the books by Toole, Olive, etc.


Whether it is audible depends on what kind of sounds you use for testing.
My DAC has a switch to instantly switch from linear to minimum phase. With a square wave, I can definitely hear the difference. But that should be unsurprising, since I've measured its response and in minimum phase the waveform transition spikes have significantly higher amplitude.
But with actual music, not so much. I'm not sure I could discern them.
Now I normally don't listen to square waves or other test signals, so one could say there is no "practical" difference.
Put differently, one might say that the difference is musically transparent, but not perceptually transparent. Of course, this leads to the question of borderline cases of unusual music having some of the characteristics of test signals.

Probably the opposite, in that some test signals are tones, and a lot of music is percussive.


My point is that the question about audibility has a more nuanced answer.
Agreed!
 