
Does Phase Distortion/Shift Matter in Audio? (no*)

Geert

Major Contributor
The problem is that this is unfortunately not as straightforward as you suggest. Have yet to find those "generally available" details ;)
Didn't say it was straightforward ;) I corrected the group delay of my ported speakers (speakers only, not the measured GD at the LP). Afterwards I checked the level of pre-ringing by convolving the filter and checking the step response in REW.
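A minimal sketch of that kind of check (my own construction; the file names and export format are hypothetical, and REW does the equivalent internally):

```python
# Convolve a group-delay correction FIR with a measured impulse response and
# look for pre-ringing, i.e. energy arriving before the main peak.
import numpy as np
from scipy.signal import fftconvolve

ir = np.loadtxt("speaker_ir.txt")        # measured impulse response (assumed export)
fir = np.loadtxt("gd_correction.txt")    # correction filter (assumed export)

corrected = fftconvolve(ir, fir)
step = np.cumsum(corrected)              # step response = cumulative sum of the IR (what REW plots)

peak = int(np.argmax(np.abs(corrected)))
pre_energy = np.sum(corrected[:peak] ** 2) / np.sum(corrected ** 2)
print(f"energy before main arrival: {100 * pre_energy:.2f} %")
```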
 

PeteL

Major Contributor
They pay attention to how phase shifts affect how multiple microphones sum, which is a different scenario from an absolute phase shift on a finished mix.
OK, but I was responding to a contributor who was talking about phase problems with multiple microphones, and Amir likewise uses the resulting signal at the listener position in the video to demonstrate his point about phase. Is your point that the only valid discussion on the subject is absolute phase shift on a finished mix?
 

PeteL

Major Contributor
Well, actually they use delay on the spot mics today so that their signals don't start before the main mic. The main mic creates the stereo illusion and ambience, the spot mics create the tonal balance and/or are used for highlighting certain instruments in certain passages.
Sure, both my statement and yours are true, but they are different things.
 

Geert

Major Contributor
OK, but I was responding to a contributor who was talking about phase problems with multiple microphones, and Amir likewise uses the resulting signal at the listener position in the video to demonstrate his point about phase. Is your point that the only valid discussion on the subject is absolute phase shift on a finished mix?
I think the scope of the video is limited to absolute phase: identical phase shifts applied to both channels of a stereo signal, as this is what a normal hifi component can produce. Applying different phase shifts to sources (microphones) or playback channels is a whole different story. In that case you might see an impact on the frequency response or on the perception of the stereo image.
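A small numerical illustration of the difference (my own construction, not from the video), using a first-order all-pass, which has unity magnitude at every frequency:

```python
# First-order all-pass: H(z) = (c + z^-1) / (1 + c z^-1), |H| = 1 everywhere.
import numpy as np
from scipy.signal import freqz

c = 0.7
b, a = [c, 1.0], [1.0, c]
w, H = freqz(b, a, worN=1024)

mono_same = np.abs(H + H)   # same all-pass on L and R: mono sum |2H| stays flat
mono_diff = np.abs(H + 1)   # all-pass on one channel only: comb-like dips in the sum

print("identical shift, mono sum ripple:",
      20 * np.log10(mono_same.max() / mono_same.min()), "dB")   # ~0 dB
print("different shift, deepest dip:",
      20 * np.log10(mono_diff.min() / mono_diff.max()), "dB")   # deeply negative
```

With the identical shift the mono sum stays flat; with the shift on one channel only, the sum develops deep comb-like dips, i.e. a real frequency response change.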
 

ferrellms

Active Member
Amazing video! Thanks!

I am curious how this applies to high-end DSP / digital room correction. It seems like Dirac and other high-end DRC products really stress that they use FIR and IIR filters to help with the phase of each speaker. I get how having symmetrical phase in the L and R speakers is important (just wire one of your speakers out of phase to hear the effect). But my understanding is that as long as both speakers have the same phase, it's OK and not worth chasing (which this video also agrees with). So is there any utility to high-end room correction and its focus on phase? Or what am I missing about DRC when they talk about phase correction?
I have a DRC system (Sonarworks) with a switch between "linear phase", which adds about 40 ms of delay, and "optimum phase", which is a compromise between linear phase and no delay (for watching video or tracking while recording). I was convinced by ABS (audiophile BS) that linear phase sounded better and would have put money on it. When I set up an A/B test, I could not distinguish linear phase from zero delay. Don't believe your ears in the context of ABS alone. BTW, the same kind of tests have convinced me that "HD" audio is ABS.
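As an aside, a ~40 ms figure is consistent with basic FIR arithmetic: a linear-phase FIR delays by (N - 1)/2 samples. A back-of-envelope sketch (assumed numbers, not Sonarworks internals):

```python
# The fixed latency of a linear-phase FIR is (N - 1) / 2 samples.
from scipy.signal import firwin2

fs = 48000
taps = 3841                               # hypothetical filter length
# Arbitrary example correction curve: breakpoint frequencies (Hz) and linear gains.
fir = firwin2(taps, [0, 60, 300, 8000, fs / 2],
              [0.5, 1.2, 1.0, 0.9, 0.9], fs=fs)

latency_ms = 1000 * (taps - 1) / 2 / fs
print(f"linear-phase latency: {latency_ms:.1f} ms")  # 40.0 ms
```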

However, the frequency response improvements from the DRC are real and can easily be heard in an A/B comparison.
 

Francis Vaughan

Addicted to Fun and Learning
Well, actually they use delay on the spot mics today so that their signals don't start before the main mic.
It addresses one of my favourite issues in orchestral recording, which is the time delay from front to back in an orchestra.
The distance from the back row (clarinets, bassoons, percussion etc.) to the front exceeds the Haas effect window, causing the ear to lose the sense of an integrated single event. Those members of the orchestra actually play ahead of the beat enough that they are perceived to be in time with the main part of the orchestra. This is a little-known skill orchestral players acquire.
Close miking them without a delay destroys the entire effort and actually introduces a perceivable problem in the musical result.
Which is a bit of a pet hate of mine.
 

mitchco

Addicted to Fun and Learning
Audio Company
While such an LF correction is only feasible for non-video content due to excessive latency, my question is how you prevent audible pre-ringing and how you account for response variations within the listening area?

As mentioned in my post, media player programs like JRiver can account for FIR filter latency while watching video content, so lipsync is a non-issue. The DSP/DRC software programs mentioned in my post use pre-ringing compensation, so that is also a non-issue. As I have shown in various posts at ASR and in gory detail in my DSP book, frequency response (and timing) remains near perfect across a 6 ft x 2 ft area at the LP.
 

markus

Addicted to Fun and Learning
As mentioned in my post, media player programs like JRiver can account for FIR filter latency while watching video content, so lipsync is a non-issue.

Unfortunately that is not a solution if the dominant source of content is streaming.

The DSP/DRC software programs mentioned in my post use pre-ringing compensation, so that is also a non-issue. As I have shown in various posts at ASR and in gory detail in my DSP book, frequency response (and timing) remains near perfect across a 6 ft x 2 ft area at the LP.

Sorry, I don't have your book, so I can't examine your data.
 

Francis Vaughan

Addicted to Fun and Learning
Unfortunately not a solution if the dominant source for content is streaming.
Indeed. Streaming has quickly become the main source of content for most people, and a lot of content is only available from streaming sources. It isn't a trivial problem; so much so that not addressing it really consigns a possible solution to being a historical curiosity rather than anything useful.
 

Bullwinkle J Moose

Active Member
It addresses one of my favourite issues in orchestral recording, which is the time delay from front to back in an orchestra.
The distance from the back row (clarinets, bassoons, percussion etc.) to the front exceeds the Haas effect window, causing the ear to lose the sense of an integrated single event. Those members of the orchestra actually play ahead of the beat enough that they are perceived to be in time with the main part of the orchestra. This is a little-known skill orchestral players acquire.
Close miking them without a delay destroys the entire effort and actually introduces a perceivable problem in the musical result.
Which is a bit of a pet hate of mine.
Sounds like nonsense to me

How many city blocks does that orchestra cover exactly?
 

Longshan

Active Member
Well, actually they use delay on the spot mics today so that their signals don't start before the main mic. The main mic creates the stereo illusion and ambience, the spot mics create the tonal balance and/or are used for highlighting certain instruments in certain passages.


There are a million different ways sound engineers record orchestras, but in all of them, taking care of phase issues plays a part.
 

Longshan

Active Member
It addresses one of my favourite issues in orchestral recording, which is the time delay from front to back in an orchestra.
The distance from the back row (clarinets, bassoons, percussion etc.) to the front exceeds the Haas effect window, causing the ear to lose the sense of an integrated single event. Those members of the orchestra actually play ahead of the beat enough that they are perceived to be in time with the main part of the orchestra. This is a little-known skill orchestral players acquire.
Close miking them without a delay destroys the entire effort and actually introduces a perceivable problem in the musical result.
Which is a bit of a pet hate of mine.

That really has more to do with the nature of certain instruments, particularly wind instruments, which take longer to "start" when we begin to play. That being said, as a horn player, I do anticipate the beat to an extent, especially if my entrance is in the lower register.

Despite our best efforts, orchestras are never really together in our entrances. If you have ever heard an orchestra with crystal-clear entrances where everyone begins on time, you are either listening to a recording that has been "magicked" over to make the orchestra sound perfectly together, or you just aren't listening very carefully.
 

Francis Vaughan

Addicted to Fun and Learning
Sounds like nonsense to me
The Haas effect window is 10 to 40 milliseconds at most for natural sounds, depending on content. That corresponds to roughly 11 to 45 feet in air. No, it isn't nonsense. I have quite a few friends who play professionally in symphony orchestras. This is quite well understood stuff.
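For the arithmetic (speed of sound ≈ 343 m/s at 20 °C):

```python
# Distance travelled by sound in the Haas-effect time window.
for ms in (10, 40):
    meters = 343 * ms / 1000
    print(f"{ms} ms -> {meters:.1f} m ({meters * 3.281:.0f} ft)")
# 10 ms -> 3.4 m (11 ft); 40 ms -> 13.7 m (45 ft)
```

So even the 40 ms upper bound corresponds to under 14 m, a distance the depth of a large orchestra on stage can approach front to back.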
 

Bullwinkle J Moose

Active Member
I’m assuming that the issue under discussion is the change in phase angle itself, independent of any consequent change in FR. Otherwise you can say “I’m hearing the phase change” but you are actually hearing the changed FR.

One has to control for the variables, which your examples do NOT do. In every instance you provide, the FR is changed, and THAT is what is audible.

The usual way to test for phase audibility is to apply an all-pass filter and do a DBT of before/after. Then one won’t make the mistake of hearing a changed FR and mistakenly claiming to hear phase changes.

cheers
Oh Really?

So.....all I hear is a change in frequency response, but not the widening of soundstage, or the combing effect, or the loss of mono content

Well, that is very interesting

Your knowledge intrigues me and I'd like to subscribe to your newsletter
(LoL)
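For reference, the all-pass approach described in the quoted post can be sketched as follows (my own construction; the coefficients are arbitrary examples):

```python
# Create a phase-rotated copy of a signal with an exactly flat magnitude
# response, suitable for a blind ABX comparison of phase audibility.
import numpy as np
from scipy.signal import freqz, lfilter

a1, a2 = -1.6, 0.8                    # stable poles at 0.8 +/- 0.4j
b, a = [a2, a1, 1.0], [1.0, a1, a2]   # second-order all-pass: numerator = reversed denominator

w, H = freqz(b, a, worN=1024)
print("max magnitude deviation (dB):",
      np.max(np.abs(20 * np.log10(np.abs(H)))))   # ~0: phase-only change

x = np.random.randn(48000)            # stand-in for a one-second music excerpt
y = lfilter(b, a, x)                  # same spectrum, rotated phase
```

Because the filtered copy has an exactly flat magnitude response, any positive blind result can only come from the phase change itself.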
 


Francis Vaughan

Addicted to Fun and Learning
If you have ever heard an orchestra with crystal-clear entrances where everyone begins on time, you are either listening to a recording that has been "magicked" over to make the orchestra sound perfectly together, or you just aren't listening very carefully.
Dunno. My local orchestra is pretty good. Very dependent on the conductor; some can get them to really gel. I attend a lot of their concerts. Got home from one earlier tonight.
The point about the time it takes for an instrument to sound is, however, a good one.
 

Longshan

Active Member
Dunno. My local orchestra is pretty good. Very dependent on the conductor; some can get them to really gel. I attend a lot of their concerts. Got home from one earlier tonight.
The point about the time it takes for an instrument to sound is, however, a good one.


Yeah, I play in pretty good orchestras too. But compared to, say, a recording where two synth parts come in at exactly the same time, no orchestra is close.

For an example of a great orchestra with what I'd call more-than-average timing discrepancies in its entrances, listen to pretty much any recording of Karajan with the Berliner Philharmoniker.
 

AndreaT

Addicted to Fun and Learning
A viewer of my videos and forum member suggested that I do a video commenting on a video Paul McGowan did on the audibility of phase shift. Here is Paul's video, which was really about a different question (why we need wideband amplifiers) but turned into a claim that phase shift is an audible problem:

[embedded video]

Here is my answer to him:

[embedded video]

Of course phase is an important electrical and acoustic quantity with relevance in countless situations. It just should not be used to create myths and fear among audiophiles with respect to audibility in the context Paul and others are using.

Another Emperor's New Clothes myth debunked! Excellent video, Amir!
 

GaryH

Major Contributor

Two videos and eight pages discussing the audibility of phase later, and nobody's posted a single blind ABX test result? I'll get things started:

[attached image: ABX test result (test.png)]


Pretty obvious to me, and much more so than any difference in SINAD between two non-broken DACs/amps (i.e. inaudible).
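For anyone scoring their own runs: an ABX result is usually evaluated with a one-sided binomial test. A minimal sketch (the trial counts below are made up, since the numbers in the attached image aren't in the text):

```python
# Probability of getting at least `correct` trials right out of `trials`
# by pure guessing (p = 0.5 per trial).
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(14, 16))   # e.g. 14/16 correct -> p ~ 0.002
```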
 
Top Bottom