
rePhase: phase corrected but can't hear a difference

V-men

Member
Joined
Nov 28, 2023
Messages
6
Likes
0
Hello,

I had already corrected my bumps and minor dips with REW and APO. I have a set of Genelec 8030s on a table and an SVS 1000 Pro; the crossover is digital at 65 Hz, done by the sound card. I learned of rePhase recently.

I only did the mids and highs, as I read this was best, and unwarping the phase at low frequencies seems complicated.
The three pictures: the first is the correction estimate for the speaker (I took a close-range reading at 40 cm and used the paragraphic phase EQ),
the second is the result at my seating position, and
the third was before correction.

I tried toggling the convolver on/off in APO and I can't notice any difference at three listening positions; I asked a friend and they couldn't hear a difference either. Is it because the Genelecs have an active crossover, or should rePhase be used to fine-tune the phase between sub and speakers? I've read in places that phase cannot be heard (other than sound being cancelled and less loud).

Thanks for any insight
 

Attachments

  • Screenshot 2024-01-06 005121.png (66.3 KB)
  • Screenshot 2024-01-06 005212.png (38.6 KB)
  • Screenshot 2024-01-06 005313.png (64.8 KB)

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
3,113
Likes
2,330
Location
Canada
Hello,

I had already corrected my bumps and minor dips with REW and APO. I have a set of Genelec 8030s on a table and an SVS 1000 Pro; the crossover is digital at 65 Hz, done by the sound card. I learned of rePhase recently.

I only did the mids and highs, as I read this was best, and unwarping the phase at low frequencies seems complicated.
The three pictures: the first is the correction estimate for the speaker (I took a close-range reading at 40 cm and used the paragraphic phase EQ),
the second is the result at my seating position, and
the third was before correction.

I tried toggling the convolver on/off in APO and I can't notice any difference at three listening positions; I asked a friend and they couldn't hear a difference either. Is it because the Genelecs have an active crossover, or should rePhase be used to fine-tune the phase between sub and speakers? I've read in places that phase cannot be heard (other than sound being cancelled and less loud).

Thanks for any insight

Few are able to hear an improvement from phase linearization in that area. What do you think you're trying to fix or improve?

The use case for me at home is mostly for my multichannel setups where driver phase profiles do not match due to mixing and matching different speakers -- also, I prefer to keep the group delay lower than if one were to use "normal" minimum phase sub-mains crossovers.
 
OP

V-men

Member
Joined
Nov 28, 2023
Messages
6
Likes
0
Hey, thanks for the response. I was thinking it might improve the imaging or spaciousness of the sound. There are huge tutorials on how to use it, but it seems useful only in edge cases the way you describe it. REW and APO are much easier to use, and with randomized pink noise you can measure a bigger zone in less time and get better results for EQ. So I don't see what the fuss about rePhase is, then?
 

DVDdoug

Major Contributor
Joined
May 27, 2021
Messages
3,033
Likes
3,995
Phase has to be relative to something.

If your midrange and tweeter are out-of-phase at the crossover frequency the soundwaves will cancel. In fact, a high-pass filter (the crossover network) usually introduces a phase-lead, and a low-pass a phase-lag. To correct for that, the polarity ("phase") of the midrange is often reversed in a 3-way speaker, or the tweeter in a 2-way.

Or if you reverse the wiring to one speaker (in a stereo pair) to flip the phase 180 degrees you'll get drastic cancellation of the bass and weird comb filtering at higher frequencies, especially when you move around in the room. It tends to create a "widening" effect.

But if you reverse the connection to both speakers they are back in-phase with each other and everything will sound normal again.
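
A minimal numpy sketch, not from the post above, of the bass-cancellation case: at long wavelengths both speakers' outputs arrive essentially in phase at the listener, so flipping one polarity subtracts them. The 50 Hz tone and equal levels are simplifying assumptions.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                 # 1 second
bass = np.sin(2 * np.pi * 50 * t)      # 50 Hz tone arriving from each speaker

in_phase = bass + bass                 # both speakers wired identically
one_reversed = bass + (-bass)          # one speaker's polarity flipped

print(np.max(np.abs(in_phase)))        # ~2.0 -> the tones reinforce
print(np.max(np.abs(one_reversed)))    # ~0.0 -> the bass cancels at the listener
```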
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,352
Likes
6,866
Location
San Francisco
I was thinking it might improve the imaging or spaciousness of the sound.
It might, but the 8030 is already pretty good so the change won't be drastic enough to hear easily.

Also, the effect I've noticed (if any, hard to be sure) is mostly with transients.

If you want to stress test this, just play impulses / clicks (you can use a really low frequency square wave) and see if you hear any change. That's the most extreme edge case for phase correction.
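
If you'd rather generate such test signals yourself, here is a rough Python sketch; it assumes numpy, scipy and the soundfile package are installed, and the 40 Hz / half-second values and file names are arbitrary choices.

```python
import numpy as np
from scipy.signal import square
import soundfile as sf

fs = 48000
t = np.arange(5 * fs) / fs                 # 5 seconds
sig = 0.5 * square(2 * np.pi * 40 * t)     # 40 Hz square wave at -6 dBFS

clicks = np.zeros(5 * fs, dtype=np.float32)
clicks[:: fs // 2] = 0.5                   # one click every half second

sf.write("square_40hz.wav", sig.astype(np.float32), fs)
sf.write("clicks.wav", clicks, fs)
```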
 

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
3,113
Likes
2,330
Location
Canada
So I don't see what the fuss about rePhase is, then?

It's just another tool... probably more suited to speaker builders/sound-system designers who make extensive use of DSP, e.g. equalizing phase/line arrays or very high-sensitivity horn-loaded compression drivers. And, as already mentioned, when combining multiple different speakers in a multichannel system in order to minimize lobing/cancellations/comb filtering.

Some use rePhase to create interesting improvements/results for 2-channel stereo listening with the addition of a "phase shuffler" filter: https://www.diyaudio.com/community/threads/fixing-the-stereo-phantom-center.277519/

I was thinking it might improve the imaging or spaciousness of the sound.

Not so much for enhanced "imaging"; but for added "spaciousness" or immersion in an otherwise dry recording or album, there are other plugins one can insert in the chain, such as room reverb and spatialization VSTs, if you're using a more flexible processing platform.

---------

For non-time-critical applications where processing latency is not so much of an issue, one can also use it to reduce the group delay:


What follows are summed L+R sweeps (upmixed to 5.1c) measured at three listening positions across the couch:

[attached image]



prior "additional" filtering
1704586501728.png 1704586504950.png 1704586508061.png

rePhase created filter
1704586515455.png

post "additional" filtering
1704586519827.png 1704586523297.png 1704586527798.png


Wavelet spectrograms of the 5.1 channels:

edit: actually, it turns out that I could cut it down even further to 33 ms with some slight modification, using the cosine window and moderate optimization -- but then there is a corresponding decrease in LF resolution control and an increased likelihood of "slippage" and/or ripples -- this can be seen more clearly by importing the convolution filter into REW and looking carefully at the phase/magnitude/GD changes...

[attached spectrograms]
*This is to make sure one is "probably" not adding too much pre-echo ripple -- however, listening tests to confirm are always necessary, IMO.


The audible effect with a quick A/B kick-drum track listening test (performed only a few minutes ago):

Well, the bass response is clearly stronger and louder overall. Perhaps, if one were really extra 'critical', there are some slight hints of pre-echo, but nothing too excessive/unacceptable in this case. Frequency magnitude response remains identical before and after the inverse filter.
 
Last edited:

norman bates

Active Member
Forum Donor
Joined
Sep 29, 2022
Messages
208
Likes
187
Location
Iowa, US
Woodblock?
Keys jingling?
Rain?
Audience clapping?

A long-decay snare drum?
 
OP

V-men

Member
Joined
Nov 28, 2023
Messages
6
Likes
0
Thanks all for your answers. @ernestcarl, how do you reduce group delay with REW or rePhase? I didn't see the option or setting. I'll also try the square wave and get back to you if I notice a difference! Thanks again.
 

tmuikku

Senior Member
Joined
May 27, 2022
Messages
302
Likes
338
Hi, if you listen from too far away, the original harmonics, meaning the phase information, are ruined by room reflections. If you want to hear a difference in phase you must listen close enough to the speakers to have a fighting chance of hearing "phase".

How close is that, you might ask? Close enough for the brain to hear the original harmonics, I'd say! How do I know when my brain can hear the original harmonics, you might then ask? I'd say there is a transition in your perception that marks whether a sound has its phase information intact enough that your brain pays attention to it. This is quite a noticeable difference perceptually, and you can test it just by moving closer to or further from your speakers.

When you are close enough that the direct sound is sufficiently loud in relation to the room sound, your auditory system can separate the "close and important" sound into its own neural stream. This happens when the original harmonics (in other words, the phase) are preserved well enough, which makes the amplitude peaks stick out loudly enough above all the other sound in the room that your auditory system considers it a nearby, important sound. Now your brain will pay involuntary attention to the sound and separate the direct sound into its own neural stream, leaving all the room sound as a background stream, the envelopment. The important point is that this perceptual change defines a point beyond which the phase information is such a mess to your auditory system that it is treated as noise, while closer in it is orderly enough to deserve attention. So, by definition, I would think you cannot perceive the difference phase linearization makes if your auditory system cannot detect "phase" to begin with. It's an on/off phenomenon perceptually.

There is a risk of a very long post so I'll leave it here :D You can find at least my posts on the forum by searching "audible critical distance", or go directly to the source and read David Griesinger's studies on the Limit of Localization Distance, or Auditory Proximity.

I don't know what your listening setup is like, but I could speculate that linearizing phase could move the transition, the LLD, a bit further into the room. It might also have a small spatial effect on the sound, which you can likely hear only if you are closer to the speakers than the transition. Listen for how you perceive the phantom image getting a bit tighter, for example. As I tried to explain already, by definition phase is a mess to your auditory system beyond the transition, and I bet you cannot hear any difference unless you adjust your crossover so dramatically that the frequency response changes enough, or you purposely ruin it with a delay so long that the drivers produce a distinct echo. I'm speculating here, assuming the Griesinger studies are correct and that this is what I'm hearing and not something else. My own DSP experiments with my setup say the same: phase is not particularly audible, and beyond the transition it is not audible at all. Perhaps all there is to the audibility of phase is the existence of the transition; either the brain pays attention to the sound or not.

Thus phase linearization doesn't matter (is not audible) if the room has a chance to mingle it all up before your auditory system can make sense of it, so logically you must be close enough to the speakers to have any chance of hearing a difference from phase linearization.
 
Last edited:
OP

V-men

Member
Joined
Nov 28, 2023
Messages
6
Likes
0
So I've listened to a square wave at 250 Hz at different %, as well as "Bubbles" by Yosi Horikawa. One of the listening positions was close, and I didn't notice a difference with successive on/off switching. So for now I'll leave it at that. But I'm curious about the group delay: I have a peak like ernestcarl's at around 220 Hz, and I wouldn't mind playing with that to see if it improves the quality. Thanks again for all the comments. I can also confirm that I can localise sounds very well with the system at close range and at the listening position ("Bubbles" by Yosi is a superb song to test that). Thanks again for all the insight.
 
Last edited:

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
3,113
Likes
2,330
Location
Canada
So I've listened to a square wave at 250 Hz at different %, as well as "Bubbles" by Yosi Horikawa. One of the listening positions was close, and I didn't notice a difference with successive on/off switching. So for now I'll leave it at that. But I'm curious about the group delay: I have a peak like ernestcarl's at around 220 Hz, and I wouldn't mind playing with that to see if it improves the quality. Thanks again for all the comments. I can also confirm that I can localise sounds very well with the system at close range and at the listening position ("Bubbles" by Yosi is a superb song to test that). Thanks again for all the insight.

I would avoid "correcting" GD peaks above 120-140 Hz in general. We can pull them down more easily below that point, but going any higher increases the chances of causing unwanted audible artifacts such as pre-echo. *Also, some speaker & room systems are more amenable to phase/GD inversion than others -- the state of the room acoustics, for example, as mentioned by @tmuikku.
 
OP

V-men

Member
Joined
Nov 28, 2023
Messages
6
Likes
0
Thanks. I have 3 GD peaks: one at 45 Hz, one at 130 Hz and one at 220 Hz. If you don't mind showing me, or pointing me to a help guide on it? I won't do the 220 Hz one, to avoid pre-echo, and will see whether it sounds worse or better in the end for the 2 other peaks.
Thanks
 

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
3,113
Likes
2,330
Location
Canada
how do you reduce group delay with REW or rePhase?

By inserting an FIR filter (or filters) that inverts some of the exhibited time-domain (excess phase & GD) distortion... Yet how to do it rather depends -- it could go numerous ways. Ideally, the speaker manufacturer would have done it themselves -- for example, Hedd's linearizer plugin (which is now built into the monitors).

If you want to keep all operations within REW, you will have to 1) apply frequency dependent windowing (e.g. FDW 3-15) to the measured response, 2) generate/extract the excess phase version of the IR, 3) invert the excess phase and thereafter convolve it with the original unwindowed response.
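
This isn't REW's internal implementation, just a plain numpy/scipy sketch of those same three steps on a synthetic "measurement" so the mechanics are visible. The 65 Hz high-pass, 3 ms delay, FFT size and the helper name are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs, nfft = 48000, 65536

# Hypothetical stand-in for a measured, frequency-dependent-windowed IR:
# a 65 Hz high-pass plus 3 ms of delay, so it contains some excess (non-minimum) phase.
sos = butter(4, 65, btype="highpass", fs=fs, output="sos")
d = int(0.003 * fs)
h = np.zeros(nfft)
h[d:] = sosfilt(sos, np.r_[1.0, np.zeros(nfft - d - 1)])

def min_phase_spectrum(x, n):
    """Spectrum with the same magnitude as x but minimum phase (cepstral folding)."""
    logmag = np.log(np.maximum(np.abs(np.fft.fft(x, n)), 1e-12))
    cep = np.fft.ifft(logmag).real
    fold = np.zeros(n)
    fold[0], fold[n // 2] = cep[0], cep[n // 2]
    fold[1:n // 2] = 2.0 * cep[1:n // 2]
    return np.exp(np.fft.fft(fold))

H = np.fft.fft(h, nfft)
H_excess = H / min_phase_spectrum(h, nfft)               # step 2: all-pass "excess" part, |H_excess| ~ 1
correction = np.conj(H_excess)                           # step 3: invert the excess phase
fir = np.roll(np.fft.ifft(correction).real, nfft // 2)   # shift the acausal filter so it becomes causal

# Convolving `fir` (e.g. exported as a .wav for the APO convolver) with the system
# leaves the magnitude response untouched and pulls the excess phase/GD towards zero,
# at the cost of nfft/2 samples of added latency from the shift above.
```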

Somewhat more detailed than even what I've done in the prior example (in combination with room EQ) is to measure each speaker/sub at the main listening positions with an acoustic timing reference, vector average the relevant measurements, and attempt to linearize each set with the goal of better matching the final "aligned" magnitudes and phases. I would look at the actual response as well as the "excess" phase and GD curves -- both of which you can export and open/drag into the rePhase window itself.
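
A tiny sketch, not from the post, of what "vector average" means in practice: averaging the complex spectra rather than the dB magnitudes. numpy is assumed; the function name and toy data are illustrative.

```python
import numpy as np

def vector_average(irs, nfft=65536):
    """Complex (vector) average of several impulse responses captured with a shared
    acoustic timing reference -- keeps phase information, unlike a plain dB average."""
    return np.mean([np.fft.rfft(ir, nfft) for ir in irs], axis=0)

# toy usage: three fake "seat" measurements = the same impulse with small extra delays
seats = [np.r_[np.zeros(d), 1.0] for d in (0, 3, 5)]
avg = vector_average(seats)        # complex spectrum; np.abs/np.angle give magnitude & phase
```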


*edit: reduced to 30ms

Vector averaged measurements of the "pre-equalized" subwoofer & center channel speaker across the listening couch:
[attached graphs]


Predicted rePhase filter fit:
[attached graphs]

Actual rePhase filter fit:
[attached graphs]


Quite a number of filters are utilized... However, focusing on the ~125-130 Hz GD peak, I would personally be conservative and try to avoid applying a "compensate" APF Q above 4.


Other graphical views of the aligned sum:
[attached graphs]


Note that when opening rePhase-generated FIR filters within REW for examination, or when performing a test convolution, you will need to expand the IR window and adjust the magnitude level back to zero:
[attached REW screenshots]




Full mdat and rePhase files:
 
Last edited:

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
3,113
Likes
2,330
Location
Canada
If you want to keep all operations within REW, you will have to 1) apply frequency dependent windowing (e.g. FDW 3-15) to the measured response, 2) generate/extract the excess phase version of the IR, 3) invert the excess phase and thereafter convolve it with the original unwindowed response.
[attached screenshots]

Generated excess phase, thereafter exported to a .wav convolution filter:
[attached graphs]

Inversion was performed using REW's simple "Inverse A phase" trace arithmetic function over the FDW excess phase curve. While you can get an incredibly "flat" result this way, there's very limited control (often requiring a large 'sample count' like 64k for export) and the filter often includes rather acute or over-extending corrections. Windowing further, or applying an LF or HF "tail" to reduce the possibility of excessive correction, often causes "slippage" elsewhere.
 

Tim Link

Addicted to Fun and Learning
Forum Donor
Joined
Apr 10, 2020
Messages
773
Likes
660
Location
Eugene, OR
Something to think about with phase in normal two-channel stereo: center-panned sounds typically get comb filtered above about 2 kHz because of interaural crosstalk. Each time there's a dip in the response crossing your ears, the phase at your ear flips 180 degrees. This is in a perfectly anechoic space. I use crosstalk cancellation and I'm not too far from my tweeters, with the first significant reflections arriving about 20 ms later, so I should be able to hear whatever can be heard.
Tonight I was playing with the FIR filter on the tweeter, which goes down to 1200 Hz. Switching it on and off I heard absolutely nothing change; I was wondering if it was even working. I measured from the listening position and saw a nice, flat phase response with it on, curved with it off. So it was working. After linearizing the midrange and then time-aligning it with the tweeter, the difference became more apparent. I think it's a broadband effect, and my perception so far is that it's beautiful. I can't prove it's actually the phase linearization, but I can't get a non-phase-linear setting to sound as good as when I linearize the phase of the drivers. It could be that I get better response at the crossovers because everything sums correctly. It's easier for me to work with, so I get better results. Or maybe it's just my imagination. In any case the software recently became freely available to me and my computer can easily handle the processing, so I'm really glad to have it.
 

tmuikku

Senior Member
Joined
May 27, 2022
Messages
302
Likes
338
Hi, my current understanding of what you describe is something like this:
The harmonics of a vocal, for example, span from the root in the low mids all the way up to the tweeter's bandwidth. What phase linearization (from the fundamental all the way up to many kHz) can do is maintain better "dynamics": when all the harmonics line up every fundamental cycle as they should, a huge amplitude peak superimposes, which ought to give a more dynamic sound but also help with localization and things like that. I think this is fundamental to tricking the auditory system into presenting a natural sound to your perception :) Basically it's the Griesinger stuff I've been vocal about on various threads.
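
A small numpy sketch, not part of the post, of that "harmonics lining up" point: the first ten harmonics of a 200 Hz fundamental summed with aligned phases versus randomized phases. The numbers are arbitrary.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
f0 = 200                                           # pretend vocal fundamental
rng = np.random.default_rng(0)

aligned   = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, 11))
scrambled = sum(np.sin(2 * np.pi * k * f0 * t + rng.uniform(0, 2 * np.pi))
                for k in range(1, 11))

def crest_db(x):                                   # peak-to-RMS ratio in dB
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2)))

# Same magnitude spectrum in both cases; aligned phases typically give a clearly
# higher crest factor, i.e. the tall amplitude peaks described above.
print(crest_db(aligned), crest_db(scrambled))
```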

Conversely, phase linearization doesn't matter too much if the room sound is too loud, or some other issues dominate, like having a normal stereo setup and not being equidistant from both speakers, loud edge diffraction, loud early reflections and so on, some of which are frequency dependent while others are more broadband phenomena. Regardless, all the extra sounds contribute to "noise", and I think phase linearization is just one way to reduce that noise a little, or better yet keep the original sound above the noise; but so are edge roundovers, non-resonant boxes, acoustic treatment and positioning for early reflections, better drivers, room acoustics and so on :) Get all of these right and you've got yourself a problem-free playback system, which ought to deliver what's printed on the recording in a way that sounds "natural". Help the auditory system receive what's on the recording.
 

OCA

Addicted to Fun and Learning
Forum Donor
Joined
Feb 2, 2020
Messages
679
Likes
499
Location
Germany
Hello,

I had already corrected my bumps and minor dips with REW and APO. I have a set of Genelec 8030s on a table and an SVS 1000 Pro; the crossover is digital at 65 Hz, done by the sound card. I learned of rePhase recently.

I only did the mids and highs, as I read this was best, and unwarping the phase at low frequencies seems complicated.
The three pictures: the first is the correction estimate for the speaker (I took a close-range reading at 40 cm and used the paragraphic phase EQ),
the second is the result at my seating position, and
the third was before correction.

I tried toggling the convolver on/off in APO and I can't notice any difference at three listening positions; I asked a friend and they couldn't hear a difference either. Is it because the Genelecs have an active crossover, or should rePhase be used to fine-tune the phase between sub and speakers? I've read in places that phase cannot be heard (other than sound being cancelled and less loud).

Thanks for any insight
I see at least 4 full rotations in the bass frequencies in your graphs, which points to around 40 ms of delay (off the top of my head) of the subwoofer relative to the other drivers. This will introduce group delay in that region and is quite audible as a muddy sound, with the bass waves trailing behind the mids and highs because they are delayed. A bit of delay in this area is normal, since the woofer has more inertia to overcome, but you should minimize it as much as possible without introducing artifacts like pre-echo.
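
The arithmetic behind that rough figure, as a hedged sketch (the ~100 Hz span is an assumption, not stated in the post):

```python
# Back-of-the-envelope version of the estimate above:
# group delay ~= cycles of phase rotation / bandwidth over which they accumulate.
rotations = 4                  # full 360-degree wraps seen in the bass region
bandwidth_hz = 100.0           # assume they accumulate across roughly 100 Hz
delay_ms = rotations / bandwidth_hz * 1000
print(delay_ms)                # -> 40.0 ms of subwoofer delay, matching the rough figure
```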

You can use normal or time-reversed ("compensate" mode) minimum-phase all-pass filters instead of rePhase's phase linearization tools. Higher-precision corrections can be achieved with those, and if you can keep the Q values of 2nd-order all-pass filters at sqrt(2)/2 = 0.7071... they will be very stable.
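
A sketch of what such an all-pass looks like in code, assuming Python/scipy and the common RBJ biquad formulation; the 130 Hz centre and FIR length are illustrative, and this is not OCA's actual tooling.

```python
import numpy as np
from scipy.signal import lfilter

fs = 48000

def allpass2(f0, q, fs):
    """2nd-order all-pass biquad (RBJ cookbook form): flat magnitude,
    group delay concentrated around f0."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 - alpha, -2 * np.cos(w0), 1 + alpha])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return b / a[0], a / a[0]

b, a = allpass2(130.0, 0.7071, fs)            # the "very stable" Q = sqrt(2)/2

# Normal mode: run the IIR as-is, which *adds* group delay around 130 Hz.
normal = lfilter(b, a, np.r_[1.0, np.zeros(8191)])

# "Compensate" (time-reversed) mode: bake the all-pass into an FIR and flip it,
# so the same group delay is *subtracted* instead. The flipped filter is acausal,
# so it must run inside a latency-adding convolver and can ring before the event
# (the pre-echo risk discussed earlier in the thread).
compensate_fir = lfilter(b, a, np.r_[1.0, np.zeros(8191)])[::-1]
```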
 

Dumdum

Senior Member
Joined
Dec 13, 2019
Messages
339
Likes
222
Location
Nottinghamshire, UK
Phase has to be relative to something.

If your midrange and tweeter are out-of-phase at the crossover frequency the soundwaves will cancel. In fact, a high-pass filter (the crossover network) usually introduces a phase-lead, and a low-pass a phase-lag. To correct for that, the polarity ("phase") of the midrange is often reversed in a 3-way speaker, or the tweeter in a 2-way.

Or if you reverse the wiring to one speaker (in a stereo pair) to flip the phase 180 degrees you'll get drastic cancellation of the bass and weird comb filtering at higher frequencies, especially when you move around in the room. It tends to create a "widening" effect.

But if you reverse the connection to both speakers they are back in-phase with each other and everything will sound normal again.
Only a 12 dB or 36 dB per octave filter results in a polarity flip being required… 24 dB and 48 dB don't require a flip… 6/18/30 dB slopes put the drivers 90 degrees out, so both polarities are equally out.
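
A quick scipy check of that rule of thumb, using Butterworth prototypes at an arbitrary 2 kHz crossover as a stand-in; real crossovers (e.g. Linkwitz-Riley) differ in detail, but the phase offsets at the crossover frequency follow the same pattern.

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs, fc = 48000, 2000                       # example crossover frequency

for order in (1, 2, 3, 4, 5, 6, 8):        # 6, 12, 18, 24, 30, 36, 48 dB/oct
    lp = butter(order, fc, btype="low",  fs=fs, output="sos")
    hp = butter(order, fc, btype="high", fs=fs, output="sos")
    _, Hlp = sosfreqz(lp, worN=[fc], fs=fs)
    _, Hhp = sosfreqz(hp, worN=[fc], fs=fs)
    diff = int(round(np.degrees(np.angle(Hhp[0] / Hlp[0])))) % 360
    print(f"{order * 6:>2} dB/oct: HP-LP offset at fc = {diff:3d} deg")

# 12 & 36 dB/oct land at 180 deg (flip one driver), 24 & 48 dB/oct at 0 deg,
# and the odd slopes at 90 or 270 deg -- matching the rule of thumb above.
```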
 

Tim Link

Addicted to Fun and Learning
Forum Donor
Joined
Apr 10, 2020
Messages
773
Likes
660
Location
Eugene, OR
Hi, my current understanding of what you describe is something like this:
The harmonics of a vocal, for example, span from the root in the low mids all the way up to the tweeter's bandwidth. What phase linearization (from the fundamental all the way up to many kHz) can do is maintain better "dynamics": when all the harmonics line up every fundamental cycle as they should, a huge amplitude peak superimposes, which ought to give a more dynamic sound but also help with localization and things like that. I think this is fundamental to tricking the auditory system into presenting a natural sound to your perception :) Basically it's the Griesinger stuff I've been vocal about on various threads.
My perception is that it does help with localization and clarity. What it does with perceived dynamics is more complicated. A changing phase response can perhaps reduce some natural dynamics while introducing some unnatural ones. My overall perception is that linear phase sounds smoother, and sometimes less punchy and snappy than when I'm just going for a minimum-phase response, which provides slightly weird but sometimes captivating dynamics. The linear phase comes across as more relaxed and natural. Listening to Ali Farka "The Source" I hear percussion sounds that seem just right in linear phase, and it's very pleasing. They clack and click in a realistic way, not necessarily in the most dramatic way. It's like there's a proper evenness to the dynamics of everything in the recording. Again, I can't say with any confidence that this is really because of the timing being so accurate. It may be that the response is just overall smoother because the drivers merge better at the crossover. If I close-mic each driver, the response is nearly identical with linear or minimum phase. At the listening position there are differences; some of it looks a bit ugly either way, with some stand-out dips and peaks here and there. It's usually a step backwards if I try to smooth those out at the listening position, as they involve other reflections in the room. So it's hard to know what it's actually supposed to look like. My ear and brain are giving me the impression that the linear phase is working better.

Conversely, phase linearization doesn't matter too much if the room sound is too loud, or some other issues dominate, like having a normal stereo setup and not being equidistant from both speakers, loud edge diffraction, loud early reflections and so on, some of which are frequency dependent while others are more broadband phenomena. Regardless, all the extra sounds contribute to "noise", and I think phase linearization is just one way to reduce that noise a little, or better yet keep the original sound above the noise; but so are edge roundovers, non-resonant boxes, acoustic treatment and positioning for early reflections, better drivers, room acoustics and so on :) Get all of these right and you've got yourself a problem-free playback system, which ought to deliver what's printed on the recording in a way that sounds "natural". Help the auditory system receive what's on the recording.
I'm inclined to agree with you.
 