Hi,
For what it's worth, I have successfully ABXed a phase distortion of about the same amplitude as that of my room at low frequencies (score 16/16).
My room has two peaks in its frequency response: +18 dB at 55 Hz and +10 dB at 70 Hz. The group delays (including that of the speakers) are +100 ms at 55 Hz and +67 ms at 70 Hz. I use a MiniDSP 2x4 with parametric filters to correct them.
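(For readers who want to experiment: this kind of parametric correction is easy to sketch in Python with SciPy, using the RBJ audio-EQ-cookbook formulas. The Q values and sample rate below are my own assumptions for illustration, not the actual settings of my MiniDSP.)

```python
import numpy as np
from scipy import signal

def peaking_eq(f0, gain_db, q, fs):
    """Minimum-phase parametric (peaking) filter, RBJ cookbook formulas.
    Returns normalized biquad coefficients (b, a)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
# Cut filters mirroring the two room peaks (the Q values are made up):
b1, a1 = peaking_eq(55, -18.0, 5.0, fs)
b2, a2 = peaking_eq(70, -10.0, 5.0, fs)

# Verify the depth of the first cut at its centre frequency:
_, h = signal.freqz(b1, a1, worN=[55.0], fs=fs)
print(20 * np.log10(abs(h[0])))  # ≈ -18.0 dB
```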
Using minimum-phase filters in Rephase, I simulated a fake frequency response that looks more or less like that of my system measured at the listening position (REW tells me that at low frequencies, my system is mostly minimum phase). Then I applied this EQ to an audio sample.
Then I created a correction of this frequency response by inverting all the filters in Rephase. This time I created two versions: a minimum-phase one and a linear-phase one.
I applied these two corrections to the EQ'ed file. The result is two files that roughly simulate what happens when a room correction is made with a linear-phase EQ and with a minimum-phase EQ (the latter is identical to the original file, since a minimum-phase correction cancels both the magnitude and the phase of the simulated minimum-phase response, while a linear-phase correction cancels only the magnitude).
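(The way a linear-phase version of the same correction can be derived can be sketched as follows: sample the magnitude of the minimum-phase correction, discard its phase, and build a symmetric impulse response. This is only my own illustration, not Rephase's actual algorithm; the filter settings and FIR length are assumptions.)

```python
import numpy as np
from scipy import signal

fs = 48000
n = 2 ** 16  # FIR length; must be long enough to resolve the bass filters

# A hypothetical minimum-phase correction: -18 dB cut at 55 Hz, Q = 5
# (RBJ cookbook peaking filter)
A = 10 ** (-18 / 40)
w0 = 2 * np.pi * 55 / fs
alpha = np.sin(w0) / (2 * 5.0)
b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])

# Sample its magnitude on the rFFT grid, discarding the phase...
freqs = np.fft.rfftfreq(n, 1 / fs)
_, H = signal.freqz(b, a, worN=freqs, fs=fs)
# ...and build a symmetric, hence linear-phase, impulse response
h_lin = np.roll(np.fft.irfft(np.abs(H), n), n // 2)

# h_lin has the same magnitude response as the minimum-phase biquad,
# but every frequency is now delayed by n/2 samples, and the filter
# rings *before* as well as after each transient (pre-ringing).
```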
The time smearing in the linear-phase corrected file is easy to hear: ABX 16/16.
The group delay in the audio file was 74 ms at 50 Hz, as seen in the waveform (the music featured a 50 Hz percussion hit that revealed this delay, but nothing at 55 Hz, where the delay would have been at its maximum).
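(The order of magnitude of such group delays is easy to check numerically: scipy.signal.group_delay returns the delay of a filter in samples. A minimal sketch, again with a made-up Q, not the actual response of my room:)

```python
import numpy as np
from scipy import signal

fs = 48000
# Hypothetical +18 dB room peak at 55 Hz (minimum phase, RBJ peaking, Q = 5)
A = 10 ** (18 / 40)
w0 = 2 * np.pi * 55 / fs
alpha = np.sin(w0) / (2 * 5.0)
b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]

freqs = np.array([40.0, 50.0, 55.0, 70.0, 500.0])
_, gd = signal.group_delay((b, a), w=freqs, fs=fs)
print(gd / fs * 1000)  # group delay in milliseconds, largest near 55 Hz
```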
The interesting part is the big consequence for digital room correction: low-frequency peaks in the room's response must be corrected with minimum-phase filters, not linear-phase ones.
After the simulation, I went further and created a linear-phase version of my room correction, with a frequency response identical to the one I use in my MiniDSP (my correction features 16 parametric filters and 2 low-shelf filters, all between 35 and 650 Hz).
So I could really compare, on my speakers, the same room correction with minimum-phase filters (in the MiniDSP) and with a linear-phase version of the exact same filters (in Foobar2000's convolver).
The difference is audible, but curiously, the problem sounds like time smearing rather than group delay.
If you are interested, I recorded the results with the UMIK-1 in a video. Please forgive the bad sound quality: the microphone is mono, has a lot of background noise, and the recording was made from the listening position, which makes it sound very "hollow". Although the microphone captures a faithful frequency response, it sounds much worse than in reality (as you know if you have ever tried to record the sound of your own system from the listening position).
The video was aimed at a French discussion forum and features a lot of explanations in French.
https://drive.google.com/file/d/1kyUQmkIAALCHbvVYmiZLcSVmazEn2F8s/view?usp=sharing
The interesting parts are:
Sample 1
9:14 - original
9:48 - from the listening position, no EQ
10:13 - from the listening position, minimum-phase EQ
10:54 - from the listening position, linear-phase EQ
Sample 2
11:28 - original
11:53 - from the listening position, no EQ
12:15 - from the listening position, minimum-phase EQ
12:46 - from the listening position, linear-phase EQ
In these recordings, the time smearing is only barely audible in the linear-phase versions.
For those who watched the video from the beginning: the first example illustrates a huge apparent group delay. But it turns out that this is largely due to an amplitude artifact that distorts the envelope of musical sounds, as I show on the sonogram. The room's group delay should be audible there too, but it is difficult to isolate from this artifact.