Taking the non-minimum phase story to the next level, the underlying issue is that the mic and analyzer are showing what looks like a problem - the acoustical interference of the direct sound and a reflection. However, because the direct and reflected sounds arrive from different directions, which the mic cannot recognize, it is ignoring the reality that two ears and a brain can recognize the difference. What looks like possible audible coloration is interpreted by humans as simple, innocuous spaciousness - a.k.a. a room. Humans have considerable abilities to separate the timbral identities of a sound source from those added by a room. It happens in all live, unamplified music performances and all conversations.
We have no measurement apparatus that performs like a living binaural hearing system.
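The point about what the mic "sees" can be illustrated with a minimal numpy sketch (hypothetical values: one reflection arriving 2 ms after the direct sound, 6 dB down). A single omnidirectional mic simply sums the two arrivals, so the measured magnitude response shows comb filtering - dips near odd multiples of 250 Hz for a 2 ms delay - even though a listener in the room may perceive the same reflection as spaciousness rather than coloration:

```python
import numpy as np

fs = 48000                           # sample rate (Hz)
n = 4096
delay_ms = 2.0                       # hypothetical reflection delay
delay = int(fs * delay_ms / 1000)    # 96 samples

# Impulse response as seen by a single omnidirectional mic:
# direct sound plus one attenuated reflection, summed blindly
h = np.zeros(n)
h[0] = 1.0       # direct sound
h[delay] = 0.5   # reflection at 2 ms, -6 dB

# The magnitude response shows comb filtering: for a 2 ms delay,
# dips fall near odd multiples of 1/(2 * 2 ms) = 250 Hz
H = np.fft.rfft(h)
freqs = np.fft.rfftfreq(n, 1 / fs)
mag_db = 20 * np.log10(np.abs(H))

dip = mag_db[np.argmin(np.abs(freqs - 250))]    # near a dip (about -6 dB)
peak = mag_db[np.argmin(np.abs(freqs - 500))]   # near a peak (about +3.5 dB)
```

The mic reports roughly 9-10 dB of ripple here; two ears and a brain, receiving the two arrivals from different directions, do not hear it that way.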
@Floyd Toole , thanks for taking part here. Your presence, contributions and guidance are much appreciated.
You wrote that «because the direct and reflected sounds arrive from different directions, which the mic cannot recognize, it is ignoring the reality that two ears and a brain can recognize the difference». You mention this point numerous times in your book («two ears and a brain») and in your 30-page 2015 JAES article. Interestingly, the DRC (digital room correction) authors are following a very different research paradigm. This is what the author of Audiolense* wrote just recently (quoted below; 99 percent Google translation, but I think the message is quite understandable):
«I have already argued that there is no fundamental distinction between speaker correction and room correction, but I can repeat it here: When correcting the speaker and then placing it in the room, the speaker correction will also change the room contribution. It is exactly the same as when you correct the sound after the speaker is in place».
«Some skeptics are too concerned about the distinction between speaker correction and room correction. It is a starting point that leads the discussion into a dead end; what could be discussions of shades of differences, potential improvements and challenges of various kinds is reduced to a dogmatic rerun of the basic battle: the Earth is flat - no, it is round.
A correction based on an echo-free (anechoic) measurement has exactly the same effect on reflected sound as a correction based on a measurement in the listening position. The only fundamental difference is that you do not have control over the resulting sound. It seems that some do not get this into their heads, even though it is quite straightforward to understand. It is believed that reflected sound is fundamentally different from other sound. Or they do not realize that reflections mix with direct sound that is a few milliseconds newer, in a way that makes the two create the sound together - and at the same time. Perhaps one thinks that the reflections do not interfere with the direct sound. No matter what you think and believe here: in both cases, the signal entering the loudspeaker is changed, and in both cases this has consequences for both direct and reflected sound.
The frequency correction in Audiolense has been almost hassle-free for over 10 years, both above and below the Schroeder frequency. I think the same can be said of the one in DRC and Acourate. I have views on how some of the competitors do things, but those are small matters compared to the fundamental and often uninformed objections that are raised. This works and it has worked for a long time. There is also a lot of research related to full-range correction that shows that it works.
Relatively advanced FIR correction has now become widely used. Trinnov has algorithms that project phantom speakers in line with different multichannel standards; Google is working on audio correction that adapts as the listener moves around; and there is ongoing experimentation and research on new bass solutions that use multiple subwoofers to control the entire sound field in the room: DBA, CABS, MIMO and others. And then we have an audio-interested crowd that calls for evidence that the Earth is round».
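The core technical claim in the quote - that pre-filtering the signal affects direct and reflected sound alike - is, at bottom, the linearity of LTI systems. A short sketch (with arbitrary, purely illustrative impulse responses) demonstrates it: convolving the correction with the input changes every propagation path by the same filter, so "speaker correction" and "room correction" act on the reflections either way.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical impulse responses (arbitrary illustrative values)
direct = np.zeros(64); direct[0] = 1.0          # direct path to the mic
reflection = np.zeros(64); reflection[20] = 0.5  # one delayed reflection
room = direct + reflection                       # total path = sum of paths

# Some FIR correction filter (contents irrelevant to the argument)
correction = rng.standard_normal(32) * 0.1
correction[0] = 1.0

signal = rng.standard_normal(256)

# Correcting the signal before it reaches the speaker...
pre = np.convolve(np.convolve(signal, correction), room)

# ...alters the direct and reflected contributions by the same filter,
# because convolution distributes over the sum of paths:
post = (np.convolve(np.convolve(signal, correction), direct)
        + np.convolve(np.convolve(signal, correction), reflection))

assert np.allclose(pre, post)  # linearity: both paths are filtered alike
```

Note that this only shows *that* both paths are altered identically at the source - it says nothing about whether a correction derived from a single-mic in-room measurement is perceptually right, which is exactly where the «two ears and a brain» objection comes in.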
So it seems like the DRC experts are of the opinion that the content of the Toole 2015 JAES paper is of flat-earth quality, and the DRC experts seem to think they are the ones driving audio science forward.
It is this gap between, for example, Toole and the DRC experts - as if they belonged to different tribes - that I find interesting. We have two different research programs clashing here, don't we?
Though I have a balanced view on DRC myself (I use DRC), the following words popularized by Carl Sagan come to mind:
«Extraordinary claims require extraordinary evidence».
It is the DRC crowd that claims a new leading role in audio science, so it would be natural that the contenders come up with the new evidence, right? Interestingly, the DRC experts use mathematics and microphone measurements as evidence, or «observations» as Freeman Dyson called it. Why haven't we seen more than one - 1 - competently managed blind test on DRC in the 20-plus years during which the DRC models have been improved upon?
*I asked the Audiolense author to join this discussion on ASR but he declined («not interested») and wanted to stay on the other forum, where the «tribe» seems to be more in favour of his thoughts. So there seems to be a «tribe» factor in this fascinating debate.