At the risk of being accused of boosting sales of my recent book: all of this is covered there in some detail, with measurements and references. BTW, the book is not a "profit center" for me - it is a technical book.
The short story is that steady-state in-room measurements reveal nothing reliable about the performance of the loudspeaker, which is the essential starting point for understanding potential sound quality. It is essential to have comprehensive anechoic data on the loudspeaker in order to interpret room curves. If you have such anechoic data in the right format, it is possible to predict the steady-state room curve above about 500 Hz in normally reflective rooms. It is also possible to calculate a prediction of subjective ratings in double-blind tests. If one has access to such data, choosing a neutral-sounding loudspeaker is easy. Without it, it is a crapshoot.
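To make the prediction idea concrete, here is a minimal sketch of one common way such an estimate is formed: an energy-weighted blend of the listening-window, early-reflections, and sound-power curves from a spinorama data set. The 12/44/44 weights follow the commonly cited CTA-2034 "estimated in-room response"; treat the weights and the toy input curves below as illustrative assumptions, not measurements of any real loudspeaker.

```python
import numpy as np

def estimated_in_room_response(lw_db, er_db, sp_db):
    """Energy-average three anechoic curves (all in dB) into a room-curve estimate."""
    curves = np.array([lw_db, er_db, sp_db])
    weights = np.array([0.12, 0.44, 0.44])[:, None]  # LW, ER, SP weights (CTA-2034 style)
    power = np.sum(weights * 10 ** (curves / 10), axis=0)  # dB -> power, then blend
    return 10 * np.log10(power)                            # back to dB

# Toy curves: flat listening window, gently tilted off-axis behavior.
freqs = np.array([500.0, 1000.0, 2000.0, 4000.0, 8000.0])
lw = np.zeros_like(freqs)            # flat on-axis family
er = -1.0 * np.log2(freqs / 500)     # about -1 dB/octave early reflections
sp = -2.0 * np.log2(freqs / 500)     # about -2 dB/octave sound power
pir = estimated_in_room_response(lw, er, sp)
print(np.round(pir, 2))
```

With a well-behaved (smoothly directional) loudspeaker, the estimate comes out as a gently downward-tilted curve - which is why a tilted, not flat, steady-state room curve is the expected result from a neutral loudspeaker.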
So, if one has a known neutral loudspeaker, what does "room EQ" bring to the party? Above about 500 Hz, very little that is reliable - mostly general spectral trends, not detailed irregularities, for reasons mentioned in my last post. At low frequencies equalization is almost certainly beneficial, and easily measured steady-state data are all that is necessary. The most important curve is the one measured where you are listening, not one averaged over the room or over a multiple-seat listening area. The latter obviously represents an average, not what is truly heard at any single seat. It is popular because it makes the curves look so much better. One can have an audience EQ and a personal EQ in some flexible systems.
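A toy numerical illustration of why a spatial average flatters the data: if one seat sits in a deep modal null, energy-averaging it with other seats makes the curve look nearly flat even though that listener hears a large hole. All numbers below are invented for illustration.

```python
import numpy as np

# Rows: three seats; columns: levels in dB at 40, 60, 80 Hz (illustrative values).
# Seat 1 sits in a deep 60 Hz null; the others do not.
seats_db = np.array([
    [0.0, -20.0, 0.0],
    [0.0,  -2.0, 0.0],
    [0.0,   3.0, 0.0],
])

# Energy (power) average across seats, as a spatial-averaging scheme might do.
avg_db = 10 * np.log10(np.mean(10 ** (seats_db / 10), axis=0))
print(np.round(avg_db, 2))
```

The averaged curve shows well under 1 dB of deviation at 60 Hz, while the listener in seat 1 actually hears a 20 dB null - the average describes no seat in particular.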
A free measurement system like REW is excellent. The next step is to find prominent spectral peaks below about 500 Hz and attenuate them using a parametric equalizer, another relatively simple task if one has access to DSP in the signal path. Avoid filling narrow dips. They are not as audible as they are visible - humans respond readily to excessive sound at specific frequencies (resonances) but largely ignore narrow dips, which are an absence or deficiency of sound. The major commercial algorithms differ mainly in how they decide which peaks to attenuate and which dips to fill. Doing it manually allows one to decide by ear which actions are the most beneficial. Often only minor intervention is necessary.
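As a sketch of what "attenuate a peak with a parametric equalizer" means in DSP terms, here is one standard way to realize a single parametric cut: a peaking biquad with coefficients from the well-known RBJ Audio EQ Cookbook. The 45 Hz / -6 dB / Q = 5 values are made-up illustrative settings, standing in for a peak one might have identified in an REW measurement.

```python
import numpy as np
from scipy.signal import freqz

def peaking_biquad(f0, gain_db, q, fs):
    """Peaking-EQ biquad coefficients (RBJ Audio EQ Cookbook), normalized so a0 = 1."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
# Suppose the measurement showed a +6 dB room-mode peak at 45 Hz: apply a narrow cut.
b, a = peaking_biquad(f0=45.0, gain_db=-6.0, q=5.0, fs=fs)

# Inspect the filter response at the center frequency and far away from it.
w, h = freqz(b, a, worN=np.array([45.0, 1000.0]), fs=fs)
mags = 20 * np.log10(np.abs(h))
print(np.round(mags, 2))
```

The filter gives exactly the requested -6 dB at 45 Hz and is essentially transparent (near 0 dB) at 1 kHz, which is the point of a narrow parametric cut: it touches the resonance and nothing else.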
When I see extremely flat and smooth high-resolution, full-bandwidth room curves, it is an indication that some things were done that probably should not have been done.
I have one of those all-singing-dancing-highly-advertised-elaborately-mathematical processors. It took manual intervention to restore the inherent excellence of my neutral loudspeakers after "room EQ". This is a work in progress. One definitely needs mathematics and DSP skills, but one also needs the acoustical and psychoacoustic knowledge to provide the necessary guidance and discipline. In some of the systems it is evident that the latter elements are deficient. The profit motive is obvious though. Note that most of the room EQ algorithms come from companies that do not make loudspeakers.