
Understanding the State of the Art of Digital Room Correction

But, one smoothed measurement isn't nearly as bad as it's made out to be, assuming it's done right, and properly windowed at different frequencies (no using ONE window, nope).
Yes, a frequency dependent window is being used, as is psychoacoustic filtering. This is derived from your excellent presentation, "Acoustic and Psychoacoustic Issues in Room Correction." I encourage folks to download the presentation and take in the first 31 slides. It explains that at low frequencies we want to correct for both the loudspeaker and the room, and at high frequencies for the first-arrival timbre. This is also explained in detail in my video starting here. I revisit it when I start using DSP tools that have these capabilities here.
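To make the windowing idea concrete, here is a minimal Python sketch of a frequency dependent window. This is my own illustration of the general technique, not the actual code of any tool mentioned; the 6-cycle window length and the toy impulse response are assumptions for demonstration:

```python
# Sketch: each analysis frequency sees only n_cycles of the impulse response,
# so low frequencies include the room, high frequencies mostly the direct sound.
import numpy as np

def fdw_magnitude(ir, fs, freqs, n_cycles=6):
    """Magnitude response where each frequency only 'sees' n_cycles of the IR."""
    mags = []
    for f in freqs:
        n = min(len(ir), int(round(n_cycles * fs / f)))   # window length in samples
        w = np.hanning(2 * n)[n:]                         # right half of a Hann taper
        seg = ir[:n] * w                                  # later reflections fall outside
        t = np.arange(n) / fs
        mags.append(np.abs(np.sum(seg * np.exp(-2j * np.pi * f * t))))  # single-bin DFT
    return 20 * np.log10(np.maximum(mags, 1e-12))

fs = 48000
ir = np.zeros(fs); ir[0] = 1.0; ir[int(0.01 * fs)] = 0.5   # toy IR: direct sound + one reflection
freqs = np.logspace(np.log10(20), np.log10(20000), 200)
resp = fdw_magnitude(ir, fs, freqs)   # lows include the reflection, highs reject it
```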

I also cover the CTA-2034 here and show the JBL M2 spins. But in the case of DRC, that is not the end goal. The end goal is a smooth frequency response arriving at our ears at the listening position; technically, the ideal minimum phase response arriving at our ears, as I explain in detail.

To verify the end result, we can drop a mic at the listening position and take one or more verification measurements with REW's default window of 500 ms, so we are letting all of the reflections in, on purpose! That way we can see how well the "room eq" is working, along with how well the loudspeakers' direct sound has been linearized above a certain frequency. That starts here.
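For illustration, here is a minimal sketch of what that long verification window does versus a short one, assuming an impulse response `ir` that starts at the direct-sound arrival. The 500 ms value is from the text above; the 5 ms short window is an arbitrary contrast:

```python
# Sketch: the same IR viewed through a 500 ms window (room included)
# and a 5 ms window (mostly direct sound).
import numpy as np

def windowed_response(ir, fs, window_s):
    n = min(len(ir), int(window_s * fs))
    seg = ir[:n] * np.hanning(2 * n)[n:]          # right-half Hann taper
    spec = np.fft.rfft(seg, n=len(ir))            # zero-pad to keep bin spacing
    freqs = np.fft.rfftfreq(len(ir), 1 / fs)
    return freqs, 20 * np.log10(np.abs(spec) + 1e-12)

fs = 48000
ir = np.zeros(fs); ir[0] = 1.0; ir[int(0.008 * fs)] = -0.4   # toy IR with one reflection
f, full = windowed_response(ir, fs, 0.500)    # speaker + room, reflections included
f, direct = windowed_response(ir, fs, 0.005)  # mostly the direct sound
```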

The reason the video is close to 2 hrs long is that this is complicated. It takes time to wrap one's head around frequency dependent windowing, psychoacoustics, room acoustics, our ears' non-linear frequency response versus SPL, minimum phase systems, non-minimum phase behaviour in both loudspeakers and room acoustics, echoic memory, FIR filtering, convolution, etc. I tried to build the video on concepts from the ground up, through using the DSP tools, to listening to the results (because in the end, it is all about how good it sounds), to the verification measurements.
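Since FIR filtering and convolution come up in that list, here is the one-line version of that final step as a sketch. The identity `correction_fir` is just a placeholder for whatever filter the DSP tool actually designs:

```python
# Sketch: applying a correction FIR to audio by convolution.
import numpy as np
from scipy.signal import fftconvolve

fs = 48000
music = np.random.randn(fs * 2)                              # stand-in for 2 s of audio
correction_fir = np.zeros(8192); correction_fir[0] = 1.0     # placeholder: identity filter
corrected = fftconvolve(music, correction_fir)[:len(music)]  # what a convolver does
```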

So if there is something that folks don't understand or take issue with, then reference that particular part of the video and I will try my best to answer.
 
Day job?
 
@AudioJester and @fluid, please feel absolutely free to do as you like. My experience is that from the day I went back to basics (sticking to correcting only the modal region based on steady state measurements, forgetting about messing with the phase response, etc.) I have been very happy, switching between the 2 sets of EQ (one not shaving off the 39 Hz peak, the other tailoring the bass response after Toole's idealised steady state curve), exported from REW and imported as txt into HQP. My avatar shows my in-room responses for the 2 sets up to 500 Hz; from there it's flat to 2 kHz, then gently rolls off by about -7 dB at 18 kHz. The attached step response is for my active speakers in my room, prior to EQ.

Just had my body shaken (I originally wrote "checked", but that's OK somehow too!) by Bitches Brew peaking at 96 dB(C) at the listening position, even though the EQ set in use was the flatter one, the bass rolls off below 65 Hz or so, and the in-room response (MFSL SACD) is about flat up to 2 kHz; it's that good. Actually, I was thinking about starting a thread that adds SPL metering and in-room measurements to the listening impressions of tracks that appear in reviews, e.g. in Stereophile. But for the time being I'd rather enjoy the music; how about Schubert?
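For anyone curious what "correcting only the modal region" looks like numerically, here is a sketch of a single parametric cut using the standard RBJ audio EQ cookbook peaking filter. The 39 Hz / -6 dB / Q = 5 values are illustrative only, not my actual settings:

```python
# Sketch: one peaking-EQ biquad (RBJ cookbook) to tame a modal peak.
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q):
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A]
    return np.array(b) / a[0], np.array(a) / a[0]

fs = 48000
b, a = peaking_biquad(fs, f0=39.0, gain_db=-6.0, q=5.0)  # cut the modal peak
x = np.random.randn(fs)          # 1 s of noise as a stand-in signal
y = lfilter(b, a, x)             # apply the cut
```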
 

Attachments

  • step.jpg
Really now, this is only for psychoacoustic reasons? No, it's not. Consider wavelength vs. head position.
This is basic physics. Perhaps some other issues need to be tackled first?

You are aware of the relationship between wavelength and frequency, yes? You are aware that your ears are 6" or so apart, too, yes?

Talking down to people doesn't help the discussion in any way.

To answer your question, yes, I'm aware of all of that. You might want to consider that the problem in this ongoing back and forth is not the receiver but the sender?
You've said that smoothing is needed not only for psychoacoustic reasons. If you want to filter out later energy then yes, a window on the impulse response will do that. At the same time you're losing resolution in the frequency domain. That's the inevitable tradeoff when doing the transform: high frequency resolution, low time resolution; high time resolution, low frequency resolution.

Do we really need to discuss and agree on the absolute basics before a meaningful discussion can be had?
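To put numbers on it, here is a tiny sketch of that tradeoff; the specific window lengths are just examples:

```python
# Sketch: a window of length T seconds cannot resolve features
# narrower than roughly 1/T Hz.
for T in (0.500, 0.050, 0.005):           # window lengths in seconds
    df = 1.0 / T                          # approximate frequency resolution
    print(f"{T*1000:6.1f} ms window -> ~{df:6.1f} Hz resolution")
# 500 ms resolves ~2 Hz (individual room modes); 5 ms only ~200 Hz (broad trends)
```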
 
As to 'bad correction', that is simply a result of NOT SMOOTHING. Again, consider the relationship between wavelength and frequency. If you don't smooth, your "correction" is only valid for the middle of the head, and neither ear is there. Furthermore, there are no HRTFs involved, so volume velocity, coherence length, and time delay all come into play if you don't smooth.

Which is why you MUST smooth, single point, multi point, WHATEVER.
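The arithmetic behind the wavelength argument, as a small sketch (343 m/s and the ~6" = 0.15 m ear spacing from the earlier post assumed):

```python
# Sketch: wavelength = c / f, compared against the spacing between the ears.
c = 343.0        # speed of sound, m/s
spacing = 0.15   # ~6 inches in metres
for f in (100, 500, 1000, 5000):
    wavelength = c / f
    print(f"{f:5d} Hz: lambda = {wavelength:5.2f} m, ears {spacing/wavelength:4.2f} lambda apart")
# Above a few hundred Hz the ears sit a significant fraction of a wavelength
# apart, so an unsmoothed single-point "correction" is only valid between them.
```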

I already said that I absolutely agree with you that smoothing is a must, so I will agree once more. :)

Regarding your remark that "bad correction" is a result of not smoothing, I am not sure. More than a few times I have witnessed such effects from some pretty well known RC software products and, while I don't know exactly which strategy they use to generate filters, I suspect they overcorrect the region between the transition frequency and approximately 900 Hz, and sometimes also the region above 900 Hz. But you are certainly right: incorrect smoothing may indeed lead to such artifacts.
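For reference, a minimal sketch of the kind of fractional-octave smoothing being discussed, operating on an assumed magnitude response array; real tools are more sophisticated, this is just the basic idea:

```python
# Sketch: 1/6-octave smoothing by power-averaging over a +/- 1/12-octave band.
import numpy as np

def smooth_fractional_octave(freqs, mag_db, fraction=6):
    half = 2 ** (1.0 / (2 * fraction))           # half-bandwidth ratio
    power = 10 ** (mag_db / 10)
    out = np.empty_like(mag_db)
    for i, f in enumerate(freqs):
        band = (freqs >= f / half) & (freqs <= f * half)
        out[i] = 10 * np.log10(power[band].mean())
    return out

freqs = np.linspace(20, 20000, 2000)
mag_db = 3 * np.sin(freqs / 40) + np.random.randn(len(freqs))   # toy comb-like response
smoothed = smooth_fractional_octave(freqs, mag_db, fraction=6)  # 1/6-octave smoothing
```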
 

Well, ONE cause is bad smoothing. It's certainly not always that. One can always make mistakes. We see that proven every day in the larger world.
 
So if there is something that folks don't understand or take issue with, then reference that particular part of the video and I will try my best to answer.
I already did but your answers were evasive at best. Do I need to cite your relevant posts? I can and will if needed.

If you really want this thread to make people understand why you think Uli's room correction approach is "state of the art", then you need to explain things (better). Your current way of communicating here (and JJ's, for that matter) does the opposite. No wonder people feel that this thread, your various articles comparing and evaluating room correction solutions, and your (not free) book describing your room correction approach (which is basically just what Uli Brüggemann has been doing for the last 10 years) are nothing but part of a marketing strategy to sell your services and make money along the way.
 
After EQ it looks something like this

View attachment 162496

Well, this response implies that the EQ filters are doing a really good job. Well done!

Btw, for a final check of the accuracy of the LF response I would always recommend making an MMM (moving microphone method) measurement with both speakers playing (L+R). Doing that will show whether phase cancellations between channels are affecting linearity in the 20-150 Hz region.
 
OK, thank you for this. You've established you do not have any interest in using and/or reviewing Audiolense (or Acourate).

Audiolense and Acourate are the 2 best RC software packages I have come across and tried. Both are capable of generating very good filters even in very complex speaker/room scenarios. I would find denying the capabilities of those 2 products quite ridiculous.
 
Sigh. What basics do I deny?

Asked and answered, several pages ago?

What's the deal here, anyhow? You just seem to be here to put stuff down, as far as I can tell.

I notice you didn't say a thing about coherence length, either. Got beef?
 
In an attempt to get this thread back on track, here's data I took some 4 years ago. It's just subs, but there are 3 of them, and each sub was measured at the very same 17 locations. Here's the spatial distribution:

room-overview.png


The "low" points are 65cm above the floor, "high" points 125cm. Points are space 60cm. Point "1" is the main listening position (95cm above the floor).

Because we have only subs, let's do a low-frequency-only but "state of the art" optimization as described in this thread, using only point 1 – @mitchco? From there we can calculate how well or how badly all other points perform after that optimization. I'll run a multipoint optimization through MSO and post the results afterwards.

The REW .mdat can be downloaded at https://drive.google.com/file/d/1-NBjpfuQK3Okzv3NhXMUuU13ssIxY6E3/view?usp=sharing

P.S. Yes, all points have been measured using a timing reference.
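For anyone who wants to try the single-point exercise before the MSO run, here is a rough sketch of the evaluation, assuming the 17 responses have been exported from the .mdat as complex frequency responses on a common frequency axis (loading code omitted; the clip limits are arbitrary):

```python
# Sketch: derive a magnitude EQ from point 0 only, apply it to every point,
# and look at the spread across the remaining positions.
import numpy as np

def evaluate_single_point_eq(H, max_cut_db=20.0, max_boost_db=6.0):
    mag_db = 20 * np.log10(np.abs(H) + 1e-12)
    eq_db = np.clip(-mag_db[0], -max_cut_db, max_boost_db)  # invert point 0 toward 0 dB
    return mag_db + eq_db                                   # same EQ at all positions

freqs = np.linspace(20, 200, 181)                           # LF region of interest
rng = np.random.default_rng(0)
H = rng.standard_normal((17, len(freqs))) + 1j * rng.standard_normal((17, len(freqs)))  # stand-in
corrected_db = evaluate_single_point_eq(H)
spread_db = corrected_db[1:].std(axis=0)   # how the point-1 EQ travels to the other 16 points
```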
 
Just a quick glimpse at the difference between point 1 (in red, 1/6 smoothing) vs the average of all responses (in blue, 1/6 smoothed before averaging):

Capture.JPG



And that is, of course, only the tip of the iceberg, as even the average of all points would differ greatly from the combined response of all 3 subs because of timing/phase differences between them. Basing room correction on a single-point measurement of each sub cannot possibly lead to a good response in this scenario.
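A toy single-frequency example of why the complex sum matters; the arrival-time offsets are made up:

```python
# Sketch: magnitude averaging vs the complex sum of 3 sub contributions.
import numpy as np

f = 60.0                                    # Hz
delays = np.array([0.0, 0.004, 0.008])      # hypothetical sub arrival offsets, s
phasors = np.exp(-2j * np.pi * f * delays)  # unit-magnitude contributions at the mic

avg_of_mags = np.mean(np.abs(phasors))      # 1.0: what magnitude averaging predicts
mag_of_sum = np.abs(np.sum(phasors)) / 3    # ~0.38 here: what actually arrives, per sub
print(avg_of_mags, mag_of_sum)
```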
 
Because we have only subs, let's do a low-frequency-only but "state of the art" optimization as described in this thread, using only point 1 – @mitchco? From there we can calculate how well or how badly all other points perform after that optimization. I'll run a multipoint optimization through MSO and post the results afterwards.
I don't get it; why would it be one vs the other? I would deal with the subs first with MSO or equivalent to get the lowest spatial variation and minimize the issues, and then move on to an overall correction.
 
.....
Can't have a moving goal post. Inside any processor you're reviewing, as you know, is a computer as well... I really don't understand why a Windows computer running Roon Core is that big of a deal. Even though you will not try it, you should for yourself, and then you can actually report your findings, all with measurements, cuz me moving these beasts around alone ain't happening.
......
I have to agree with Jay here. Please understand the usability argument. You are perfectly content with a PC-based (controller) solution, but until it is made into an I/O convolver 'black box', it is unusable for some scenarios.

As an aside, until j_j's PSR is mainstream (never?), we have to make do with 2->n channel upmixing. Currently, for music, only Logic7 (Griesinger) and PL2 Music (Fosgate) are good solutions (sadly both deprecated). Try to get those on a PC! Some of us have come to the conclusion that 'upmixing' >> 'room correction', especially with modern multi-sub solutions and spatially well behaved speakers.
 
@AudioJester and @fluid, please feel absolutely free to do as you like. My experience is that from the day I went back to basics (sticking to correcting only the modal region based on steady state measurements, forgetting about messing with the phase response, etc.) I have been very happy
I don't see any issue with using that sort of approach, and if you prefer it, that is all that matters. Certainly it is very easy, when altering the phase response, to make terrible-sounding filters. Very little of this is foolproof, but using a good speaker in a good room and correcting steady state measurements in the modal region is the hardest to get wrong. The better the speaker, the less a good algorithm with good settings would try to correct above the transition in the first place.
 