
Using REW's new inversion feature for room correction

fluid

Addicted to Fun and Learning
Joined
Apr 19, 2021
Messages
691
Likes
1,196
Is this rooted in some psychoacoustic research based on the precedence effect or echo thresholds?
In a roundabout way. As the slides from jj's presentation show, the window can be longer at lower frequencies, and as frequency rises it becomes more important to correct only the first-arrival sound. Experimentally, the window length in the midrange is important and needs to be quite short. Line arrays benefit from longer windows above the midrange than regular speakers would.
 

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
3,106
Likes
2,313
Location
Canada
I suspect my point-source boxes and their compromised positioning in the room are far more sensitive to variations in frequency response than a corner-placed line array would be.

A single-point correction would never do it justice. Whichever single point at midfield distance along the length of the couch is measured, the overall family of anechoic curves would be chopped off sloppily and crudely, like from a badly angled butcher's knife. It's quite tragic. It would be very nice if I could just hand over the reins completely to a semi-auto EQ routine. At the moment, I'm not there yet.
 

jaakkopetteri

Active Member
Joined
Apr 10, 2022
Messages
180
Likes
111
Couldn't this inversion feature be used with a vector average of multiple measurement points?
 

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
3,106
Likes
2,313
Location
Canada
Couldn't this inversion feature be used with a vector average of multiple measurement points?

There are a couple other alternative programs that can do types of averaging that REW currently does not support, for example:

[attached screenshot]



Haven't used it yet though.


*Just found out FIR Designer can even use MMM data to create FIR filters in Smaart live applications.

 

fluid

Addicted to Fun and Learning
Joined
Apr 19, 2021
Messages
691
Likes
1,196
I suspect my point-source boxes and their compromised positioning in the room are far more sensitive to variations in frequency response than a corner-placed line array would be.
This is most likely true, the averaging effect and vertical directivity of the line array does help in room.
A single-point correction would never do it justice. Whichever single point at midfield distance along the length of the couch is measured, the overall family of anechoic curves would be chopped off sloppily and crudely, like from a badly angled butcher's knife. It's quite tragic. It would be very nice if I could just hand over the reins completely to a semi-auto EQ routine. At the moment, I'm not there yet.
Averaging measurements can be tricky. It can be useful in a situation like the listening window curve of the spinorama, where minor position-dependent phenomena are averaged out, leaving a truer representation of the perceived signal. But it can also create a situation where you don't get a true answer for anywhere, just a hedge-your-bets choice somewhere in the middle that ends up being right nowhere.

If for no other reason than to see what you have, the different types of averages are worth measuring, making and testing, to see if any of them fit your use case successfully.

If you want spatial uniformity over an area, an average might seem like a good idea, but if that area is large and you don't have properly set up multiple sources, you will get the classic wrong-answer-everywhere issue. It may not sound bad anywhere, but it won't be as good as it can be anywhere either. Pick your compromise, or stack the deck so you have to compromise less, either way.
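To make the earlier question about a vector average concrete: a magnitude average ignores phase differences between mic positions, while a vector (complex) average lets them interfere and cancel. A minimal numpy sketch of the difference, with made-up data (illustrative only, not how REW or any particular tool implements averaging):

```python
import numpy as np

# Hypothetical example: three measurement positions of the same signal,
# identical magnitude but a different arrival phase at each mic position.
freqs = np.array([100.0, 1000.0, 10000.0])   # Hz (placeholder grid)
phases = np.array([0.0, np.pi / 2, np.pi])   # radians, one per position
# One complex frequency response per position (magnitude 1.0 everywhere)
responses = [np.exp(1j * p) * np.ones_like(freqs) for p in phases]

# Magnitude (RMS) average: phase differences are ignored
mag_avg = np.sqrt(np.mean([np.abs(r) ** 2 for r in responses], axis=0))

# Vector (complex) average: phase differences cause partial cancellation
vec_avg = np.abs(np.mean(responses, axis=0))

print(mag_avg[0], vec_avg[0])  # 1.0 vs ~0.33 -- the vector average "sees" the interference
```

This is exactly the "wrong answer everywhere" risk in miniature: the vector average describes a signal that no single position actually hears.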
 

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
3,106
Likes
2,313
Location
Canada
If you want spatial uniformity over an area, an average might seem like a good idea, but if that area is large and you don't have properly set up multiple sources, you will get the classic wrong-answer-everywhere issue. It may not sound bad anywhere, but it won't be as good as it can be anywhere either. Pick your compromise, or stack the deck so you have to compromise less, either way.

Not too long ago Charles Sprinkle from Kali Audio was in a video discussion talking about the benefits of equalization using spatial averaging with a very simple MMM technique around the MLP. Shortly afterwards there was an interesting back and forth exchange in the comments section where Matthew Poes said he'd rather not use any averaging -- pretty much stating the same downsides you pointed out.

Also, I suppose, in-room measurements can be suspect depending on the degree of variance found between the measured positions.

In the end, the best way to avoid adding too much or too little EQ (FIR or ordinary PEQ) is to have more than one reference curve visible during filter creation. So, time permitting, I would always use an overlay graphical view mode to look at how my "global EQ" adjustments affect multiple measurements simultaneously. This process is much more tedious, of course, but the end result is totally in the hands of the operator/calibrator, and of what compromises they are willing to make to get a more balanced sound across a wider listening space.

[attached screenshot]


In the above example scenario, one can still listen to music from the front half (desk MLP) all the way to the couch seats at the back end of the room without experiencing too much bass. If I were feeling more selfish, a completely flat EQ at the desk MLP could also be loaded via an alternative EQ preset.
 
OP

ppataki

Major Contributor
Joined
Aug 7, 2019
Messages
1,215
Likes
1,354
Location
Budapest
What if I need to optimize only at the MLP?
My better half and my kids couldn't care less about how the system sounds on the other parts of the couch and the room :D
Hence I guess for me the single-point measurement is just fine, or?
 

fluid

Addicted to Fun and Learning
Joined
Apr 19, 2021
Messages
691
Likes
1,196
What if I need to optimize only at the MLP?
Hence I guess for me the single-point measurement is just fine, or?
Quite possibly, but it is still something to decide for yourself by trying. For me the best way to start is to see what you have to work with. Begin with a single position central between the speakers. The best way I know to find this is to have both speakers play at the same time, measure a short sweep and move the mic until the two impulse peaks overlay each other. The mic is then in the central acoustic position and no relative delays will be needed. You might be surprised how far this is from where your head normally is; at least I was.
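The "move the mic until the two impulse peaks overlay" step can be pictured numerically: the sample offset between the two peaks converts directly into a time and distance error. A toy sketch with made-up peak positions (not real measurement data; the sample rate and arrival samples are arbitrary):

```python
import numpy as np

FS = 48000   # sample rate, Hz (assumed)
C = 343.0    # speed of sound, m/s

# Hypothetical impulse responses: left speaker's peak arrives at sample 480,
# right speaker's at sample 500 -- i.e. the mic is slightly off-centre.
left = np.zeros(2048);  left[480] = 1.0
right = np.zeros(2048); right[500] = 1.0

# The peak positions stand in for the impulse peaks seen on REW's IR display
offset_samples = int(np.argmax(right)) - int(np.argmax(left))
offset_ms = 1000.0 * offset_samples / FS
offset_cm = 100.0 * C * offset_samples / FS

print(f"right arrives {offset_samples} samples ({offset_ms:.2f} ms, ~{offset_cm:.0f} cm) later")
```

When the offset reaches zero samples, the mic sits on the acoustic centreline and no relative delay is needed, which is the point of the procedure above.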

Then measure something more like ear positions, and positions within where your head could be placed while listening. Compare these and see what you have with whatever windowing and smoothing you will use, or various options if you haven't settled on anything yet. With a line array and some form of treatment in the room to control reflections they might all be within a very small margin of each other. In this case the single central position will most likely be the best base to begin processing with particularly if there is to be any phase manipulation.

If there is quite some variation within that head space after windowing, then some form of average may be better. This is where having the basic correction target a flat response assists with comparing filters: significant gain differences are then less likely to drive the choice. Then it is a matter of switching between the convolution filters to see which you prefer (leaving the overall house PEQ slope in place).

You might prefer different types of measurements to use for different parts of the overall correction. You might use a single position to correct to flat and then a MMM in the head area to set tonal balance with shelving filters.

Ultimately what sounds best to you may not fit within the 'rules' of good equalization. If you were a diligent soldier and never went outside the lines, you would never know to begin with. The worst risk of trying something with an EQ is that it sounds bad and you spent some time making something. I don't consider this a waste, as it still should have taught you something or at least provided another data point. The best sounding correction algorithm for me was found quite early; I then spent a really long time trying to make it better because it didn't follow the rules. I couldn't find anything that was better, and I tried very hard.

There are many comments on ASR that are very definite, because somebody said so or because it is a known conclusion from research. The problem with dogma is that it may not always be right in every situation, but if repeated enough it seems like it couldn't be any other way and is universally applicable. When you have the tools and time, make your own mind up.
 

jasoncd

Member
Joined
Jan 22, 2022
Messages
41
Likes
20
This is all over my head, but I gave this a quick try. I need to redo it from the beginning, as I don't think I had everything right, but first impressions are good. Simple FR measurements of this inversion convolution are comparable to Dirac: better looking in some places, worse in others, to my untrained eye.

Subjectively I like what I'm hearing with the convolution. It's hard to do an A/B with Dirac with matched volume. The Dirac default curve does want to extend the low end a lot more, to 20 Hz vs 30 Hz without Dirac. Not a big deal either way with the music I listen to.

I'm guessing that depending on the regularization % you correct to, you'd set that amount of headroom in whatever is running the convolution? I think in the video he said 8% was a good number, which I think meant 5 dB as the max it would correct, so setting a -5 dB pre-amp would prevent clipping?
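The headroom reasoning can be checked directly rather than guessed: measure the correction filter's maximum gain across frequency and set the pre-amp to at least that much attenuation. A hedged numpy sketch with a toy filter (the 8%/5 dB figures from the video are not reproduced here; the filter coefficients are invented for illustration):

```python
import numpy as np

# Toy correction filter: a unit impulse plus one extra coefficient.
# A real inversion filter would have far more structure than this.
fir = np.zeros(1024)
fir[0] = 1.0
fir[1] = 0.6

# Magnitude response of the filter on a dense frequency grid
mag = np.abs(np.fft.rfft(fir, 8192))
max_gain_db = 20.0 * np.log10(mag.max())

# Pre-amp attenuation that covers the worst-case boost for a steady sine,
# rounded up to the next 0.1 dB
preamp_db = -np.ceil(max_gain_db * 10.0) / 10.0

print(f"max gain {max_gain_db:.2f} dB -> set pre-amp to {preamp_db:.1f} dB")
```

Note this guards against clipping from the filter's frequency-domain boost; transient inter-sample peaks can still argue for a little extra margin.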
 

OCA

Addicted to Fun and Learning
Forum Donor
Joined
Feb 2, 2020
Messages
649
Likes
473
Location
Germany
I mentioned in a previous thread that there are a few things in the video that did not seem right to me. One of the biggest for me was the description of the Harman Target Curve and it being the only curve anyone would need. I'll attach some words from Floyd Toole that appear in the same publication the graph of curves comes from. The whole paper is free and worth reading if you haven't already: The Measurement and Calibration of Sound Reproducing Systems.

https://www.aes.org/tmpFiles/elib/20220817/17839.pdf

"Research by Olive et al. [48] was distinctive in that the loudspeaker used was anechoically characterized, the room described [49], and high-resolution room curves measured. In the double-blind tests, listeners made bass and treble balance adjustments to a loudspeaker that had been equalized to a flat smooth steady-state room curve. The loudspeaker had previously received high ratings in independent double blind comparison tests, without equalization. Three tests were done, with the bass or treble adjusted separately with the other parameter randomly fixed, and a test in which both controls were available, starting from random settings. It was a classic method-of-adjustment experiment. For each program selection, listeners made adjustments to yield the most preferred result. In Fig. 14 the author has modified the original data to separately show the result of evaluations by trained and untrained listeners. This is compared to the small room prediction from Fig. 13(a). The “all listeners” average curve is close to the predicted target, except at low frequencies where it is apparent that the strongly expressed preferences of inexperienced listeners significantly elevated the average curve. In fact, the target variations at both ends of the spectrum are substantial, with untrained listeners simply choosing “more of everything.” An unanswered question is whether this was related to overall loudness—more research is needed. However, most of us have seen evidence of such more-bass, more-treble listener preferences in the “as found” tone control settings in numerous rental and loaner cars. More data would be enlightening, but this amount is sufficient to indicate that a single target curve is not likely to satisfy all listeners. Add to this the program variations created by the “circle of confusion” and there is a strong argument for incorporating easily accessible bass and treble tone controls in playback equipment. 
The first task for such controls would be to allow users to optimize the spectral balance of their loudspeakers in their rooms, and, on an ongoing basis, to compensate for spectral imbalances as they appear in movies and music."

There is also a slide from one of Sean Olive's documents that shows some more information
https://www.juloaudio.sk/Umiestnenie_reprosustav/History of Harman Target Curve.pdf

[attached slide]

So while there is a lot of similarity in preferred room curves, the idea that you can pick one, apply it to all speakers with all directivities in all rooms, and have them sound the same or right is not a good one.

In my own correction routine I have moved away from having DRC use a target curve to define the overall steady-state response. While I have had good results using that method, it can be quite time-consuming and complicated to make different target curves in search of improvements.

What I do now is use DRC to correct to a flat response and then apply a layer of PEQ over the top to set the tonal balance, judged by ear. This PEQ is a collection of shelving filters spaced an octave apart, which allows the slope to be changed consistently or varied at certain points based on how it sounds. Each speaker and room will be different, and this sort of approach allows easy, controlled modification of the tonal balance while listening. Sometimes a small change to the Q or gain of a shelf can impact the perception quite a lot. Eventually things start to sound more "right", for want of a better word.
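The octave-spaced shelving idea can be sketched as follows. Purely for illustration, each shelf is modelled as a smooth step in log-frequency (a real implementation would use shelving biquads with a chosen Q), and the corner frequencies and per-shelf gains are made up:

```python
import numpy as np

# Hypothetical tilt built from shelving "steps" an octave apart (63 Hz .. ~16 kHz)
corners = 63.0 * 2.0 ** np.arange(9)   # 63, 126, 252, ... Hz
gains_db = np.full(9, -0.5)            # -0.5 dB per step => a gentle downward tilt

freqs = np.logspace(np.log10(20), np.log10(20000), 200)
response_db = np.zeros_like(freqs)
for fc, g in zip(corners, gains_db):
    # smooth step: ~0 dB well below fc, ~g dB well above fc
    response_db += g / (1.0 + (fc / freqs) ** 2)

tilt = response_db[-1] - response_db[0]
print(f"total tilt across the band: {tilt:.1f} dB")
```

Editing one entry of `gains_db` changes the slope only from that octave upward, which is the "varied at certain points based on how it sounds" property described above.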

Experimentation in this sort of processing is important because there is still a fair bit of alchemy in getting the best result, as the method itself impacts so many different factors at once. The usual factors pointed out as to why "room correction" is not a good idea are valid in and of themselves, but in my own experience, if they are considered and managed, it is still possible to get a really good result, just not with a one-size-fits-all curve.
In his excellent tutorial (the link points to the exact time of this explanation in the quite long video), Mitch Barnett from Accurate Sound explains the ideal target curve, and how our perception of it varies with volume, in full detail:
For your information, I have recently changed my target curve to the straight fixed-slope line shown as the most preferred in this picture (my previous Harman target curve was the second best, the green line below) and I now produce even better results with inversion:
[attached image]
 

fluid

Addicted to Fun and Learning
Joined
Apr 19, 2021
Messages
691
Likes
1,196
In his excellent tutorial (the link points to the exact time of this explanation in the quite long video), Mitch Barnett from Accurate Sound explains the ideal target curve, and how our perception of it varies with volume, in full detail:

For your information, I have recently changed my target curve to the straight fixed-slope line shown as the most preferred in this picture (my previous Harman target curve was the second best, the green line below) and I now produce even better results with inversion:
Mitch's video is excellent, as are his other articles, and yes, the perception of a speaker's tonal balance changes with SPL, so when making an overall correction the level you intend to listen at becomes important to measure at. This is just another reason why there is no "ideal" curve to use for all situations. Mitch makes filters for clients as a service based on their own measurements. He makes a number of different ones for the clients to test out for themselves, not just one.

A general fall of somewhere in the region of 1 dB per octave in room sounds good with most speakers that have rising directivity. It is what you get when you put a good conventional forward-firing speaker in a good room. If you were to apply the same curve to an omnidirectional loudspeaker with a flat power response it would not sound anywhere near the same. This is why anechoic measurements, or some idea of the speaker's directivity, are important when equalizing full range.
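The 1 dB-per-octave figure translates into a very simple target curve: gain falls linearly with log2 of frequency. A small sketch (the reference frequency and band edges are arbitrary choices, not a recommendation):

```python
import numpy as np

# Target falling ~1 dB per octave across the audio band
slope_db_per_octave = -1.0
f_ref = 20.0   # 0 dB reference frequency (assumed)

freqs = np.logspace(np.log10(20), np.log10(20000), 100)
target_db = slope_db_per_octave * np.log2(freqs / f_ref)

total_drop = target_db[-1] - target_db[0]   # roughly -10 dB over ~10 octaves
print(f"{total_drop:.1f} dB across {np.log2(20000 / 20):.1f} octaves")
```

Changing `slope_db_per_octave` is the one knob here, which is exactly why such a curve cannot account for differing speaker directivities on its own.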

Take for example the image from Sean Olive's room correction study. Look at the curves and see what was changed between each one. The original has a pretty big trough around 2-3 kHz which has been filled in to some extent by all of the corrections. Beyond that, what has changed is the slope of the overall response and the smoothness of the bass. These are both important factors. It is easy to see why the top red one was the most preferred of the options presented. That does not mean that another curve wouldn't have sounded better; it just wasn't tested. The red curve with the 2-3 kHz scoop left in may well have sounded better and been preferred, and had it been stated as preferred, there might be a lot of house curves made to resemble it on that basis alone.

I have tried every conceivable strategy I could think of and compared literally hundreds of combinations of filters and slopes over a number of years. I have watched videos and read books, articles and ideas from others, but ultimately my opinion comes down to testing for myself. There is no one curve to rule them all for all speakers, all rooms and all people; the anti-room-correction crowd have that part right. The difference between good and great sound for me was not that much of an EQ change, but it had a real effect on perception. It's easy enough to get in the ballpark, but it still takes a bit of trial and effort to get something that really clicks.
 

OCA

Addicted to Fun and Learning
Forum Donor
Joined
Feb 2, 2020
Messages
649
Likes
473
Location
Germany
Mitch's video is excellent, as are his other articles, and yes, the perception of a speaker's tonal balance changes with SPL, so when making an overall correction the level you intend to listen at becomes important to measure at. This is just another reason why there is no "ideal" curve to use for all situations. Mitch makes filters for clients as a service based on their own measurements. He makes a number of different ones for the clients to test out for themselves, not just one.

A general fall of somewhere in the region of 1 dB per octave in room sounds good with most speakers that have rising directivity. It is what you get when you put a good conventional forward-firing speaker in a good room. If you were to apply the same curve to an omnidirectional loudspeaker with a flat power response it would not sound anywhere near the same. This is why anechoic measurements, or some idea of the speaker's directivity, are important when equalizing full range.

Take for example the image from Sean Olive's room correction study. Look at the curves and see what was changed between each one. The original has a pretty big trough around 2-3 kHz which has been filled in to some extent by all of the corrections. Beyond that, what has changed is the slope of the overall response and the smoothness of the bass. These are both important factors. It is easy to see why the top red one was the most preferred of the options presented. That does not mean that another curve wouldn't have sounded better; it just wasn't tested. The red curve with the 2-3 kHz scoop left in may well have sounded better and been preferred, and had it been stated as preferred, there might be a lot of house curves made to resemble it on that basis alone.

I have tried every conceivable strategy I could think of and compared literally hundreds of combinations of filters and slopes over a number of years. I have watched videos and read books, articles and ideas from others, but ultimately my opinion comes down to testing for myself. There is no one curve to rule them all for all speakers, all rooms and all people; the anti-room-correction crowd have that part right. The difference between good and great sound for me was not that much of an EQ change, but it had a real effect on perception. It's easy enough to get in the ballpark, but it still takes a bit of trial and effort to get something that really clicks.
I agree with all your points about the variations of an ideal target curve in principle. In my experience, though, it's not exactly the target curve that differentiates a good room correction from a bad one. At the end of the day, too much or too little bass, mid or treble can all be easily compensated once you start from some standard target curve. It's in the dynamics of the resulting sound, which is a synergistic sum of factors like the height of the first impulse peak, driver time alignment (dealing with crossover phase shifts), the shape of the step response, left and right speaker coherence, etc. These are not only much harder to get right but also will not really be fixed with simple changes in the shape of the target curve. Think about the difference when you are hearing music from outside your apartment with the windows closed, and then you just open a window. All frequencies still have the same relative sound pressure level at your ears, but the sound is so much more dynamic and lively. Over the years, I have concentrated on getting these right rather than relying on a magic target curve, but it's been convenient to start from a commonly agreed response target while concentrating on achieving good results with the other dynamics.
 

spalmgre

Member
Joined
May 22, 2019
Messages
48
Likes
14
Location
Helsinki
Mitch's video is excellent, as are his other articles, and yes, the perception of a speaker's tonal balance changes with SPL, so when making an overall correction the level you intend to listen at becomes important to measure at. This is just another reason why there is no "ideal" curve to use for all situations. Mitch makes filters for clients as a service based on their own measurements. He makes a number of different ones for the clients to test out for themselves, not just one.

A general fall of somewhere in the region of 1 dB per octave in room sounds good with most speakers that have rising directivity. It is what you get when you put a good conventional forward-firing speaker in a good room. If you were to apply the same curve to an omnidirectional loudspeaker with a flat power response it would not sound anywhere near the same. This is why anechoic measurements, or some idea of the speaker's directivity, are important when equalizing full range.

Take for example the image from Sean Olive's room correction study. Look at the curves and see what was changed between each one. The original has a pretty big trough around 2-3 kHz which has been filled in to some extent by all of the corrections. Beyond that, what has changed is the slope of the overall response and the smoothness of the bass. These are both important factors. It is easy to see why the top red one was the most preferred of the options presented. That does not mean that another curve wouldn't have sounded better; it just wasn't tested. The red curve with the 2-3 kHz scoop left in may well have sounded better and been preferred, and had it been stated as preferred, there might be a lot of house curves made to resemble it on that basis alone.

I have tried every conceivable strategy I could think of and compared literally hundreds of combinations of filters and slopes over a number of years. I have watched videos and read books, articles and ideas from others, but ultimately my opinion comes down to testing for myself. There is no one curve to rule them all for all speakers, all rooms and all people; the anti-room-correction crowd have that part right. The difference between good and great sound for me was not that much of an EQ change, but it had a real effect on perception. It's easy enough to get in the ballpark, but it still takes a bit of trial and effort to get something that really clicks.
Thank you for writing this, so now I don't need to make the effort. It takes a lot of effort to get to your conclusion; I know, as I also made the effort and came to the same result. It took me seven years after entering the digital FIR EQ/XO world.

Now my dilemma with the perfect sound and curves, considering the speakers I have, is whether to stop my effort and just enjoy the music, or build some new speakers and see if I can find the "click" once again. But of course, we all know where this is going.
 

Attachments

  • 1-IMG_2776.JPG (484.3 KB)

spalmgre

Member
Joined
May 22, 2019
Messages
48
Likes
14
Location
Helsinki
My initial question entering this thread was: where can I adjust how many taps the inverted correction has when it is exported and convolved?

I have also tried to do Driver Linearization as Mitch suggests in his book. My Najda processor can process about 1000 taps per channel. This is enough for the horn and the 10" mid-driver.
 

fluid

Addicted to Fun and Learning
Joined
Apr 19, 2021
Messages
691
Likes
1,196
You have to window the filter when exporting the impulse response; the length of the window determines the tap count.
 

spalmgre

Member
Joined
May 22, 2019
Messages
48
Likes
14
Location
Helsinki
OK, I tried, thank you.

Can I calculate the time and taps relation?

I thought that if I could import to rePhase then saving from there would give me the taps setting.
 

fluid

Addicted to Fun and Learning
Joined
Apr 19, 2021
Messages
691
Likes
1,196
Window width (seconds) = Taps / Sample rate

e.g. 1000 / 48000 ≈ 0.0208 seconds

The filter can only be 20.8 ms long to be 1000 taps at 48 kHz sampling. This includes both sides of the window, so for linear phase it will be half that time; for minimum or mixed phase it can vary. The left window needs to take account of any significant change, otherwise truncating it will change the filter.

If you have a very detailed correction from inversion, windowing it to 1000 taps will smooth the low frequencies out, and the result can be quite different from what was intended.
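The taps/window arithmetic above, written out as two small helper functions (the names are my own, for illustration):

```python
def window_seconds(taps: int, sample_rate: int) -> float:
    """Total length in seconds of an FIR filter with the given tap count."""
    return taps / sample_rate

def taps_for_window(seconds: float, sample_rate: int) -> int:
    """Tap count needed to hold a window of the given length."""
    return int(round(seconds * sample_rate))

print(window_seconds(1000, 48000))        # ~0.0208 s, i.e. 20.8 ms
print(taps_for_window(0.0208333, 48000))  # back to ~1000 taps
```

Remember the caveat from the post above: this is the total window, so a linear-phase filter gets only about half that time on each side of the impulse peak.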
 

spalmgre

Member
Joined
May 22, 2019
Messages
48
Likes
14
Location
Helsinki
If you have a very detailed correction from inversion, windowing it to 1000 taps will smooth the low frequencies out, and the result can be quite different from what was intended.

I think Amir's video Understanding Audio Frequency Response & Psychoacoustics tells why you can smooth before the inversion or rePhase process.

But looking at the curves, maybe there is not so much need to smooth, at least above 800 Hz. And I must say that I cannot hear the difference.
I also use FIR for the 10" mid JBL 2123 driver, 250-800 Hz. Its 1023 taps are certainly limiting at the low end, but not as much as one would think. You can also overcorrect and find a good balance; rePhase does show where you are going when generating.

But there would not be a need for this discussion if there were convolution processors available that could do more than a few thousand taps. If, like me, you run an active XO, then the best solution for multi-thousand taps is a PC or Mac and some multichannel DAC. But then you invite all the problems that today's constantly updating operating systems bring with them.
And no one in the family will be able to use them. Therefore I have tried to seek the best possible compromise with the taps I have in my Najda 8-channel processor, with 1023 taps per channel. I use a 2/4 miniDSP with the JBL 20235 15" driver. I also convolve room-correction FIR on a PC, but it is very difficult to hear any difference, as the low-tap filtering in the Najda processor is working quite well.
 

Attachments

  • Kaikki mittaukset.png (244.4 KB)

fluid

Addicted to Fun and Learning
Joined
Apr 19, 2021
Messages
691
Likes
1,196
I was not describing smoothing the measurement, but the fact that when a window is applied the filter itself will be smoothed. This can be somewhat unpredictable and make the filter behave differently than expected. If the inversion is performed on an already heavily smoothed measurement, the inverse filter may well be describable in 1000 taps. It is quite reasonable to use IIR EQ for the low frequencies when FIR taps run out.

There are a few hardware processors that are capable of quite high tap counts but they are not in the same price bracket as a Najda.
 