
What DRC do you use?

You're absolutely right - I didn't notice the year of the test. That completely invalidates its relevance, if there was ever any doubt.
Its "relevance" was to illustrate that double blind tests have been done, by a very respected researcher, and shown interesting results. What does the year of the test have to do with it? Do you dismiss Newton's experiments because they were done in the 17th century?

That said, it would be nice to read about blind tests of current named products!

Have any been done?
 
I'm using Acourate now. I've used DRC-FIR and REW before that.

I started out with passive speakers, but eventually went full active by replacing the outboard electronics for my old NHT Xd active system with separate hardware. That NHT box had a 2 channel ADC, 6 DSP channels, and 4 channels of class D amplification (the NHT subs had built-in amps). The signal path is

CamillaDSP on a RPi4 -> Motu Ultralite Mk5
-> 2 SVS SB-1000 Pro subs in stereo configuration
-> 2 Fosi V3 stereo amps -> 2 NHT Xds satellites (no internal crossovers)

So far I haven't felt the need to replace the V3s.
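For anyone curious what the crossover side of a setup like this looks like in code, here's a minimal Python/scipy sketch of a digital Linkwitz-Riley 4th-order split. The 80 Hz crossover point and the use of scipy are my illustrative choices, not the actual CamillaDSP configuration above:

```python
import numpy as np
from scipy import signal

fs = 48_000
fc = 80.0  # hypothetical sub/satellite crossover frequency

# An LR4 crossover is two cascaded 2nd-order Butterworth filters per leg:
lo = signal.butter(2, fc, btype="low", fs=fs, output="sos")
hi = signal.butter(2, fc, btype="high", fs=fs, output="sos")
lr4_lo = np.vstack([lo, lo])  # cascading = squaring the Butterworth response
lr4_hi = np.vstack([hi, hi])

_, h_lo = signal.sosfreqz(lr4_lo, worN=[fc], fs=fs)
_, h_hi = signal.sosfreqz(lr4_hi, worN=[fc], fs=fs)

# Each leg is -6 dB at fc, and the two legs sum to unity magnitude (in phase):
print(round(20 * np.log10(abs(h_lo[0])), 1), round(20 * np.log10(abs(h_hi[0])), 1))
print(round(abs(h_lo[0] + h_hi[0]), 3))
```

The -6 dB-per-leg crossover point is what makes LR4 sum flat, which is why it's the usual choice for active splits like this.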

I use Acourate for

1. Nearfield measurements of the drivers using sinc pulses.
2. Linearization of the drivers' nearfield responses.
3. Creation of crossovers for subs, mid-ranges and tweeters.
4. Alignment of the subwoofers using the sine convolution method.
5. Final single-point measurements.
6. Final room correction.
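The measurement steps above all boil down to the same idea: play a known excitation, record it, and deconvolve to get the impulse response. Here's a minimal numpy sketch of the log-sweep variant (Farina method) with a simulated "room" - this is illustrative, not Acourate's actual implementation, and the function names are mine:

```python
import numpy as np

def log_sweep(f1, f2, duration, fs):
    """Exponential (log) sine sweep from f1 to f2 Hz."""
    t = np.arange(int(duration * fs)) / fs
    r = np.log(f2 / f1)
    return np.sin(2 * np.pi * f1 * duration / r * (np.exp(t * r / duration) - 1))

def impulse_response(sweep, recording, reg=1e-6):
    """Recover the impulse response by regularized spectral division."""
    n = len(sweep) + len(recording)
    s = np.fft.rfft(sweep, n)
    y = np.fft.rfft(recording, n)
    h = y * np.conj(s) / (np.abs(s) ** 2 + reg)
    return np.fft.irfft(h, n)

fs = 48_000
sweep = log_sweep(20, 20_000, 2.0, fs)
# Pretend the "room" is just a 10-sample delay at half gain:
recorded = 0.5 * np.concatenate([np.zeros(10), sweep])
ir = impulse_response(sweep, recorded)
print(np.argmax(np.abs(ir)))  # the IR peak lands at the 10-sample delay
```

A real measurement would of course record through the DAC/mic chain rather than simulate, but the deconvolution step is the same.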
 
Denon AVR-X6300H with Audyssey MultEQ XT32. I do use the $20 app to more easily customize curves and frequencies. I have thought about the full MultEQ-X but never have felt that motivated to move on up. Perhaps in time I will. I need to install new rear surrounds first, then it may be time to take the plunge. But in general, I'm very happy with the current sound, so I'm not sure how much better it could be vs. the time commitment.
Curious, did you go down the Audyssey One path?

Otherwise I also use a bit older Audyssey XT and XT32, and one newer unit with the app. Almost got a model that permitted Dirac but didn't really see the need, like with the MultEQ-X app.
 
REW, rePhase and occasionally DLBC mixed into the sauce for active XO 3-way speakers and 2 subs.

Since I went with an active speaker system over a decade ago and had to create my own crossovers it only made sense to bake the room correction into the frequency response. Over the years I've used many tools to implement it like a miniDSP 4x10, Motu Ultralite MK4 but now have my ultimate solution of an Okto DAC8 Pro for the 8 channels and all my filters are hosted through Hang Loose Convolver.

This is a graph I use for HLC. I can simply bypass DLBC and the signal goes to my own filters instead + a bit of bling for monitoring the signal.

[Attached screenshot: dirac path3.jpeg]
 
In the past I had a Denon X4000 with Audyssey MultEQ XT32. That was so much better than without room correction that I never wanted to go without DRC again.
But after we moved to another home, the new room was/is not optimal due to a strange shape, and Audyssey could not make it perfect (at least the version in the X4000; in newer ones you can adjust more, as far as I know).

Now we have a Yamaha with YPAO as DRC, and I am happy again. Sometimes I want to try Dirac or so, but to be honest the sound now is fine.
 
Curious, did you go down the Audyssey One path?

Otherwise I also use a bit older Audyssey XT and XT32, and one newer unit with the app. Almost got a model that permitted Dirac but didn't really see the need, like with the MultEQ-X app.
If I had any major issues with the current sound, then maybe I would try the Audyssey One approach. But otherwise, it looks like a large time commitment with little benefit to my room.
 
Here's an article by Sean Olive on tests he performed back in 2009, though he doesn't name products. (He mentions in a comment that the two best performing were Harman products, and the others tested were not good... one was no better than no treatment, another was worse!)


-The best performing real product in that test was RoomPerfect. However, Dr. Olive noted some usability issues, such as an initial 800Hz mains/subs crossover (designed for use with Lyngdorf’s boundary speaker concept). The UI has improved quite a bit since then - the current webUI is IME quite good. It has to be easier than the small screen on the original box!
-Anthem ARC was statistically equivalent to no EQ. I think had the test been run a couple years later ARC would have shown improvement, due to improvements to their room gain configuration.
-Audyssey was worse than no-EQ. Audyssey’s target curve has not been updated, so under those conditions (default parameters) Audyssey would have the same failing result today. So, @Old_School_Brad in that respect the 2009 testing is highly relevant.

FWIW I’ve used all of the serious ones (installed on hardware, does not demand playback from a general purpose computer) except for Genelec GLM, DSpeakers Anti-Mode, Meridian MRC, and Trinnov waveforming. That is to say, I’ve used Anthem ARC/1M/Genesis, Audyssey, Dirac Live, Dirac Live Bass Control (DLBC), Dirac ART, Lyngdorf RoomPerfect, manual measurements-based PEQ, Neumann MA 1, and the most basic form of Trinnov Optimizer (Sherwood Newcastle AVR).

Here’s what I use now:
Main system (immersive): DLBC
Secondary system (stereo): DLBC
Nearfield/desktop: DLBC layered over Neumann MA-1*
Guest room: Lyngdorf RoomPerfect
Courtyard: manual PEQ

*I expected this to fail and kicked myself when I realized I forgot to turn MA 1 off. In practice it works astoundingly well. DLBC leaves MA 1’s work alone above 200 Hz or so, and beautifully blends the 2 non-Neumann subs. DLBC also allows for sit/stand presets based on separate measurements.

I would like to be using Dirac ART. In beta testing it provided greater upper bass fidelity, caveat being IIRC I used it to 300 or 500 Hz, while the released implementation goes to 150 Hz IIRC. It’s complex to use, but brilliant in the hands of someone equipped to deal with that. However, the beta was computer software rather than on real audio hardware, so it was severely limited as to remote control, sources, and channels—I used it on a makeshift 2.2 channel system.
 
-The best performing real product in that test was RoomPerfect. However, Dr. Olive noted some usability issues, such as an initial 800Hz mains/subs crossover (designed for use with Lyngdorf’s boundary speaker concept). The UI has improved quite a bit since then - the current webUI is IME quite good. It has to be easier than the small screen on the original box!
-Anthem ARC was statistically equivalent to no EQ. I think had the test been run a couple years later ARC would have shown improvement, due to improvements to their room gain configuration.
-Audyssey was worse than no-EQ. Audyssey’s target curve has not been updated, so under those conditions (default parameters) Audyssey would have the same failing result today. So, @Old_School_Brad in that respect the 2009 testing is highly relevant.
I remain unconvinced. For one, the original test does not disclose the names of the DRC systems - where did you get this information? (Perhaps I missed it.)

It’s entirely possible that while the target curve hasn’t changed, the algorithms have evolved. Unfortunately, we have no way to confirm this. Given that it’s been 15 years since the test, I’d expect that updates have been made during that time.

Take Dirac Live, for example. It updated its recommended curve a few years ago and also offers three options for initial measurements. While I’m less familiar with other DRC systems, I’d assume they have implemented similar advancements and use similar methods.

Beyond these variables, the room environment and the hardware running the software also play significant roles. This includes differences in microphones and, more importantly, the number of available taps in systems with embedded DRC software. For instance, Dirac (the system I know best) performs differently when run on a powerful PC compared to the entry-level miniDSP DDRC-24 or a NAD integrated or pre-amplifier.
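To make the tap-count point concrete, here's a quick scipy illustration (my own numbers, purely illustrative): a linear-phase FIR has a frequency resolution of roughly fs/taps, so a short filter physically cannot render a narrow low-frequency correction like a room-mode cut:

```python
import numpy as np
from scipy import signal

fs = 48_000
f0, bw = 40.0, 4.0  # a hypothetical narrow -10 dB cut for a 40 Hz room mode
freqs = [0.0, f0 - bw, f0, f0 + bw, fs / 2]
gains = [1.0, 1.0, 10 ** (-10 / 20), 1.0, 1.0]

depth = {}
for taps in (1023, 65537):  # firwin2 wants odd taps when gain at Nyquist != 0
    fir = signal.firwin2(taps, freqs, gains, fs=fs)
    _, h = signal.freqz(fir, worN=[f0], fs=fs)
    depth[taps] = 20 * np.log10(abs(h[0]))
    print(f"{taps:>6} taps: {depth[taps]:6.2f} dB actually achieved at {f0:.0f} Hz")
```

The short filter barely dents the target, while the long one gets close to the requested -10 dB - which is the usual argument for why embedded DSP with limited taps behaves differently from a PC implementation.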

Considering all these variables, I stand by my original statement: the 2009 test is no longer relevant today. Moreover, I don’t believe it’s possible to conduct a proper comparison that would yield meaningful or enlightening results given the complexity and variability involved. Though I wish it were possible.
 
I bought an Anthem MRX540 after seeing measurements for the AVM70 and MRX1140 which means I'm using ARC. It's great. Love the better (looking) mic and "real" mic stand. The ARC software has a user friendly UI, and allows for custom adjustments. I've found it to make my system sound better than when I was using Dirac through a NAD T758 V3.
Anthem Arc is fantastic. I love the quick measure feature
 
I remain unconvinced. For one, the original test does not disclose the names of the DRC systems - where did you get this information? (Perhaps I missed it.)

It’s entirely possible that while the target curve hasn’t changed, the algorithms have evolved. Unfortunately, we have no way to confirm this. Given that it’s been 15 years since the test, I’d expect that updates have been made during that time.

Take Dirac Live, for example. It updated its recommended curve a few years ago and also offers three options for initial measurements. While I’m less familiar with other DRC systems, I’d assume they have implemented similar advancements and use similar methods.

Beyond these variables, the room environment and the hardware running the software also play significant roles. This includes differences in microphones and, more importantly, the number of available taps in systems with embedded DRC software. For instance, Dirac (the system I know best) performs differently when run on a powerful PC compared to the entry-level miniDSP DDRC-24 or a NAD integrated or pre-amplifier.

Considering all these variables, I stand by my original statement: the 2009 test is no longer relevant today. Moreover, I don’t believe it’s possible to conduct a proper comparison that would yield meaningful or enlightening results given the complexity and variability involved. Though I wish it were possible.
Yes, our understanding of the science of psychoacoustics, the computational power of the hardware, and the correction algorithms, themselves, have evolved significantly since 2009. Both Dirac and Anthem ARC have undergone multiple generations of development and refinement since 2009. The initial version of ARC, for example, was released in 2008. This was followed by ARC 2, then ARC 3, and now ARC Genesis, with multiple refinements and bug fixes delivered via firmware revisions to the equipment within each generation. ARC Genesis's capabilities are orders of magnitude beyond those of the original Anthem ARC.
 
I don’t believe it’s possible to conduct a proper comparison that would yield meaningful or enlightening results given the complexity and variability involved. Though I wish it were possible.

Not sure it's a proper comparison but here on ASR someone in the last year or so did a hands-on comparison of many of the current DRC suites. Sorry, I can't find the thread right now.
 
I used REW to measure my system and develop parametric EQ for my room. I use LMS embedded within piCorePlayer on a Raspberry Pi 4 for digital playback and utilize a custom-convert.conf file and SOX to implement the parametric EQ. It works very well. I can swap between two custom-convert.conf files depending upon whether I'm listening on my speakers or headphones. The only thing I'm missing is EQ for vinyl on the rare occasions I play it.
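For reference, SoX's equalizer effect applies peaking biquads, one per PEQ band. Here's a minimal Python sketch of a single band using the well-known RBJ cookbook formulas - the 63 Hz / Q 4 / -6 dB cut is a made-up example, not an actual measured correction:

```python
import numpy as np
from scipy import signal

def peaking_biquad(f0, q, gain_db, fs):
    """RBJ cookbook peaking-EQ coefficients (b, a), normalized for lfilter."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 48_000
# Hypothetical correction: cut a 63 Hz room mode by 6 dB with Q = 4
b, a = peaking_biquad(63, 4, -6.0, fs)
_, h = signal.freqz(b, a, worN=[63, 1000], fs=fs)
db = 20 * np.log10(np.abs(h))
print(np.round(db, 2))  # full -6 dB at 63 Hz, essentially flat at 1 kHz
```

A multi-band PEQ is just several of these cascaded, which is effectively what a chain of SoX equalizer effects does.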

Martin
 
REW with a UMIK, and a slightly modified version of some Python-based software (cannot remember name currently and it's on my other PC but pretty sure it's well-known here - IIRC modified it because it couldn't produce the exact type of PEQ I wanted) for creating 7-band PEQ which I then put on an RME Adi-2 Pro. Bunch of manual steps and back-and-forth in between, like deciding which peaks are worth tackling and which aren't, but end result is totally worth it. Would be tempting to find out if an automatic method would do better.
 
FWIW DRC to me generally means Dynamic Range Compression, REQ more for the others....
 
For digital room correction, I use Equalizer APO on Windows with a convolution filter, using an impulse response file generated from RePhase after performing in-room measurements with REW and a UMIK microphone.
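Under the hood, that Convolution filter just FIR-convolves the audio with the rePhase-exported impulse response. A minimal sketch of the operation, using a trivial synthetic 2-tap "correction" in place of a real rePhase export (purely illustrative):

```python
import numpy as np
from scipy import signal

fs = 48_000
# Stand-in for an exported correction FIR: attenuate 6 dB, delay one sample.
ir = np.array([0.0, 0.5])

x = np.random.default_rng(0).standard_normal(fs)  # 1 second of noise "audio"
y = signal.fftconvolve(x, ir)[: len(x)]           # what the convolver computes

# Output is the input scaled by 0.5 and shifted by one sample:
print(np.allclose(y[1:], 0.5 * x[:-1]))
```

A real correction IR is thousands of taps long, but the convolution itself is exactly this operation.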
 
For DRC, I was using REW with MathAudio HeadphoneEQ inside foobar 32bit for several years.

I recently switched to REW with foo_dsp_convolver inside foobar 64bit.
 
I remain unconvinced. For one, the original test does not disclose the names of the DRC systems -where did you get this information? (Perhaps I missed it.)

It was disclosed at some point. Back in the day there was some question which was ARC and which was RoomPerfect, based on subsequent improvements to ARC. The measurements provided in the deck made the bottom-dweller obvious.

It’s entirely possible that while the target curve hasn’t changed, the algorithms have evolved.

WTF does that even mean? A system with a bad target curve does a better job of hitting it?

Take Dirac Live, for example. It updated its recommended curve a few years ago and also offers three options for initial measurements.

Dirac was not available to the public in 2009.
All of these systems allow variable numbers of measurements. And IIRC always did. It’s true Dirac has evolved its default target curve recommendation in the direction of RoomPerfect…

While I’m less familiar with other DRC systems, I’d assume they have implemented similar advancements and use similar methods

Why?

Beyond these variables, the room environment and the hardware running the software also play significant roles. This includes differences in microphones

MUCH less of an issue than made out to be on the internet IME.

and, more importantly, the number of available taps in systems with embedded DRC software. For instance, Dirac (the system I know best) performs differently when run on a powerful PC compared to the entry-level miniDSP DDRC-24 or a NAD integrated or pre-amplifier.

Do you have evidence of that? I know it’s been long posited, but I haven’t seen supporting data.

Considering all these variables, I stand by my original statement: the 2009 test is no longer relevant today.

You really haven’t provided anything that even tends to support that point.

The @Sean Olive 2009 study is relevant above all because it
A) showed that full-band room correction could improve the sound quality of a bad speaker.
B) Demonstrated that the preferred bass target in a small room should not be flat.

Which system is which is just noise. Two were vaporware, after all. Except…one had a flawed target curve that was exposed, and has not been materially updated.

Moreover, I don’t believe it’s possible to conduct a proper comparison that would yield meaningful or enlightening results given the complexity and variability involved. Though I wish it were possible.

Maybe. It would be extremely tedious and expensive to arrange a controlled comparison of room correction systems in an immersive system. And with MIMO systems a mono comparison won’t do it.
 
Currently pleased with OCA's A1 Nexus w/ UMIK-1 calibration. My receiver only has the mid tier of Audyssey (XT) but after a bit of experimenting with DEQ on/off and different target curves, I'm ultimately very pleased with the result. My best results were with D&M House curve and DEQ off for measurements, later manually enabled with an appropriate Reference Level Offset.

The imaging alone was much improved by finding and measuring at the "true centre" MLP vs eyeballing it with the Audyssey calibration.
 
You really haven’t provided anything that even tends to support that point.

I don’t feel the need to elaborate further. I’ve shared my opinion on the matter, and based on the variables mentioned in the post you’re referring to, I believe the test is entirely irrelevant in today’s context. This is my perspective, and I’ve provided what I consider to be reasonable arguments to support it. If you disagree, that’s perfectly fine.

The @Sean Olive 2009 study is relevant above all because it
A) showed that full-band room correction could improve the sound quality of a bad speaker.
B) Demonstrated that the preferred bass target in a small room should not be flat.

Was this something new, or did the test simply confirm what we already knew? Reading this now, it feels like common knowledge within the audio world. I’m fairly confident it was the same in 2009, but I’m open to being proven wrong.
 