
Battle of the DSP's - MiniDSP+REW vs FIR methods

klettermann
Senior Member
Joined: Mar 2, 2022
Messages: 312
Likes: 251
Location: Coastal Connecticut
After a long learning period I think I've got my system close to - if not at - the point of no more tinkering. This was done using REW and a MiniDSP SHD Studio at every step of the way. Specifically, this entailed:
  1. Optimizing positioning (MLP, mains, 2 subs)
  2. Time-aligning the subs with each other
  3. EQing the respective subs and creating a single virtual sub
  4. EQing the mains
  5. Setting XOs
  6. Minor room-response tweaks to manage impulse response and decay
  7. Repeating for finer tuning.
The end result was that the system sounded far better than ever before, just glorious, and measures well too. When I started I knew next to nothing about DSP of any sort except Audyssey (as built into AVRs) and Dirac. My focus was on room correction/EQing and all that entails. More recently I've become more aware of the whole computer-audio thing and all the associated tools, particularly HQPlayer. In some circles the REW-measure-MiniDSP approach is held in disdain in favor of FIR methods, often without measurements and, seemingly, often with little emphasis on room correction.

So, can anybody comment on respective pros/cons of each approach? To be clear, my main interest is getting the system working well in the room it's in and making measurable improvements. Thanks and cheers,
 
Stereo system or multichannel?
Stereo

Sources: Lumin U1 Mini | Panasonic SV3700 DAT | Cambridge CD Transport
  --> MiniDSP SHD Studio (DSP: EQ, XO, Timing)
    --> Schiit Yggy LiM (DAC) --> Schiit Loki Max (EQ) --> ML No. 332 (Amp) --> Maggie MGIIIA's*
    --> Schiit Modius E (DAC) --> Heco Sub 30A (x2)
*factory rebuilt, better than new
 
I use a hybrid strategy in my 2.2 layout. First, I integrate the subwoofers in time and frequency with MSO, experimenting with various setups for the optimal response. Then I use Dirac for better phase and time integration.

Dirac narrows the soundstage width and naturalness but provides more control. The MiniDSP lets me enable or disable Dirac with one button, so I can choose more openness or more control at a press.
 
I went through the same thing. After lots of experimentation I dropped MSO because all that matters to me is a single listening position. I tried Dirac but had the same issues and went back to just REW. Sounds great!
 
It is very tricky. MSO is a bit stubborn and you have to fix some values in order to guide it to an optimal solution; otherwise MSO can produce huge delays and crazy DSP settings for each sub.

Dirac seems easy, but you have to be careful if you don't want to lose the wide soundstage, and at least in my case the MiniDSP UMIK-1 gave much worse results than the costlier UMIK-2.
 
It is very tricky. MSO is a bit stubborn and you have to fix some values in order to guide it to an optimal solution; otherwise MSO can produce huge delays and crazy DSP settings for each sub.
Fully agree! But in the end, with only the MLP to worry about and just 2 subs, it didn't have any real advantage for me over just REW.
Dirac seems easy, but you have to be careful if you don't want to lose the wide soundstage, and at least in my case the MiniDSP UMIK-1 gave much worse results than the costlier UMIK-2.
Hmm, I never tried the UMIK-2. In any case, I was never able to get Dirac to perform well on my system. It always sounded kind of grainy and artificial. My best results by far were with just REW for all the measurements, making EQ targets, etc. Really pretty wonderful.
 
In some circles the REW-measure-MiniDSP approach is held in disdain in favor of FIR methods, often without measurements

Most of the exact same corrections can be done with IIR or FIR, but FIR takes more processing power than PEQs or biquads. MiniDSP uses SHARC processors, which are limited in computational power, being a plain DSP IC rather than a SoC with a functional OS that can do far more - even a Raspberry Pi Pico, as is being developed by Weeb Labs in another thread. There are advantages and disadvantages to both sorts of filters, and there's no reason they can't be used together in the same system.

There is no way anyone can come up with a functional FIR filter without first taking measurements.
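To give a rough sense of the cost gap described above, here's a back-of-the-envelope sketch counting multiply-accumulates (MACs) for textbook direct-form implementations. The filter sizes are hypothetical examples, not any particular MiniDSP configuration:

```python
def iir_macs_per_sample(num_biquads):
    """Cascaded biquads (direct form II transposed): ~5 multiplies each."""
    return num_biquads * 5

def fir_macs_per_sample(num_taps):
    """Direct-form FIR convolution: one multiply-accumulate per tap."""
    return num_taps

# 10 PEQ bands vs. a 16384-tap FIR, per output sample:
print(iir_macs_per_sample(10))     # 50
print(fir_macs_per_sample(16384))  # 16384
```

In practice long FIRs are run with partitioned FFT convolution, which cuts the cost far below the direct form - that's how multi-thousand-tap filters fit on modest hardware.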
 
At the moment, FIR's greatest weakness is that it is a DIY solution. You will need a PC/Mac/Linux box or a RPi. So you will need some computer skills to get it up and running, particularly if you are running Linux. The other problem is that audio routing is not easy, particularly if you have multiple sources. It works best for a single source, you set it up once, and it works. This is FAR from something like a MiniDSP, where you can switch inputs and adjust the volume with a remote control. And the best thing about a MiniDSP is its robustness, no nightmare software updates or software glitches breaking your DSP chain.

I say "at the moment" because @mitchco has a hardware FIR processor in the works. Right now very little is known about it, but if my dreams are fulfilled it will provide MiniDSP-like functionality but with FIR processing.

The other major weakness of FIR is its latency, which is unavoidable. This depends on tap count and sample rate (NOT on computing power!) and can range from 100 ms to 1.5 seconds. This is not really a problem if you are listening to music, but it's a real problem with video because of lip-sync issues. It can also make your system feel sluggish and unresponsive: e.g. if you change track or adjust the volume, it happens after a short delay.
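The taps/sample-rate dependence mentioned above comes from the group delay of a linear-phase FIR, which is (N - 1) / 2 samples. A small sketch (the 65536-tap count is just an illustrative value):

```python
def fir_latency_ms(num_taps, sample_rate_hz):
    """Group delay of a linear-phase FIR: (N - 1) / 2 samples,
    converted to milliseconds. Independent of CPU speed."""
    return (num_taps - 1) / 2.0 / sample_rate_hz * 1000.0

# 65536 taps at 48 kHz vs. the same filter length at 96 kHz:
print(round(fir_latency_ms(65536, 48000)))  # 683 ms
print(round(fir_latency_ms(65536, 96000)))  # 341 ms
```

Doubling the sample rate halves the delay for the same tap count, which is why a later post suggests running FIR at 96 kHz to keep latency down.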

As to FIR vs. IIR: FIR is easier to design and more powerful. E.g. all symmetrical LPFs and HPFs sum to flat; there's no need to check whether certain configurations will have a non-flat sum due to phase interactions. It can do everything IIR can and more. It is the DSP of choice if you like tinkering. And not to mention, it's linear phase. The jury is out on whether linear phase sounds better. I think it does, but that's very much a subjective opinion.
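The "sum to flat" property can be demonstrated directly. A sketch using the textbook windowed-sinc method (generic technique, not any particular product's implementation): build a linear-phase low-pass, derive its complementary high-pass by subtracting it from a delayed unit impulse, and their sum is a pure delay - i.e. perfectly flat magnitude:

```python
import math

def windowed_sinc_lpf(num_taps, cutoff):
    """Linear-phase low-pass FIR (Hamming-windowed sinc).
    `cutoff` is a fraction of the sample rate; num_taps should be odd."""
    m = num_taps - 1
    h = []
    for n in range(num_taps):
        x = n - m / 2.0
        sinc = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        h.append(sinc * window)
    dc = sum(h)
    return [c / dc for c in h]  # normalize to unity gain at DC

taps = 101
lpf = windowed_sinc_lpf(taps, 0.1)

# Complementary high-pass: delayed unit impulse minus the low-pass
hpf = [-c for c in lpf]
hpf[taps // 2] += 1.0

# Their sum is a bare (taps - 1)/2-sample delay -> flat response
total = [a + b for a, b in zip(lpf, hpf)]
```

With minimum-phase IIR crossovers, by contrast, the low- and high-pass outputs interact in phase around the crossover point, which is the "non-flat sum" the post warns about.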
 
Last I checked, FIR needs both RAM and CPU/GPU to give it enough taps to do anything with...
 
I hope the Flex or SHD series will include some new MiniDSP Tide16 features, such as faster processing and Dirac Bass Control or ART licensing.

Currently, 2-channel Dirac Live requires REW for subwoofer integration. You must integrate the sub(s) in REW/MSO before Dirac to ensure time and phase alignment between subs and mains for each channel.
 
It's not about IIR vs. FIR in the way you see it. While a FIR with lots of taps is computationally demanding, it's a single pass; with IIR, every PEQ adds to the requirement and is processed separately. Of course you need PEQs too, especially for shelving and pass filters, and they are more precise in the first three octaves, which is where you do most of the actual frequency-response EQ. With FIR, on the other hand, you start with impulse matching, and later, if need be, the highs (where a high-tap FIR is more precise once you reach the high-Q region for PEQs) and phase manipulation.

To get latency down, 96 kHz / 24-bit FIR is advised, and that way it can certainly be used for video while keeping latency low. My biggest surprise (from a bunch of DACs) was the Apple USB dongle DAC (CS-based), which doesn't support more than 48 kHz / 24-bit: its latency was almost nonexistent.

Very low-tap filter banks (256-512 taps) are very widespread today, from AVRs to popular EQ apps like Wavelet (on Android). Conversely, they have very low hardware requirements.

What most of you need to understand is how a high-tap FIR can (to the limit of its precision) absorb everything - changes made by both filter types and anything else - into a single FIR again. We used to call this a convolution kernel. Even a high-resolution FIR can run quite well on low-end general-purpose cores (2x Cortex-A5 @ 1 GHz, about 2000 MIPS), and it was all done before, e.g. with Viper4Android (which is still alive, embedded in other projects).

I've been fiddling with DSP processing on PCs for at least 25 years now, and I can tell you one thing: good software on general-purpose CPU cores rules, and it's the best front-end interface there is (in contrast to how difficult it is to program an actual DSP chip), and even your very old phone had the performance to do everything that needs to be done. You won't get it (PEQ + FIR) from audio equipment manufacturers in hi-fi, boutique form; you will in PA gear, though still constrained and with a bad interface (which still isn't a big problem if you cook it up on a PC and then load it into the embedded unit). Hopefully one day that will change.
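The "absorb everything into a single FIR" idea rests on convolution being associative: convolve the individual correction kernels once, offline, and the result is one kernel that applies all of them in a single pass at playback. A minimal illustration with toy kernels (the coefficients are made up, not real room corrections):

```python
def convolve(a, b):
    """Combine two FIR kernels into one (discrete convolution)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

smoother = [0.25, 0.5, 0.25]   # toy low-pass correction
tilt = [1.1, -0.1]             # toy high-frequency trim
combined = convolve(smoother, tilt)

# Running audio through `combined` once gives the same result as
# running it through `smoother` and then `tilt` in sequence.
```

Real convolvers do the same folding with FFTs and thousands of taps, but the principle - one precomputed kernel replacing a chain of filters - is identical.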
 
@Keith_W whether you need linear phase depends on how you aligned the impulse response to start with and how far it drifted with minimum-phase PEQs; it doesn't sound better, nor can it really correct that. Don't get me wrong, I am for phase manipulation, just not that severe.
 
MSO is based on the basic idea of a convolution kernel - averaging to death, in its case. You have the current value and a new one and you sum them to get an average; the more times you do it, the better the average, and then you apply a correction to it in the form you like. So basically all the measurements are one big FIR, with the number of taps (the M in REW) determining its precision. This also undermines the idea some have that you can aim better with a PEQ: you can only aim to the precision at which it's measured.
 
Is this a typo? How does bit depth reduce latency?
It's about the way it's finally rendered to the DAC output (PCM): you are inserting a tap-limited modulation into it, but you aren't limited to 16-bit precision once transformed. You can try to further optimize it by adjusting dithering to TPDF, and mileage will vary regarding latency across DACs and the interfaces they use, still with minimum phase. I use it with MConvolutionEZ; the one embedded in JRiver is rather bad, and of course there is no substitute for square resolution.
Edit: I use it in games and for videos across different analog receivers with different DACs, with loads of other stuff, and I've never had concerns about either (rather low-end) CPU or bus utilization. We live in a time when high-end video (2K-4K HDR10 AV1/H.265) can be encoded at 60 FPS with low latency and streamed over Wi-Fi to a receiving device while still keeping latency low enough even for interactive gaming. But a DSP with a decent interface that works is still hard to get.
 
It's about the way it's finally rendered to the DAC output (PCM): you are inserting a tap-limited modulation into it, but you aren't limited to 16-bit precision once transformed.

I'm sorry, I don't think that is correct. Bit depth has no effect on latency as far as I am aware. Only tap count and sampling rate. Do you have a resource I can read?
 
After a long learning period I think I've got my system close to - if not at - the point of no more tinkering. This was done using REW and a MiniDSP SHD Studio at every step of the way. Specifically, this entailed:
  1. Optimizing positioning (MLP, mains, 2 subs)
  2. Time-aligning the subs with each other
  3. EQing the respective subs and creating a single virtual sub
  4. EQing the mains
  5. Setting XOs
  6. Minor room-response tweaks to manage impulse response and decay
  7. Repeating for finer tuning.
The end result was that the system sounded far better than ever before, just glorious, and measures well too. When I started I knew next to nothing about DSP of any sort except Audyssey (as built into AVRs) and Dirac. My focus was on room correction/EQing and all that entails. More recently I've become more aware of the whole computer-audio thing and all the associated tools, particularly HQPlayer. In some circles the REW-measure-MiniDSP approach is held in disdain in favor of FIR methods, often without measurements and, seemingly, often with little emphasis on room correction.

So, can anybody comment on respective pros/cons of each approach? To be clear, my main interest is getting the system working well in the room it's in and making measurable improvements. Thanks and cheers,
So how does your decay look after all this tweaking? And how are your subs positioned in the room?
 