
Digital EQ/FX hifi setup

onlytomonly

Hi all. First post on here - thank you for having me.

I’m trying to work out how to establish a relatively unusual digital front-end for my hifi system. I think the answer may lie in the world of professional computer audio so would appreciate some advice. (I’m only moderately familiar with pro audio systems and terminology, having used Ableton to make DJ mixes.)

I would like to be able to route any digital audio signal through a software/plugin-based EQ+effects processor that can:

(1) be modified in real time/‘live’, e.g. changing filter passes and slopes or changing the wet/dry on an effect plugin

(2) perform with the lowest possible latency

(3) be controlled with a USB/MIDI hardware controller

(4) use digital in and out (no ADC) - ideally coaxial SPDIF on the input, as I want the flexibility to connect to my Lindemann streamer or the digital out from my Pioneer DJ mixer.

I currently use a DEQX HDP-5 preamplifier for room+speaker correction, sub crossover and DAC duties. As the DEQX already introduces 2.5-15 milliseconds’ latency, I don’t want to add any more latency than absolutely necessary.

I’ve seen there are some audio interfaces with digital I/O that can run plugins onboard, such as the UAD Apollo x6, which claims near-zero latency. However, I don’t know whether it’s possible to control those onboard plugins with a USB/MIDI controller. I am also open to solutions where a computer does the EQ/FX processing.

Any advice you may have on products/setups that could enable this would be very welcome! Or, if there’s a better forum in which to ask these questions, feel free to redirect me.

Thank you

Tom
 
Welcome to ASR!

What you need is software capable of hosting VST plugins. I can think of a few options.

1. JRiver. Inexpensive and it does a lot. Most importantly, it can host VST plugins. This is what I use.

2. Foobar with VST adapter. It's free. I don't use this because I am not a fan of Foobar's interface.

3. Hang Loose Convolver. HLC includes Hang Loose Host, which can host VST plugins. You will need third-party software, such as VB-Cable or VB-Matrix, to route audio into HLHost.

4. A DAW (Digital Audio Workstation). These are pro-level tools and I find them exceedingly difficult to use. There are a number of free DAWs on the market. I do not have much knowledge of them so I can't recommend one - sorry.

As for the USB/MIDI hardware controller, I know that DAWs can be controlled by some of those third-party dials or control decks, but I have never set one up myself.
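To give a concrete idea of what the MIDI side involves, here is a minimal Python sketch using the mido library. The CC number and the cutoff mapping are assumptions for illustration; a VST host or DAW does this mapping for you, this only shows the principle.

```python
# A minimal sketch of MIDI control, using the Python 'mido' library.
# The CC number (1) and the cutoff mapping are assumptions for
# illustration; a VST host or DAW does this mapping for you.
import mido

def cc_to_cutoff(value: int) -> float:
    """Map a 0-127 CC value to a 20 Hz - 20 kHz cutoff on a log scale."""
    return 20.0 * (1000.0 ** (value / 127.0))

with mido.open_input() as port:  # opens the default USB/MIDI input
    for msg in port:             # blocks, yielding messages as they arrive
        if msg.type == "control_change" and msg.control == 1:
            cutoff = cc_to_cutoff(msg.value)
            print(f"CC1 = {msg.value:3d}  ->  cutoff {cutoff:8.1f} Hz")
            # a real host would update its EQ coefficients here
```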
 
I assume my PC-DSP-based multichannel, multi-SP-driver, multi-amplifier fully active audio system/project would be of interest to you as a reference.
Please visit my post #931 on my project thread for the details of my latest setup (as of June 26, 2024), which covers almost all of your requirements.
 
Thanks all for your responses! It sounds like a computer-based solution is feasible. I guess I would need a decent SPDIF-in audio interface, plus a computer with good processing power to achieve super low latency. Any particular hardware recommendations?

Would there be performance downsides (e.g. noise/distortion/jitter) in taking the computer route vs. an audio interface that has an onboard DSP function to apply EQ/effects (e.g. the UAD Apollo x6)?
 
Please find the specs of the two "completely silent" Windows PCs in Fig.33 of my post #931 on the project thread, especially the CPU and the amount of memory.
[Attached image: Fig.33 - PC specifications]
They are rather outdated PCs by today's standards, but much more than sufficient for all of (1) digital music playback (JRiver MC in my case), (2) ASIO routing (VB-Audio Matrix), and (3) the system-wide DSP center (EKIO) running 8-channel I/O DSP.
Average CPU utilization is always below 20%, even when JRiver MC converts DSD (DSF) tracks to PCM on the fly.
 
I install OpenHardwareMonitor (OHM) on all of my PCs and workstations; its desktop gadget is nice and has opacity control, as I shared here.
[Attached image: OpenHardwareMonitor desktop gadget]
 
onlytomonly said:
Thanks all for your responses! It sounds like a computer-based solution is feasible. I guess I would need a decent SPDIF-in audio interface, plus a computer with good processing power to achieve super low latency. Any particular hardware recommendations?

If you want SPDIF in, then you will need an interface (examples: RME Fireface, RME Babyface, Motu Ultralite Mk5, Merging Anubis). And if you are getting an interface, you may as well sell the DEQX HDP-5 because it would now be redundant. Use your PC to do the DSP and send the output to the interface. DSP has very low overheads - FYI I convolve 10 channels at 96 kHz with 131,000 taps each, and it uses 10% CPU. I am using an i9-9900K. People are able to achieve similar with a Raspberry Pi, albeit with fewer channels, fewer taps, and a lower sampling rate. Really, any modern entry-level PC will do. My only recommendation with the PC is to think about cooling, because you do not want any fans in there.
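If you want to sanity-check that kind of load on your own machine, here is a rough, self-contained Python sketch (an illustration, not the exact setup above) that times 10 channels of 131,000-tap convolution over 1 second of 96 kHz audio. Note that real-time convolvers use partitioned convolution to keep latency low, so this only approximates the raw arithmetic cost.

```python
# Rough benchmark of FIR convolution cost: 10 channels of 1 s of
# 96 kHz audio through 131,000-tap filters via overlap-add.
import time
import numpy as np
from scipy.signal import oaconvolve

fs, taps, channels = 96_000, 131_000, 10
x = np.random.randn(channels, fs)     # 1 second of noise per channel
h = np.random.randn(channels, taps)   # stand-in correction filters

t0 = time.perf_counter()
y = [oaconvolve(x[c], h[c]) for c in range(channels)]
elapsed = time.perf_counter() - t0

# elapsed time vs. the 1.000 s of audio approximates real-time load
print(f"convolved 1.000 s of audio in {elapsed:.3f} s "
      f"(~{100 * elapsed:.0f}% of one core)")
```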

onlytomonly said:
Would there be performance downsides (e.g. noise/distortion/jitter) in taking the computer route vs. an audio interface that has an onboard DSP function to apply EQ/effects (e.g. the UAD Apollo x6)?

The downside of using an audio interface with onboard DSP is that those are more likely to use minimum-phase IIRs because they are limited in CPU power. You can achieve much superior DSP on a PC using linear-phase FIRs. Jitter is a solved problem and you will not hear it. Make sure you buy an interface that supports ASIO (very important!) so that you can avoid WASAPI Shared Mode.
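To illustrate the phase trade-off just mentioned, here is a small Python/scipy sketch (an illustrative low-pass filter, not any product's actual DSP) that builds a linear-phase FIR, derives a minimum-phase version, and compares their delays:

```python
# Sketch of the phase trade-off: a linear-phase low-pass FIR vs. its
# minimum-phase counterpart.
import numpy as np
from scipy.signal import firwin, minimum_phase, group_delay

fs = 96_000
h_lin = firwin(1023, 1_000, fs=fs)   # linear-phase low-pass at 1 kHz
h_min = minimum_phase(h_lin)         # minimum-phase equivalent

# Linear phase: constant delay of (N - 1) / 2 samples at all frequencies.
print(f"linear-phase delay: {(len(h_lin) - 1) / 2 / fs * 1e3:.2f} ms")

# Minimum phase: frequency-dependent, but far smaller, group delay.
w, gd = group_delay((h_min, [1.0]), fs=fs)
i = np.argmin(np.abs(w - 100))       # look near 100 Hz, in the passband
print(f"min-phase delay near 100 Hz: {gd[i] / fs * 1e3:.2f} ms")
```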
 
Thanks! If I do room and speaker correction on the computer instead of the DEQX, I assume this is not done in the DAW? There would be separate software that applies the DSP, using audio routed from the DAW? Do you recommend any particular software best suited to this?
 
If latency is a concern, a computer-based system may not be the best fit.

Many use computer-based DSP to implement high-tap-count, linear-phase filters, which add latency as a result of fundamental filter characteristics; this cannot be solved by using a more powerful computer. The linear-phase, 131,000-tap @ 96 kHz example given previously in this thread will have (131000 - 1) / (2 × 96000) = 0.682 s (682 ms) of latency. It is possible to achieve low latency with high tap counts, but the filter will need to be minimum phase.
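That arithmetic as a tiny Python helper, for anyone who wants to plug in their own tap counts and sampling rates:

```python
# Group delay of a symmetric (linear-phase) FIR: (taps - 1) / 2 samples.
def linear_phase_latency_ms(taps: int, fs: int) -> float:
    return (taps - 1) / (2 * fs) * 1_000

print(linear_phase_latency_ms(131_000, 96_000))  # -> ~682.3 ms
```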

In addition, many computer-based DSP solutions have high inherent latency compared to hardware solutions. I use CamillaDSP and get ~10 ms latency as a baseline.

Michael
 
onlytomonly said:
Thanks! If I do room and speaker correction on the computer instead of the DEQX, I assume this is not done in the DAW? There would be separate software that applies the DSP, using audio routed from the DAW? Do you recommend any particular software best suited to this?

When you play back audio files, this is how the pipeline would work:

[SOURCE] --(SPDIF)--> [INTERFACE] --(ASIO)--> [VST HOST] --(ASIO)--> [CONVOLVER] --(ASIO)--> [INTERFACE] --> [AMP/SPEAKERS]

or

[PLAYBACK SOFTWARE] --(ASIO)--> [VST HOST] --(ASIO)--> [CONVOLVER] --(ASIO)--> [INTERFACE] --> [AMP/SPEAKERS]

The purpose of the convolver is to mix the music you are listening to with the room-correction filters, which you need to generate with third-party software. You will notice that some software has a few of these functions built in. For example, JRiver and Foobar can play music, host VSTs, and have built-in convolvers (albeit both are rather feature-limited). Hang Loose Convolver can host VSTs and convolve. Roon can play music and do convolution, but it has no VST host.
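For the curious, here is what the convolver stage reduces to conceptually - a minimal offline Python sketch. The file names are hypothetical, and real convolvers work block-by-block in real time rather than on whole files:

```python
# Conceptual, offline version of the convolver stage.
import numpy as np
import soundfile as sf
from scipy.signal import oaconvolve

audio, fs = sf.read("music.wav")              # shape: (frames, 2)
h_left = np.loadtxt("correction_left.txt")    # FIRs from your filter tool
h_right = np.loadtxt("correction_right.txt")

out = np.column_stack([
    oaconvolve(audio[:, 0], h_left),          # convolve = "mix" the music
    oaconvolve(audio[:, 1], h_right),         # with the correction filter
])
out /= np.max(np.abs(out))                    # normalize to avoid clipping
sf.write("music_corrected.wav", out, fs)
```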

You will notice that your DEQX HDP-5 is a convolver and interface built into one box. Think of software pipelines as similar to traditional hi-fi separates. You could choose an "all in one" which has a streamer, DAC, and amp built into one chassis. Or you could choose a product with a streamer and DAC in one chassis and a separate amplifier. Or you could separate them out into individual boxes. The advantage of separate software is flexibility and choice. The disadvantage is increased complexity and reduced robustness and reliability. You have to launch several pieces of software, sometimes in the correct order, before you can listen to music. And if there is no sound, it can be really annoying trying to diagnose what went wrong.

You can see that this solution is vastly more powerful and flexible than a DEQX HDP-5. You are no longer limited to 6 DAC channels, you can have hundreds of DAC channels. You could configure your system any way that you wish, run millions of taps, take input from dozens of sources, and so on. All limited by your budget and whether you took your medication or not ;) There is something to be said for the DEQX and similar products though - at some point the craziness must end, and a practical and robust solution for the average home consumer should be offered - that's where the DEQX comes in.

Software to generate filters: in general there are two styles of filter generation software. Those that offer more automation (Dirac, Audiolense, Focus Fidelity) and those that are toolbox style (REW/RePhase, Acourate, DRC-FIR). The automated style is much easier to learn and can give excellent results. The toolbox style is harder to learn but more flexible. I recommend either Audiolense (if you want automation) or Acourate. IMO Acourate is the most powerful and flexible package on the market whilst still being reasonably easy to use, but it does have a steeper learning curve than Audiolense and you need to know what you are doing.
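As a taste of the toolbox style, here is a Python/scipy sketch that turns a hand-specified target magnitude response into a linear-phase correction FIR - a crude stand-in for what REW/RePhase or Acourate do with far more sophistication. The frequencies and gains below are made up; real tools derive them from measurements.

```python
# Toolbox-style sketch: target magnitude response -> linear-phase FIR.
import numpy as np
from scipy.signal import firwin2

fs = 96_000
freqs = [0, 40, 120, 2_000, 8_000, fs / 2]   # Hz; must start at 0, end at fs/2
gains_db = [0, 6, 0, 0, -3, -3]              # e.g. fill a room dip, tame treble
gains = 10 ** (np.asarray(gains_db) / 20)    # dB -> linear amplitude

h = firwin2(65_535, freqs, gains, fs=fs)     # 65,535-tap linear-phase FIR
np.savetxt("correction_left.txt", h)         # feed this to your convolver
```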

Note that all this can be done for free. You could choose Foobar as your playback software/VST host/convolver, or CamillaDSP as the convolver. Then use REW/RePhase to make your filters, and VB-Audio Matrix as your digital cable.

Of course, what I have described is only one option. You could also choose to do this:

[SOURCE] --(SPDIF)--> [INTERFACE] --(ASIO)--> [PC with VST HOST] --(USB)--> [DEQX HDP-5] ----> [AMP/SPEAKERS]
 
Just as a reminder, in case anyone is confused: OP @onlytomonly (and all of us) should clearly understand the difference between the relative latency (= time alignment) among the SP drivers and the total (absolute) latency, as I noted at the top of my post #493 on my project thread.
In my post #493, I wrote:
.....
First of all, please note that this "time alignment" discussion is limited to pure audio-only systems, excluding audio-visual systems where you need "time alignment adjustment" not only between the SPs but also against the visual images/movies.


As you well know, throughout this project thread I have been using digital music players such as JRiver on a PC, feeding the digital signal into the digital XO/EQ "EKIO" for crossover, and then sending the divided digital signals into the DAC8PRO for multichannel, multi-driver, multi-amplifier stereo music listening.

In this digital signal processing we have many buffers and latencies: JRiver's output buffer, ASIO4ALL's I/O buffers, EKIO's processing buffer, the DIYINHK USB ASIO driver's buffer, and so on. Consequently, it is not straightforward to measure exactly the "absolute delay" between JRiver's "shout" and the final sound kicked up into the air by the SP.

I usually set all the buffers in the digital domain to rather large sizes so that I do not have any latency or delay problems; in our audio setup there is no problem at all as long as the whole bunch of digital and analog signals (15 Hz - 30 kHz) has an identical delay from the signal origin at JRiver, and this is always the case in our digital (PC-based) audio system.

The relative delay between the SP units, or "time alignment" among multiple SPs, however, is always one of the critical issues in an audio system, especially a multichannel, multi-driver, multi-amplifier system, as you may agree.

I have always paid attention and care to this issue; in my very early posts #18 through #21, using REW's wavelet analysis, I briefly confirmed that all of my SP units - super-tweeter (ST), Be-tweeter (TW), Be-squawker (SQ) and woofer (WO) - have essentially no delay relative to each other, while my sub-woofer (SW) has a 10 - 20 ms delay against the other SP units.


Now I would really like to establish my own simple, reliable and reproducible precision method for "time alignment" or "relative delay" measurement, and for fine adjustment(s) if needed.
.....
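One simple, reproducible way to put a number on such relative delay is cross-correlation of two measured impulse responses. A small Python sketch of the idea (the file names are hypothetical; the IRs could be exported from REW, for example):

```python
# Sketch: estimate the relative delay between two measured impulse
# responses (e.g. tweeter vs. woofer).
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 96_000
ir_a = np.loadtxt("tweeter_ir.txt")   # reference driver
ir_b = np.loadtxt("woofer_ir.txt")    # driver under test

xc = correlate(ir_b, ir_a, mode="full")
lags = correlation_lags(len(ir_b), len(ir_a), mode="full")
delay = lags[np.argmax(np.abs(xc))]   # lag of best alignment, in samples
print(f"relative delay: {delay / fs * 1e3:+.2f} ms")
```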
If you are interested, please find, under this spoiler cover, the total signal path of my "time-aligned" PC-DSP-based multichannel, multi-SP-driver, multi-amplifier fully active audio setup.
[Attached image: Fig.03 - total signal path of the time-aligned setup]
 