Only stumbled upon this thread now, and I'm not willing to read through 16 pages, so please forgive me if this has already been asked or commented on:
What the hell is it with those "taps"? Who the fuck cares? All the user should care about is two things: 1) what processing it can do at once, and 2) at what latency.
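For the record, a "tap" is just one coefficient of an FIR filter, and it costs one multiply-accumulate per sample. A minimal, naive sketch in C, purely to pin the term down (the shifting delay line is illustrative; real code would use a circular buffer):

```c
#include <stddef.h>

/* One FIR "tap" = one coefficient = one multiply-accumulate per sample. */
float fir_process(const float *coeffs, float *delay_line,
                  size_t num_taps, float input)
{
    /* Shift the delay line (naive; production code uses a circular buffer). */
    for (size_t i = num_taps - 1; i > 0; i--)
        delay_line[i] = delay_line[i - 1];
    delay_line[0] = input;

    /* One MAC per tap. */
    float acc = 0.0f;
    for (size_t i = 0; i < num_taps; i++)
        acc += coeffs[i] * delay_line[i];
    return acc;
}
```

Since each tap is a single MAC, and DSPs have issued one MAC per clock for decades, even a few thousand taps are a modest per-sample load.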
Ideally, the processing latency is at or below 1 ms, which is good enough for any time-critical real-time application: live sound, live performance, and, oh I dunno, action gaming. In the computing world, 1 ms is a LONG time; at 96 kHz it's a full 96 sample periods.
The humble Behringer DCX2496 Ultradrive (speaker management system with 3x analog in and 6x analog out) does EQ, crossover, freely configurable routing, and more, at 24/96 with a latency of less than 1ms between analog in and analog out.
This is trivial DSP technology. The usual processor chips are so fast, and have been so cheap for so long, that any talk about "taps" and processing limitations, or about how "using too many EQs" somehow "raises latency", is laughable. 20 years ago, any desktop PC with a proper audio interface could run 24/96 multichannel EQ, compression, and dozens of filters at once, in realtime, at a stable 1-2 ms of latency.

A DSP (or a CPU used as one) running at a few hundred megahertz has time to execute thousands of instructions per sample period, which means a LOT of stuff at once, because the math involved in audio processing is piss easy and utterly efficient if coded correctly.
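To put numbers on "piss easy": one parametric EQ band is a biquad, five multiplies and four adds per sample. A hypothetical sketch (direct form II transposed; coefficient computation omitted):

```c
/* One biquad EQ band, direct form II transposed:
   5 multiplies + 4 adds per sample. */
typedef struct {
    float b0, b1, b2, a1, a2;  /* filter coefficients, a0 normalized to 1 */
    float z1, z2;              /* state variables */
} biquad;

float biquad_process(biquad *f, float x)
{
    float y = f->b0 * x + f->z1;
    f->z1   = f->b1 * x - f->a1 * y + f->z2;
    f->z2   = f->b2 * x - f->a2 * y;
    return y;
}
```

On an illustrative 300 MHz chip processing 96 kHz audio, that's about 3,125 cycles per sample, so dozens of bands per channel barely dent the budget.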
Am I missing something here? I have a feeling I do. Please educate me about these severe digital processing limitations that presumably persist to this day, even though this was a solved problem long ago.