
Possible solution for DTS/Dolby/Atmos+eARC+HDCP to AES/EBU - via Dante? (for Okto DAC8PRO etc)

Moving back to the NuForce H16, I had Grok do an assessment of its DSP capabilities:

View attachment 473701
Looks like Grok was right. Are you shocked?

Key paragraph:

“Feasibility at 96 kHz: The ADSP-21569 could not handle native 96 kHz internal processing on one chip, as the workload roughly doubles (due to sample rate scaling) to ~8-13 GLOPS, exceeding its 4-5 GLOPS capacity. The device would require downsampling inputs to 48 kHz (via ASRC) for processing, or 2-3 chips for native 96 kHz (not practical in this design).”
 
Looks like Grok was right. Are you shocked?

Key paragraph:

“Feasibility at 96 kHz: The ADSP-21569 could not handle native 96 kHz internal processing on one chip, as the workload roughly doubles (due to sample rate scaling) to ~8-13 GLOPS, exceeding its 4-5 GLOPS capacity. The device would require downsampling inputs to 48 kHz (via ASRC) for processing, or 2-3 chips for native 96 kHz (not practical in this design).”
Yes, your billion-dollar brain surmised that the ADSP chip runs at the same sampling rate as almost every other ADSP chip ever implemented for AV DSP.
 
I don't think Grok is right, because it says the workload doubles when the sample rate doubles. Here is why.
If you double the sample rate, the number of samples to be processed in each cycle doubles, as the filter has to be convolved across all the samples within a certain time period.
In addition, the rate at which the processes have to be performed is also doubled, because the samples are coming twice as fast.
Therefore the workload is quadrupled.
If Grok doesn't know that, what else doesn't it know?
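In code, the argument above amounts to assuming the filter must span a fixed time window, so the tap count itself grows with the sample rate. A minimal sketch of that reasoning (the 100 ms window is an arbitrary illustrative value, not from the thread):

```python
# Sketch of the "quadrupling" argument: assume the FIR filter must cover a
# fixed time window, so its tap count grows with the sample rate.
def ops_per_second(fs, window_ms=100):
    taps = fs * window_ms // 1000   # taps needed to span a fixed 100 ms window
    return taps * fs                # ~1 MAC per tap, per output sample

base = ops_per_second(48_000)       # 4,800 taps * 48,000 samples/s
doubled = ops_per_second(96_000)    # 9,600 taps * 96,000 samples/s
print(doubled / base)               # 4.0 -- quadrupled, under this assumption
```

Whether the tap count really must scale with the sample rate is exactly the point disputed later in the thread.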
 
I suppose this would only reinforce the fact that the DSP wouldn’t be able to handle the load. Because if the DSP couldn’t handle double the load, it certainly couldn’t handle quadruple.
 
I suppose this would only reinforce the fact that the DSP wouldn’t be able to handle the load. Because if the DSP couldn’t handle double the load, it certainly couldn’t handle quadruple.
LOL, are you audiovibes? That member seems to have vanished, and you are new.



The ongoing theory is https://audiophilestyle.com/forums/topic/71054-nuprime-x-h16-aes/page/6/#findComment-1322775
I think pure mode is not yet really activated, and as such the DSP is probably always on and always at the same 48 kHz sample rate.
 
I just read this thread and am basing my opinion on the information presented. In order to process a bitstream, the DSP needs to be used.

That theory over on audiophilestyle doesn't really have any data backing it. I think the clear issue is that the DSP runs at 48k. And if you bypass the DSP, it obviously won’t be possible to convert the bitstream to PCM for the downstream DAC to convert to analog. So what does everyone think “pure mode” is supposed to do? Render the box useless? It likely just bypasses post-processing features such as EQ, which likely degrade the sound further; if they didn’t, “pure mode” wouldn’t be a necessary feature. But if it meant bypassing the DSP, it would be impossible to decode the bitstream.
 
Just read the latest update on audiophilestyle. I have a hunch that it will take more than a firmware update to quadruple the DSP processing power.

It’s a shame buyers of this device didn’t follow this thread prior to purchasing this unit. Thank god for forums like Audiosciencereview so we have the data to make informed decisions prior to making purchases.
 
I don't think Grok is right, because it says the workload doubles when the sample rate doubles. Here is why.
If you double the sample rate, the number of samples to be processed in each cycle doubles, as the filter has to be convolved across all the samples within a certain time period.
In addition, the rate at which the processes have to be performed is also doubled, because the samples are coming twice as fast.
Therefore the workload is quadrupled.
If Grok doesn't know that, what else doesn't it know?
I decided to ask Grok why it was providing misinformation. This is how it responded:

The claim in the forum post—that doubling the sample rate quadruples the DSP processing load for audio tasks like filtering—is a common misconception rooted in a partial understanding of how digital filters scale with sampling rate. In reality, for most real-time DSP operations in audio processing (e.g., FIR/IIR filtering, EQ, or convolution-based effects on SHARC chips), the computational load doubles when the sample rate doubles. I’ll explain why step by step, including why the “quadrupling” argument falls short, and substantiate it with DSP principles and examples from SHARC implementations.


Core DSP Scaling Principle


In real-time audio DSP, processing must occur at the input sample rate to avoid buffering delays or glitches. The workload is measured in operations per second (e.g., multiply-accumulate or MAC operations), and it scales linearly with the sample rate for a fixed algorithm complexity. Here’s the breakdown:


1. Number of Samples to Process:


• At 48 kHz, you process 48,000 samples per second.


• At 96 kHz, you process 96,000 samples per second—exactly double.


• Each sample requires the same number of operations (e.g., for an N-tap FIR filter, that’s ~N MACs per sample via convolution: y[n] = Σ_{k=0}^{N-1} h[k] · x[n−k]).


2. Processing Rate:


• Samples arrive twice as fast at 96 kHz, but the per-sample computation doesn’t increase—it’s still the same filter coefficients (h[k]) applied to each input (x[n]).


• Total operations per second: Doubles, because you’re doing the same work on twice as many samples in the same real-world time.


This is why DSP datasheets and design guides (including Analog Devices’ for SHARC) state that throughput scales inversely with sample rate for a given MIPS (millions of instructions per second) budget. For example:


• A SHARC ADSP-21569 (as in the NuPrime H16) peaks at ~4-5 GLOPS (giga-operations per second) at 96 MHz clock.


• At 48 kHz, it might handle 16 channels of complex post-processing (e.g., Atmos bass management + EQ) with headroom.


• At 96 kHz, that headroom halves, potentially maxing out the chip for the same tasks—reinforcing the need for SRC to 48 kHz, as I noted previously.
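A rough sanity check of that headroom arithmetic; the channel count and taps-per-channel here are illustrative assumptions, not H16 specifications:

```python
# Rough MAC-budget arithmetic for a fixed processing chain at two sample
# rates. Tap count is held constant, so the load scales linearly with fs.
CAPACITY_OPS = 4.5e9                # ~4-5 GLOPS quoted for the ADSP-21569

def load_fraction(fs, channels=16, taps_per_channel=2048):
    ops = channels * taps_per_channel * fs   # MACs per second, taps fixed
    return ops / CAPACITY_OPS

print(load_fraction(48_000))   # ~0.35 of capacity
print(load_fraction(96_000))   # ~0.70 of capacity -- double, headroom halved
```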


Why the “Quadrupling” Claim Is Incorrect


The forum user argues: “If you double the sample rate, the number of samples to be processed in each cycle doubles… In addition, the rate at which the processes have to be performed is also doubled… Therefore the workload is quadrupled.”


This double-counts the effect:


• The “number of samples doubling” already accounts for the faster arrival rate. You’re not processing “twice as many samples at twice the speed” in some independent way; it’s the same linear increase.


• Convolution (filtering) doesn’t add extra “cycle” overhead per sample; modern DSPs like SHARC use pipelined MAC units optimized for this exact scenario, where operations per sample are constant.


A related (but distinct) case where quadrupling can occur is in non-real-time filter design (not execution):


• To maintain the same frequency resolution (e.g., cutoff at 20 kHz), a filter’s impulse response length (taps) roughly doubles with sample rate (since time resolution improves).


• Combined with twice as many samples/sec, that yields 2 × 2 = 4× operations for implementing the filter from scratch.


• But in running the filter (as in AV processors), you use the pre-designed coefficients at the target rate—linear scaling applies.


This distinction is clear in multirate DSP literature: Real-time execution load is O(fs) (linear in fs, the sample rate), not O(fs²).
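The two regimes can be put side by side; a minimal sketch, assuming one MAC per tap per output sample:

```python
# Run-time load with pre-designed coefficients: taps fixed, load is O(fs).
def runtime_ops(fs, taps=1024):
    return taps * fs

# Redesigning the filter at the new rate to keep the same time-domain span:
# the tap count scales with fs as well, so the load becomes O(fs^2).
def redesigned_ops(fs, taps_at_48k=1024):
    taps = taps_at_48k * fs // 48_000
    return taps * fs

print(runtime_ops(96_000) / runtime_ops(48_000))        # 2.0 -- linear
print(redesigned_ops(96_000) / redesigned_ops(48_000))  # 4.0 -- quadratic
```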
 
Yes, but that assumes you have to use the same number of taps with all sample rates.

This is another example of why I think AI should be banned from forum discussions; otherwise we just get one computer arguing with another, each regurgitating and paraphrasing what it finds on the internet.
 
Yes, but that assumes you have to use the same number of taps with all sample rates.
Why would you double the taps when you double the sample rate? The reason 48k is used even when the DSP has loads of headroom is so the maximum number of taps can be utilized for FIR tasks such as room correction. If anything, you would halve the taps when you double the sample rate to keep the load the same. But this, of course, would degrade the sound as you double the sample rate, defeating any goal of higher fidelity.
 
I suppose this would only reinforce the fact that the DSP wouldn’t be able to handle the load. Because if the DSP couldn’t handle double the load, it certainly couldn’t handle quadruple.
I hate to endorse your argument, but I think that's true. However, I don't think that's relevant to the context of the current discussion. The Nuprime H16 seems to be stuck on 48kHz sampling for reasons other than DSP capacity. DSP performs two key functions - Dolby & DTS decoding / rendering, and room correction / equalisation. I think it's clear that effective room correction places more demands on the DSP than the decoding, witness the hardware used by StormAudio for the respective functions. In pure mode, all we need is decoding, but this appears to be locked out in the current build.
 
I hate to endorse your argument, but I think that's true. However, I don't think that's relevant to the context of the current discussion. The Nuprime H16 seems to be stuck on 48kHz sampling for reasons other than DSP capacity. DSP performs two key functions - Dolby & DTS decoding / rendering, and room correction / equalisation. I think it's clear that effective room correction places more demands on the DSP than the decoding, witness the hardware used by StormAudio for the respective functions. In pure mode, all we need is decoding, but this appears to be locked out in the current build.
If Storm, which uses the most powerful SHARC chip available, also chooses 48k (with the help of quad ADSP-21489s for post-processing), I don’t think the 5× lower-powered ADSP-21569, which shares both Atmos processing and post-processing load on a single chip, is more capable. It’s likely running at 90% load when using the room correction at 48k as it is. Which is obviously why it can’t do the originally advertised 16 channels.
 
This is another example of why I think AI should be banned from forum discussions; otherwise we just get one computer arguing with another, each regurgitating and paraphrasing what it finds on the internet.
If the information is accurate, why does it matter where it was sourced from? Your claim was that Grok was wrong because it said load doubles with sample rate. You said it quadruples. But in your case you’re doubling the taps as well, which is a completely different operation from simply doubling the sample rate. So Grok was right. Doubling the sample rate doubles the load. Doubling the taps also doubles the load. Two different things. Sure, when you double both, the load quadruples. But nobody said anything about doubling taps.
 
If Storm, which uses the most powerful SHARC chip available, also chooses 48k (with the help of quad ADSP-21489s for post-processing), I don’t think the 5× lower-powered ADSP-21569, which shares both Atmos processing and post-processing load on a single chip, is more capable. It’s likely running at 90% load when using the room correction at 48k as it is. Which is obviously why it can’t do the originally advertised 16 channels.
I think it's regrettable but reasonable for the H16 to be limited to 48kHz when room EQ is applied, but this is about when room EQ is not applied.
 
I think it's regrettable but reasonable for the H16 to be limited to 48kHz when room EQ is applied, but this is about when room EQ is not applied.
Enabling pure mode isn’t going to disable the SRC on the SHARC input, even if it does free up some headroom.
 
Why would you double the taps when you double the sample rate? The reason 48k is used even when the DSP has loads of headroom is so the maximum number of taps can be utilized for FIR tasks such as room correction. If anything, you would halve the taps when you double the sample rate to keep the load the same. But this, of course, would degrade the sound as you double the sample rate, defeating any goal of higher fidelity.
The physical characteristics of a filter - type, frequency, slope, Q, etc. - are independent of the sample rate.
The convolution is applied across a particular time period, not a particular number of samples.
If there happen to be more samples within that period, then more processing is required.
However this is a tangent to the current discussion.
End.
 
Enabling pure mode isn’t going to disable the SRC on the SHARC input, even if it does free up some headroom.
THAT is the key question!
I don't know the answer, and I do want to know.
 
The physical characteristics of a filter - type, frequency, slope, Q, etc. - are independent of the sample rate.
The convolution is applied across a particular time period, not a particular number of samples.
If there happen to be more samples within that period, then more processing is required.
However this is a tangent to the current discussion.
End.
Nobody is going to double the taps when they double the sample rate. If the most taps are desired, it’s best to keep the sample rate as low as possible, so the tap count can be increased as far as the available resources allow. The sound quality will end up being better at 48k than 96k because of this. Unless, of course, you had unlimited power available. But nobody does.
 
THAT is the key question!
I don't know the answer, and I do want to know.
I think it’s clearly been answered over on Audiophilestyle. A couple of users claim it only outputs 48k regardless of what it’s fed, with pure mode enabled or disabled.
 
I think it’s clearly been answered over on Audiophilestyle. A couple of users claim it only outputs 48k regardless of what it’s fed, with pure mode enabled or disabled.
I don't think it has. One of the (advanced) users provided evidence that bass management (DSP) was still being applied even when pure mode was selected.
 