
Review and Measurements of Benchmark DAC3

Yes indeed. I looked and I can't find a single independent measurement of it.

Let's have a few of you pester them to send one in for review. :)
I suspect Bruno would rather have you buy one. I would be surprised if he loaned one to JA at Stereophile, too. Until independent measurements are made, I remain highly skeptical.

But, it is refreshing to see Benchmark stepping up to the plate with a loaner to Amir for measurement purposes. That says a lot of things about them, all good.
 
Hi John,

thank you for your detailed explanation. What I am not understanding is why the ESS chip chooses the values randomly. Would it not be better to choose an average value? Shouldn't that approach randomize the error as well? My background is on the digital side; if there is some reason on the analog side, a short answer would be enough for me.

Greetings
Fu

Welcome to The World of Analog. :) Or at least of data converters, where digital and analog meet, for better or worse...

Not John, but I'll take a shot at it...

Think of each bit source as having its own (random) error value. It is not practical to make them perfect (they are not digital cells but contain analog voltage or current sources of the right, or nearly-right, value). By randomly selecting bit sources at each sample, the individual bit errors are randomized, turning what would otherwise be a correlated (larger) distortion spur into less objectionable random noise.
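This can be illustrated with a toy simulation (my own sketch, not ESS's actual algorithm): a converter built from 64 unit elements with 1% random mismatch, driven by a sine, with the elements either always picked in the same fixed order or picked at random each sample. All the names and values below are invented for illustration.

```python
# Toy sketch (not the ESS implementation): 64 unit elements with 1% random
# mismatch, selected either in a fixed order or at random for each sample.
import numpy as np

rng = np.random.default_rng(0)
N_ELEMENTS = 64
mismatch = 1.0 + 0.01 * rng.standard_normal(N_ELEMENTS)  # each "bit source" slightly off

n = 8192
code = np.round((0.5 + 0.5 * np.sin(2 * np.pi * 50 * np.arange(n) / n))
                * N_ELEMENTS).astype(int)                # ideal thermometer code, 0..64

def convert(codes, randomize):
    """Sum k unit elements per sample, chosen fixed or at random."""
    out = np.empty(len(codes), dtype=float)
    for i, k in enumerate(codes):
        idx = rng.choice(N_ELEMENTS, k, replace=False) if randomize else np.arange(k)
        out[i] = mismatch[idx].sum()
    return out

fund_bin = 50
ref = np.abs(np.fft.rfft(code - code.mean()))[fund_bin]  # ideal fundamental level

def worst_mismatch_spur(randomize):
    """Largest spectral line of the element-mismatch error, in dB vs the tone."""
    err = convert(code, randomize) - code                # mismatch error alone
    spec = np.abs(np.fft.rfft(err - err.mean()))
    spec[fund_bin - 1:fund_bin + 2] = 0                  # drop pure gain error at the tone
    return 20 * np.log10(spec.max() / ref)

print("fixed selection, worst error spur:  %.1f dBc" % worst_mismatch_spur(False))
print("random selection, worst error spur: %.1f dBc" % worst_mismatch_spur(True))
```

With fixed selection the mismatch error is a deterministic function of the code, so it concentrates into harmonics; with per-sample random selection the same error energy spreads across the spectrum as noise, which is exactly the trade described above.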

HTH - Don

p.s. Years ago I designed a 16-bit DAC for a much higher-frequency application. It was a segmented design, with the upper segments unary (unit-value) current sinks and the lower bits using an R-2R ladder on top of similar unit sinks. All the current sinks were trimmed using a combination of coarse "fuse" trims followed by final "flag" laser trims. It took about two minutes to trim each DAC, an eternity in the production-test world. It was truly 16-bit linear and could be clocked at several hundred MHz, almost twenty years ago. It was not cheap.
 
I see the idea promoted that sigma-delta DACs upsample and actually operate like DSD internally. From this, subjective claims are made that upsampling/converting everything to DSD in software before feeding it to a sigma-delta DAC supplies the chip in its native format and improves sound quality, gaining some of the 'benefits' of DSD, which is usually promoted as inherently superior.

So when the DAC3 does DSD over DoP 1.1 and it's said to be a native conversion, how is the ESS chip handling this? Does it use only 1 of the 4 bits in the converter, or does it parallel all of them and then function in 1-bit mode only?
The number of bits used in parallel for DSD is a function of the volume control setting. The massively parallel structure of the ESS provides a convenient method of controlling DSD volume without converting to PCM. Given 32 equally-weighted 1-bit converters you have 32 volume steps before dithering the 5 bits at a 2.8 MHz rate. Basically you have 1 bit for the audio and 5 bits for the volume control operating at a 2.8 MHz rate. The dither noise from the volume control is 24 dB lower than the dither noise from the 1-bit audio, so it is completely insignificant. The volume control is 8 bits dithered to 5 bits. Each DSD volume control step is 0.5 dB. Given the fact that we sum 4 ESS channels together, the number of 1-bit converters running in parallel is actually 128.
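A rough numerical sketch of the volume scheme described above (my own toy model; the rectangular dither and the 0..32 count range are simplifying assumptions, not Benchmark's code): the 8-bit gain word is dithered down to a per-sample count of active 1-bit converters at the DSD rate, so the average gain lands between the 32 coarse steps.

```python
# Toy model of a DSD volume control via a dithered count of active 1-bit
# converters. The dither details here are assumptions, not Benchmark's code.
import numpy as np

rng = np.random.default_rng(1)
N_CONV = 32                                   # equally weighted 1-bit converters

def dithered_count(gain_8bit, n_samples):
    """Per-sample number of active converters; its mean tracks gain_8bit/256."""
    target = gain_8bit / 256 * N_CONV         # fractional converter count
    dither = rng.random(n_samples) - 0.5      # 1 LSB rectangular dither at the DSD rate
    return np.clip(np.round(target + dither), 0, N_CONV)

for db in (0.0, -0.5, -1.0):                  # 0.5 dB volume steps
    gain = int(round(10 ** (db / 20) * 256))  # 8-bit gain word
    counts = dithered_count(gain, 280_000)    # ~0.1 s worth at 2.8224 MHz
    print("target %5.1f dB -> average gain %6.2f dB"
          % (db, 20 * np.log10(counts.mean() / N_CONV)))
```

The point of the dither is that a fractional average count (say 30.25 of 32 converters) is realized by toggling between adjacent integer counts at the 2.8 MHz rate, so the volume quantization error becomes low-level high-rate noise rather than a stepped gain error.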
 
This test makes for an easy-to-understand graphic. Still, there is nothing more revealed than a noise floor measurement. Feed it -60 or -90 dB signals, notch that out, and see what is left. You have the noise floor.
I agree. It is much easier to determine resolution with a noise measurement. In contrast, it is nearly impossible to determine resolution from a linearity test. Use the linearity test to look for linearity problems in multibit converters, but do not attempt to use it to determine resolution. The use of a -60 or -90 dB tone is a good idea as this will expose the noise modulation issues of some converters. Noise modulation can raise the noise floor when a signal is present which effectively reduces the resolution. This would be missed when running a simple idle channel noise measurement. But, noise modulation problems are not as common as they were a few years back.
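The measurement endorsed above can be sketched in a few lines (a toy model; the converter function and its -120 dBFS floor are invented for illustration): play a low-level tone, notch it out in the frequency domain, and compare the residual to the idle-channel noise.

```python
# Toy sketch of a noise-modulation check: compare idle-channel noise to the
# residual noise with a -60 dBFS tone present (tone notched out afterwards).
# The converter model and its -120 dBFS floor are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
fs, n = 48_000, 48_000
t = np.arange(n) / fs
tone = 10 ** (-60 / 20) * np.sin(2 * np.pi * 997 * t)    # -60 dBFS test tone

def converter(x, noise_mod=0.0):
    """Toy DAC: flat -120 dBFS noise plus optional signal-dependent noise."""
    base = 10 ** (-120 / 20)
    return x + (base + noise_mod * np.abs(x)) * rng.standard_normal(len(x))

def residual_noise_db(x):
    """Notch the (bin-centered) test tone and report the remaining RMS."""
    spec = np.fft.rfft(x) / (n / 2)
    spec[995:1000] = 0                        # 997 Hz tone sits exactly on bin 997
    return 20 * np.log10(np.sqrt(0.5 * np.sum(np.abs(spec) ** 2)) + 1e-30)

print("idle-channel noise:      %.1f dBFS" % residual_noise_db(converter(np.zeros(n))))
print("noise under -60 dB tone: %.1f dBFS" % residual_noise_db(converter(tone)))
print("with noise modulation:   %.1f dBFS" % residual_noise_db(converter(tone, 1e-3)))
```

A clean converter shows the same floor in all three cases; a noise-modulating one shows a raised floor whenever signal is present, which is exactly what an idle-channel measurement alone would miss.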
 
I agree. It is much easier to determine resolution with a noise measurement. In contrast, it is nearly impossible to determine resolution from a linearity test. Use the linearity test to look for linearity problems in multibit converters, but do not attempt to use it to determine resolution.
I've been trying to say this..... o_O:D
Thanks....BENCHMARK.
 
In contrast, it is nearly impossible to determine resolution from a linearity test. Use the linearity test to look for linearity problems in multibit converters, but do not attempt to use it to determine resolution.
Well, here is the linearity of a sigma-delta converter using the FFT method, which excludes all but the signal level at the test frequency:

[attached graph: sigma-delta DAC linearity]


Deviations from the target voltage start fairly early and become worse and worse until we get to -125 dB, where correlation between input and output is all but lost. Would you say this is not a useful test and doesn't show the drop in accuracy as we go down in level?
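For reference, the mechanics of this FFT-based linearity test can be sketched as follows (a toy model with an undithered 16-bit quantizer standing in for the DAC; the real measurement uses an analyzer, not this code): generate a tone at each target level and read back only the amplitude in the test-tone FFT bin.

```python
# Sketch of an FFT-based linearity test: generate a tone at each target level,
# run it through a toy converter (an undithered 16-bit quantizer here), and
# read back only the amplitude in the test-tone FFT bin.
import numpy as np

fs, n, f0 = 48_000, 48_000, 200              # 200 Hz tone, 1 Hz FFT bins
t = np.arange(n) / fs

def measured_level_db(target_db):
    x = 10 ** (target_db / 20) * np.sin(2 * np.pi * f0 * t)
    lsb = 2 ** -15                           # 16-bit quantizer step
    y = np.round(x / lsb) * lsb              # toy converter
    spec = np.abs(np.fft.rfft(y)) / (n / 2)
    return 20 * np.log10(spec[f0] + 1e-30)   # look only at the test-tone bin

for level in range(-70, -131, -10):
    m = measured_level_db(level)
    print("target %4d dBFS -> measured %8.1f dBFS (error %+8.1f dB)"
          % (level, m, m - level))
```

Even this crude model reproduces the qualitative shape of such plots: accurate at moderate levels, with the error growing rapidly once the tone amplitude approaches the converter's step size.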
 
Well, here is the linearity of a sigma-delta converter using the FFT method, which excludes all but the signal level at the test frequency:

[attached graph: sigma-delta DAC linearity]


Deviations from the target voltage start fairly early and become worse and worse until we get to -125 dB, where correlation between input and output is all but lost. Would you say this is not a useful test and doesn't show the drop in accuracy as we go down in level?


And what is the reason for losing the correlation? Hitting the noise floor? If that is so, then this graph is not really showing anything new that hasn't already been shown on the noise graph.

Btw, at -125 it seems perfectly accurate to me! :D
 
And what is the reason for losing the correlation? Hitting the noise floor? If that is so, then this graph is not really showing anything new that hasn't already been shown on the noise graph.
Noise is broadband. This high-resolution FFT is extremely selective around the test tone so noise doesn't explain it.
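That selectivity comes from FFT processing gain: broadband noise power is divided among all the FFT bins, so a single bin only sees the total noise reduced by roughly 10·log10(number of bins). A quick sketch (levels and the ~1M-point size are invented for illustration):

```python
# Sketch of FFT processing gain: a -130 dBFS tone reads cleanly even though
# the total broadband noise is -100 dBFS, because each of the ~500k bins only
# carries a tiny slice of the noise power. Levels here are invented.
import numpy as np

rng = np.random.default_rng(3)
n = 2 ** 20                                  # ~1M-point FFT
x = (10 ** (-130 / 20) * np.sin(2 * np.pi * 1000 * np.arange(n) / n)  # tone on bin 1000
     + 10 ** (-100 / 20) * rng.standard_normal(n))                    # -100 dBFS noise

spec = np.abs(np.fft.rfft(x)) / (n / 2)
noise_bins = np.delete(spec, [0, 1000])      # everything except DC and the tone
print("tone bin reads:      %.1f dBFS" % (20 * np.log10(spec[1000])))
print("per-bin noise floor: %.1f dBFS" % (20 * np.log10(np.sqrt(np.mean(noise_bins ** 2)))))
```

So when the level in the tone bin deviates from the target, the broadband noise falling into that one bin is far too small to be the explanation.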

Btw, at -125 it seems perfectly accurate to me! :D
The error unfortunately starts much earlier than that. It is harder to see on the FFT graph than when it is plotted differentially:

[attached graph: linearity error (deviation) vs. level]


We have over 1 dB of error at -110 dBFS, and the accuracy error starts below -90 dB.

The larger point here is that we do see differentiation in how products perform -- almost all with sigma-delta DACs. Hence the reason the measurements have value. If they all had the same error, sure, we wouldn't bother, but that is not what we see.
 

I'm aware that noise is broadband, but the point is that noise and other garbage do exist around that 200 Hz tone, so I was asking if that was the cause of losing linearity.

So, what IS the reason for linearity error shown on these graphs?

Would that graph look the same if you run it on some other frequency, say 1kHz?
 
I'm aware that noise is broadband, but the point is that noise and other garbage do exist around that 200 Hz tone, so I was asking if that was the cause of losing linearity.
And the addition of garbage is exactly the type of thing we want to measure. We do not want to rely on the theoretical claim that a DAC *chip* doesn't have those issues. We want to know whether the system built on top of it is free of distortion and noise.

The same garbage exists in THD+N and IMD+N measurements, by the way. I am not seeing a call to stop running those.
 
Would that graph look the same if you run it on some other frequency, say 1kHz?
I have only tested that in the case of Schiit Yggdrasil and it did not make a difference.
 
And addition of garbage is exactly the type of thing we want to measure. We do not want to rely on theoretical aspect of a DAC *chip* not having those issues. We want to know if a system that is built on top of it is free of distortion and noise.

Same garbage exists in THD+N and IMD+N distortions by the way. I am not seeing a call for not running those just the same.

But is that garbage coming from the DAC chip or from other sources (USB, analog stage, etc.)?
And how come the noise+garbage doesn't add to the signal, making it higher instead of lower?

Or you're saying that this non-linearity is coming from the DAC chip itself?

Btw, I'm not advocating that you drop this measurement, I'm asking how to interpret it correctly and what is causing this error.
 
I have only tested that in the case of Schiit Yggdrasil and it did not make a difference.

Does that lead to the conclusion that the linearity graph will look the same at all frequencies?
 
But is that garbage coming from the DAC chip or from other sources (USB, analog stage, etc.)?
It doesn't matter to us as consumers where it is coming from. That is for the designer to figure out and fix.

Would that graph look different if you ran it at another frequency?
Here it is at 1 kHz comparing to the previous 200 Hz one:
[attached graph: 1 kHz vs. 200 Hz linearity comparison]

As you see, they broadly agree with each other.
 
It doesn't matter to us as consumers where it is coming from. That is for the designer to figure out and fix.


Here it is at 1 kHz comparing to the previous 200 Hz one:
[attached graph: 1 kHz vs. 200 Hz linearity comparison]
As you see, they broadly agree with each other.

Yes, they are pretty much the same. Interesting..

At the end you are right, but it would be nice to know.
@John_Siau, maybe you can share your thoughts on the cause of this deviation from linearity? :)
 
@amirm, have you discovered a way to measure linearity via the USB port? If I remember correctly, that was the reason why you didn't measure it on the Topping D10, right?
 
@amirm, have you discovered a way to measure linearity via the USB port?
Don't need to anymore as my new Audio Precision APx555 analyzer can drive the USB DAC as easily as any other input on it. That is how these linearity measurements were performed. Since USB is so popular, I use that now as the primary input on the DAC unless there is a reason otherwise.

Whenever you see a graph with white background, it is from my new analyzer.
 
Don't need to anymore as my new Audio Precision APx555 analyzer can drive the USB DAC as easily as any other input on it. That is how these linearity measurements were performed. Since USB is so popular, I use that now as the primary input on the DAC unless there is a reason otherwise.

Whenever you see a graph with white background, it is from my new analyzer.

Excellent! :)

Is there any chance you can test linearity on D10?
 
Is there any chance you can test linearity on D10?
Here is a snapshot:

[attached graph: Topping D10 linearity]


I am not happy with it, though. The output from the D10 at -100 dB and lower fluctuates, which means the above results are not repeatable from run to run.
 