
Review and Measurements of Benchmark DAC3

Really awesome to have @John_Siau helping with this investigation!!!

The noise issue has been raised before. Pulling signals out of the noise floor is always a challenge, and in this case, when the noise is the same magnitude as the signal or larger, even filtering and averaging get dicey. Signal and distortion terms should be correlated, but noise decorrelation works against you here.
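The averaging point can be sketched numerically: with time-aligned (coherent) averaging, the correlated signal survives intact while uncorrelated noise drops roughly as the square root of the number of records. A minimal sketch in NumPy; the tone frequency and levels are arbitrary illustration values, not anyone's actual test setup:

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n = 48_000, 4_800                 # sample rate, samples per record
t = np.arange(n) / fs
signal = 1e-3 * np.sin(2 * np.pi * 1_000 * t)   # low-level 1 kHz tone

# 100 records: identical signal, fresh (uncorrelated) noise each time,
# with noise the same magnitude as the signal
records = signal + 1e-3 * rng.standard_normal((100, n))

avg = records.mean(axis=0)            # coherent (time-aligned) average

noise_before = np.std(records[0] - signal)
noise_after = np.std(avg - signal)
# Uncorrelated noise drops by ~sqrt(100) = 10x; the signal is untouched.
print(noise_before / noise_after)     # roughly 10
```

The catch noted above is that anything correlated with the signal (crosstalk, clock bleed) averages coherently too, so it does not drop out.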

Still very interested to see if the new unit measures the same as the previous one under the same conditions in Amir's test. It would be nice to know if he can refine the test or if the first unit was simply defective. Of course, such low-level spurious products would probably be impossible to tell in normal listening... Back when I was a tech, it was not the blown amplifiers that were hard to fix; it was the ones measuring 0.05% THD instead of 0.005% that were a bear to debug.

Edit: Was typing while John was posting. There are several posts about differential (balanced) operation that reinforce what John said above, along with some discussion of various "balanced" schemes that are only quasi-differential and do not provide as good (if any) common-mode rejection (or even-order harmonic suppression).

IMD is usually a bigger problem than HD (or THD) because IMD spurs are higher for a given input amplitude, and IMD spurs are in general not harmonically related to the signals, so they are easier to hear (and it's bad when distortion is easier to hear).
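To illustrate why IMD products are not harmonically related: for two tones they land at sums and differences of the tone frequencies. Taking the common 19 kHz + 20 kHz twin-tone pair (used here purely as an example), the low-order products fall back into the audio band even though the tones' own harmonics are ultrasonic:

```python
f1, f2 = 19_000, 20_000  # Hz, a common twin-tone IMD test pair

second_order = {abs(f2 - f1), f2 + f1}          # 1 kHz and 39 kHz
third_order = {abs(2 * f1 - f2), 2 * f2 - f1}   # 18 kHz and 21 kHz
harmonics = {2 * f1, 2 * f2, 3 * f1, 3 * f2}    # all at or above 38 kHz

# Products that land inside the 20 kHz audio band, far from either tone,
# with nothing nearby to mask them:
in_band = {f for f in second_order | third_order if f <= 20_000}
print(sorted(in_band))   # [1000, 18000]
```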
 
Really awesome to have @John_Siau helping with this investigation!!!

The noise issue has been raised before. Pulling signals out of the noise floor is always a challenge, and in this case, when the noise is the same magnitude as the signal or larger, even filtering and averaging get dicey. Signal and distortion terms should be correlated, but noise decorrelation works against you here.

Still very interested to see if the new unit measures the same as the previous one under the same conditions in Amir's test. It would be nice to know if he can refine the test or if the first unit was simply defective. Of course, such low-level spurious products would probably be impossible to tell in normal listening... Back when I was a tech, it was not the blown amplifiers that were hard to fix; it was the ones measuring 0.05% THD instead of 0.005% that were a bear to debug.
DonH56,

Here is one other thing that you need to be careful about when attempting to extract low-level signals from noise:

The slightest little bit of crosstalk between channels will completely confuse the results. Crosstalk at -130 dB will totally change the results in these so-called linearity tests.

Again, the linearity tests are only valid above the point at which the noise begins to bend the curve. This is 12 to 24 dB above the measurement noise floor. Any attempt to make conclusions about linearity below this point will be grossly incorrect. They are also completely irrelevant.
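The 12 to 24 dB figure can be sanity-checked with a simple power sum: an analyzer reports signal-plus-noise power, so a tone at s dB measured against a noise floor at n dB reads 10·log10(10^(s/10) + 10^(n/10)). A quick sketch, with a -140 dB floor chosen only for illustration:

```python
import math

def apparent_level_db(signal_db, noise_db):
    """Level an analyzer reports when signal and noise power add."""
    return 10 * math.log10(10 ** (signal_db / 10) + 10 ** (noise_db / 10))

noise_floor = -140.0  # dB, illustrative measurement noise floor
for margin in (0, 6, 12, 24):      # dB above the noise floor
    s = noise_floor + margin
    err = apparent_level_db(s, noise_floor) - s
    print(f"{margin:2d} dB above floor -> reads {err:.2f} dB high")
```

At the floor the tone reads 3 dB high; 12 dB above it the error is down to about 0.27 dB, and 24 dB above it about 0.02 dB, which is where the "bend" in the linearity curve straightens out.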
 
DonH56,

Here is one other thing that you need to be careful about when attempting to extract low-level signals from noise:

The slightest little bit of crosstalk between channels will completely confuse the results. Crosstalk at -130 dB will totally change the results in these so-called linearity tests.

Again, the linearity tests are only valid above the point at which the noise begins to bend the curve. This is 12 to 24 dB above the measurement noise floor. Any attempt to make conclusions about linearity below this point will be grossly incorrect. They are also completely irrelevant.

Agreed, along with power noise, clock noise, etc. Actually, you make a very good point: crosstalk, clock bleed (coupling), power-supply noise, and similar things are usually correlated, and so can be a bigger problem than just the random noise floor.

My world deals with signals far above audio in frequency but with similar or sometimes greater dynamic range. The level of difficulty is similar, and perhaps worse in some ways for audio; 60/120 Hz or even 1 MHz power-supply noise is sometimes much less of a concern when the carrier is at GHz frequencies (but phase modulation from that low-frequency signal is a royal PITA!)

Don will do, save yourself a few letters...
 
Again, the linearity tests are only valid above the point at which the noise begins to bend the curve. This is 12 to 24 dB above the measurement noise floor. Any attempt to make conclusions about linearity below this point will be grossly incorrect. They are also completely irrelevant.
They are not irrelevant in the way I use them. For one thing, I stop at 120 dB/20 bits. I agree any attempt to characterize below this is fool's gold.

Below that limit, my interest is not academic as to what the DAC chip is doing, but rather how well the whole system is doing. To that end, I don't attempt to fully filter everything. If there is some amount of power supply noise getting in there, let it be.

There are also a number of "multi-bit" DACs that don't do it right, and we want to see that. Here is an example:
[Image: Schiit Yggdrasil DAC vs Topping DX7s DAC linearity measurement]


I have carefully created my filter so that the analyzer loopback is as good as the Topping graph there. Anything that deviates from that is then something to look at.

So in that sense, the linearity test is a practical figure of merit.
 
DonH56,

Here is one other thing that you need to be careful about when attempting to extract low-level signals from noise:

The slightest little bit of crosstalk between channels will completely confuse the results. Crosstalk at -130 dB will totally change the results in these so-called linearity tests.

Again, the linearity tests are only valid above the point at which the noise begins to bend the curve. This is 12 to 24 dB above the measurement noise floor. Any attempt to make conclusions about linearity below this point will be grossly incorrect. They are also completely irrelevant.
Yes, when doing linearity checks at lower levels I quickly learned to have signal in only one channel at a time.
 
They are not irrelevant in the way I use them. For one thing, I stop at 120 dB/20 bits. I agree any attempt to characterize below this is fool's gold.

Below that limit, my interest is not academic as to what the DAC chip is doing, but rather how well the whole system is doing. To that end, I don't attempt to fully filter everything. If there is some amount of power supply noise getting in there, let it be.

But when you say "the whole system is doing"

Doing what?

Is it possible that the spuria you are not filtering have no impact on actual audio quality, and that including them just becomes misleading?
 
They are not irrelevant in the way I use them. For one thing, I stop at 120 dB/20 bits. I agree any attempt to characterize below this is fool's gold.

Below that limit, my interest is not academic as to what the DAC chip is doing, but rather how well the whole system is doing. To that end, I don't attempt to fully filter everything. If there is some amount of power supply noise getting in there, let it be.

There are also a number of "multi-bit" DACs that don't do it right, and we want to see that. Here is an example:
[Image: Schiit Yggdrasil DAC vs Topping DX7s DAC linearity measurement]

I have carefully created my filter so that the analyzer loopback is as good as the Topping graph there. Anything that deviates from that is then something to look at.

So in that sense, the linearity test is a practical figure of merit.

If John is referring to the standard "fundamental linearity test", which employs heavy filtering, I don't see a contradiction between what he says and what you do.
 
They are not irrelevant in the way I use them. For one thing, I stop at 120 dB/20 bits. I agree any attempt to characterize below this is fool's gold.

Below that limit, my interest is not academic as to what the DAC chip is doing, but rather how well the whole system is doing. To that end, I don't attempt to fully filter everything. If there is some amount of power supply noise getting in there, let it be.

There are also a number of "multi-bit" DACs that don't do it right, and we want to see that. Here is an example:
[Image: Schiit Yggdrasil DAC vs Topping DX7s DAC linearity measurement]

I have carefully created my filter so that the analyzer loopback is as good as the Topping graph there. Anything that deviates from that is then something to look at.

So in that sense, the linearity test is a practical figure of merit.
Amirm,

This is what can happen in a multibit sigma-delta converter where the converter is constructed from a 2, 3, or 4-bit converter inside a sigma delta loop when nothing is done to linearize the 2, 3, or 4 bit converter. This may be the cause. Do you know what converter chip was used?

The ESS converters have a 4-bit core (producing 16 levels) that is constructed from 16 equally weighted 1-bit converters. If the digital input to this converter is say 8, then 8 of the 1-bit converters must be turned on. The ESS randomly selects 8 out of the 16 available 1-bit converters.

On every sample (at the oversampled rate) the active 1-bit elements are randomly selected. As a result of this randomization, any mismatch in the element size produces noise that is shaped into ultrasonic frequencies instead of producing linearity errors.

To further improve upon this, the ESS has separate 4-bit converters for the + and - legs of each balanced output. On the ES9028PRO, there are 8 balanced output channels. Each of these channels has two 4-bit converters that are independently randomized. In the DAC3, 4 balanced outputs are summed to form each of the balanced outputs. This means that there are 16 4-bit converters per balanced output on the DAC3. Each of these 4-bit converters is built from 16 1-bit converters. If you do the math, there are 256 1-bit converters driving each output on the DAC3. All are driven by random mapping that changes on every sample at the oversampled rate. It is really an amazing system and it is very effective at providing virtually-perfect linearity.

BTW, the ultrasonic noise (produced by the noise shaping of element-size mismatches) is reduced by the massively parallel structure and is then completely removed with analog filters in the output stage.
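The effect of the random element selection described above can be sketched with a toy model (the 1% mismatch and the seed below are illustrative values, not ESS's actual figures or algorithm): with a fixed thermometer selection, element mismatch produces a deterministic, code-dependent error, i.e. static nonlinearity; with random selection the same mismatch averages out to a linear transfer curve, leaving only (shapeable) noise.

```python
import numpy as np

rng = np.random.default_rng(42)
# 16 nominally equal 1-bit elements with 1% random mismatch (toy values)
elements = 1.0 + 0.01 * rng.standard_normal(16)

def fixed_dac(code):
    # Always uses the first `code` elements: mismatch becomes a fixed,
    # code-dependent error, i.e. a bend in the transfer curve.
    return elements[:code].sum()

def random_dac(code):
    # Randomly picks `code` of the 16 elements on every sample: mismatch
    # becomes zero-mean noise instead of static nonlinearity.
    return elements[rng.choice(16, size=code, replace=False)].sum()

code = 8
ideal = code * elements.mean()               # the linear target
fixed_err = fixed_dac(code) - ideal          # same every sample, never averages away
random_avg = np.mean([random_dac(code) for _ in range(20_000)])
print(abs(random_avg - ideal))               # ~0: linear on average
```

The residual sample-to-sample scatter of `random_dac` is the mismatch-induced noise that, in the real converter, the sigma-delta loop shapes into ultrasonic frequencies.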

Edit:

It turns out that the Yggdrasil uses 4 20-bit converters (AD5791). These may or may not be in a sigma delta loop. Obviously this configuration can have linearity problems and this shows up in Amirm's test. While it should have been possible to noise shape some of the errors, this may not have been done in this product.
 
Is it possible that the spuria you are not filtering have no impact on actual audio quality, and that including them just becomes misleading?
It is probable that none of this has impact on audible audio quality and is simply a measure of engineering excellence.
 
It is probable that none of this has impact on audible audio quality and is simply a measure of engineering excellence.

I largely agree with that. I think, from the little testing I've done and seen, that unless it's a broken design, sigma-delta DACs are inherently linear all the way down. So then your linearity test becomes essentially a noise test, which is about engineering excellence. Measuring noise is pretty straightforward and simple.

Now R2R ladder DACs, and other multi-bit DACs (which are often hybrids of sorts and not what the ad hype leads customers to think), aren't inherently linear. With those you can find linearity issues beyond the effect of noise. I think I've written this before: can someone point to a ladder DAC or true multi-bit DAC that has linearity equal to sigma-delta DACs without costing 10x as much?
 
Amirm,

This is what can happen in a multibit sigma-delta converter where the converter is constructed from a 2, 3, or 4-bit converter inside a sigma delta loop when nothing is done to linearize the 2, 3, or 4 bit converter. This may be the cause. Do you know what converter chip was used?

The ESS converters have a 4-bit core (producing 16 levels) that is constructed from 16 equally weighted 1-bit converters. If the digital input to this converter is say 8, then 8 of the 1-bit converters must be turned on. The ESS randomly selects 8 out of the 16 available 1-bit converters.

On every sample (at the oversampled rate) the active 1-bit elements are randomly selected. As a result of this randomization, any mismatch in the element size produces noise that is shaped into ultrasonic frequencies instead of producing linearity errors.

To further improve upon this, the ESS has separate 4-bit converters for the + and - legs of each balanced output. On the ES9028PRO, there are 8 balanced output channels. Each of these channels has two 4-bit converters that are independently randomized. In the DAC3, 4 balanced outputs are summed to form each of the balanced outputs. This means that there are 16 4-bit converters per balanced output on the DAC3. Each of these 4-bit converters is built from 16 1-bit converters. If you do the math, there are 256 1-bit converters driving each output on the DAC3. All are driven by random mapping that changes on every sample at the oversampled rate. It is really an amazing system and it is very effective at providing virtually-perfect linearity.

BTW, the ultrasonic noise (produced by the noise shaping of element-size mismatches) is reduced by the massively parallel structure and is then completely removed with analog filters in the output stage.

Edit:

It turns out that the Yggdrasil uses 4 20-bit converters (AD5791). These may or may not be in a sigma delta loop. Obviously this configuration can have linearity problems and this shows up in Amirm's test. While it should have been possible to noise shape some of the errors, this may not have been done in this product.

I see the idea promoted that sigma-delta DACs upsample and actually operate like DSD internally. Therefore, subjective claims are made that upsampling/converting everything to DSD in software before feeding it to sigma-delta DACs is feeding them in their native format and improves sound quality, gaining some of the 'benefits' of DSD, which is usually promoted as inherently superior.

So when the DAC3 does DSD over DoP 1.1 and it's said to be native conversion, how is the ESS chip handling this? Does it use only 1 of the 4 bits in the converter, or does it parallel all of them and then function in 1-bit mode only?
 
So then your linearity test becomes essentially a noise test, which is about engineering excellence. Measuring noise is pretty straightforward and simple.

John Atkinson at Stereophile assesses 'resolution' by checking the drop in the noise floor with dithered 16- and 24-bit data representing a 1kHz tone at –90dBFS (dividing the difference by 6 to get the increase in resolution over 16 bits).
Perhaps that might be a useful test to add? Would also allow cross checking of the Stereophile results.
 
John Atkinson at Stereophile assesses 'resolution' by checking the drop in the noise floor with dithered 16- and 24-bit data representing a 1kHz tone at –90dBFS (dividing the difference by 6 to get the increase in resolution over 16 bits).
Perhaps that might be a useful test to add? Would also allow cross checking of the Stereophile results.
I am considering that. The problem is that I can't test it with USB. For some reason, Audio Precision software doesn't allow me to select bit depth over that connection.
 
John Atkinson at Stereophile assesses 'resolution' by checking the drop in the noise floor with dithered 16- and 24-bit data representing a 1kHz tone at –90dBFS (dividing the difference by 6 to get the increase in resolution over 16 bits).
Perhaps that might be a useful test to add? Would also allow cross checking of the Stereophile results.

This test makes for an easy-to-understand graphic. Still, nothing more is revealed than by a noise floor measurement. Feed it -60 or -90 dB signals, notch the signal out, and see what is left: you have the noise floor.
 
My understanding was that the 'noise floor' for the 16-bit dithered signal in JA's graphs is the dither noise itself, below the lowest bit of the 16-bit signal.
If the measured noise floor is, say, 30 dB lower with 24-bit data, doesn't that indicate an increase of about 5 bits of resolution over 16 bits?
I may be misinterpreting what JA is saying with these graphs(?). I guess it's different from the linearity (accuracy) of very low-level signals, though.
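The arithmetic behind that reading is just a division: each bit of resolution is worth about 6.02 dB (20·log10(2)), so a measured drop in the noise floor converts to extra bits by dividing by 6.02. A trivial sketch:

```python
DB_PER_BIT = 6.02  # ~20*log10(2), one bit of resolution in dB

def extra_bits(noise_drop_db):
    """Resolution gained over 16 bits, given the drop in the noise
    floor measured when switching from 16-bit to 24-bit dithered data."""
    return noise_drop_db / DB_PER_BIT

print(round(extra_bits(30.0), 1))   # ~5 bits, i.e. ~21-bit performance
print(round(extra_bits(48.0), 1))   # ~8 bits would be the full 24-bit ideal
```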
 
Noise gets a little tricky... The ideal quantization noise floor goes as roughly 9N dB, compared to the SNR, which goes as 6N dB. That is, every additional bit of resolution ideally increases the SNR by 6 dB and drops the noise floor by 9 dB. But there are many other sources of noise besides quantization noise, converters are rarely ideal, and SINAD (signal to noise and distortion) is often reported as SNR (and THD). Etc.
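The 6 dB-per-bit SNR part of this is easy to check numerically: quantizing a full-scale sine to N bits (undithered, with the tone not phase-locked to the sample clock) should give an SNR near the textbook 6.02N + 1.76 dB. A sketch; the frequency value is arbitrary, chosen only to decorrelate the quantization error from the tone:

```python
import numpy as np

def quantized_sine_snr_db(bits, n_samples=1 << 16):
    # Full-scale sine at an "awkward" normalized frequency so the
    # quantization error behaves like uncorrelated noise.
    t = np.arange(n_samples)
    x = np.sin(2 * np.pi * 0.1234567 * t)
    step = 2.0 / 2 ** bits                  # quantizer step over [-1, 1]
    err = np.round(x / step) * step - x     # quantization error
    return 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))

for bits in (16, 20, 24):
    ideal = 6.02 * bits + 1.76
    print(f"{bits} bits: measured {quantized_sine_snr_db(bits):.1f} dB, "
          f"ideal {ideal:.1f} dB")
```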

Lab measurements using FFTs and averaging require extra work to verify you are measuring the noise floor of the device and not that of the equipment (and ditto everything else, natch: distortion spurs, clock spurs, power-supply spurs, etc.)

Modern high-resolution ADCs and DACs make it easy for folks to generate plots with very high numbers for things like SNR and SINAD, but when looking at very low levels it is very difficult to make a good measurement of the thing you actually want to measure, i.e. to determine that what you are measuring is the device itself and not some artifact of the test system. There's a reason it takes things like those expensive AP units to provide reference-quality measurements. What I find amazing is how close much (much!) less expensive gear can come to those measurements.
 
My understanding was that the 'noise floor' for the 16-bit dithered signal in JA's graphs is the dither noise itself, below the lowest bit of the 16-bit signal.
If the measured noise floor is, say, 30 dB lower with 24-bit data, doesn't that indicate an increase of about 5 bits of resolution over 16 bits?
I may be misinterpreting what JA is saying with these graphs(?). I guess it's different from the linearity (accuracy) of very low-level signals, though.

I don't think you've misunderstood what JA does with it. My point is simply that this is an artifact of 16 bits vs. 24 bits. One could feed it 10-bit-level noise, switch to the 24-bit level, and see how much it drops. In the end, no equipment will show 24-bit-with-dither levels of SNR because of thermal noise in the analog world. That would be lower than -140 dBFS for 24 bits, while thermal noise limits us to maybe 22 bits at best. So this approach is effectively a roundabout way of seeing how much noise is in the DAC when reproducing a rather low-level signal. You aren't going to learn much from the extra effort vs. simply knowing the noise level of the DUT.

BTW, to my knowledge, in audio DACs Mola Mola holds the record, claiming a device that has 140 dB SNR, with THD and IMD unmeasurable though estimated at -150 dB or lower. We just need to get one shipped to Amir for testing.

https://www.mola-mola.nl/dac.php

https://www.hypex.nl/img/upload/doc/an_wp/WP_AES120BP_Simple_ultralow_distortion_digital_PWM.pdf
 
BTW, to my knowledge, in audio DACs Mola Mola holds the record, claiming a device that has 140 dB SNR, with THD and IMD unmeasurable though estimated at -150 dB or lower. We just need to get one shipped to Amir for testing.
Yes indeed. I looked and I can't find a single independent measurement of it.

Let's have a few of you pester them to send one in for review. :)
 
BTW, to my knowledge, in audio DACs Mola Mola holds the record, claiming a device that has 140 dB SNR, with THD and IMD unmeasurable though estimated at -150 dB or lower.
MSB is much funnier: they claimed that their Select DAC has 28.5 effective bits :eek:
http://www.msbtech.com/products/dacSelect.php
The link above automatically redirects to the homepage for unknown reasons, but I saved the webpage and attached it here.

In one of the screenshots it says PicoScope 4262; I don't know how a scope like this could measure 28.5 effective bits :facepalm:
https://www.picotech.com/oscilloscope/4262/picoscope-4262-overview
 

Attachments

  • MSB Select.zip
Amirm,

This is what can happen in a multibit sigma-delta converter where the converter is constructed from a 2, 3, or 4-bit converter inside a sigma delta loop when nothing is done to linearize the 2, 3, or 4 bit converter. This may be the cause. Do you know what converter chip was used?

The ESS converters have a 4-bit core (producing 16 levels) that is constructed from 16 equally weighted 1-bit converters. If the digital input to this converter is say 8, then 8 of the 1-bit converters must be turned on. The ESS randomly selects 8 out of the 16 available 1-bit converters.

On every sample (at the oversampled rate) the active 1-bit elements are randomly selected. As a result of this randomization, any mismatch in the element size produces noise that is shaped into ultrasonic frequencies instead of producing linearity errors.

To further improve upon this, the ESS has separate 4-bit converters for the + and - legs of each balanced output. On the ES9028PRO, there are 8 balanced output channels. Each of these channels has two 4-bit converters that are independently randomized. In the DAC3, 4 balanced outputs are summed to form each of the balanced outputs. This means that there are 16 4-bit converters per balanced output on the DAC3. Each of these 4-bit converters is built from 16 1-bit converters. If you do the math, there are 256 1-bit converters driving each output on the DAC3. All are driven by random mapping that changes on every sample at the oversampled rate. It is really an amazing system and it is very effective at providing virtually-perfect linearity.

BTW, the ultrasonic noise (produced by the noise shaping of element-size mismatches) is reduced by the massively parallel structure and is then completely removed with analog filters in the output stage.

Edit:

It turns out that the Yggdrasil uses 4 20-bit converters (AD5791). These may or may not be in a sigma delta loop. Obviously this configuration can have linearity problems and this shows up in Amirm's test. While it should have been possible to noise shape some of the errors, this may not have been done in this product.


Hi John,

thank you for your detailed explanation. What I am not understanding is why the ESS chip chooses the values randomly. Would it not be better to choose the values based on an average? Wouldn't that approach also randomize the error? My background is on the digital side; if the reason lies on the analog side, a short answer would be enough for me.

Greetings
Fu
 