
Article: Understanding Digital Audio Measurements

I am just writing because the R2R ladder is not being given the respect it deserves. What some people do not realize is that an R2R ladder can settle to the correct output far faster than a Delta-Sigma converter. Delta-Sigma requires a feedback loop to correct itself, so its response is "slow" compared to the R2R ladder. If one can create very precise resistors, R2R will beat Delta-Sigma hands down.
Delta-sigma converters use oversampling to obtain the desired bandwidth and noise floor within that bandwidth, and that oversampling also determines their transient response. An oversampled delta-sigma design can easily match the step response (for example) of an audio R2R DAC. It does get trickier for RF DACs, particularly those for wideband systems. Stability is pretty much a solved problem AFAIK, at least for completed designs (along the way, designers are pulling their hair out, as I can attest), and of course stability of the output buffer and filter amplifiers is always a concern. The feedback loop(s) settles quickly relative to the audio band since oversampling requires very high bandwidth. In any event, the anti-imaging filter will determine the maximum bandwidth and thus signal settling time (irrespective of thermal tails and such), meaning in most cases an oversampled delta-sigma design will in practice settle faster than a Nyquist-rate R2R DAC.
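The filter-bandwidth point can be put in numbers with the usual single-pole rule of thumb (rise time ≈ 0.35 / bandwidth). The bandwidth figures below are my own illustrative picks, not measurements of any particular DAC:

```python
# Rule-of-thumb sketch: for a single-pole low-pass filter, the 10-90%
# rise time is roughly 0.35 / bandwidth. The output (anti-imaging)
# filter therefore bounds how fast either architecture can settle.
def rise_time_s(bandwidth_hz: float) -> float:
    return 0.35 / bandwidth_hz

print(rise_time_s(20e3))   # audio-band filter: ~17.5 microseconds
print(rise_time_s(1.5e6))  # wideband oversampled stage: ~0.23 microseconds
```

The wider bandwidth inside an oversampled loop is why its internal feedback settles quickly relative to the audio band.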

The purpose of reviews here is to present measured results of audio devices including DACs. It is not to provide detailed DAC performance measurements, but typical audio performance measurements to provide readers real-world results versus marketing claims. The target audience is audiophiles of various technical levels, thus SINAD, linearity, distortion, and such are appropriate whilst things like DNL/INL are not well-understood by audiophiles and are reflected in the higher-level measurements. Also note an Audio Precision audio analyzer is used that does not necessarily support the type of low-level DAC measurements a designer might perform, and since finished products are tested you cannot separate pure DAC performance from the buffers, supply, and such that are integrated in the box.

As for R2R precision, even trimming makes it very difficult to achieve as much as 16-bit performance, and virtually all high-resolution R2R DACs I have designed or seen are segmented designs with unary MSB cells to reduce the matching requirements to something practical and achievable in the real world. And the other parts of an R2R DAC, such as the switches and current cells (if used), reference circuit, voltage/current regulators, etc. must all be stable with high precision and linearity. At 16 bits and beyond, modulation of the internal switches (BJT or FET) due to changing voltage or current is a significant concern, as is their stability over temperature and such (local self-heating is a significant error source within the active switch devices, for example).
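A back-of-envelope sketch of why segmentation becomes necessary (my own arithmetic, not a figure from any specific design): in an unsegmented binary-weighted converter, keeping the major-carry transition inside 1/2 LSB pins down the MSB matching requirement.

```python
# Hypothetical worst-case estimate for a plain (non-segmented)
# binary-weighted DAC: at the major-carry transition, the MSB leg must
# match the sum of all lower legs to within 1/2 LSB of full scale.
def msb_tolerance_percent(bits: int) -> float:
    return 100.0 / (2 ** (bits + 1))

for n in (12, 16, 20):
    print(f"{n}-bit: MSB must match to ~{msb_tolerance_percent(n):.5f} %")
```

Even the 16-bit case lands in the parts-per-million range, which is why segmented unary MSB cells are used instead of relying on raw resistor matching.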

Delta-sigma converters were developed because few precision components are needed compared to other designs, and it is easier to implement them with digital processing circuits to provide error correction and compensation as well as filtering on modern IC processes. Oversampling and noise shaping provide incredibly high SNR by pushing quantization (and some other) noise out of the signal band. You could oversample an R2R DAC, using it as part of a delta-sigma design, to gain the benefits of noise shaping if desired.
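To make the noise-shaping idea concrete, here is a toy first-order delta-sigma modulator (purely illustrative; production modulators are higher-order, often multi-bit designs):

```python
def dsm1(x: float, n: int) -> list[int]:
    """Toy first-order delta-sigma modulator: accumulate the
    input-minus-feedback error, quantize to +/-1 each clock."""
    acc, y, out = 0.0, 0, []
    for _ in range(n):
        acc += x - y               # integrate the error vs. last output
        y = 1 if acc >= 0 else -1  # 1-bit quantizer
        out.append(y)
    return out

# The long-run average of the 1-bit stream converges on the DC input;
# the quantization error is pushed up in frequency, where the
# reconstruction filter removes it.
bits = dsm1(0.25, 10_000)
print(sum(bits) / len(bits))  # close to 0.25
```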
 

I would call the testing performed at ASR independent laboratory testing, because the real world has many pitfalls, including ground loops and noisy power, that are not tested.

I use Roon with Raspberry Pi 4 and Pi 5 endpoints, and the USB inputs do not sound quite right; I prefer AES, I2S, or coax on various DACs. I suspect that these differences may be due to my particular configuration and environment.
That said, I compared DACs level-matched (multimeter, within 0.02 V at 1 kHz) with quick A/B switching and switched inputs, and there are obvious (to me) sound differences.
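For what it's worth, a 0.02 V match is tight in dB terms. Assuming playback levels around 2 Vrms (my assumption; the post doesn't say), the worst-case level difference is:

```python
import math

# Worst-case level mismatch in dB for a given voltage tolerance.
def level_diff_db(v_ref: float, tolerance_v: float) -> float:
    return 20 * math.log10((v_ref + tolerance_v) / v_ref)

print(round(level_diff_db(2.0, 0.02), 3))  # roughly 0.086 dB
```

Under that assumption, the mismatch is well under the level differences usually considered audible, so the matching itself is not the weak point of the comparison.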

I now own two Topping D90 III Discrete DACs, so additional comparisons can be made between input types without too much worry about level matching, when time permits.
The D90 III Discrete sounds best (more revealing and natural to me) when compared with my Benchmark DAC 3, with the following signal chains:

Roon Rock -> ethernet -> Pi2AES -> AES -> D90 III Discrete -> Benchmark LA4 -> Revel Salon2s
Roon Rock -> ethernet -> Pi2AES -> COAX -> Benchmark DAC 3 -> Benchmark LA4 -> Revel Salon2s

For both DACs, USB sounds less well defined. Piano notes seem muted, violins a bit off. All subjective.
Not blinded, but well controlled.
If this assessment is correct, it means that USB and other factors exist in the wild and are not covered by the measurements provided by ASR.
I suppose it is experience with delivering and supporting hardware/software products in customer environments that leads me to look for possible reasons for differences rather than dismissing them. Dismissing customer issues is bad for business. ;)

I am not sure what the Topping D90 III Discrete actually is.
It is advertised as a 1-bit DAC, but others have said it is Delta-Sigma, so maybe it is a hybrid of sorts.
It does roll off 0.5 dB at 20 kHz, but I am certain that is well beyond my hearing's upper limit.
Connected via AES, the D90 III Discrete sounds fantastic.

- Rich
 
Not blinded, but well controlled.

I'm glad that you're trying to level-match. But, "well-controlled" implies the removal of all the major confounding variables, not just some. You're leaving a doozy of a confounding variable on the table if you're not doing it blind. Before you start questioning what's not covered by standard measurements and conjecturing as to why, try to verify that your perception of these differences is due to actual differences and not just your perception.
 
Eventually I will, but I will have to find a willing partner. I don't have family members who understand this particular hobby.
In my office system, I have a Shelly1 with a web app that is used to switch an ARX XLR switch. I *may* get around to writing an application that randomizes and records the results. Still, my A/B switching is quite good. Primarily, I use this to make personal determinations.
I make them, sometimes I share them. FWIW.
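The randomize-and-record application mentioned above could start as small as this sketch (hypothetical helper names; the scoring is the standard one-sided binomial test used for ABX runs):

```python
import random
from math import comb

def make_trials(n, seed=None):
    """Randomly assign X to A or B for each of n trials."""
    rng = random.Random(seed)
    return [rng.choice("AB") for _ in range(n)]

def p_value(correct, n):
    """Chance of getting at least `correct` of n trials right by guessing."""
    return sum(comb(n, k) for k in range(correct, n + 1)) / 2 ** n

# Example: 13 of 16 correct is usually taken as significant (p < 0.05).
print(round(p_value(13, 16), 4))  # 0.0106
```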

There are other factors affecting the effectiveness of blind testing: skill, hearing ability, pressure, and fatigue among them.
They may be the gold standard, but there are issues. Maybe I suck at it. :p
I think you would want multiple trained individuals, like Harman uses. I have a system for establishing that there is a difference and, perhaps, a preference.

I did a blind test listening to a single Salon2 bi-amped versus single-amped. Three different participants could distinguish the difference.
The wires were switched out of view. There were some comments on AVSForum from Kevin Voeks suggesting that bi-amping could yield a reduction in distortion.
So, not impossible. I promise you this changed no minds. The same experts weigh in: waste of time, waste of power, excess heat, fools bi-amping, etc.
This has greatly reduced my interest in proving anything to anyone.

Anyone willing to accept that there can be environmental impacts on DAC performance should be capable of understanding that ASR is not testing that.
For example, ground loops can become audible through transducers, but do we really know the extent to which some DACs may be degraded before buzz and hum set in?

- Rich
 
Looking into DAC architectures, you can argue that R2R is an "old" technology, but it is still the simplest out there.
I trust you will be throwing out your mobile phone and going with a wired phone. They are a million times simpler than a mobile phone!

I was going to suggest using an abacus after that instead of a calculator. But having used one, they are a lot harder to use so I don't suggest you do that....
 

If you are as concerned with methodological rigor as you seem to be, you'd have followed up those blind test results with measurements.

That's assuming of course that the rest of your blind test was tight, methodologically, which we can't tell from your brief description.
 
Then of course, you will be on board :p

- Rich
 
Don't discount the possibility that there just may be nothing to hear, and not that you're that bad at hearing it ;)
Certainly not. Don't discount that I do hear it. That would be fair, would it not?

- Rich
 

That’s not how it works. Science says you are likely misinterpreting the cause of your perception. I have to go by what science says, and not by what you say you might be hearing. At least until you can validate it with a proper ears-only test.
 
Certainly not. Don't discount that I do hear it. That would be fair, would it not?

- Rich
Actually, no. There is a well-known history of such claims, made with full knowledge of what is playing, not holding up when the comparison is done blind.
 
Fair enough, I'll work on my blind setup.
It will require some programming, but I have the hardware.

Just curious: do you believe that all green-zone DACs are indistinguishable in a user environment?

- Rich
 

Of course not. A DAC can be misconfigured, connected to other devices in a way that causes ground loops, driven beyond its load limits, used with recordings that are outside normal/expected bounds, or it can simply be malfunctioning. Users can do, and often do, all of these things. Which is why it's important to understand how DACs work and to know how to troubleshoot your setup if it's not working as expected... for example, if it happens to sound very different :)
 
Hello!
I still don't understand this SINAD chart! Are you saying I would enjoy listening more to a DAC ranked 'Excellent' than one in the 'Fair', 'Poor', or even 'Very Good' category? Putting enjoyment and cost aside, would the 'Excellent' one actually sound better (connected to the same equipment) in an audible, clear way to you, Amir?
 
This has greatly reduced my interest in proving anything to anyone.
There are hundreds of public, well-documented blind tests wherein the null hypothesis of no audible difference survived, but audiophiles mostly don’t change their minds. Vested sales interests and frantic cultists are in charge of most of the popular audio discussion outlets. This has greatly reduced scientific interest in DISproving any of it again.
 
Since the noise component of SINAD adds up a number of different non-signal ingredients, you should look at it as an overall indicator of engineering quality. See the post on this site regarding "Threshold of Audibility" to learn more about what level of distortion or noise might be audible to humans.


The whole left third of the graph is reliably “transparent” (ie you wouldn’t hear a difference from the signal itself, even if you perceive one due to some psychological factor, as discussed above).
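For a concrete (made-up) example of what the dB number means: SINAD is just the ratio of the signal's RMS to the RMS of everything that isn't the signal, expressed in dB:

```python
import math

# SINAD in dB from signal RMS and combined noise+distortion RMS.
def sinad_db(signal_rms: float, noise_plus_dist_rms: float) -> float:
    return 20 * math.log10(signal_rms / noise_plus_dist_rms)

# Illustrative numbers: a 2 Vrms output carrying 20 microvolts RMS of
# combined noise and distortion.
print(round(sinad_db(2.0, 20e-6), 1))  # 100.0 dB
```

At that level the non-signal residue sits far below anything speakers, headphones, or ears can resolve.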
 
Your speakers, headphones, and ears (unless you're 18 or under) all have worse resolving capabilities than a competent DAC. So as long as it's Yellow and up (a SINAD of about 90 dB, as it happens), they all sound the same to humans.
 
Nobody is saying you will enjoy listening to it more. It's just a technical measurement of distortion.
 