
Review and Measurements of Benchmark DAC3

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,784
Likes
37,678
Correct. It is the main point I made to AP: that you can't exclude what is actually in the signal the user gets when measuring linearity. Noise and distortion should be included to the extent they don't screw up the analyzer too badly.
I have the opposite opinion; I agree with AP. If we're measuring linearity, then measure that. We know the noise floor will interfere even if the chip puts out a signal of the proper level amongst the noise.
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,699
Likes
241,349
Location
Seattle Area
I have the opposite opinion; I agree with AP. If we're measuring linearity, then measure that. We know the noise floor will interfere even if the chip puts out a signal of the proper level amongst the noise.
Two issues:

1. You are measuring what we don't hear. We hear the total signal.

2. There are devices that nail the response to the level we are measuring.

Linearity is the ultimate test of a DAC: that it has a straight-line transfer function between input digital samples and output analog. That analog output must by definition include all contributions, including noise and distortion.

Checking just the level after removing all noise and distortion is an academic exercise devoid of real-world value.
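To make the two readings concrete, here is a rough numpy sketch (an illustration only, nothing to do with the AP software): one function reads only the 200 Hz FFT bin, the other takes the RMS of the whole capture, noise and distortion included. The sample rate, capture length and noise level are made-up values.

```python
import numpy as np

fs, n = 48000, 240 * 256          # 200 Hz lands exactly on an FFT bin
t = np.arange(n) / fs

def level_dbfs_bin_only(x, f0=200.0):
    """'Academic' reading: amplitude of the f0 bin alone, everything else ignored."""
    w = np.hanning(n)
    amps = 2.0 * np.abs(np.fft.rfft(x * w)) / np.sum(w)   # full-scale sine reads ~1.0
    return 20 * np.log10(amps[int(round(f0 * n / fs))])

def level_dbfs_total(x):
    """'Total signal' reading: RMS of the whole capture, noise and distortion included."""
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(rms * np.sqrt(2))                # referenced to a full-scale sine

# toy example: a -110 dBFS, 200 Hz tone buried in roughly -97 dBFS of noise
x = 10 ** (-110 / 20) * np.sin(2 * np.pi * 200 * t) + 1e-5 * np.random.randn(n)
print(level_dbfs_bin_only(x), level_dbfs_total(x))        # ~-110 dB vs ~-97 dB
```

The first number is what a noise-excluding linearity plot would report; the second is closer to what actually comes out of the jack at that setting.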
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,784
Likes
37,678
Two issues:

1. You are measuring what we don't hear. We hear the total signal.

2. There are devices that nail the response to the level we are measuring.

Linearity is the ultimate test of a DAC: that it has a straight-line transfer function between input digital samples and output analog. That analog output must by definition include all contributions, including noise and distortion.

Checking just the level after removing all noise and distortion is an academic exercise devoid of real-world value.
Couldn't a DAC with a negative error at low levels have that error offset by noise as you go lower, producing a nice-looking linearity plot? Admittedly that's not likely. Most of the time at lower levels you are measuring a noise issue more than linearity, which is the same reason I don't think much of the -90 dB sine wave test: look at the noise floor and you'll pretty much know how that is going to look.
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,699
Likes
241,349
Location
Seattle Area
Couldn't a DAC with a negative error at low levels have that error offset by noise as you go lower, producing a nice-looking linearity plot?
Repeatability would suffer in that case. That was indeed part of the analysis I went through to develop the pre-filter. As I noted, there is some 50 dB of noise reduction going on outside of the 200 Hz tone.
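The actual pre-filter lives inside the AP software; purely as a stand-in to show the idea, here is what a high-Q band-pass around the test tone looks like with scipy. The bandwidth and filter order are illustrative choices, not AP's settings:

```python
import numpy as np
from scipy import signal

fs = 48000
f0 = 200.0            # test tone under measurement
bw = 20.0             # passband width in Hz (illustrative, not AP's value)

# 4th-order Butterworth band-pass, as second-order sections for numerical stability
sos = signal.butter(4, [f0 - bw / 2, f0 + bw / 2], btype="bandpass", fs=fs, output="sos")

# how much gets rejected away from the tone, e.g. at 1 kHz
w, h = signal.sosfreqz(sos, worN=1 << 15, fs=fs)
idx_1k = np.argmin(np.abs(w - 1000.0))
print(f"rejection at 1 kHz: {20 * np.log10(np.abs(h[idx_1k])):.0f} dB")

# y = signal.sosfiltfilt(sos, x)   # apply to the captured block before reading its level
```

The exact rejection depends on the order and bandwidth chosen; the point is only that content away from 200 Hz is strongly suppressed before the level is read.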

Let's look again at the time-domain waveform, and this time let's forget about the Topping D50 and pay attention only to the stereo output of the DAC3 on the right:

[attached image: time-domain waveform, DAC3 stereo output]


As you see, the amplitude in one channel is lower than in the other. So regardless of what is being measured, one channel's output must be in error. And that tracks with the linearity measurement:

[attached image: DAC3 linearity measurement]


See how one channel has much more error than the other.

The difference is not in noise level, as the two sine waves (after the high-Q filter) look pretty clean.

BTW, thanks for the questioning. I want us to discuss this and arrive at some kind of consensus on what is going on.
 

Thomas savage

Grand Contributor
The Watchman
Forum Donor
Joined
Feb 24, 2016
Messages
10,260
Likes
16,306
Location
uk, taunton
Can’t we provide both, the clean (academic) value and the ‘real world’ value, or is that too much work?

It just seems that if we give both, folks can decide what they prioritise, and we avoid conflicting methodology wars.

I think providing both would give us a bit more to talk about too, as we can explore any disparity between those two values.
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
Repeatability would suffer in that case. That was indeed part of the analysis I went through to develop the pre-filter. As I noted, there is some 50 dB of noise reduction going on outside of the 200 Hz tone.

Let's look again at the time-domain waveform, and this time let's forget about the Topping D50 and pay attention only to the stereo output of the DAC3 on the right:

[attached image: time-domain waveform, DAC3 stereo output]


As you see, the amplitude in one channel is lower than in the other. So regardless of what is being measured, one channel's output must be in error. And that tracks with the linearity measurement:

[attached image: DAC3 linearity measurement]


See how one channel has much more error than the other.

The difference is not in noise level, as the two sine waves (after the high-Q filter) look pretty clean.

BTW, thanks for the questioning. I want us to discuss this and arrive at some kind of consensus on what is going on.

Thanks for this discussion, though I don't have the capacity to understand all of it.
 

FrantzM

Major Contributor
Forum Donor
Joined
Mar 12, 2016
Messages
4,377
Likes
7,881
I am following this with interest. The cliché "Made in China = junk" is slowly eroding. Topping are playing a mean game of objective correctness, and at a more than "correct" price.
I would love to see measurements of some audiophile darlings... Things like MSB or dCS.
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,250
Likes
17,200
Location
Riverview FL
an academic exercise devoid of real-world value.

Like...

LIGO
Supernovas, Neutron Stars, Magnetars, Black Holes
Neutrinos, Higgs Bosons
Exoplanets, SETI, UFOs
Shoutometer

Hey, I thrive on that kind of stuff...

I would love to see measurements of some audiophile darlings... Things like MSB or dCS.

JA has measured some of those.
 

gvl

Major Contributor
Joined
Mar 16, 2018
Messages
3,501
Likes
4,081
Location
SoCal
I don't quite understand how we can put a point on the linearity graph at low levels if we don't filter out the noise. Noise is random; are you just capturing a momentary voltage at an arbitrary time and calling it a day? Well, it's got to be AC, so are you averaging over some number of periods or something along those lines?
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,733
Likes
38,965
Location
Gold Coast, Queensland, Australia
The whole linearity curve should be considered IMO, not just the first point where it veers off by 0.1 dB, as some will come right back onto the axis for several more dB.

At the end of the day, it's all academic; we are well and truly down in the weeds (the noise floor) of everything following the D/A.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,784
Likes
37,678
I don't quite understand how we can put a point on the linearity graph at low levels if we don't filter out the noise. Noise is random; are you just capturing a momentary voltage at an arbitrary time and calling it a day? Well, it's got to be AC, so are you averaging over some number of periods or something along those lines?

Just a description of my thought process in making a linearity measure. I thought about how multi-bit DACs work. With 24 bits there are 24 individual bit weights, and you want to know what each one puts out. You turn one bit on and measure the level, turn it off and turn the next lower bit on, and measure that. If analog noise or something else intrudes, I wanted to filter that out. So I decided to create a 1/4-sample-rate tone (12 kHz at 48 kHz sampling). Why? Because each period of that tone has one positive sample, one negative sample, and two zero samples. You are literally asking for only one single output value at each level if you make those levels in even increments of 6.02 dB.
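A rough sketch of that kind of stimulus (illustrative only, not the exact test signal): an fs/4 tone whose period is exactly [0, +A, 0, -A], stepped down in 6.02 dB, i.e. one-bit, increments:

```python
import numpy as np

fs = 48000                        # sample rate
periods = 1000                    # length of each test tone, in fs/4 (12 kHz) periods

# an fs/4 sine sampled at the right phase is exactly [0, +A, 0, -A] every period
unit = np.tile(np.array([0.0, 1.0, 0.0, -1.0]), periods)

# step the amplitude down one bit (6.02 dB) at a time, 0 dBFS down to about -120 dBFS
levels_db = -20 * np.log10(2) * np.arange(21)
tones = {db: 10 ** (db / 20) * unit for db in levels_db}

# a perfectly linear DAC would reproduce each of these 6.02 dB steps exactly;
# the measured output level at each step is compared against levels_db
for db in levels_db[:3]:
    print(f"requested {db:7.2f} dBFS, peak sample {tones[db].max():.6e}")
```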

Now, multi-bit DACs do have issues with this, and such a test is likely a very good one for them. It's probably a sensible test for the Yggy in addition to Amir's method, which sounds like where AP, Jude and Amir disagree somewhat.

On the other hand, it was just this issue that sigma-delta DACs were designed to fix, and fix it they appear to have done. Even the rather poor Emotiva home-theater results were near perfect until 17 bits down, and below that noise was obviously the issue, for less than 1 dB of error. Everything else I've measured, which is a handful of DACs, is pretty much dead on the money to the 20-bit level. Plus, delta-sigma DACs aren't turning on an individual bit for each level; that just isn't how they work. Another thing that can cause linearity issues with multi-bit DACs is how quickly the bits turn on and off. Again, sigma-delta DACs run much faster at higher internal rates, so that isn't much of an issue.

So I understand Amir saying everything involved should be included, because you need linearity for an accurate transfer function of the wave. Even if a perfect wave leaves the DAC chip, if subsequent analog stages inject noise, leak power-supply modulation, etc., it is all of that you hear. And those issues can kink the transfer function, so to speak.

Yet if the truth, which we need to verify since boutique designs make odd mistakes or do unusual things, is that nearly all sigma-delta chips put out an unkinked, straight, accurate signal and only the subsequent stages put the kinks in, then it is those other areas we should look at for differences. Linearity measured with the noise included, in my way of thinking, just confuses the real issues: power supplies and low-noise, low-distortion analog circuits. Showing the -90.3 dB sine wave, to make it easy to see how clean a given design is or is not even if you don't know much about such things, is maybe a good idea. You can see directly how one unit gets it right and another has it messed up.

All of that is also why I like showing the sweeps. You set the background level at -100 dB or -120 dB, and anything above those levels is visible: jitter sidebands, idle tones, distortion, imaging.
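For anyone wanting that kind of presentation, a bare-bones matplotlib sketch; the frequency and level arrays here are placeholders and would come from your own FFT or an analyzer export:

```python
import numpy as np
import matplotlib.pyplot as plt

# placeholder data; in practice freqs/levels come from an FFT or analyzer export
freqs = np.linspace(20, 20000, 2000)
levels = -130 + 3 * np.random.randn(freqs.size)          # fake noise floor
levels[np.argmin(np.abs(freqs - 1000))] = -3             # fake 1 kHz test tone

fig, ax = plt.subplots()
ax.plot(freqs, levels)
ax.set_xscale("log")
ax.set_ylim(-120, 0)        # display floor: only spurs above -120 dB show up
ax.set_xlabel("Frequency (Hz)")
ax.set_ylabel("Level (dBFS)")
plt.show()
```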

And yes, I am not being contentious with Amir or anyone here; I hope it doesn't come off sounding that way. It is a matter of discussing how things are done. I learn a lot that way (especially when I put up something wrong and it is pointed out). BTW, I'm glad to see an explanation of how AP does linearity using 200 Hz tones and 2 dB steps.
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,699
Likes
241,349
Location
Seattle Area
Can’t we provide both, the clean (academic) value and the ‘real world’ value, or is that too much work?
You held the key to the issue, Thomas, and you just didn't know it!!! :D

First, here is the academic value. To get that, I perform a high-resolution FFT and look only at our excitation frequency of 200 Hz, ignoring everything else. I do that at 0 dBFS (full amplitude), -100, -110 and -120 dBFS:

[attached image: DAC3 FFT, 200 Hz, unfiltered]


These are stereo sweeps. So if everything checks out in both channels, the left and right graphs should land right on top of each other. Such is the case at 0 dBFS (full amplitude).

At -100 dBFS, I think the level has dropped a bit, but let's call that still good.

At -110 dBFS, we see the two channels have started to drift. So clearly one of them is showing an incorrect value.

By -120 dBFS, where I have circled, the two channels drift apart quite a bit, with one at -125 dBFS and the other at maybe -122 dBFS. Looking at my linearity measurements, we see the same thing:

[attached image: DAC3 linearity measurement]


So at the end of the day, there is no difference between my test and the ideal case. Both correctly show that the output level of the Benchmark DAC3, using its unbalanced analog output, starts to lose linearity, and by -120 dBFS the error is significant.

The linearity graph shows this error as a difference from the ideal level, so the problem is much more obvious than when looking at FFTs.
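As a rough sketch of that bookkeeping (an illustration, not the AP software's internals): read the 200 Hz bin for each capture, subtract the requested level, and compare the two channels:

```python
import numpy as np

def level_dbfs(x, fs, f0=200.0):
    """Amplitude of the f0 bin only (Hann window), in dBFS."""
    n = len(x)
    w = np.hanning(n)
    amps = 2.0 * np.abs(np.fft.rfft(x * w)) / np.sum(w)
    return 20 * np.log10(amps[int(round(f0 * n / fs))])

def linearity_errors(left_caps, right_caps, requested_dbfs, fs, f0=200.0):
    """For each requested level, return (level, left error, right error) in dB.
    An error of 0 dB means that step was reproduced exactly."""
    return [(req,
             level_dbfs(xl, fs, f0) - req,
             level_dbfs(xr, fs, f0) - req)
            for req, xl, xr in zip(requested_dbfs, left_caps, right_caps)]

# e.g. a channel that reads -125 dBFS when asked for -120 dBFS shows a -5 dB error,
# which is the kind of channel-to-channel difference visible in the plots above
```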

So what is academic is our current argument. :) In this case, there is no difference. One channel is worse than the other, so that alone indicates an accuracy error at these low levels.

Thanks for staying on me folks to get this sorted out. :)
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,699
Likes
241,349
Location
Seattle Area
I don't quite understand how we can put a point on the linearity graph at low levels if we don't filter out the noise. Noise is random; are you just capturing a momentary voltage at an arbitrary time and calling it a day? Well, it's got to be AC, so are you averaging over some number of periods or something along those lines?
Good point. There is actually a "settling algorithm" behind the scenes in the Audio Precision software. Indeed, I have had to optimize this to get stable values at very low measured levels. It has a choice of averaging or attempting to arrive at minimal error using multiple measurement points. There is also a settling time to make sure everything becomes stable after a parameter change before being measured.
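AP's settling logic is its own; purely to illustrate the general idea (keep taking readings until the last few agree within a tolerance, then average them), here is a sketch where read_level stands in for whatever returns one measurement:

```python
import statistics

def settled_reading(read_level, tolerance_db=0.05, window=4, max_reads=100):
    """Keep taking readings from read_level() until the last `window` of them
    agree within `tolerance_db`, then return their average. Illustrative only;
    not how the AP software actually does it."""
    readings = []
    for _ in range(max_reads):
        readings.append(read_level())
        recent = readings[-window:]
        if len(recent) == window and max(recent) - min(recent) <= tolerance_db:
            return statistics.mean(recent)
    # never settled within max_reads: fall back to averaging everything collected
    return statistics.mean(readings)
```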
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,699
Likes
241,349
Location
Seattle Area
Now, multi-bit DACs do have issues with this, and such a test is likely a very good one for them. It's probably a sensible test for the Yggy in addition to Amir's method, which sounds like where AP, Jude and Amir disagree somewhat.
FYI, I recreated Jude's/AP's settings for this latest analysis and there is no practical difference. Here are my method and theirs overlaid on top of each other:

[attached image: linearity, Amir's method and AP/Jude's method overlaid]


You can see there is a problem with his method, creating those dips all along the line to the right, but the overall message below -90 dB remains the same.
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,699
Likes
241,349
Location
Seattle Area
And for one more sanity check, here is the same FFT, this time augmented with the Topping D50 being tested at 203 Hz so as to shift its response to the right, letting us see both:

[attached image: FFT with Topping D50 added at 203 Hz]


Notice how the Topping peaks land on the correct values and the two channels are identical in value down to the -120 dB line.
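The 3 Hz offset works because, with a long enough capture, the two test tones land in clearly separate FFT bins. A quick back-of-the-envelope check (the capture length here is an assumption, not the actual measurement setting):

```python
fs = 48000
n = 1 << 18                     # ~5.5 s capture, chosen only for illustration
resolution = fs / n             # Hz per FFT bin

bins_apart = round(3 / resolution)
print(f"{resolution:.3f} Hz/bin -> 200 Hz and 203 Hz peaks are ~{bins_apart} bins apart")
# plenty of separation, so both devices' traces can be overlaid without the
# peaks sitting on top of each other
```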
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,784
Likes
37,678
Now, how audible is a dip of a decibel or two below -90 dBFS? I agree that if sub-$500 DACs can regularly manage it, a $2k-and-up DAC should as well.

But are we making a tempest in a teapot? A 200 Hz, -60.2 dB sine wave would have only 7 out of 240 samples with a value of -90 dB or less. If those are kinked by a dB or so among all the other values, it doesn't amount to much. Yes, the Topping is an excellent result and puts pressure on everyone else to justify their expensive DACs. I'm hoping it indicates we soon need not even concern ourselves with DACs: buy the good ones and move on.

PS: I liked the above way of graphing linearity results much better than what people normally use. You might not want all the values on it, but showing the few where DACs go off course near the lower levels is good.
 