
On DAC Linearity Measurement

SIY (Grand Contributor, Technical Expert)
There's been a lot of discussion about DAC linearity measurements in various threads, so I thought I'd throw in my opinions and a few observations to support my thinking. I want to start by thanking Amir, who has generously shared project files, filters, and anything else I've asked for to try replicating what he's doing. For the measurements, I used an APx525 with every bell and whistle (with a major hat tip to AP and AudioXpress for making this wonderful equipment available to me).

The test mule here is a Behringer UMC404HD, which is a combo ADC/DAC with four mic preamps, selling for under $100. The focus here should be on what results are gathered and how they're interpreted, which seems to be the gist of the arguments.

The basic measurements are as shown here:

Distortion and noise- Behringer.png

(Don't mind the phase reversal between channels, that's because I only had two XLR-to-TRS cables and one was wired backward)

A couple of things to note: this is at best a 99.5 dB S/N device; good enough for CD, but not exactly state of the art. And the output at full scale is pathetically low; a reasonably healthy output (3-5 V) would probably have squeezed out another 6-10 dB of S/N.

Regarding jitter, I posted the J-Test results in an earlier thread, but here they are again for reference:

J-Test_ Mystery DAC (UMC404HD).png


So I think the linearity results won't get overshadowed by other defects. And for purposes of clarity, I thought it might be nice to see what the baseline linearity of the APx525 itself is. Unfortunately, you can't do exactly the linearity test that you would with a DAC, but you can do an analog loopback, which is certainly going to be a worst case. I used 0 dBu as my reference, so to translate to dBV (which is close to dBFS for the UMC404HD), subtract about 2.2 dB. I ran this with and without the narrow bandpass filter Amir uses, so you can see the effect of the noise floor on this measurement.

Effect of Filtering- AP Loopback.png
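As an aside on reading the generator scale above: the "about 2.2 dB" dBu-to-dBV offset is just the ratio of the two reference voltages. A quick back-of-envelope check (plain Python, nothing to do with the AP software):

```python
import math

# 0 dBu is referenced to 0.7746 V RMS (1 mW into 600 ohms); 0 dBV is referenced to 1 V RMS.
v_dbu_ref = 0.774597   # volts RMS at 0 dBu
v_dbv_ref = 1.0        # volts RMS at 0 dBV

offset = 20 * math.log10(v_dbv_ref / v_dbu_ref)
print(f"0 dBV sits {offset:.2f} dB above 0 dBu")   # ~2.21 dB, so a dBu figure reads ~2.2 dB lower in dBV
```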


OK, let's kick things off with Amir's Basic Linearity Test, which uses the bandpass filter. I ran it twice in order to get an idea of what the repeatability is:

Linearity_ Repeatability.png


By Amir's criterion, this loses linearity at about -95dBFS. But is it really becoming nonlinear? Let's take off the bandpass filter:

Effect of Filtering.png


So, it's clear from the last two graphs that the noise floor is a really big deal, as expected in a broadband measurement- you expect this to increase at low signal levels. But what about the "linearity" with the filter on? How much of that is affected by noise, and how much by actual nonlinearity? Does it matter? First, let's do a power average of multiple runs (16 in this case), which will tend to cancel noise (by a factor of √N, where N is the number of runs) and bring out whatever nonlinearity is inherent to the DAC.

UMC404HD Linearity_ Power Average.png


Looks like the linearity now holds up to better than -116dBFS. And I say "better than" because more averages might reduce the deviation even further. I would have done 64 averages to see if the bump below -116dB halved in amplitude, but that's a slow process for old and software-stupid people who can't write macros to let it run unattended. But I think the point is adequately demonstrated- the deviation of the linearity curve in this kind of measurement is noise-dominated, even with the bandpass filter.
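To make the √N point concrete, here is a minimal numpy simulation (not the APx procedure; the -110 dBFS tone and -100 dBFS noise levels are invented for illustration). Power averaging leaves the mean noise power where it is, but it shrinks the run-to-run spread of each bin by roughly √N, which is what lets the averaged sweep resolve tone levels well below the single-run fluctuation:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 48000, 32768
t = np.arange(n) / fs
tone = 10**(-110/20) * np.sin(2*np.pi*997*t)    # hypothetical -110 dBFS test tone
noise_rms = 10**(-100/20)                       # hypothetical broadband noise, -100 dBFS RMS

def power_spectrum():
    # one "run": tone plus fresh noise, windowed FFT, power per bin
    x = tone + rng.normal(0, noise_rms, n)
    X = np.fft.rfft(x * np.hanning(n))
    return np.abs(X)**2

for N in (1, 16, 64):
    avg = np.mean([power_spectrum() for _ in range(N)], axis=0)
    noise_bins = avg[5000:6000]                 # bins well away from the 997 Hz tone
    print(f"N={N:2d}  noise-bin spread (std/mean) = {noise_bins.std()/noise_bins.mean():.3f}")
# the relative spread falls roughly as 1/sqrt(N): ~1.0, ~0.25, ~0.125
```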

The linearity vs noise thing really came to mind when Amir showed spectra of a single tone taken at lower and lower levels from one particularly questionable DAC. At a certain point, reducing the level at the DAC didn't result in the analog output signal getting smaller, i.e., the transfer function breaks into a horizontal line. That is something I would call a nonlinearity!

So juuuust for fun, I used that trick to see how well the DAC tracks at low levels. And whaddaya know.

Single Tone Linearity_ UMC404HD.png


32 averages knocks down the noise floor and shows that the tracking of the tone's levels is really excellent.
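Just to illustrate the failure mode being described (the readback going flat while the digital level keeps dropping), here is a toy numpy sketch. The clamp at -105 dBFS and the noiseless signal path are pure inventions for illustration, not a model of the UMC404HD:

```python
import numpy as np

fs, n = 48000, 65536
k = 1361                       # pick an exact FFT bin near 1 kHz
f0 = k * fs / n                # ~996.8 Hz, so there is no spectral leakage
t = np.arange(n) / fs

def tone_level_dB(x):
    # read back the amplitude at the tone bin, in dBFS
    X = np.fft.rfft(x) / (n / 2)
    return 20 * np.log10(np.abs(X[k]))

for set_dB in range(-80, -131, -10):
    a = 10**(set_dB / 20)
    ideal   = a * np.sin(2*np.pi*f0*t)                        # tracks 1:1
    clamped = max(a, 10**(-105/20)) * np.sin(2*np.pi*f0*t)    # hypothetical floor at -105 dBFS
    print(f"set {set_dB:4d} dBFS   ideal {tone_level_dB(ideal):7.1f}   clamped {tone_level_dB(clamped):7.1f}")
```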

It can be argued: who cares about this? If the noise dominates below -99 dB, the inherent DAC linearity is unimportant. Perhaps so, but it's not hard to demonstrate (try it yourself!) that we can detect tones well below a random noise floor. For example, let's look at a sine wave coming out of the DAC at -100 dB:

-100 dB Sine_ UMC404HD.png


Still pretty sine-y. So it's not implausible that this would be clearly audible through the noise, assuming your gain is turned up high enough. Let's delve deeper:

-125 dB Sine_ UMC404HD.png


It's quite a bit harder to see the sine wave in here, but surprisingly, a sine this small relative to the noise floor can still be perceived IF the gain is turned up (I proved this to myself with a somewhat higher overall level, headphones, and some fierce concentration). Is that realistic for normal listening? I don't think so, but others may have better ears and listening environments than I do.
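For anyone who wants to take up the "try it yourself" invitation without an analyzer, here is a minimal scipy sketch. The -100 dBFS tone and -90 dBFS noise levels are arbitrary, and this is detection by filtering rather than by ear:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
t = np.arange(5 * fs) / fs
rng = np.random.default_rng(1)

tone  = 10**(-100/20) * np.sin(2*np.pi*1000*t)   # -100 dBFS sine
noise = rng.normal(0, 10**(-90/20), t.size)      # broadband noise at -90 dBFS RMS
x = tone + noise

def rms_dB(sig):
    return 20*np.log10(np.sqrt(np.mean(sig**2)))

# a 100 Hz wide bandpass around the tone throws away most of the noise power
sos = butter(4, [950, 1050], btype="bandpass", fs=fs, output="sos")
print("broadband level :", round(rms_dB(x), 1), "dBFS")               # dominated by the noise
print("in 100 Hz band  :", round(rms_dB(sosfilt(sos, x)), 1), "dBFS")  # the tone is now on top
```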

The key point I want to make here is that the traditional "linearity" measurement is completely entangled with noise, but a few additional measurements can distinguish noise (which at low levels may not be objectionable) from intrinsic nonlinearity (which might well be objectionable). The linearity test that Amir does, and similar ones from hifi magazines, is a necessary test and a good overall look, but a few extra measurements really tease out how a DAC is behaving, how (and if!) it can be improved, and honestly they don't take much more time.

All things being equal, I think it's better to have an intrinsically linear DAC and then work on reducing the noise floor, but that's just my own prejudice. Oh, and by the way: other than the low output, for $99 this DAC is pretty good.
 

amirm (Founder/Admin)
And oh, on the scale: I use ±5 dB. It is not overly amplified; if anything, things look way under-amplified on the ±10 dB scale that others use. The scale is the default from my old analyzer and the one that JA uses in Stereophile (on the odd occasion he does a linearity test).
 
SIY (OP)
Note that, other than one graph, I used the same ±5 dB range that you did, just for consistency. Also note that I used the Behringer ASIO driver, which I should have mentioned given the recent accusation that ASIO4ALL might cause artifacts with some DACs.

The old Stereophile measurements always made me wonder if they were conflating noise and linearity, and your recent posts got me thinking about it again. The difference is that now I can go in and find out for myself...
 

Wombat (Master Contributor)
General question. Are the test results posted by various members in accordance with equivalent protocols and test equipment?
 

Blumlein 88 (Grand Contributor, Forum Donor)
General question. Are the test results posted by various members in accordance with equivalent protocols and test equipment?

Speaking for my results, I used a 12 kHz tone rather than 1 kHz or 400 Hz. The reason is that such a tone has only one bit level positive and negative, plus a 0; I was attempting to use only one bit level of the available 24 bits at any time. This would be considered a DNL, or differential non-linearity, measurement. The other way of doing this would be INL, or integral non-linearity, where you slide between steps of -6.02 dB per bit. With multi-bit DACs you usually get better DNL results than INL, though neither is likely to be perfect at lower levels. With sigma-delta DACs you would expect better linearity, as that was one purpose for going to those. I also check my levels for each bit turned on with an FFT, which steeply filters out noise. To me, noise is one issue: the DAC chip can be good yet swamped in noise from other sources, and separating the two is, in my opinion, the purpose of this test.
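A quick illustration of why the 12 kHz tone is handy, assuming a 48 kHz sample rate (the post doesn't state the rate, so that's my assumption): each cycle is only four samples, and with the right phase they are just +code, 0, -code, 0, so a single code value gets exercised in both polarities:

```python
import numpy as np

fs, f0, bits = 48000, 12000, 24
n = np.arange(8)                                    # two cycles of the 12 kHz tone
x = np.sin(2*np.pi*f0*n/fs)                         # samples: 0, +1, 0, -1, ...
codes = np.round(x * (2**(bits-1) - 1)).astype(int)
print(codes)                                        # [0  8388607  0 -8388607  0  8388607  0 -8388607]
print("step per bit:", round(20*np.log10(2), 2), "dB")   # ~6.02 dB
```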

Now, I've tested a 2004 Harman-Kardon AVR via its preamp outs. It was pretty much perfect to the 18th bit. It had a decibel of error at 19 bits and 2 dB of error at 20 bits, and that was more than just noise intruding. I've checked the 2014 Marantz AV pre/pro, which was perfect to the 19th bit and a tiny bit off at the 20th. I've checked four recording interfaces which are basically perfect to the 20th bit; two of them, which have lower noise, are just as good to the 21st bit and not much off at the 22nd, where the reading is being affected by noise even with the FFT being used. I've checked two of the recording interfaces using a 440 Hz tone and the results were pretty much the same down to 20 bits' worth.

So I think SIY's, BE718's and my results would not be much different. Amir's method could differ more. So they aren't interchangeable.

AES17 is something of a testing standard. You could read about the 1998 version here.

https://www.ak.tu-berlin.de/fileadmin/a0135/Unterrichtsmaterial/KT-Labor_WS0809/1_ADDA/aes17.pdf

There have been some minor changes since then, but largely it is the same.

This describes doing a logarithmic gain measurement using 997 Hz tones for gain linearity: start at -5 dB and go down in 5 dB (or smaller) steps. Pass the result through a 1/3-octave bandpass filter. They note that if noise is a problem, one could use tighter than 1/3-octave filtering. Continue until getting within 5 dB of the noise level in the bandpass filter.
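Here is a rough sketch of that procedure in scipy, for anyone who wants to play along at home. The "device" is simulated as the ideal tone plus white noise at -100 dBFS RMS, which is purely an invention to exercise the code, not any real DAC:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs, f0 = 48000, 997.0
t = np.arange(2 * fs) / fs
rng = np.random.default_rng(2)
third_octave = butter(4, [f0 / 2**(1/6), f0 * 2**(1/6)], btype="bandpass", fs=fs, output="sos")

def measure(set_dBFS):
    # simulated DUT: ideal tone at the requested level plus -100 dBFS RMS white noise
    x = 10**(set_dBFS/20) * np.sin(2*np.pi*f0*t) + rng.normal(0, 10**(-100/20), t.size)
    y = sosfilt(third_octave, x)[fs:]                          # drop the filter settling
    return 20*np.log10(np.sqrt(2) * np.sqrt(np.mean(y**2)))    # RMS reading, peak-referred

noise_floor = measure(-200)                                    # in-band noise, tone negligible
for level in range(-60, -141, -5):
    reading = measure(level)
    if reading < noise_floor + 5:
        print(f"stopping: within 5 dB of the in-band noise ({noise_floor:.1f} dBFS)")
        break
    print(f"set {level:4d} dBFS   read {reading:7.1f} dBFS   deviation {reading - level:+5.2f} dB")
```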

So what is the right method? I've noted that, in my opinion, Amir is letting noise interfere too much with his method, though I suppose his method is closer to AES17. As with all measurements, the usefulness is in the details. Amir wishes to keep things simple for his reviews, and I agree with that goal. Atomicbob's measurements make most people's eyes glaze over from too much detail (though for me, I love that).

If everyone used AES17, at least the results would be comparable. Manufacturers, of course, sometimes leave out details in order to quote a great-looking spec which may or may not indicate something useful. Independents can dig into the details in any number of ways to tease out any flaw. So, as usual... things are complicated.

Me, on linearity: pretty much any sigma-delta unit is good on that if you filter out noise. On any sigma-delta based DAC, I'm more interested in the basic noise floor and any signal-related sidebands that show up. Can anyone point to a multi-bit DAC that equals a sigma-delta on linearity without costing 10x more? In audio, is there any possible audible benefit leading to better fidelity from multi-bit DACs? Sorry, getting off topic now.
 

Wombat (Master Contributor)
(quoting Blumlein 88's full post above)

Thank you. :)
 

amirm (Founder/Admin)
Looks like the linearity now holds up to better than -116dBFS. And I say "better than" because more averages might reduce the deviation even further. I would have done 64 averages to see if the bump below -116dB halved in amplitude, but that's a slow process for old and software-stupid people who can't write macros to let it run unattended. But I think the point is adequately demonstrated- the deviation of the linearity curve in this kind of measurement is noise-dominated, even with the bandpass filter.
Actually, that is not the only explanation. Much of the time when I analyze the output of the DAC at very low amplitudes, what I see is that the voltage is being modulated. Often this is due to power supply hum but other sources could also do it. In that situation, averaging is doing just that: averaging the value that is jumping up and down. It is not getting rid of noise. Noise is non-existent with the filter I use.

When I get a chance I will post some animated GIFs showing the above.

Since we want to know about this kind of variability, averaging is not a good idea as it hides that.
 
SIY (OP)
Actually, that is not the only explanation. Much of the time when I analyze the output of the DAC at very low amplitudes, what I see is that the voltage is being modulated. Often this is due to power supply hum but other sources could also do it. In that situation, averaging is doing just that: averaging the value that is jumping up and down. It is not getting rid of noise. Noise is non-existent with the filter I use.

When I get a chance I will post some animated GIFs showing the above.

Since we want to know about this kind of variability, averaging is not a good idea as it hides that.

But that's exactly the point of adding a separate power-averaged measurement to a single sweep- it teases out whether the "linearity" deviation of the single sweep is due to nonlinearity or noise. After all, if the value is "jumping up and down," that's noise. The averaging does indeed show the intrinsic linearity, which the sine amplitude vs. level curves confirm; it's analogous to filtering out the noise in a THD+N measurement to isolate THD. Yes, there are other ways of finding this out (e.g., the stepped-amplitude sine spectra); I just prefer the power average to supplement the single-run linearity trace because it's faster than doing the amplitude vs. level sweep (lacking a macro to do that automatically). Note that all through this, I use the term "supplement" rather than "substitute"!

I believe that the measurement time aperture for the AP is increased at low levels- do you know if this is indeed the case?
 

Ron Texas (Master Contributor)
My question is: how are improvements in linearity perceived by our silly blood-powered human ears? Is 0.1 dB deviation the best measure, or would 0.5 dB make more sense, given that 1 dB is supposed to be the threshold of hearing? There really are some technical wizards around here. I wouldn't know how to hook that test equipment up, let alone understand any of it, without the experts around here giving an explanation.
 

amirm (Founder/Admin)
After all, if the value is "jumping up and down," that's noise.
The jumping up and down is because the measurement sampling is random. Beyond that, one of the main reasons the values move up and down is pollution from single-tone frequencies like mains. These degradations are real, and we want them included in our measurements.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,302
Likes
233,653
Location
Seattle Area
My question is: how are improvements in linearity perceived by our silly blood-powered human ears? Is 0.1 dB deviation the best measure, or would 0.5 dB make more sense, given that 1 dB is supposed to be the threshold of hearing? There really are some technical wizards around here. I wouldn't know how to hook that test equipment up, let alone understand any of it, without the experts around here giving an explanation.
For the purposes of blind testing, we like to match levels to better than 0.1 dB. So in that context, the threshold is lower.

However, there is complexity here in that the linearity error gets higher and higher, as levels go lower and lower. So to hear that differential, you need to have low level signals without anything loud masking them.
 

Ron Texas (Master Contributor)
For the purposes of blind testing, we like to match levels to better than 0.1 dB. So in that context, the threshold is lower.

However, there is complexity here in that the linearity error gets higher and higher, as levels go lower and lower. So to hear that differential, you need to have low level signals without anything loud masking them.

That's the "why use 0.1 dB" part of the question. How do we perceive changes in linearity?
 

amirm (Founder/Admin)
How do we perceive changes in linearity?
Depends on nature of linearity. Before the Schiit Yggdrasil was taken away, I did some testing of playing low amplitude tones. It created a ton of harmonic distortion, completely changing the tone I was playing. It pitched it toward higher frequency. It also made it louder because it had generated additional spectrum due to its non-linearity.
 

Ron Texas (Master Contributor)
Depends on nature of linearity. Before the Schiit Yggdrasil was taken away, I did some testing of playing low amplitude tones. It created a ton of harmonic distortion, completely changing the tone I was playing. It pitched it toward higher frequency. It also made it louder because it had generated additional spectrum due to its non-linearity.

Perhaps this is the artificial detail some mention.
 

amirm (Founder/Admin)
Perhaps this is the artificial detail some mention.
The testing was done at levels below -90 dB. And reamplified back up by the same amount. I don't think anyone hears what I heard. :)
 

awdelft (Member)
Does the DAC linearity test exercise the built-in volume control (e.g., of the Khadas Tone), or is the (USB) test signal itself generated at reduced resolution?
Actually, isn't the XU208 processor doing the same thing when the volume is decreased?
I suppose that where there is a volume control (again, as with the Khadas Tone), it would be interesting to test its quality too.

By the way, I am surprised by how different (I think?) the Khadas Tone sounds compared to my Asus DX, which is itself a not-too-bad soundcard.
 

edechamps (Forum Donor)
Does the DAC linearity test exercise the built-in volume control (e.g., of the Khadas Tone), or is the (USB) test signal itself generated at reduced resolution?

It's the test signal. Volume control linearity can be an interesting measurement, but it's a different thing.

Actually, isn't the XU208 processor doing the same thing when the volume is decreased?

Depends on whether the volume control is digital or if it controls analog gain.

This is also why I suspect the audibility of poor linearity is under-rated in typical scenarios: there are plenty of cases where a digital volume control is combined with a DAC that has linearity issues. Linearity issues at -90 dBFS might not be easy to hear, but if you use a digital volume control to reduce the sample amplitude by, say, -30 dB, then the non-linearity de facto starts at -60 dBFS, which is something else entirely. (Though to be fair, dynamic range might be even more of an issue at this point, especially if the noise floor is not "clean".)
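To put rough numbers on that, a trivial back-of-envelope calculation (the -90 dBFS onset is a made-up example, not a measurement of any particular DAC):

```python
# With a hypothetical DAC that misbehaves below -90 dBFS and a -30 dB digital
# volume setting, any program content quieter than -60 dB relative to the
# recording's full scale lands in the misbehaving region.
dac_trouble_dBFS = -90        # hypothetical onset of non-linearity at the DAC
digital_volume_dB = -30       # attenuation applied in the digital domain
affected_below = dac_trouble_dBFS - digital_volume_dB
print(f"content below {affected_below} dB re. recording full scale hits the non-linear region")
```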

Nice discussion, by the way. I recently built a set of tools to measure linearity using SoX (and Python scipy/matplotlib for visualization). However I did not yet try to use it in a real-world practical measurement scenario (aside from testing my QA401 in loopback).

One thing that I'm not sure I understand about the linearity measurements that @amirm and others are doing is which bit depth is the DAC running at during the measurement - 16-bit or 24-bit? I'm asking because to me that seems like a pretty fundamental piece of information to understand and interpret the results. While writing these scripts I seemed to hit some kind of hard limit of -110 dBFS in the presence of pure digital 16-bit dithering noise (not even going through a DAC/ADC), even if I try to tweak the bandpass filter. With 24-bit my "homemade" analyzer can theoretically resolve down to -160 dBFS, and I measured -125 dBFS on the QA401.
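On the 16-bit question, one rough sanity check is to estimate the in-band level of TPDF dither plus quantization noise for a given analysis bandwidth; a filtered linearity reading will start to bottom out as the tone approaches that figure. This is only a ballpark (the actual floor depends on the filter shape, windowing, and averaging), and it is not a claim about what limits the SoX scripts specifically:

```python
import math

bits = 16
fs = 48000
q = 2.0**(1 - bits)                         # quantizer step for full scale = +/-1
noise_rms = q / 2                           # TPDF dither + quantization error: ~q/2 RMS total
broadband_dBFS = 20*math.log10(noise_rms)   # ~ -96.3 dBFS re. full-scale peak

for bw in (1000, 231, 10):                  # analysis bandwidths in Hz (231 Hz ~ 1/3 octave at 1 kHz)
    inband = broadband_dBFS - 10*math.log10((fs/2) / bw)
    print(f"bandwidth {bw:5d} Hz -> in-band dither noise ~ {inband:6.1f} dBFS")
```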
 
SIY (OP)
One thing that I'm not sure I understand about the linearity measurements that @amirm and others are doing is which bit depth is the DAC running at during the measurement - 16-bit or 24-bit?

In this particular case, 24 bit, as one can infer from the single tone spectrum using stepped amplitudes.
 