
On DAC Linearity Measurement

Blumlein 88
Here are some linearity measurements I've done on a March DAC 1. With lower noise levels you can see further down in level.
First is a spectrogram of the lowest bits. I used a 12 kHz tone so that only one bit level is turned on at any time; I've marked the bits. The spectrogram is set so that the background goes light gray at -152 dBFS. Using a 32k FFT pushes the per-bin noise floor below this so the test tone can stand out. The vertical bars are just markers.
March lowest 5 bits spectrogram.png
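For scale, here is a minimal numpy sketch of why the 32k FFT helps: spreading broadband noise over N/2 bins drops the per-bin floor by roughly 10*log10(N/2) ≈ 42 dB, so a tone far below the broadband noise can still stand clear of the display threshold. The sample rate, noise level, and tone level below are illustrative assumptions, not values taken from this measurement.

```python
import numpy as np

fs, n_fft = 48000, 32768                       # assumed sample rate; 32k FFT as above
rng = np.random.default_rng(1)

# Broadband noise at an assumed -113 dBFS RMS (re: a full-scale sine's RMS)
noise = 10 ** (-113 / 20) / np.sqrt(2) * rng.standard_normal(n_fft)

# A 12 kHz tone at the 24th-bit level (about -138.5 dBFS peak), placed on an exact bin
k = round(12000 * n_fft / fs)
t = np.arange(n_fft) / fs
tone = 10 ** (-138.5 / 20) * np.sin(2 * np.pi * (k * fs / n_fft) * t)

# Scale the FFT so a full-scale sine sitting on a bin reads 0 dBFS
spec = np.abs(np.fft.rfft(noise + tone)) / (n_fft / 2)
db = 20 * np.log10(spec + 1e-30)

print(f"processing gain ~ {10 * np.log10(n_fft / 2):.1f} dB")      # ~42 dB for a 32k FFT
print(f"tone bin:         {db[k]:8.1f} dBFS")                      # near the -138.5 dBFS target
print(f"per-bin floor:    {20 * np.log10(np.sqrt(np.mean(spec[1:] ** 2))):8.1f} dBFS")  # ~ -113 - 42
```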


Next is a table where I exported the 64K FFT results to a spreadsheet. You see the measured value at 12,000 Hz and, beside it, the target value for each bit level. These results are very close to what you hope to see: the 24th bit is less than half a dB off in level, and the others are within 0.1 dB of their expected levels.
March lowest 5 bits spreadsheet.png
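For reference, the target values in a table like this presumably follow from each bit's weight: a sine whose peak amplitude equals bit n of a 24-bit word sits at 20*log10(2^-(n-1)), i.e. about -6.02 dB per bit below full scale. A short check of that convention (an assumption about how the spreadsheet's target column was derived):

```python
import math

# Expected level for a sine whose peak equals the weight of bit n (bit 1 = MSB)
# in a 24-bit word: 20*log10(2**-(n-1)) = -6.0206*(n-1) dBFS.
for n in range(20, 25):
    print(f"bit {n}: {20 * math.log10(2 ** -(n - 1)):8.2f} dBFS")
# bit 20: -114.39, bit 21: -120.41, bit 22: -126.43, bit 23: -132.45, bit 24: -138.47
```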


To get these results I made up a signal containing these very low-level tones, sent it from the DAC into the microphone inputs of a recording interface, and used the microphone input's gain to boost the levels above the intrinsic noise floor of the interface. I then took the recorded result and adjusted a reference -60 dB tone I had embedded so that it read -60 dBFS again, which should put all the other signals at their true levels leaving the DAC. This seems to work well for separating noise from the tone levels coming from the DAC.
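Here is a minimal sketch of that rescaling step, assuming the capture is available as a float array. The reference tone's frequency isn't stated above, so the 1 kHz default here is just a placeholder; in practice a window and longer averaging (or the 64K FFT export) would be used rather than a bare rectangular-window bin read.

```python
import numpy as np

def tone_level_dbfs(x, fs, freq, n_fft=65536):
    """Peak level (dBFS) of the FFT bin nearest `freq` (rectangular window)."""
    x = x[:n_fft]
    spec = np.abs(np.fft.rfft(x)) / (len(x) / 2)     # full-scale sine on a bin -> 1.0
    return 20 * np.log10(spec[round(freq * len(x) / fs)] + 1e-30)

def normalize_to_reference(capture, fs, ref_freq=1000.0, ref_dbfs=-60.0):
    """Scale the whole capture so the embedded reference tone reads ref_dbfs again."""
    gain_db = ref_dbfs - tone_level_dbfs(capture, fs, ref_freq)
    return capture * 10 ** (gain_db / 20)

# After normalization, the 12 kHz bit-level tones can be read off directly:
# corrected = normalize_to_reference(capture, fs)
# print(tone_level_dbfs(corrected, fs, 12000))
```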

My tests of various sigma-delta DACs indicate they get linearity correct until noise obscures the results. Likely they are putting out the signal correctly below the noise.
 

Ron Texas
How many bits of linearity are needed before the deficiency becomes audible? I believe the standard around here changed from 0.1 dB to 0.5 dB; that would change the linearity of the Topping D30 from 15 to about 17 bits. Please correct me if I'm wrong.
 

amirm
One thing that I'm not sure I understand about the linearity measurements that @amirm and others are doing is which bit depth is the DAC running at during the measurement - 16-bit or 24-bit?
24-bit, of course. With devices that truncate to 16 bits, the results are immediately visible as a massive departure starting around -96 dB and below.
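A quick numeric check of where that departure comes from (the 997 Hz tone and record length are arbitrary): on a 16-bit grid, half an LSB corresponds to roughly -96.3 dBFS peak, so an undithered sine much below that simply rounds to silence.

```python
import numpy as np

fs, n = 48000, 1 << 16
t = np.arange(n) / fs
for level_db in (-90, -96, -100, -110):
    x = 10 ** (level_db / 20) * np.sin(2 * np.pi * 997 * t)   # peak level re: full scale
    q = np.round(x * 32768) / 32768                           # round onto the 16-bit grid
    print(level_db, "dBFS ->", "all zeros" if not q.any() else "tone survives")
```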
 

Ron Texas
It depends on how high you set the volume control and nature of non-linearity.

Nice answer. Then again, as the volume control is raised, amplifiers clip, speakers reach their limits, and eventually you hear, "Honey, turn it down."
The bit about the "nature of the non-linearity" interests me. I see all kinds of graphs but don't really know how to interpret them, or even whether something important isn't being measured at that instant.
 

edechamps
24-bit, of course. With devices that truncate to 16 bits, the results are immediately visible as a massive departure starting around -96 dB and below.

Thanks. The reason I asked is because it's theoretically possible to get, say, a -110 dBFS (18 bits) linearity result (+/- 0.2 dB) from a 16-bit DAC, as long as it's driven with a properly dithered test signal. Assuming, of course, the analyzer filters out the dither noise around the test tone, as my script does. That's why it wasn't necessarily obvious to me that this measurement was always done in 24-bit.
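This isn't edechamps's script, just a sketch of the general idea: estimate the tone level from a correlation at the test frequency alone (a one-bin DFT), so the dither noise spread across the rest of the spectrum doesn't bias the reading. The 997 Hz tone, record length, and 16-bit TPDF dither below are illustrative assumptions.

```python
import numpy as np

def tone_amplitude_dbfs(x, fs, freq):
    """Peak amplitude (dBFS) of one tone via correlation against sin/cos at `freq`.
    Equivalent to a single DFT bin, so broadband dither noise is rejected in
    proportion to the record length."""
    t = np.arange(len(x)) / fs
    i = 2 / len(x) * np.dot(x, np.cos(2 * np.pi * freq * t))
    q = 2 / len(x) * np.dot(x, np.sin(2 * np.pi * freq * t))
    return 20 * np.log10(np.hypot(i, q))

# Demo: a -110 dBFS tone through a TPDF-dithered 16-bit quantizer
fs, n, f0 = 48000, 1 << 20, 997.0
rng = np.random.default_rng(0)
t = np.arange(n) / fs
x = 10 ** (-110 / 20) * np.sin(2 * np.pi * f0 * t)
dither = (rng.random(n) - rng.random(n)) / 32768              # TPDF dither, +/- 1 LSB
q16 = np.round((x + dither) * 32768) / 32768                  # quantize to 16 bits
print(f"{tone_amplitude_dbfs(q16, fs, f0):.2f} dBFS")         # comes back close to -110
```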
 

restorer-john
Thanks. The reason I asked is because it's theoretically possible to get, say, a -110 dBFS (18 bits) linearity result (+/- 0.2 dB) from a 16-bit DAC, as long as it's driven with a properly dithered test signal. Assuming, of course, the analyzer filters out the dither noise around the test tone, as my script does. That's why it wasn't necessarily obvious to me that this measurement was always done in 24-bit.

Not theoretical at all. Those numbers were achieved 30 years ago on CD players (16-bit data).

Here's an example from 1989:

text 01.JPG


linearity 01.JPG

linearity 02.JPG

text 03.JPG


linearity 03.JPG


And this was achieved with 18-bit Burr-Brown PCM-58P-K multibit converters in an 8x-oversampling, 45-bit noise shaper...

The S/N on those D/A converters (bipolar zero to full output) was >126 dB (A-weighted).
 

amirm
Thanks. The reason I asked is because it's theoretically possible to get, say, a -110 dBFS (18 bits) linearity result (+/- 0.2 dB) from a 16-bit DAC, as long as it's driven with a properly dithered test signal. Assuming, of course, the analyzer filters out the dither noise around the test tone, as my script does. That's why it wasn't necessarily obvious to me that this measurement was always done in 24-bit.
The analyzer is configured to filter out all but the tone itself. To the extent that the tone has noise riding on it, that will be reflected in the linearity measurement.
 

KSTR
My tests of various sigma-delta DACs indicate they get linearity correct until noise obscures the results. Likely they are putting out the signal correctly below the noise.
This is exactly what I found with the RME ADI-2 Pro, excellent linearity down to the LSB, visible once the random noise is removed: https://www.audiosciencereview.com/...re-always-offset-by-1.6865/page-2#post-157421
It even looks like the AK4490 et al. might put out correct data when fed with more than 24 bits, since they are advertised as "32-bit" DACs. Is anyone aware of a DAC that actually feeds the DAC chip 32-bit input data?
 

Blumlein 88
This is exactly what I found with the RME ADI-2 Pro, excellent linearity down to the LSB, visible once the random noise is removed: https://www.audiosciencereview.com/...re-always-offset-by-1.6865/page-2#post-157421
It even looks like the AK4490 et al. might put out correct data when fed with more than 24 bits, since they are advertised as "32-bit" DACs. Is anyone aware of a DAC that actually feeds the DAC chip 32-bit input data?
I think most ESS-based DACs can feed 32-bit input data.
 

RayDunzl
I think most ESS-based DACs can feed 32-bit input data.

Mine is 24-bit, with internal conversion to 32 bits for volume control (and ASRC?).

1552532559335.png
 

KSTR
Another important factor when measuring linearity at very low levels is to apply proper dithering.

Below is a comparison of 24-bit dithered vs. non-dithered, using the RME ADI-2 Pro FS.

I used the bandpass function of the DSP analyzer on my AP (2322), as it has much higher selectivity (1/13-octave bandwidth) than the analog bandpass, and I ran 32-value averages with a rather long settling time. The level sweep was in 0.25 dB increments, after two passes of 3-point sliding-window graphic averaging, to better show the trend line. Further, to get the analog noise down even more, I paralleled the two output channels.

1621979114916.png

I did not expect the systematic level error of an undithered sine (blue) below -130 dBFS to come out that clearly.
The red curve for the dithered input illustrates what we already know: any good delta-sigma DAC is inherently very linear down to the lowest bits.
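A digital-only sketch of the same comparison (no analog chain, no AP bandpass; the tone frequency, levels, and record length are assumptions): quantize a low-level sine to 24 bits with and without TPDF dither, then read the level back with a one-bin estimate.

```python
import numpy as np

fs, n, f0, lsb = 48000, 1 << 20, 997.0, 2.0 ** -23
t = np.arange(n) / fs
rng = np.random.default_rng(0)

def level_db(x):
    """One-bin level estimate at f0 (peak amplitude, dBFS)."""
    i = 2 / n * np.dot(x, np.cos(2 * np.pi * f0 * t))
    q = 2 / n * np.dot(x, np.sin(2 * np.pi * f0 * t))
    return 20 * np.log10(np.hypot(i, q) + 1e-30)

for target in range(-120, -151, -5):
    x = 10 ** (target / 20) * np.sin(2 * np.pi * f0 * t)
    dither = (rng.random(n) - rng.random(n)) * lsb             # TPDF, +/- 1 LSB
    plain = np.round(x / lsb) * lsb                            # undithered 24-bit
    dithered = np.round((x + dither) / lsb) * lsb              # TPDF-dithered 24-bit
    print(f"{target:4d} dBFS   undithered {level_db(plain):8.1f}   dithered {level_db(dithered):8.1f}")

# Without the DAC's own analog noise acting as partial dither, the undithered sine
# rounds to all zeros below the half-LSB point (about -144.5 dBFS), while the
# dithered readings keep tracking the target level to well below the LSB.
```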
 

KSTR
First, let's do a power average of multiple runs (16 in this case), which will tend to cancel noise (by a factor of sqrt(N), where N is the number of runs) and bring out whatever nonlinearity is inherent to the DAC.
Is this a typo? Power averaging decreases the crest factor of uncorrelated noise but does not reduce it; the spectrum view just looks smoother. Only synced averaging would reduce the noise level (but not the crest factor). If so, how did you manage to set it up on the APx? To my knowledge it can only be used with the FFT/waveform analyzer, for System Two at least.
 
SIY (OP)
Is this a typo? Power averaging decreases the crest factor of uncorrelated noise but does not reduce it; the spectrum view just looks smoother. Only synced averaging would reduce the noise level (but not the crest factor). If so, how did you manage to set it up on the APx? To my knowledge it can only be used with the FFT/waveform analyzer, for System Two at least.

No, we're just looking at the word "noise" in two different ways: either intrinsic noise or variance. It is the latter that is reduced by sqrt(N), as is evident in the graphs.

For synchronous averaging in the APx analyzers, the Transfer Function enables it.
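For anyone following along, here is a toy numpy sketch of the distinction (all parameters arbitrary): power-averaging N spectra makes the noise trace smoother and more repeatable but leaves its level alone, while coherently averaging the N time records pulls uncorrelated noise down by about 10*log10(N) dB.

```python
import numpy as np

fs, n_fft, n_runs = 48000, 4096, 16
k = 85                                          # tone placed on an exact FFT bin
f0 = k * fs / n_fft
t = np.arange(n_fft) / fs
rng = np.random.default_rng(0)
tone = 1e-3 * np.sin(2 * np.pi * f0 * t)

# n_runs captures of the same (phase-locked) tone with independent noise
runs = tone + 1e-3 * rng.standard_normal((n_runs, n_fft))
spectra = np.abs(np.fft.rfft(runs, axis=1)) / (n_fft / 2)

def report(name, s):
    noise = np.delete(s[1:-1], k - 1)           # noise bins, tone bin excluded
    level = 20 * np.log10(np.sqrt(np.mean(noise ** 2)))
    spread = np.std(20 * np.log10(noise))
    print(f"{name}: floor {level:6.1f} dB, bin-to-bin spread {spread:.1f} dB")

report("single run   ", spectra[0])
report("power average", np.sqrt(np.mean(spectra ** 2, axis=0)))                   # same floor, less spread
report("coherent avg ", np.abs(np.fft.rfft(runs.mean(axis=0))) / (n_fft / 2))     # floor drops ~10*log10(N) dB
```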
 