The above is a linearity measurement on a Benchmark DAC3. This plot includes the unbalanced outputs (green trace = left, red trace = right) and the balanced outputs (yellow trace = left, magenta trace = right). The blue horizontal lines represent a linearity deviation of +/- 0.1 bit (the deviation used by Amirm in his tests). The cyan horizontal lines represent a linearity deviation of +/- 0.5 bit (+/- 3.01 dB). The raw data is plotted in the 4 diagonal curves. The upper pair of curves are the balanced outputs. The balanced outputs are 16 dB hotter than the unbalanced outputs (+24 dBu at 0 dBFS vs. +8.2 dBu at 0 dBFS).
At the bottom of these diagonal curves, we reach the noise floor of our measurements. This noise floor is a function of the output noise of the output under test, the input noise of the analyzer, and the bandwidth of the bandpass filter in the measurement system. A narrower bandpass filter allows us to probe deeper into the noise to measure the linearity at lower levels. The lower operating levels of the RCA outputs make the measurements more difficult in comparison to measuring the balanced outputs. This can make it look like the XLR outputs have better linearity, but this is not the case. Inside the DAC3, both sets of outputs are derived from the same analog output of a differential amplifier that follows the ES9028PRO D/A converter. This means that the apparent differences in linearity shown in Amirm's linearity measurements of the DAC3 are a physical impossibility. This does not mean that his measurements were wrong; it just means that noise was interfering with his measurements. The solution is to use a narrower bandpass filter.
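To put a rough number on the bandwidth effect: for noise that is approximately white within the band of interest, the noise power captured by the detector scales with its bandwidth, so narrowing the filter lowers the measurement noise floor by 10*log10 of the bandwidth ratio. A quick sketch of that arithmetic (the filter widths below are illustrative values, not the actual AP2522 filter specifications):

```python
import math

def noise_floor_improvement_db(wide_bw_hz: float, narrow_bw_hz: float) -> float:
    """For approximately white noise, the noise power captured by a bandpass
    detector scales with its bandwidth, so a narrower filter lowers the
    measurement noise floor by 10*log10(BW_wide / BW_narrow)."""
    return 10 * math.log10(wide_bw_hz / narrow_bw_hz)

# Illustrative numbers only (not actual AP2522 filter specifications):
# moving from a ~230 Hz wide bandpass to a ~3 Hz wide detector buys
# roughly 19 dB of additional reach into the noise.
print(round(noise_floor_improvement_db(230.0, 3.0), 1))   # 18.8 dB
```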
There are 4 different bandpass filters that can be used in the AP2522 (the analyzer I used to make this measurement). Similar options are available in the AP2722 and the APx555.
1) Analog bandpass filter in the 'analog analyzer'
2) Digital bandpass filter in the 'DSP audio analyzer'
3) Digital bandpass filter in the 'Harmonic Distortion Analyzer' using the 'Fundamental Amplitude' measurement
4) FFT analysis using the 'FFT spectrum analyzer'
Note: I do not have an APx555 but I have the other two models. Amirm and I both have AP2522 analyzers, so I chose to use this box rather than the higher-performance AP2722.
The above list is sorted from widest to narrowest filter. The best low-level linearity measurements can be made with the FFT spectrum analyzer, but this is very slow and very cumbersome. There is also no automatic way to make a traditional linearity plot after the FFT measurements have been done on an Audio Precision test station. I have done very low-level, high-precision FFT linearity measurements, but it can take hours to complete the tests. What these FFT measurements show is that most sigma-delta converters have virtually perfect linearity. For this reason, linearity measurements tend to be completely useless when measuring sigma-delta converters. The deviations at the bottom of the curve are almost always due to the fact that the curve hits the noise measurement limit of the test and have absolutely nothing to do with the actual linearity of the converter. This can be proven by increasing the discrimination of the test by narrowing the bandpass filter so that accurate measurements can be made further down into the noise. At the end of the day, these efforts tend to prove that most sigma-delta converters have virtually perfect linearity.
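To illustrate why the FFT route reaches deepest, here is a minimal numpy sketch (not Audio Precision control code) of the underlying idea: a long FFT behaves like a bank of extremely narrow bandpass filters, so a tone far below the broadband noise can still be read accurately from its own bin. The sample rate, tone level, noise level, and FFT length below are made-up values chosen only for illustration:

```python
import numpy as np

fs = 48000          # sample rate (illustrative)
n = 1 << 20         # long FFT: each bin is only fs/n ≈ 0.05 Hz wide
amp_dbfs = -140.0   # test-tone level, far below the broadband noise
k = round(997.0 * n / fs)          # bin index nearest 997 Hz
f0 = k * fs / n                    # place the tone exactly on that bin

t = np.arange(n) / fs
tone = 10 ** (amp_dbfs / 20) * np.sin(2 * np.pi * f0 * t)
noise = np.random.randn(n) * 10 ** (-110.0 / 20)   # ~ -110 dBFS broadband noise

# The FFT acts like thousands of very narrow bandpass filters in parallel:
# only the sliver of noise that lands in the tone's own bin competes with it.
# (No window is needed here because the tone sits exactly on a bin; a real
# measurement would use a window and sum the window's main-lobe bins.)
spectrum = np.fft.rfft(tone + noise) / (n / 2)
measured_dbfs = 20 * np.log10(np.abs(spectrum[k]))
print(round(measured_dbfs, 1))   # ≈ -140 dBFS despite the -110 dBFS noise
```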
For the test above, I used method 3 ('Fundamental Amplitude' meter in the 'Harmonic Distortion Analyzer'). This method allows us to probe deep into the noise floor to examine the linearity. As I said above, the FFT technique (option 4) will allow a deeper analysis, but the results are not automatically plotted into a linearity curve.
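For anyone who has not built one of these plots: the mechanics are simply to sweep the generator level downward, read back the fundamental amplitude at each step, and plot the difference between the measured and the commanded level. A minimal sketch of that bookkeeping, using fabricated example readings rather than actual DAC3 data:

```python
# Commanded generator levels (dBFS) and the fundamental-amplitude readings
# (dB relative to the output level at 0 dBFS). All values are fabricated
# for illustration only; they are not DAC3 measurements.
commanded_dbfs = [-90, -110, -130, -140]
measured_db    = [-90.0, -110.0, -129.8, -138.9]

# The linearity plot is nothing more than the deviation between the two.
for c, m in zip(commanded_dbfs, measured_db):
    print(f"{c:>5} dBFS commanded -> {m - c:+.1f} dB deviation")

# The growing positive deviation at the lowest levels is the detector's
# noise adding to the reading, not a real linearity error in the DAC.
```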
On the DAC3, the unbalanced outputs operate at a signal level that is 16 dB lower than the balanced outputs. This means that it is much harder to measure the linearity of the unbalanced outputs. In terms of bits, 16 dB is about 2.6 bits. If our measurement shows that the 'linearity' is 2.6 bits 'better' on the XLR outputs, this is an indication that the measurement is being limited by the capabilities of our test equipment. In the test above, you can see that the 16 dB separation between the balanced and unbalanced curves is reduced to about 12 dB at the bottom of the curves, where we hit the noise limitations of our measurement. This means that the noise limitations will make the unbalanced outputs look about 4 dB (0.7 bits) worse than the balanced outputs. We are almost at the point where our test equipment is not a limitation when comparing the XLR and RCA outputs.
So let's talk about the results:
Using the XLR outputs, we hit the noise limitations of our measurement at -150 dB relative to +24 dBu (right-hand scale), which is -150 dBFS (equivalent to about -25 bits). This means that the 'linearity' curve will reach a 1 dB deviation about 12 dB (2 bits) above this point (and this is clearly visible as a 1 dB deviation at -23 bits). This bend in the XLR 'linearity' curve starting at -19 bits and reaching 1 dB at -23 bits is due to noise and does not have anything to do with linearity.
Using the RCA outputs, we hit the noise limitations of our measurements at about -162 dB relative to +24 dBu, which is -146 dB relative to the 8 dBu output of the RCA outputs at 0 dBFS. This means that we hit the noise limits of our measurement at -146 dBFS on the unbalanced outputs. As stated above, our measurements are degraded by 4 dB when measuring the unbalanced outputs. Doing the same calculations that we did on the XLR outputs, we would expect the 1 dB deviation to occur at a level that is 4 dB higher (about 0.7 bits higher). So the RCA outputs should reach a 1 dB deviation at -22.3 bits instead of the -23 bits that we measured with the XLR outputs. If we examine the linearity plots for the RCA outputs, we can see that we reach a 1 dB deviation at about -22 bits. This is the expected result. Again, the deviation is due to noise and does not have anything to do with linearity.
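To make the arithmetic in the last few paragraphs easy to follow, here it is spelled out, using the same numbers quoted above and taking one bit as approximately 6.02 dB (20*log10(2)). The 12 dB offset is the rule of thumb used above for where noise produces a 1 dB deviation in the curve:

```python
DB_PER_BIT = 6.02   # 20*log10(2), the conversion used throughout this post

# The balanced outputs run 16 dB hotter than the unbalanced outputs.
print(round(16.0 / DB_PER_BIT, 2))   # ≈ 2.66 bits of level difference

# Balanced (XLR): measurement noise floor at -150 dB re. +24 dBu, and
# +24 dBu is the 0 dBFS level, so the floor sits at -150 dBFS.
xlr_floor_dbfs = -150.0
print(round((xlr_floor_dbfs + 12.0) / DB_PER_BIT, 1))   # 1 dB deviation ≈ -22.9 bits

# Unbalanced (RCA): floor at about -162 dB re. +24 dBu, but 0 dBFS is only
# +8 dBu on the RCA jacks, so re-reference by the 16 dB level difference.
rca_floor_dbfs = -162.0 + 16.0                           # -146 dBFS
print(round((rca_floor_dbfs + 12.0) / DB_PER_BIT, 1))    # 1 dB deviation ≈ -22.3 bits
```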
In the context of sigma-delta converters, 'linearity' measurements are simply a reflection of the SNR of the output being measured and the SNR of the detector in the analyzer. It is much better to measure the SNR of the output directly than it is to determine noise performance through the very convoluted use of a linearity plot.
The sole purpose of the linearity plot should be to verify that there are no linearity errors above the point at which noise begins to corrupt the measurement. From the tests above, we can see that the linearity is perfect above the point at which noise corrupts the measurement. Once we are within 24 dB of the noise floor of the measurements (-150 dBFS for the XLR outputs, -146 dBFS for the RCA outputs), the linearity deviation is meaningless and will vary according to the noise discrimination of the test equipment. For this reason, it is also meaningless to say that an output has 'x bits' of resolution on the basis of a linearity measurement.
Bottom line:
Use linearity measurements to verify linearity above the noise floor of the measurement, and then use a simple noise meter to determine the SNR of the output. The SNR can be expressed in dB and in bits of resolution measured over the entire 20 kHz bandwidth. This is the true resolution in terms of bits (as long as no linearity deviations are discovered at levels above the noise floor of the linearity test).
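For completeness, the SNR-to-bits conversion is the same ~6.02 dB per bit used above. A minimal sketch, with hypothetical round numbers rather than DAC3 specifications:

```python
def bits_of_resolution(level_dbu_at_0dbfs: float, noise_dbu_20khz: float) -> float:
    """Convert an SNR taken with a simple noise meter (20 Hz - 20 kHz,
    unweighted) into an equivalent number of bits at ~6.02 dB per bit."""
    snr_db = level_dbu_at_0dbfs - noise_dbu_20khz
    return snr_db / 6.02

# Hypothetical example numbers, not DAC3 measurements:
# +24 dBu at 0 dBFS and a -104 dBu wideband noise reading give 128 dB SNR.
print(round(bits_of_resolution(24.0, -104.0), 1))   # ≈ 21.3 bits
```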
Sorry for the length of this post!
Edit: changed 'Once we are 12 dB above the noise floor of the measurements ...' to read '... 24 dB'. At 12 dB above the noise floor, the noise causes a +/- 1 dB deviation in the linearity curve. At 24 dB above the noise floor, the noise causes about a +/- 0.1 dB deviation in the linearity curve. We have to stay out of this noise-contaminated region when evaluating linearity. Improved test equipment allows us to push the noise-contaminated region lower, thereby extending our ability to look at the linearity. But once the noise contaminates the measurement, it is no longer a measurement of linearity.