KSTR
Major Contributor
Disclaimer: the issue I found has absolutely no effect on audio signal quality. This is only interesting for some specialist measuring applications.
I just made an interesting discovery while doing some low-level loopback measurements to check the true (noiseless) 24-bit output depth of the RME's AK4490 DAC chips.
While it turned out the DAC chip actually does output all bits down to the LSB and is pretty linear (as expected from a Delta-Sigma design), it looks like either the DAC chip or the RME firmware is reducing all incoming positive (>0) sample values by one. This means a sample value sequence of, say, -2, 0, 1, 0, 2, 3 comes out as -2, 0, 0, 0, 1, 2.
The only reason I can think of why one would do that is the polarity inversion option the chip / device offers. Since the negative range of values is one larger than the positive range in the two's complement number system (e.g. 8 bits go from -128 to +127), the largest negative value cannot be inverted (it would give back the same value, causing a nasty glitch in the output). By lowering all negative value magnitudes by one LSB, this problem would be avoided, though not in the smartest way (which would be to simply saturate negative values at -127, in the 8-bit example). It would also mean the DAC runs inverted in the RME and the analog section does another inversion to restore polarity? Hmm....
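Just to illustrate the hypothesis, here is a minimal Python sketch of what such a "safe inversion" could look like (pure speculation on my part, not actual RME/AKM code; all names are mine), using the 8-bit example:

    # Sketch of the hypothesized "safe inversion" (8-bit example).
    # Speculative illustration only, not actual RME/AKM code.

    def wrap8(v):
        # fold into the two's-complement 8-bit range [-128, +127]
        return (v + 128) % 256 - 128

    def invert_digital(x):
        y = wrap8(-x)   # wrapping negation: -(-128) wraps back to -128
        if y < 0:
            y += 1      # lower all negative magnitudes by one LSB
        return y

    def invert_analog(y):
        # the analog stage inverts once more, restoring overall polarity
        return -y

    print([invert_analog(invert_digital(x)) for x in (-2, 0, 1, 0, 2, 3)])
    # -> [-2, 0, 0, 0, 1, 2]  (positive values one LSB low, as observed)
    print(invert_analog(invert_digital(-128)))
    # -> 127: negative full scale inverts cleanly instead of glitching

Composed with the analog re-inversion, this reproduces exactly the measured behavior: positive values come out one LSB low, negative values and zero pass through unchanged.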
You might ask: how can this be measured at all if the lowest 4 to 5 bits are buried in the noise?
Time-domain sample-synchronous averaging is the answer here; in this case almost 30,000 blocks were averaged, which gives a best-case attenuation by a factor of sqrt(30,000) = 173 (= 45dB) for any fully uncorrelated signal.
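For those who want to try this themselves, a minimal numpy sketch of the block averaging (function and variable names are mine; a real measurement of course needs the recording aligned to a block boundary first):

    import numpy as np

    def block_average(recording, block_len):
        # Keep a whole number of blocks, then average them sample-synchronously:
        # the repeating test pattern adds coherently, while uncorrelated noise
        # is attenuated by sqrt(n_blocks) -- about 45dB for ~30,000 blocks.
        n_blocks = len(recording) // block_len
        blocks = recording[:n_blocks * block_len].reshape(n_blocks, block_len)
        return blocks.mean(axis=0)

    # sanity check: a +/-1 LSB pattern buried under noise ~30x its amplitude
    rng = np.random.default_rng(0)
    pattern = np.repeat([1.0, 0.0, -1.0, 0.0], 20)       # one 80-sample block
    recording = np.tile(pattern, 30000)
    recording += rng.normal(scale=30.0, size=recording.size)
    print(np.round(block_average(recording, pattern.size), 1))  # ~= pattern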
Test Setup:
- DAC output ref level +24dBu
- ADC input ref level +4dBu (so we have 20dB of gain here to factor out the ADC influence; in the display this 20dB gain is corrected for, of course)
- XLR loopback cable
- 44.1kHz, 24bit (bit-identity checked with the RME "bittest" .wav files)
- DAC filter NOS, ADC filter SLOW (choice of filters turned out to be irrelevant)
- Sample sequence was 1, 0, -1, 0, 2, 0, -2, 0, 3, 0, -3, 0 and so forth up to 16, 0, -16, 0, with each value repeated 20 times (a small generator sketch follows right after this list)
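For reproduction, the stimulus can be generated with something like this (my reconstruction from the description above; values are in LSB units of the 24-bit samples):

    import numpy as np

    # levels 1..16, each as the quadruple n, 0, -n, 0
    levels = []
    for n in range(1, 17):
        levels += [n, 0, -n, 0]
    stimulus = np.repeat(levels, 20)   # hold every value for 20 samples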
The top trace is the input data for the DAC, the bottom trace is the result of the block averaging on the recorded data:
Yikes, the very first "+1" sample train simply does not exist! And on closer inspection we find that all positive values are one lower than they should be.
Zooming in on the first few values:
This confirms the -1 offset for all positive values and also explains why the zero values are not exactly zero but shifted up by a fraction of a bit (the time integral must be zero, and there are at least two highpasses in the ADC chain -- and none in the DAC, btw).
Doing a web search and studying the RME manual and the AK4490 data sheet, I found no hints of this behavior.
This is all of course completely irrelevant for audio reproduction, but for those who use the ADI-2 Pro as a measurement front-end it might be relevant. Notably, if you set up an LSB-toggling (0/1) test, you may wonder why the LSB isn't toggling at all, or why you get a different result when toggling (0/-1) instead.
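To make that concrete, here is the observed mapping applied to both toggle patterns (pure illustration):

    def observed(x):
        # the measured behavior: positive sample values come out one LSB low
        return x - 1 if x > 0 else x

    print([observed(x) for x in [0, 1, 0, 1]])    # [0, 0, 0, 0] -- no toggling
    print([observed(x) for x in [0, -1, 0, -1]])  # [0, -1, 0, -1] -- toggles fine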