Actually, isn't the xu208 processor doing the same thing when volume is decreased?
XU208 is doing the same thing as DAC digital volume control.
It's all well explained here.
Does the DAC linearity test relate to built-in volume control?
One thing that I'm not sure I understand about the linearity measurements that @amirm and others are doing is which bit depth the DAC is running at during the measurement - 16-bit or 24-bit?

24 bit of course. With devices that truncate to 16, the results are immediately visible, with a massive departure around -96 dB and below.

How many bits of linearity are needed before the deficiency becomes audible?

It depends on how high you set the volume control and the nature of the non-linearity.
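The truncation effect can be sketched numerically. This is a hypothetical illustration (the function, the 997 Hz test tone, and the single-bin-DFT "analyzer" are my own choices, not the actual APx procedure): quantize a low-level sine by truncating to 16 or 24 bits, then measure the level of the tone that comes back.

```python
import numpy as np

def measured_level_dbfs(level_dbfs, bits, fs=48000, n=48000, f=997.0):
    """Generate a sine at level_dbfs, truncate it to `bits` (no dither),
    and estimate the tone's level with a single-bin DFT, which mimics an
    analyzer that filters out everything but the tone itself."""
    t = np.arange(n) / fs                 # 1 s record -> 997 full cycles, coherent
    x = 10 ** (level_dbfs / 20) * np.sin(2 * np.pi * f * t)
    step = 2.0 ** (1 - bits)              # one LSB for a full scale of +/-1
    q = np.floor(x / step) * step         # truncation, no dither
    amp = 2 * np.abs(np.sum(q * np.exp(-2j * np.pi * f * t))) / n
    return 20 * np.log10(amp)
```

Under these assumed settings, a -100 dBFS tone truncated to 16 bits reads several dB high (the residual is essentially a tiny square wave at the tone frequency), while the 24-bit version still tracks the requested level to within a fraction of a dB.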
Thanks. The reason I asked is that it's theoretically possible to get, say, a -110 dBFS (18 bits) linearity result (+/- 0.2 dB) from a 16-bit DAC, as long as it's driven with a properly dithered test signal. Assuming, of course, the analyzer filters out the dither noise around the test tone, as my script does. That's why it wasn't necessarily obvious to me that this measurement was always done in 24-bit.
The analyzer is configured to filter out everything but the tone itself. To the extent that the tone has noise riding on it, that will be reflected in the linearity measurement.
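The dither argument above can be checked numerically. This is my own sketch (record length, tone frequency, and the single-bin-DFT "analyzer" are assumptions, not the script mentioned in the thread): with TPDF dither, a 16-bit quantizer passes a -110 dBFS tone whose level is recoverable once the noise around it is rejected, whereas without dither the same tone rounds to pure silence.

```python
import numpy as np

def dithered_16bit_level(level_dbfs, n=2**21, fs=48000, seed=0):
    """Quantize a low-level sine to 16 bits with TPDF dither, then estimate
    the tone's level with a single-bin DFT; the long record averages the
    dither noise down so the tone amplitude can be read accurately."""
    rng = np.random.default_rng(seed)
    k = 43561                          # DFT bin -> ~997 Hz, coherent with n
    f = k * fs / n
    t = np.arange(n) / fs
    x = 10 ** (level_dbfs / 20) * np.sin(2 * np.pi * f * t)
    step = 2.0 ** -15                  # one 16-bit LSB (full scale +/-1)
    dither = (rng.random(n) - rng.random(n)) * step  # TPDF, 2 LSB peak-to-peak
    q = np.round((x + dither) / step) * step
    amp = 2 * np.abs(np.sum(q * np.exp(-2j * np.pi * f * t))) / n
    return 20 * np.log10(amp)

# Without dither, a -110 dBFS sine (about 0.1 LSB peak) rounds to all
# zeros at 16 bits -- the tone simply disappears.
```

With this record length the estimate comes back within a small fraction of a dB of -110 dBFS, i.e. well beyond the 16-bit "floor", which is the point being made above.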
My tests of various sigma-delta DACs indicate they get linearity correct until noise obscures the results. Likely they are putting out the signal correctly below the noise.

This is exactly what I found with the RME ADI-2 Pro: excellent linearity down to the LSB, visible once the random noise is removed: https://www.audiosciencereview.com/...re-always-offset-by-1.6865/page-2#post-157421
It even looks like the AK4490 et al. might put out correct data when fed with more than 24 bits, since they are advertised as "32-bit" DACs. Anyone aware of a DAC that actually feeds the DAC chip with 32-bit input data?
I think most ESS based DACs can feed 32 bit input data.
First, let's do a power average of multiple runs (16 in this case), which will tend to cancel noise (by a factor of sqrt(N), where N is the number of runs) and bring out whatever nonlinearity is inherent to the DAC.

Is this a typo? Power averaging decreases the crest factor of uncorrelated noise but does not reduce it; the spectrum trace just looks smoother. Only synchronized averaging would reduce the noise level (but not the crest factor). If so, how did you manage to set it up on the APx? To my knowledge it can only be used with the FFT/waveform analyzer, on the System Two at least.
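The distinction being raised here can be demonstrated with a few lines of numpy (a toy sketch using white noise, not an APx measurement): averaging the power spectra of 16 noise records leaves the mean noise level unchanged and only smooths the trace, while averaging the waveforms first (synchronized averaging) drops the uncorrelated noise power by the number of runs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, runs = 4096, 16
noise = rng.standard_normal((runs, n))      # 16 uncorrelated noise records

# Power averaging: FFT each run, average the magnitude-squared spectra.
psd = np.mean(np.abs(np.fft.rfft(noise, axis=1)) ** 2, axis=0)

# Synchronized averaging: average the waveforms first, then take the FFT.
sync = np.abs(np.fft.rfft(np.mean(noise, axis=0))) ** 2

# Mean level: power averaging leaves it unchanged, while synchronized
# averaging cuts uncorrelated noise power by `runs` (amplitude by sqrt(runs)).
level_ratio = np.mean(psd) / np.mean(sync)  # close to runs = 16

# Smoothness: the bin-to-bin scatter of the power-averaged trace shrinks by
# about 1/sqrt(runs), which is why it merely *looks* cleaner.
single = np.abs(np.fft.rfft(noise[0])) ** 2
raw_roughness = np.std(single) / np.mean(single)  # roughly 1 for one run
avg_roughness = np.std(psd) / np.mean(psd)        # roughly 1/4 after 16 runs
```

So in this sketch the power-averaged floor sits at the same mean level as a single run (only less ragged), while the synchronized average sits about 12 dB lower, matching the objection above.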