Dear Flo, would you mind illustrating for us the difference between a dithered and an undithered signal in a future test session?

Sure, here you go:

1kHz @0dBFS

*without* dither:

1kHz @0dBFS

*with* dither:

You can see that the noise floor has risen, so the calculated noise is 3.8dB higher.
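As a back-of-the-envelope check (my own sketch, not part of the measurement above): an ideal rounding quantizer has a noise power of q²/12 (q = one LSB), and full-level TPDF dither adds two uniform noises of q²/12 each, tripling the total noise power. The exact 3.8dB observed will depend on the dither level and measurement bandwidth actually used.

```python
import math

# Quantization noise power of an ideal rounder: q**2 / 12 (q = one LSB).
# TPDF dither at +/-1 LSB peak adds two uniform noises of q**2 / 12 each,
# so the total noise power triples.
increase_db = 10 * math.log10(3)
print(f"theoretical noise increase with full-level TPDF dither: {increase_db:.2f} dB")
```

The theoretical figure comes out near 4.8dB, in the same ballpark as the 3.8dB measured here.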

Dither is most useful at very low levels, where it minimizes the mathematical errors of quantization.

Example: 1kHz @-88dBFS

*without* dither:

We see distortion arising from mathematical imprecision at this very low level for CD (16 bits).

Now the same 1kHz @-88dBFS

*with* dither:

Distortion has vanished below the noise. Note also that I was requesting -88dBFS. With the 0.5dB headroom of the measurement interface, I should get -88.5dBFS. Look above: without dither I got -89.84dBFS, which means we lose precision, or linearity (a 1.34dB deviation). With dither I got -88.49dBFS, only 0.01dB deviation. So not only has distortion decreased, but precision and linearity have improved.
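The effect is easy to reproduce in a small simulation (a minimal sketch, not the actual measurement setup: the sample count, RNG seed, and the bin-centred ~1kHz tone are my assumptions). It quantizes a -88dBFS sine to 16 bits with and without ±1 LSB-peak TPDF dither, then reads back the fundamental and the 3rd harmonic:

```python
import numpy as np

N = 1 << 16          # number of samples (assumed for the sketch)
LSB = 2.0 ** -15     # one 16-bit step, full scale = +/-1.0

def quantize16(x, dither=False, rng=None):
    """Round to 16-bit steps; optionally add TPDF dither (+/-1 LSB peak) first."""
    if dither:
        rng = rng or np.random.default_rng(0)
        # TPDF = sum of two independent uniform distributions
        x = x + LSB * (rng.random(x.size) - rng.random(x.size))
    return np.round(x / LSB) * LSB

def tone_level(y, k):
    """Amplitude of the sinusoid in FFT bin k (coherent correlation estimator)."""
    n = y.size
    return 2 * np.abs(np.sum(y * np.exp(-2j * np.pi * k * np.arange(n) / n))) / n

k = 683                                   # integer bin -> no window needed
amp = 10 ** (-88 / 20)                    # -88 dBFS target, about 1.3 LSB
x = amp * np.sin(2 * np.pi * k * np.arange(N) / N)

plain = quantize16(x)
dith = quantize16(x, dither=True)

for name, y in (("without dither", plain), ("with dither", dith)):
    fund = 20 * np.log10(tone_level(y, k))
    h3 = 20 * np.log10(max(tone_level(y, 3 * k), 1e-12))
    print(f"{name}: fundamental {fund:+.2f} dBFS, 3rd harmonic {h3:+.1f} dBFS")
```

Without dither, the quantizer turns the ~1.3 LSB sine into a three-level staircase, so the fundamental comes back low (a deviation of roughly a dB) and odd harmonics appear; with dither the fundamental lands within a small fraction of a dB of -88dBFS and the harmonics sink into the noise, mirroring the behaviour measured above.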

Was it just about pushing the limits of the measurement/medium in general, or was it specifically needed to better differentiate between the devices' behavior?

Just because it's what the standard recommends:

*AES17-2020 - Measurement of digital audio equipment:*
It makes sense to add dither, since all recordings are produced this way (since the '90s, I think). It does indeed also help to differentiate, to some degree, between CD players. In fact, those benefiting the most from dither are ancient, not-so-linear R2R architectures, which are really boosted by it. Since recordings contain dither, it's fair to compare CD players with it too. I suppose that's the reason for the AES recommendation.

--------

Flo