
Why does an undithered 16-bit tone produce a "rough" wave, while 24-bit is smooth?

I like to think that I understand how digital audio works pretty well, but please help me understand this:

I was reading these measurements from Stereophile:


Figure 6 shows an undithered 16-bit tone, whereas figure 7 shows the same tone with 24-bit data:
"[T]he M51's reproduction of an undithered 16-bit tone at exactly –90.31dBFS was essentially perfect (fig.6), with a symmetrical waveform and the Gibbs Phenomenon "ringing" on the waveform tops well defined. With 24-bit data, the M51 produced a superbly defined sinewave (fig.7). "


Figure 6: [attached image: the undithered 16-bit tone at –90.31 dBFS]

Figure 7: [attached image: the same tone with 24-bit data]

Why are the two waves so different between 16-bit and 24-bit?

They look different because of resolution. On the other hand, this is not a big deal in practice, and it is mostly exploited for marketing.
 
The graphs are somewhat misleading, because the last bit of 24-bit should look similar to the last bit of 16-bit; below that is quantization noise. But it looks like the reviewer is testing something specific about sending an undithered signal through the device that I don't quite understand.

edit: Oy, I forgot that there are many more steps at this level in 24-bit than in 16-bit.

In practice, the difference I've found between 16-bit and 24-bit is that with 24-bit you can record audio at line level (around –18 dBFS RMS) and not worry about quantization distortion creeping into the mix as multiple recorded elements are layered, compressed, and pushed louder during mixing, and then loudness-maximized during mastering. At 24-bit, preamp noise is going to be the bigger concern :)
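
As a rough back-of-the-envelope on that headroom (a sketch assuming the standard 6.02N + 1.76 dB rule of thumb for full-scale sine SNR; the –18 dBFS target is the one mentioned above):

```python
# Quantization noise floor of N-bit PCM, relative to full scale,
# using the 6.02*N + 1.76 dB rule of thumb for a full-scale sine.
def quantization_floor_dbfs(bits: int) -> float:
    return -(6.02 * bits + 1.76)

signal_dbfs = -18.0  # typical line-level recording target, dBFS RMS
for bits in (16, 24):
    floor = quantization_floor_dbfs(bits)
    print(f"{bits}-bit: floor {floor:.1f} dBFS, "
          f"margin below a {signal_dbfs:.0f} dBFS signal: {signal_dbfs - floor:.1f} dB")
```

At 24-bit the quantization floor sits roughly 128 dB below a –18 dBFS signal, which is why preamp noise dominates long before quantization does.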
 
Why are the two waves so different between 16-bit and 24-bit?

At such a low level, there's almost no resolution left in 16-bit: a sine at exactly –90.31 dBFS peaks at ±1 LSB, so the samples can only take the values –1, 0, and +1. The measurement shows that the DAC is working well.
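
That –90.31 dBFS figure is not arbitrary: it is exactly the level of 1 LSB in 16-bit. A quick check in Python:

```python
import math

# 1 LSB of 16-bit audio is 2**-15 of full scale (full scale = 32768 codes).
print(20 * math.log10(2 ** -15))   # -90.3089... dB, i.e. the -90.31 dBFS test level
```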

This is what an undithered 16-bit 1 kHz sinewave at –90.31 dBFS looks like in Adobe Audition (a scripted equivalent follows the steps below):

[attached screenshot: 1 khz 16-bit audition.png]


1. Generate a 0.006 s, 1 kHz sinewave, volume 0 dB (max), at 32-bit, 44.1 kHz.

2. Use Effects, Amplitude, Amplify/Fade, constant amplification, –90.31 dB.

3. Use Edit, Convert Sample Type, 16-bit, dither disabled.

4. Convert the sample type back to 32-bit.

5. Amplify it by 85 dB, and cut it to match the Stereophile measurement.

6. Use Effects, Filters, FFT Filter, and apply a brickwall filter at 20500 Hz (to match the M51 measurements).
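
For anyone without Audition, here is a minimal Python sketch of the quantization part of the recipe (steps 1–3; the amplify-and-filter steps only affect the display). NumPy is assumed. It shows why the undithered 16-bit version collapses to three levels:

```python
import numpy as np

fs = 44100                             # sample rate, Hz (step 1)
t = np.arange(int(fs * 0.006)) / fs    # 0.006 s of samples (step 1)

# Step 2: a 1 kHz sine at -90.31 dBFS, which is ~1 LSB of 16-bit full scale.
amp = 10 ** (-90.31 / 20)              # ~= 2**-15
x = amp * np.sin(2 * np.pi * 1000 * t)

# Step 3: convert to 16-bit with dither disabled, i.e. plain rounding.
q = np.round(x * 32768).astype(np.int16)

print(sorted(set(q.tolist())))         # [-1, 0, 1]: only three codes survive
```

With only the codes –1, 0 and +1 available, the reconstructed waveform is the stepped shape in figure 6 rather than a smooth sine.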
 
Why are the two waves so different between 16-bit and 24-bit?
A belated response: I have been performing this test for a long time, because with the 16-bit data it reveals DAC linearity problems.

In the two's-complement encoding used by 16-bit digital audio, –1 least significant bit (LSB) is represented by 1111 1111 1111 1111, digital zero by 0000 0000 0000 0000, and +1 LSB by 0000 0000 0000 0001. If the waveform at exactly –90.31 dBFS is symmetrical, this indicates that changing all 16 bits in the digital word gives exactly the same change in the analog output level as changing just the LSB.
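
For illustration, those bit patterns can be printed directly (a small Python sketch):

```python
# Two's-complement 16-bit patterns for the three codes in the test signal.
for v in (-1, 0, +1):                  # -1 LSB, digital zero, +1 LSB
    print(f"{v:+d} LSB -> {v & 0xFFFF:016b}")
```

Going from 0 to –1 LSB flips all 16 bits, while going from 0 to +1 LSB flips only the last one; if the DAC's bit weights are not perfectly matched, those two transitions produce different analog step sizes and the waveform becomes asymmetrical.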

With 24-bit data, the test reveals whether the DAC's analog noise floor is low enough for the sinewave not to be obscured by noise.
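
To see why, extend the earlier sketch: the same –90.31 dBFS tone quantized to 24 bits spans ±256 LSB instead of ±1, so the quantization staircase sits far below the analog noise (assuming NumPy again):

```python
import numpy as np

t = np.arange(int(44100 * 0.006)) / 44100
x = 10 ** (-90.31 / 20) * np.sin(2 * np.pi * 1000 * t)

q16 = np.round(x * 2 ** 15).astype(int)   # 16-bit codes
q24 = np.round(x * 2 ** 23).astype(int)   # 24-bit codes

print(q16.min(), q16.max())               # -1 1
print(q24.min(), q24.max())               # -256 256
```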

John Atkinson
Technical Editor, Stereophile
 