
Question about Bit Depth Reduction and Quantization Errors

JulianW

Hi everyone,


I have a question regarding bit depth reduction, quantization errors, and dithering in a typical audio signal path.


As I understand it, dithering should ideally be applied only once, typically during the final export from a DAW, when converting from 64/32-bit floating point to 24/16-bit fixed point.


However, here's my concern:


If I play the exported file in a player like Roon or Audirvana, the signal is often upscaled to 64-bit floating point for internal processing.


When it reaches my audio interface, it is then reduced to 24-bit fixed point. If the signal is processed by a DSP monitor, it could be converted back to 32/40-bit floating point, only to be reduced again to 24-bit fixed point before reaching the DAC.

My questions:


  1. Is this constant bit depth switching problematic in any way?
  2. Does dithering automatically occur every time the bit depth is reduced, even if it's not explicitly shown in the signal path?
  3. If yes, wouldn't that introduce unnecessary noise or even audible artifacts?
  4. If no, does that mean quantization errors can accumulate over multiple bit depth reductions?

Am I misunderstanding something fundamental, or is this a potential issue in high-end audio signal chains?


Thanks for any insights!
 
As I understand it, dithering should ideally be applied only once, typically during the final export from a DAW, when converting from 64/32-bit floating point to 24/16-bit fixed point.

The "rule is" dither whenever you down-sample.

But you can't hear dither or quantization noise at 16-bits under normal, reasonable listening conditions. It's not THAT important, and at 24-bits it's even less important.

You CAN hear quantization noise at 8-bits, and you might want to play around with that so you know what "low resolution" actually sounds like. (I've never tried dither at 8-bits.... Who cares? I don't use 8-bits...)
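
If you want to play around with that, here's a quick numpy sketch (my own, nothing official; it assumes a mono float signal in the -1..1 range) that reduces a quiet tone to 8 bits with and without TPDF dither so you can listen to both:

Code:
import numpy as np

def quantize(x, bits, dither=False):
    # Reduce a float signal in [-1, 1] to 'bits' of resolution.
    # Optionally adds TPDF dither (two uniform randoms, +/-1 LSB peak).
    q = 2.0 ** (bits - 1)
    if dither:
        lsb = 1.0 / q
        x = x + (np.random.uniform(-0.5, 0.5, x.shape)
                 + np.random.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.clip(np.round(x * q), -q, q - 1) / q

sr = 48000
t = np.arange(sr * 2) / sr
tone = 0.01 * np.sin(2 * np.pi * 440 * t)   # a quiet (-40 dBFS) sine shows it clearly
plain = quantize(tone, 8)                   # harsh, gritty distortion
dithered = quantize(tone, 8, dither=True)   # hissy, but the tone stays clean

Write both out to WAV (soundfile or similar) and the difference is obvious: without dither the quiet tone turns into buzzy distortion, with dither it just picks up hiss.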

If you recorded from a microphone (or almost anything analog), the analog noise is usually "worse" than quantization noise or dither. The signal can be considered "self-dithered" and there's no need to add more dither noise, although you probably can't hear the added noise anyway, so a tiny amount of dither doesn't hurt.

Is this constant bit depth switching problematic in any way?
You usually can't hear any difference as long as you stay at "CD quality" or better (in a proper blind listening test). But of course, it's "good practice" to avoid unnecessary conversions.
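
To put a rough number on the chain you described, here's a small numpy sketch (the 997 Hz test tone and the plain no-dither rounding are my own assumptions) that goes 64-bit float -> 24-bit fixed -> 32-bit float "DSP" -> 24-bit fixed and measures the total error against the source:

Code:
import numpy as np

def to_24bit(x):
    # Round a float signal in [-1, 1] to 24-bit fixed point (no dither).
    q = 2.0 ** 23
    return np.round(np.clip(x, -1.0, 1.0 - 1.0 / q) * q) / q

sr = 48000
t = np.arange(sr) / sr
src = 0.5 * np.sin(2 * np.pi * 997 * t)        # player's 64-bit float signal

stage1 = to_24bit(src)                         # interface: 24-bit fixed
stage2 = stage1.astype(np.float32)             # DSP monitor: 32-bit float
stage3 = to_24bit(stage2.astype(np.float64))   # back to 24-bit before the DAC

err = stage3 - src
print("peak error: %.1f dBFS" % (20 * np.log10(np.max(np.abs(err)) + 1e-30)))
# Prints roughly -144 dBFS here, i.e. just the ordinary 24-bit quantization
# floor; the extra float/fixed hops add essentially nothing on top of it.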

Does dithering automatically occur every time the bit depth is reduced, even if it's not explicitly shown in the signal path?
That would depend on your DAW. Usually it's optional.

If yes, wouldn't that introduce unnecessary noise or even audible artifacts?
As above, you usually can't hear any difference as long as you stay at "CD quality" or better.

If no, does that mean quantization errors can accumulate over multiple bit depth reductions?
Quantization noise only gets worse when you actually reduce the bit depth. If you increase the bit depth, the quantization noise from the lower resolution remains, and if you later reduce it again (to the same bit depth or higher) it still remains and doesn't get worse.
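
A quick way to convince yourself of that (minimal sketch, my own illustration): quantize a tone to 16 bits, treat it as 64-bit float, requantize to 24 bits and then back to 16. The samples come out bit-identical, so nothing accumulates:

Code:
import numpy as np

def quantize(x, bits):
    # Round a float signal in [-1, 1] to 'bits' of fixed-point resolution (no dither).
    q = 2.0 ** (bits - 1)
    return np.clip(np.round(x * q), -q, q - 1) / q

t = np.arange(48000) / 48000.0
x16 = quantize(0.5 * np.sin(2 * np.pi * 440 * t), 16)   # stand-in for the exported 16-bit file

up = x16 * 1.0                      # "upscaled" to 64-bit float; numerically a no-op
down24 = quantize(up, 24)           # reduced to 24-bit at the interface
back16 = quantize(down24, 16)       # later reduced to 16-bit again

print(np.array_equal(x16, down24))  # True: the 24-bit pass changes nothing
print(np.array_equal(x16, back16))  # True: no new quantization error accumulates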

Dither DOES accumulate. If you have dithered 16-bit audio and you convert it up, you shouldn't dither again if you later convert back down.

But 24-bit dither is about 48 dB lower than 16-bit dither, so if you dither at 24-bits and then reduce to 16-bits and re-dither, the super-quiet 24-bit dither won't matter. (That's if we pretend dither at 24-bits matters at all under any conditions.)
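
That 48 dB figure is just the 8 extra bits at roughly 6 dB per bit:

Code:
import numpy as np
print(20 * np.log10(2.0 ** (24 - 16)))  # ~48.2 dB between the 16-bit and 24-bit floors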

P.S.
Nobody is going to listen to your production and say, "You forgot to dither" or "You used the wrong dither". ;)
 
So I can ignore any dithering options in EQs, digital volume controls, and resampling?
Pretty much, yes. At these bit depths, it should be completely inaudible. Dither used to be way more important in old 16-bit systems. At higher bit depths it's still technically doing the same thing, just at much lower levels. Our ears aren't that finely tuned.

If you like, you can always do a double-blind test to make sure. Take care to follow proper procedure. If you have a good-quality audio interface, you could record the final output of your system (after DAC/preamp) using the same song (choose a high-quality favourite you're highly familiar with) with different settings, and then ABX-compare the recordings. There'd be the ADC THD+N in the recordings of course, but nowadays these are completely transparent as well.
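
If you go that route and want a sanity check before ABXing, something like this (the file names are placeholders, and it naively assumes the two captures are already sample-aligned, which a real loopback recording may not be) reports how deep the difference between the two settings actually sits:

Code:
import numpy as np
import soundfile as sf   # pip install soundfile

a, sr = sf.read("dither_on.wav")    # hypothetical capture, setting A
b, _ = sf.read("dither_off.wav")    # hypothetical capture, setting B

n = min(len(a), len(b))
diff = a[:n] - b[:n]
rms = np.sqrt(np.mean(diff ** 2))
print("null depth: %.1f dBFS" % (20 * np.log10(rms + 1e-30)))
# If this sits down near the ADC noise floor, there is nothing left to ABX.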
 