I'm looking at a new DAC, which is why I posted this here, but I have a more technical question about how the encoding works and why 16 bits is even close to being adequate. I've most often seen encoding described as recording the voltage on the Y axis, which is the same as intensity (not dB). If the encoding is linear, then by my calculations the first bit doesn't turn on until -45 dB!
If we are encoding a ±1 V signal, at 16 bits we have ±32767 units, for a total of 2^16 = 65536 possible values. The smallest encoding increment is 1/65536 V, and 10·Log(1/65536) is -48.1 dB. One bit is for polarity, so call it 10·Log(1/32767), or -45.1 dB. Likewise, at 24 bits the first bit turns on at 10·Log(1/2^(24-1)) = -69.2 dB, and at 32 bits it turns on at 10·Log(1/2^(32-1)) = -93.3 dB, which seems like a barely acceptable value. I also know that certain 32-bit encoding specs include some exponent bits, giving a dynamic range of something over 1000 dB (an inconceivably huge number).
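The arithmetic above can be reproduced in a few lines. This is just a sketch of my calculation, using the same 10·log10 convention as above and reserving one bit for polarity:

```python
import math

def first_bit_level_db(bits):
    """Level of the smallest encoding step for a signed N-bit code,
    using the 10*log10 convention from the post, with one bit
    reserved for polarity."""
    return 10 * math.log10(1 / 2 ** (bits - 1))

for bits in (16, 24, 32):
    # prints roughly -45.2, -69.2, -93.3 dB respectively
    print(bits, round(first_bit_level_db(bits), 1))
```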
So... is the signal encoded purely as intensity, or with some exponential function that gives more weight to higher intensities (and wouldn't that be a new type of distortion)? I do know where the 96 dB signal-to-quantization-noise figure for 16 bits comes from, but that doesn't address the more important question of the level at which the first bit comes on, because you can't describe a signal smaller than that figure. If you have -96 dB of quantization noise on top of a -45.1 dB signal, that q noise is clearly a tiny issue. But describing the original signal adequately at -45 dB would be anything but a small problem! Obviously, based on the testing shown here, small values CAN be described and measured, so how is this issue overcome?
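To make the concern concrete, here is a minimal sketch (names and the 64-sample test tone are my own choices) showing that with plain rounding and no dither, a sine whose peak is below half a quantization step quantizes to all-zero codes:

```python
import math

BITS = 16
FULL_SCALE = 2 ** (BITS - 1) - 1  # 32767 for signed 16-bit

def quantize(x):
    """Round a sample in the +/-1.0 range to the nearest 16-bit code,
    with no dither applied."""
    return round(x * FULL_SCALE)

# A sine whose peak is 0.4 of one quantization step:
amp = 0.4 / FULL_SCALE
samples = [amp * math.sin(2 * math.pi * k / 64) for k in range(64)]
codes = [quantize(s) for s in samples]

# Every sample rounds to code 0 -- the signal vanishes entirely
print(all(c == 0 for c in codes))
```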
Thanks!