Lots of DACs advertise S/N ratios of 100 dB or better, and I'm trying to understand how that's possible.
Here's my confusion:
According to the Whittaker–Shannon interpolation formula, we can exactly reconstruct a band-limited analog signal as:
- x(t) = Σₙ x[n] · sinc(t/T − n)
where the sum extends over all integers n, T is the sample period, and sinc(u) = sin(πu)/(πu). Real-world DACs have to truncate the sum to a finite (and typically short) window, for both computational and latency reasons, and this truncation introduces error. In general there's no way to eliminate the error, because it depends on values of x[n] outside the window that the DAC doesn't have. Applying a windowing function (e.g. a Blackman window) can improve the frequency distribution of the error, but it actually increases the error's overall size.
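For concreteness, here's a minimal sketch of the truncated interpolation I have in mind (Python/NumPy; the window half-width W and the Blackman taper are illustrative choices on my part, not a model of any particular DAC):

```python
import numpy as np

def blackman_cont(u):
    """Continuous Blackman window on [0, 1] (zero at the endpoints)."""
    return 0.42 - 0.5 * np.cos(2 * np.pi * u) + 0.08 * np.cos(4 * np.pi * u)

def truncated_interp(x, t, W=32, taper=False):
    """Evaluate the Whittaker-Shannon sum at time t (in units of the
    sample period T), keeping only the W samples on each side of t."""
    n0 = int(np.floor(t))
    n = np.arange(n0 - W + 1, n0 + W + 1)   # the 2*W nearest sample indices
    n = n[(n >= 0) & (n < len(x))]          # clip at the ends of the record
    k = np.sinc(t - n)                      # np.sinc(u) = sin(pi*u)/(pi*u)
    if taper:
        k *= blackman_cont((n - t) / (2 * W) + 0.5)
    return np.dot(x[n], k)
```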
To estimate the error, I'm going to assume a sample rate of 48 kHz and a window of 32 samples on each side, which keeps the latency under 1 ms (32/48000 ≈ 0.67 ms). Since |sinc(u)| falls off like 1/(π|u|), the first term omitted from the sum is of the order of x[n]/(32π) ≈ x[n]/100, which is only about 40 dB below x[n]. There is some cancellation between adjacent terms, since the sinc function alternates in sign, but that won't reduce the error nearly enough to get it below −100 dB.
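As a quick sanity check on that figure (this is just the magnitude of the first omitted sinc coefficient, not a full error analysis):

```python
import numpy as np
# The first omitted term sits ~32 sample periods from t, where the
# normalized sinc has magnitude ~1/(pi*u), i.e. about 1/(32*pi) ~ 1/100.
print(20 * np.log10(1 / (32 * np.pi)))   # -> about -40 dB
```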
I can think of two possible explanations. The first is that the way a DAC's S/N ratio is measured doesn't capture this error. The second is that DACs produce very little error for common test signals, but not for arbitrary signals; this seems quite plausible when the sampling rate is an integer multiple of the test-signal frequency.
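The second hypothesis, at least, is easy to test numerically. Here's a rough experiment sketch, reusing truncated_interp from the snippet above (the 1 kHz tone, the Gaussian "arbitrary" signal, and the wide W=1024 window standing in for the untruncated sum are all arbitrary choices on my part):

```python
import numpy as np

fs, N = 48000, 8192
n = np.arange(N)
rng = np.random.default_rng(0)

signals = {
    "1 kHz tone (fs an exact multiple)": np.sin(2 * np.pi * 1000 * n / fs),
    "random samples":                    rng.standard_normal(N),
}

# Evaluate halfway between samples, away from the record's ends;
# at integer t the truncated sum is exact, so this is where error peaks.
t_eval = np.arange(2000, N - 2000) + 0.5
for name, x in signals.items():
    ref   = np.array([truncated_interp(x, t, W=1024) for t in t_eval])
    short = np.array([truncated_interp(x, t, W=32)   for t in t_eval])
    err_db = 20 * np.log10(np.max(np.abs(ref - short)) / np.max(np.abs(x)))
    print(f"{name}: peak truncation error ~ {err_db:.1f} dB re full scale")
```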
Can someone explain what is going on? Am I missing something here?
Thank you.