
Can you tell me if my understanding of the "upsampling or not upsampling" controversy is correct?

Yep... but it's the only reason I can think of to get a beneficial effect from upsampling... to bypass a poor reconstruction filter, or the total lack of one, and kind of 'fix' that DAC.
 
It salvaged an onboard audio chip for me that otherwise measured fine by the standards of a line-level source (DR 99 dB(A), THD -92 dB, THD+N -87 dB), and without being too much of a hassle either, so I take that as a win.

It's also been a standard recommendation since the 2000s for those blessed with consumer-level ADCs and their crummy filters (a PCM1803A is a bit below average but not altogether atypical for the lot: passband ripple <±0.05 dB @ echo spacing 0.3 ms, stopband <-65 dB past 0.583 fs, so for an "aliasing-free" 20 kHz you want 48 kHz minimum), and it probably still applies to a bunch of onboard codecs to this day.
 
So, in other words, you don't understand the sampling theorem...
Shannon–Nyquist is not interpolation. It is exact reconstruction.

You have a finite number of samples. You interpolate the infinite number of values in between, in the ideal world by sinc interpolation, i.e. Whittaker–Shannon interpolation. In the real world of course you can't do that, and you'll use non-ideal lowpass filtering. Not necessary to get all caught up in semantics, but how is this not interpolation? https://en.m.wikipedia.org/wiki/Whittaker–Shannon_interpolation_formula
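For the curious, that formula is a one-liner to try out in numpy. A minimal sketch (illustrative numbers; a finite sum of scaled sincs, so necessarily only an approximation of the infinite ideal):

```python
import numpy as np

def sinc_interpolate(samples, t, fs):
    """Whittaker-Shannon: each sample contributes a sinc centered at its
    sampling instant n/fs, scaled by the sample value; sum them all."""
    n = np.arange(len(samples))
    # np.sinc(x) = sin(pi x)/(pi x), so this term is a sinc centered at n/fs
    return np.sum(samples[:, None] * np.sinc(fs * t[None, :] - n[:, None]), axis=0)

fs = 100.0                              # sample rate (Hz), illustrative
n = np.arange(200)
x = np.sin(2 * np.pi * 7.0 * n / fs)    # 7 Hz tone, well below fs/2

# reconstruct on a 10x denser grid, away from the edges of the finite record
t = np.arange(0.2, 1.8, 1 / (10 * fs))
y = sinc_interpolate(x, t, fs)
err = np.max(np.abs(y - np.sin(2 * np.pi * 7.0 * t)))
print(err)  # small, but nonzero: the sum is finite, not the ideal infinite one
```

The residual error comes entirely from truncating the sum at the record edges, which is exactly the practical caveat under discussion.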
 
When upsampling there is always interpolation in order to get simpler post filtering and exact reconstruction.
When not upsampling and using a very steep analog filter (only 1st gen CDP) there is no interpolation but exact reconstruction.
Of course, not all filters are created equal and 'perfect' does not exist. Good enough does.
 
When upsampling there is always interpolation in order to get simpler post filtering and exact reconstruction.
When not upsampling and using a very steep analog filter (only 1st gen CDP) there is no interpolation but exact reconstruction.
Of course, not all filters are created equal and 'perfect' does not exist. Good enough does.
I understand that you can oversample and digitally interpolate to be able to use a simpler analogue reconstruction filter. This is not what I am getting at. But likewise, in a world not bound by the laws of causality, we would use Whittaker–Shannon interpolation to interpolate all missing values with the help of a sinc function. This is what reconstruction is; it is interpolation. Practically, it can be made good enough to be called a perfect reconstruction for the use cases we are interested in here, but it is still interpolation.
 
That's what I said.
All upsampling has interpolation unless they simply duplicate the previous sample till the next sample arrives (which mimics the 'filterless' NOS DAC).
The used methods for interpolation may differ.
Because of this the reconstruction can be anything from 'near perfect' to 'not even close to near perfect' (the different filters one can often select).
It is all done by interpolation of course.

The first gen (non-Philips) CDPs did not have any interpolation as there was no upsampling; instead the job was done with a steep analog filter.
These (and DSD) were the ONLY DAC circuits that do not use interpolation.

I get that you were triggered by SIY's remark 'interpolation is not needed' and were reacting to those words only.
 
Let me quote the wikipedia article on the sampling theorem since I don't have access to the original work, it's quite good actually.

"A mathematically ideal way to interpolate the sequence involves the use of sinc functions. Each sample in the sequence is replaced by a sinc function, centered on the time axis at the original location of the sample, nT, with the amplitude of the sinc function scaled to the sample value, x(nT). Subsequently, the sinc functions are summed into a continuous function. A mathematically equivalent method uses the Dirac comb and proceeds by convolving one sinc function with a series of Dirac delta pulses, weighted by the sample values. Neither method is numerically practical. Instead, some type of approximation of the sinc functions, finite in length, is used. The imperfections attributable to the approximation are known as interpolation error."
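The "mathematically equivalent method" in that quote is easy to check numerically: summing one scaled sinc per sample gives exactly the same result as convolving a weighted Dirac comb (zero-stuffed samples) with a single sinc kernel. A sketch with an arbitrary test tone (all numbers illustrative):

```python
import numpy as np

N, R = 64, 8                       # samples, fine-grid points per sample
n = np.arange(N)
x = np.cos(2 * np.pi * 0.1 * n)    # arbitrary band-limited test signal

# Method 1: directly sum one scaled sinc per sample on the fine time grid
t = np.arange(N * R) / R
direct = np.sum(x[:, None] * np.sinc(t[None, :] - n[:, None]), axis=0)

# Method 2: weighted Dirac comb (zero-stuffing) convolved with ONE sinc kernel
comb = np.zeros(N * R)
comb[::R] = x
k = np.arange(-N * R, N * R + 1) / R   # kernel long enough to span the grid
conv = np.convolve(comb, np.sinc(k))[N * R : N * R + N * R]

diff = np.max(np.abs(direct - conv))
print(diff)  # tiny: identical up to floating-point rounding
```

Both are the same finite sum evaluated in a different order, hence the agreement down to rounding error.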
 
The first gen (non Philips) CDP did not have any interpolation as there was no upsampling, instead this was done with a steep analog filter.
These were the ONLY DAC circuits (and DSD) that do not use interpolation.
No see, here is where I do not agree. The interpolation is still happening, but in the analog electronics.
 
No that is simple low pass filtering.
Interpolation is calculating expected values between samples.
There is no calculation present in an analog filter.

As it required many components with tight tolerances, it was difficult, expensive and thus impractical.
Philips's solution was more elegant: 4x oversampling (in order to get 16-bit resolution with just the 14-bit R2R they could make in those days) allowed for simpler post filtering. Interpolation was used here (calculated values at 4x the original sample rate), and after that a simpler post filter was possible.
That post filter was part of the reconstruction.
Interpolation + analog filtering.

Nowadays with delta-sigma, much higher upsampling (and interpolation) allows for even fewer bits and very simple first- or second-order analog filtering to get rid of the MHz switching garbage.

So post filtering is not interpolation (no calculation involved).
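A hedged numpy sketch of that Philips-style scheme (illustrative filter, not the actual chip's): zero-stuff by 4x, then let a digital lowpass "calculate" the in-between values, leaving only far-away images for a gentle analog filter:

```python
import numpy as np

R = 4                                     # 4x oversampling ratio (Philips-style)
fs = 44100
n = np.arange(1024)
x = np.sin(2 * np.pi * 1000 * n / fs)     # 1 kHz tone at 44.1 kHz

# Step 1: zero-stuffing -- insert R-1 zeros between the samples.
# The rate is now 4*fs, but the spectrum has images around multiples of fs.
up = np.zeros(len(x) * R)
up[::R] = x

# Step 2: digital interpolation filter -- a windowed-sinc lowpass at fs/2
# removes the images, which is exactly what "calculates" in-between values.
taps = 255
k = np.arange(taps) - taps // 2
h = np.sinc(k / R) * np.hamming(taps)
h *= R / h.sum()                          # gain R to undo the zero-stuffing loss

y = np.convolve(up, h)[taps // 2 : taps // 2 + len(up)]  # undo filter delay

# The interpolated stream now matches the tone as if sampled at 4*fs
m = np.arange(len(y))
ref = np.sin(2 * np.pi * 1000 * m / (R * fs))
mid = slice(2 * taps, -2 * taps)          # ignore edge transients
err = np.max(np.abs(y[mid] - ref[mid]))
print(err)
```

The residual error is the filter's passband ripple plus its finite stopband rejection, i.e. precisely the "not all filters are created equal" part.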
 
No that is simple low pass filtering.
Interpolation is calculating expected values between samples.
There is no calculation present in an analog filter.
A brick wall filter is an ideal low pass filter. In the time domain that is a sinc function. The convolution of each discrete sample with the sinc will give you the values in between the samples. It is interpolation. This interpolation type even has its own name: the Whittaker–Shannon interpolation formula. You had only discrete data before; afterwards you have data for all times in between those samples because you have interpolated the values. In this case with the help of a low pass filter.
 
A brickwall filter can also be done in analog. It just means it is a steep filter.
It is much easier to make it digitally (interpolation) and also more flexible (easy to shift the frequency depending on the sample rate used).
There are many different ways to calculate the 'in-between' samples, hence the different filter types that exist. Some are more accurate (truer) to the sampling theorem than others.
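To put numbers on "some are truer to the theorem than others", here is a sketch comparing cheap linear interpolation against a truncated-sinc version on a tone near Nyquist (all parameters illustrative):

```python
import numpy as np

R = 8
n = np.arange(256)
x = np.sin(2 * np.pi * 0.35 * n)       # tone at 0.35*fs -- close to Nyquist

t = np.arange(0, 255, 1 / R)           # fine grid between the samples
true = np.sin(2 * np.pi * 0.35 * t)

# cheap option: linear interpolation (a very weak lowpass, far from ideal)
lin = np.interp(t, n, x)

# closer to the theorem: truncated sinc interpolation, 63 neighbours per side
def sinc_interp(x, t, half=63):
    y = np.empty_like(t)
    for i, ti in enumerate(t):
        k = np.arange(max(0, int(ti) - half), min(len(x), int(ti) + half + 1))
        y[i] = np.sum(x[k] * np.sinc(ti - k))
    return y

snc = sinc_interp(x, t)
mid = slice(64 * R, -64 * R)           # keep only points with a full window
e_lin = np.max(np.abs(lin[mid] - true[mid]))
e_snc = np.max(np.abs(snc[mid] - true[mid]))
print(e_lin, e_snc)   # linear error is large; truncated sinc is far smaller
```

Near fs/2 the difference is dramatic; at low frequencies even linear interpolation looks deceptively fine, which is why test tones near Nyquist are the interesting case.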
 
In this case with the help of a low pass filter.
The analog low pass filter operates in continuous time, therefore it does not "interpolate" as such. Its input "data" is a continuous analog signal, in this case usually the output of a zero-order hold of the sample values.
 
A brickwall filter can also be done analog. It just means it is a steep filter.
It is easier to make it digitally and also more flexible (easy to shift the frequency).
There are many different ways to calculate the in-between samples, hence the different filter types that exist.
Yes, exactly. I mean, there are many different ways, but what we always try to achieve is to get as close to the sinc/brickwall interpolation as possible. Mathematically, in the end.
 
The analog low pass filter operates in continuous time, therefore it does not "interpolate" as such. Its input data is analog, in this case usually the output of a zero-order hold of the sample values
Well it is made in two steps. First typically zero order hold or similar, then low pass filtering. Combined they are an approximation of sinc interpolation.
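That two-step picture is easy to verify numerically. A sketch (using an idealized zero-phase FFT brickwall as the lowpass, so only the hold's effect remains): the zero-order hold followed by the filter reproduces the tone, but attenuated by the hold's well-known sinc(f/fs) rolloff ("droop"):

```python
import numpy as np

fs, R = 48000, 16
f = 819 * fs / 2048                  # tone near 0.4*fs (chosen FFT-periodic)
n = np.arange(2048)
x = np.sin(2 * np.pi * f * n / fs)

# step 1: zero-order hold -- repeat each sample R times (the "staircase")
zoh = np.repeat(x, R)

# step 2: lowpass at fs/2 -- here an idealized zero-phase FFT brickwall
Y = np.fft.rfft(zoh)
cut = len(zoh) // (2 * R)            # bin index of fs/2 on the fine grid
Y[cut + 1:] = 0
y = np.fft.irfft(Y, len(zoh))

# the tone survives, attenuated by the hold's sinc(f/fs) amplitude rolloff
droop = np.max(np.abs(y)) / np.max(np.abs(x))
print(droop, np.sinc(f / fs))        # both ~0.757 at this frequency
```

So ZOH + lowpass does approximate sinc interpolation, minus the droop (which real DACs compensate in the digital filter) and minus the phase caveat raised below.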
 
Yes, exactly. I mean, there are many different ways, but what we always try to achieve is to get as close to the sinc/brickwall interpolation as possible. Mathematically, in the end.
In practice there are a few types of filters one can select in a physical DAC. Some are not even close to ideal and are made that way on purpose.
They all use interpolation but differ.
 
Well it is made in two steps. First typically zero order hold or similar, then low pass filtering. Combined they are an approximation of sinc interpolation.
Not quite, as the analog low-pass is not linear-phase and thus the impulse response is not symmetrical.
 
In practice there are a few types of filters one can select in a physical DAC. Some are not even close to ideal and are made that way on purpose.
I tend to consider slow rolloff filters the "screw this, let's just have software upsampling do the heavy lifting and make sure the passband is fine" option.
Yes, a limit of practical implementation.
And not strictly just a limit either. While IIR filters exhibit group delay variation, their typical in-band group delay tends to be a lot lower than for an equivalent FIR filter (e.g. 19/fs down to 5/fs). That is why digital IIR implementations have become fairly popular as a low-latency option, now that computational accuracy tends to be high enough to eliminate the accumulated rounding-error issues potentially associated with feedback-based filters. Their absence of pre-echo also means that periodic passband ripple is only associated with post-echo, which is masked far more easily, making substantially higher ripple levels benign.
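The latency contrast is easy to put numbers on. A sketch (illustrative filters, not any specific DAC's) comparing a 127-tap linear-phase FIR lowpass, whose group delay is a constant (N-1)/2 = 63 samples, against a simple one-pole IIR with a roughly similar cutoff:

```python
import numpy as np

def group_delay(num, den, w):
    """Group delay -d(phase)/dw of a rational filter H = num/den,
    with coefficients in powers of z^-1, evaluated numerically."""
    u = np.exp(-1j * w)                       # z^-1 on the unit circle
    H = np.polyval(num[::-1], u) / np.polyval(den[::-1], u)
    return -np.gradient(np.unwrap(np.angle(H)), w)

w = np.linspace(0.01, 0.2 * np.pi, 500)       # a low-frequency in-band region

# linear-phase FIR lowpass, 127 taps: constant group delay of 63 samples
taps = 127
k = np.arange(taps) - taps // 2
fir = np.sinc(k / 4) * np.hamming(taps)
gd_fir = group_delay(fir, np.array([1.0]), w)

# one-pole IIR lowpass, y[n] = (1-a)*x[n] + a*y[n-1]: ~a/(1-a) samples of
# delay at low frequency, but varying with frequency (not linear phase)
a = 0.5
gd_iir = group_delay(np.array([1 - a]), np.array([1.0, -a]), w)

print(gd_fir.mean(), gd_iir[0])   # ~63 samples vs ~1 sample at low frequency
```

The one-pole example exaggerates the gap, but the trade is the one described above: the IIR buys latency at the price of group delay variation and minimum-phase (post-echo only) behavior.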
 