
Digital Audio Demystified

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,713
Likes
6,001
Location
US East
[Note] I don't have the privilege to start a thread in the Audio Reference Library. If our moderators or @amirm think these posts are of sufficient value, please move this thread to the Audio Reference Library section. Thanks!

This thread is my attempt to demystify digital audio (or at least the basic parts of digital audio) for readers who aren't very familiar with this area of digital signal processing. The goal is to help readers understand how digitized audio works, and thereby dispel some of the unfounded folklore around it.

This opening post is to explain what the (much maligned) sinc reconstruction filter is and how it works. First, let's see what an interpolation process is and why we need it.

When we digitize audio, we take “snapshots” of the audio waveform at discrete sampling points that are equally-spaced in time – and the interval between adjacent samples is the sampling period. To convert the digitized samples back into a continuous time signal, we'll need to “recreate” the missing parts of the analog waveform between the digital samples. We know the recreated waveform, if it is true to the original, must match the digital sample exactly at each of the sampling points – we need to somehow “connect the dots” to recover the missing parts. The mathematical term for this connect-the-dots operation is interpolation. Therefore, if we need to reconvert a digitized signal back to a continuous time signal, we need to interpolate. The rest of this post explains how the sinc function works as an interpolator. In the next post I'll compare the sinc function to two other interpolators.
Fig 1 digitized_waveform.png

Figure 1 shows an example of a short segment of an analog waveform (the original signal) and its digitized samples. To simplify matters, the sampling rate used in this example is normalized to 1 sample per some unit of time. This means the time stamps of the digital samples are at t = n, where n are integers = … -3, -2, -1, 0, 1, 2, 3, …
Fig 2 sinc_function.png


Figure 2 shows the sinc function. This is the normalized form of the sinc function, and it is the form we will use in this discussion. [The unnormalized version is sinc(t) = sin(t)/t.] The sinc function gives zeros at all integer values of t, except at t = 0, where it is 1. [Technically, from the equation shown in Figure 2 the sinc function is undefined at t = 0, as the equation gives 0 divided by 0. However, when t is very, very close to 0, the value given by the equation gets very, very close to 1. So we'd say as t approaches 0, sin(πt)/(πt) approaches 1. We follow this property and define sinc(0) = 1.] Note that the sinc function gives non-zero values when t is not an integer.
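If you like to poke at this numerically, numpy's `numpy.sinc` is exactly this normalized form. Here is a minimal sketch (not part of the figures above) that confirms the zero-at-integers property:

Code:
import numpy as np

# numpy.sinc(t) computes sin(pi*t)/(pi*t), with sinc(0) defined as 1 -- the normalized form.
t = np.arange(-5, 6)        # integer sampling instants ..., -2, -1, 0, 1, 2, ...
print(np.sinc(t))           # 1.0 at t = 0, (numerically) zero at every other integer
print(np.sinc(0.5))         # non-integer t gives a non-zero value: 2/pi ≈ 0.6366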

Since all our sampling points fall at integer values of time t, the property of the sinc function that it is zero at all the sampling points except one makes it very convenient for use as an interpolator. As shown in Figure 3, if we have a digital signal that has only one non-zero sample, and we scale and time shift a sinc function to match the non-zero sample, this sinc function will automatically pass through all the other samples (which are all zero). This scaled and time shifted sinc function is therefore a valid interpolation for our digital signal with a single non-zero sample.
Fig 3 sinc_scaling_shifting.png

If we have a digital signal with 2 non-zero samples, we can use 2 sinc functions to separately fit each of the samples (see Figure 4). The property that the sinc function is zero at all but one integer value of time comes in handy again. When we sum the 2 sinc functions together, the resultant sum is a valid interpolation of our signal with the 2 non-zero samples. Each sinc function only contributes to fitting its corresponding sample and doesn't affect any of the other samples. The sum of the sinc functions therefore interpolates all the samples – the 2 non-zero ones and the rest of the zero ones.
Fig 4 sinc_fitting.png

We can therefore split any digitized signal into a series of component signals – each component having only a single non-zero sample. The top plot in Figure 5a shows a small segment (11 samples) of our example digitized signal. The time stamps are labeled 0 to 10. Note that this signal started long before time 0 and continues long after time 10, which is to say our example signal actually is much longer than 11 samples. Below the top plot is our signal segment split into its 11 single non-zero sample component signals.

To the right (see Figure 5b), we fit a sinc interpolator to each of these component signals. When we sum up these sinc functions, we will get a function that interpolates the original digitized samples (see top plot in Figure 5b). The dashed curve is the sum of the sinc interpolators, and it is our reconstructed continuous time signal. As seen visually, the reconstructed waveform matches the original continuous time waveform quite precisely. When we have more than a few “active” samples, the “ringing” or oscillations seen in the 1 or 2 non-zero sample cases (Figures 3 and 4) disappears.
Fig 5 sinc_interpolate_series.png

This operation of taking the samples one at a time, fitting the interpolator function to each, and then summing up these interpolators, is convolution.
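To make the scale-shift-and-sum idea concrete, here is a minimal numpy sketch of the same procedure. The 11 samples and the dense time grid are made up purely for illustration, and the sinc kernels are simply truncated to the samples we happen to have:

Code:
import numpy as np

n = np.arange(0, 11)                      # sample indices 0..10, as in Figure 5
x = np.sin(2 * np.pi * 0.12 * n)          # made-up digitized samples

t = np.linspace(0, 10, 1001)              # dense time grid for the reconstruction

# Reconstruction: x_hat(t) = sum over n of x[n] * sinc(t - n).
# Each scaled, shifted sinc passes through its own sample and is zero at all the others.
x_hat = sum(xn * np.sinc(t - tn) for tn, xn in zip(n, x))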

Several comments:
  • It is evident, at least for this example, that the sinc interpolation is a pretty good one. The reconstructed signal is nice and smooth, and has no resemblance to the stair-steps that are often associated with reconstructed digital signals in advertisements. The fit also looks better than a linear interpolation where we connect the samples with straight lines.
  • The convolution process shown above is the continuous time equivalent of the convolution process we use in FIR filters. In the FIR filter case, we convolve the input digital signal with a specially crafted (finite) impulse response that is the filter “kernel”. Here we convolve the input with the sinc function that is the interpolator kernel.
  • The convolution process of the FIR filter is convolution in the digital domain, i.e. convolving a digital signal (input) with a digital impulse response (convolution kernel). The convolution in this post is a bit different. The input is a series of digital samples, but the convolution kernel is a continuous time function (the sinc function). This seeming incompatibility between a digital input and a continuous time kernel is resolved mathematically by considering the input as an “impulse train” in continuous time.
    (An impulse train is a continuous time signal consisting of a series of impulses at regular intervals. Between the impulses the value of the impulse train is zero. These impulses align with the digital samples, and the “strength” of each impulse is equal to the amplitude of the corresponding digital sample. We'll revisit the concept of the impulse train in post #3 when we show that the discrete time to continuous time signal interpolation/reconstruction process is the same as passing an impulse train through a low pass filter. We'll also see why low pass “filtering” a “digitized signal” will give us its analog reconstruction.)
  • The method of computing the convolution shown in this example is not an efficient way to compute a convolution. It is, however, easier to understand how the convolution process operates with this method than with the more computationally efficient ones.
  • Only a very short segment of 11 samples is shown in the example, but the signal is much, much longer. One second of CD quality audio is 44,100 samples. There are many samples before and after the shown segment. The sinc function spreads out horizontally (in time), and is theoretically infinitely wide. Its magnitude decays at a rate inversely proportional to the horizontal “distance” from its center peak. Therefore, the sample 500 sampling periods before the shown segment (i.e. t = -500) or the sample 500 periods after (i.e. t = 510) will still affect the interpolated waveform in the shown segment. We often hear that we need an infinitely long sinc function for the “perfect” interpolation. However, all our digitized signals are quantized. For example, the value of the least significant bit at 16 bit resolution is 1/65536. We therefore don't really need infinitely long sinc functions, as they will drop below the quantization noise floor within a finite distance (a back-of-the-envelope sketch follows this list). There are also clever ways (which we aren't getting into) to shape the sinc function to accelerate its decay, which causes only very minor degradation to the interpolation accuracy but greatly reduces the length of the convolution kernel.
  • The discrete time signal to continuous time signal reconstruction method shown here cannot be practically implemented in electrical circuits, and therefore can't be used for an actual D to A converter. However, we can easily see that we can use this method for (integer or non-integer multiple) sampling rate conversion. Most over-sampling D/A converters over-sample in integer multiples of the sampling frequency. They use simpler and more efficient methods.
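Here is the back-of-the-envelope sketch promised above, regarding how long the sinc kernel really needs to be. It uses the fact that the sinc tail is bounded by 1/(π|t|), so this is a rough upper bound rather than an exact requirement:

Code:
import numpy as np

# 16-bit least significant bit
lsb = 1 / 2**16

# |sinc(t)| <= 1/(pi*|t|), so the tail is guaranteed to be below one LSB once 1/(pi*t) < lsb.
t_needed = 1 / (np.pi * lsb)
print(t_needed)   # ≈ 20861 sampling periods on either side of the peak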
 
Last edited:
OP
NTK

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,713
Likes
6,001
Location
US East
We saw how the sinc interpolator worked in post #1. What if we use other interpolating functions? We can surely construct functions that have the same property as the sinc function that makes it an efficient interpolator, namely giving zeros at all the sampling points except one.

In this post we will look at two other interpolators. The first is the unit square pulse, as shown in Fig 7. It is square because it has a width of 1 and a height of 1. It wasn't mentioned in post #1, but one more important characteristic of these interpolators is that the “area under the curve”, or more precisely the integral from -∞ to +∞, is 1.
Fig 7 unit_square.png

Figure 8 shows the results of the interpolation using the square pulse, and it is the “stair step” waveform. With a small amount of imagination, one can visualize that the unit square pulse will give a decent reconstruction of a square wave. However, it still can't give a perfect reconstruction unless the half-period of the original square wave is an integer multiple of the sampling period. The unit square pulse can only reconstruct pulses whose widths are integer multiples of the width of the unit square, which is one sampling period.

Fig 8 square_interpolate_series.png

We can also see that its reconstruction of our original waveform is inferior to that of the sinc interpolation. Therefore, while the unit square pulse may be better at reconstructing square waves than the sinc function, the sinc function is much better if the original waveform is “smooth”. (As will be discussed in a later post, “smooth” in our case means properly band-limited per the sampling theorem.)

This stair step interpolation is called zeroth order hold (ZOH). In most depictions, unlike this post, the digital samples are aligned to the leading edge of the pulse instead of the center. This is just a simple constant time shift of half a sampling period, and will cause no significant difference. Some DACs will produce the ZOH waveform as an intermediate step of the D to A process. The stair step analog electrical signal is produced by a “sample-and-hold” process – basically: generate an electrical signal at the amplitude of the digital sample, hold the output level for one sampling period, and repeat the process for the next digital sample. The ZOH D to A process will be covered in a later post.

The next interpolator we'll look at is the unit triangular pulse (see Figure 9).
Fig 9 unit_triangle.png

The interpolation result is shown in Figure 10. The unit triangle gives a linear interpolation – connecting the digital samples using segments of straight lines. This piecewise linear interpolation is also known as first order hold (FOH). We won't be looking further into the FOH interpolation.

So far, the indication is that the sinc interpolator seems to be a reasonable choice as an interpolator for reconstructing continuous time signals from discrete digital samples. It does a much better job at reconstructing our example signal than the stair step and piecewise linear interpolators.

Fig 10 triangle_interpolate_series.png
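For readers who want to reproduce something like Figures 5, 8, and 10 themselves, here is a minimal numpy sketch comparing the three interpolators on a made-up set of samples (the signal, grid, and truncation are purely illustrative):

Code:
import numpy as np

n = np.arange(0, 11)
x = np.sin(2 * np.pi * 0.12 * n)          # made-up digitized samples
t = np.linspace(0, 10, 1001)              # dense reconstruction grid

# Zeroth order hold (unit square pulse): hold each sample for one period, centered on the sample.
zoh = x[np.clip(np.round(t).astype(int), 0, len(x) - 1)]

# First order hold (unit triangular pulse): straight lines between samples.
foh = np.interp(t, n, x)

# Sinc interpolation: sum of scaled, shifted sinc functions (truncated to the available samples).
sinc_rec = sum(xn * np.sinc(t - tn) for tn, xn in zip(n, x))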
 
Last edited:
OP
NTK

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,713
Likes
6,001
Location
US East
In post #1 the impulse train was briefly mentioned. In this post we will investigate the relationship between the original analog signal, and the digitized signal as represented by an impulse train.


What is an impulse? The formal name of a unit impulse is the Dirac delta function. No, it is not named after the company that sells and licenses room EQ (and other audio) technology. It is named after Paul Dirac, who shared a Nobel Prize with Schrödinger.


The Dirac delta function (or simply the delta function), δ(t), is basically a pulse that exists only theoretically: it is infinitely narrow and infinitely tall, centered at t = 0, and has an area under the pulse (its “strength”) of 1.


Figure 11 depicts the impulse train representation of a digitized signal. A delta function can be scaled and time shifted to match each digital sample as: α δ(t – τ), where τ is the time (in sample number) and α is the amplitude of the sample.
Fig 11 Analog waveform and Impulse Train.png

To understand the relationship between the impulse train and the analog signal, we'll perform a spectrum analysis on the impulse train.

We'll evaluate the spectral content of a signal through the Fourier transform. The Fourier transform of the delta function is simple: F[δ(t)] = 1. For a scaled and time shifted delta function α δ(t – τ), we will use the time-shift and linearity properties of the Fourier transform:
Fourier time shift.png
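For reference, in case the attached image doesn't display, the property being used here is the standard one (writing the Fourier transform as a function of frequency f): F[δ(t)] = 1, and by time shifting and linearity, F[α δ(t – τ)] = α ∫ δ(t – τ) e^(–j2πft) dt = α e^(–j2πfτ).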

The Fourier transform of the impulse train is therefore the summation of the Fourier transforms of each of the scaled and shifted delta functions.

Let's take a look at an example of a single frequency sine tone. The spectrum of a single frequency sine wave is a single spike at the sine wave frequency. The frequency of the sine wave, f, in our example is 1/8 the sampling frequency (i.e. each cycle gives 8 samples), or f = 1/8 Fs.

Fig 12 Impulse Train 17-sample.jpg

First we look at an impulse train that covers 2 periods of the sine wave. The 17-sample impulse train (Figure 12) does not represent a continuous single frequency sine wave, but one that abruptly starts and abruptly stops after 2 cycles. There is going to be significant spectral leakage due to the discontinuities at the beginning and end. Figure 13 is the spectrum of this impulse train as computed by summing the series of the Fourier transforms of delta functions.

Fig 13 Spectrum of 17-sample Impulse Train.jpg

We can see peaks at 1/8 Fs (Fs = sampling frequency), 7/8 Fs, 9/8 Fs, 15/8 Fs, etc. The peaks are at the sine wave frequency of 1/8 Fs, with duplicated images at Fs ± f, 2 Fs ± f, 3 Fs ± f, …

If we increase the number of impulses to 65 (i.e. 8 cycles), the spectrum of the impulse train becomes (see Figure 14):

Fig 14 Spectrum of 65-sample Impulse Train.jpg

The spectral peaks are at the same frequencies but are more prominent (narrower main lobes and higher sideband rejection). Figure 15 is the spectrum of the impulse train with 513 impulses (64 cycles).

Fig 15 Spectrum of 513-sample Impulse Train.jpg


The trend of the spectral peaks becoming more and more prominent continues. We can reason that (for a rigorous mathematical proof, consult a textbook :D) for a pure sine wave of f = 1/8 Fs, if the impulse train is infinitely long, we will only see non-zero values at f, Fs ± f, 2 Fs ± f, 3 Fs ± f, …, and zero elsewhere.
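For those who want to reproduce something like Figures 13-15, here is a minimal numpy sketch of the computation described above – summing the Fourier transforms of the scaled, shifted impulses over a grid of frequencies. The lengths and frequencies are the ones used in the figures; the plotting is left out:

Code:
import numpy as np

Fs = 1.0                       # normalized sampling frequency
f0 = Fs / 8                    # sine wave frequency = 1/8 Fs
N = 513                        # 513 impulses (64 cycles), as in Figure 15

n = np.arange(N)                               # impulse times, in sampling periods
a = np.sin(2 * np.pi * f0 * n)                 # impulse strengths = the sample values

freqs = np.linspace(0, 2 * Fs, 4001)           # evaluate the spectrum from 0 to 2 Fs
# Spectrum of the impulse train: sum over n of a[n] * exp(-j*2*pi*f*n)
spectrum = np.array([np.sum(a * np.exp(-2j * np.pi * f * n)) for f in freqs])

# The magnitude peaks sit near f0, Fs - f0, Fs + f0, 2 Fs - f0, ...
print(freqs[np.abs(spectrum) > 0.9 * np.abs(spectrum).max()])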

Figure 16 is the spectrum plot for a different signal frequency of 3/8 Fs.

Fig 16 Spectrum of 513-sample Impulse Train, f=3_8 Fs.jpg



We can conclude, for a single frequency sine wave:
  • When the discrete digital samples are represented as an impulse train, the impulse train is in effect the original signal plus an infinite series of images of the original signal (in the frequency domain).
  • The frequency of the lowest frequency image is Fs – f. Therefore, if f > 1/2 Fs, the first image will be “reflected” to a frequency lower than that of the original signal. This is the problem of aliasing (a short numerical illustration follows this list). To avoid aliasing, we must keep f below 1/2 Fs, which is the well known Nyquist frequency and the band-limiting requirement of the sampling theorem.
  • Here is the important part – we can recover the original sine wave if we pass the impulse train through a low pass filter to filter out all the images at Fs ± f, 2 Fs ± f, 3 Fs ± f, …
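To illustrate the aliasing point above numerically: when sampled at Fs, a sine at 5/8 Fs produces exactly the same samples as one at 3/8 Fs (with the sign flipped), so the two cannot be told apart after sampling. A minimal check:

Code:
import numpy as np

n = np.arange(16)                                 # sample indices
above_nyquist = np.sin(2 * np.pi * (5 / 8) * n)   # f = 5/8 Fs, above the Nyquist frequency
reflected = -np.sin(2 * np.pi * (3 / 8) * n)      # its image "reflected" down to 3/8 Fs
print(np.allclose(above_nyquist, reflected))      # True: the samples are identical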
Now also comes the Fourier theorem, which says any physical signal can be decomposed into the summation of a series of sine and cosine waves (cosine waves are just sine waves with a 90 degree phase shift). Since everything we have discussed here is linear, the method will work just as well for an arbitrary signal that is band-limited. Therefore, an arbitrary band-limited signal, when properly sampled, can be (almost) perfectly recovered, subject to some minor imperfections due to inevitably imperfect hardware.

The sinc function, when analyzed in the frequency domain, can be shown to be an ideal brick-wall low pass filter (please consult a standard textbook). Therefore, it is the ideal choice for digital reconstruction and interpolation to recover the original continuous time signal. All the talk about the sinc function causing “pre-ringing” is just myth propagated by people who do not understand the theoretical basis of digital sampling worked out years ago by Nyquist, Shannon, Whittaker, and others.
 
Last edited:
OP
NTK

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,713
Likes
6,001
Location
US East
The discussions in posts #1-3 have been entirely about mathematical computations. In this post we'll look at how we can convert discrete digitized samples into continuous time electrical (analog) signals.

In post #3 we saw that when we take discrete samples of a band-limited continuous time waveform, and “convert” these samples into a continuous stream of impulses – an impulse train, we have in effect the original waveform plus an infinite series of images in the continuous time domain. If the original waveform is a sine wave of frequency f, the impulse train will be the summation of a series of sine waves, all with the same amplitude as the original waveform, at frequencies: f, fs ± f, 2fs ± f, 3fs ± f, … , where fs is the sampling frequency.

Therefore, theoretically, if we can produce a series of sharp narrow electrical pulses that matches the impulse train, and pass these electrical pulses through a brick-wall low pass filter with cutoff at fs/2, we will eliminate the images, and have faithfully reconstructed the original continuous time signal – the D-to-A process.

Alas, producing perfectly sharp narrow impulses isn't so easy. An ideal impulse is infinitely tall and infinitely narrow, but has a finite area (time integral) equal to the amplitude of the discrete sample. A much more practical way is to squash the impulse down while widening it to keep the area the same. We'll squash the impulses down until they extend half a sampling period to either side and just touch their neighboring samples. This can be done with an electronic circuit that outputs a voltage matching the sample value, holds that voltage constant until it is time for the next sample, repeats the process for the next sample, and so on. This sample-and-hold circuit will produce the familiar stair-step waveform we saw in post #2.

Now, the question is how this stair-step waveform, our impulse train, and the original waveform are related. Recall from post #2 that when we convolve the digital samples with a unit square pulse, the result is the stair-steps. The same applies here – the continuous time stair-step waveform is the result of convolving the impulse train with a unit square pulse.

Let's examine a digitized sine wave in the time and frequency domains. Figures 17 show the time domain waveform and its corresponding frequency spectrum – we have only a single frequency component.
Fig 17a Original Continuous Time Domain.png

Fig 17b Original FT.png

Fourier transforms are what we use to obtain our frequency spectra. Fourier transforms give us the Fourier coefficients, which are the magnitudes and phases of the frequency components of the signal. Fourier coefficients can be given in magnitude and phase, or equivalently as complex numbers.

Typically in measurement reports we are only presented with the magnitudes. However, for analyses, we need the complete set. You may notice that in these examples I used a cosine wave and the sampling points are placed at the midpoints of the stair-steps. They are used because they make the mathematics less messy.

The cosine function and the centered unit pulse are what we call “even” functions, which means their values are mirrored about the y-axis (i.e. f(-t) = f(t)). The Fourier coefficients of even functions are real numbers instead of complex ones (i.e. they are zero phase). Unlike the typical frequency response magnitude graphs that show dB values (which cannot represent negative numbers), these frequency spectrum graphs show the complete Fourier coefficients, and their values can be positive or negative. By using these even functions, we avoid having to deal with imaginary numbers.
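For the curious, the reason even functions have purely real Fourier coefficients is a short symmetry argument. Splitting e^(–j2πft) into its cosine and sine parts gives X(f) = ∫ x(t) e^(–j2πft) dt = ∫ x(t) cos(2πft) dt – j ∫ x(t) sin(2πft) dt. For an even x(t), the second integrand is even × odd and integrates to zero, leaving only the real cosine term.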

Figures 18 show those for the impulse train. Note that the arrow heights represent the impulse strengths – the area they cover, not their amplitudes. The amplitudes of ideal impulses are infinite.
Fig 18a Impulse Train Time Domain.png

Fig 18b Impulse Train FT.png


In post #2 we saw that the stair-step waveform can be obtained by the convolution of the unit square pulse and the impulse train. In the frequency domain, the unit square pulse becomes our familiar sinc function. Since we convolved the impulse train and the unit square pulse in the time domain, in the frequency domain we multiply them together. Figures 19 show the time and frequency domain representations of the stair-step waveform. The dashed curve in Figure 19b is the unit square pulse in the frequency domain.
Fig 19a FOH Time Domain.png

Fig 19b FOH FT.png


We can now see the frequency domain relationship between the impulse train and the stair-step waveform. We still have the same frequency components, but when they are made into stair-steps, their amplitudes are modulated by the sinc function. Therefore, to reconstruct the analog electrical signal, we can pass the electrical stair-step signal through a low pass filter with the frequency response shown in Figure 20.
Fig 20 Anti-Imaging Filter NOS.png



For fun, and to convince ourselves that Figure 19b indeed represents the stair-steps in the frequency domain, below is an animation constructed by adding the frequency components shown in the figure one at a time, up to the 10,000th image. You can see the Gibbs phenomenon, which is the result of excluding the higher frequency part of the series. (There are some plotting artifacts in the animation as we go up higher in the image frequencies. Since the spikes are getting extremely narrow, the plotting program sometimes fails to render them precisely.)
stair_steps.gif



You can also see that the fundamental is of a lower amplitude than the original waveform. As shown in Figure 20, an EQ step is necessary to correct the frequency response error. The EQ can be done in the digital domain prior to the stair-step waveform generation, or at the analog low pass filter after the stair-step waveform generation.
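To put a rough number on that droop, here is a quick sketch, assuming a 44.1 kHz NOS output with a full-period hold (an illustrative assumption, not any particular product):

Code:
import numpy as np

fs = 44100.0        # sampling rate, Hz
f = 20000.0         # frequency of interest, Hz

# The stair-step (ZOH) frequency response magnitude is |sinc(f/fs)| (normalized sinc).
droop_db = 20 * np.log10(np.abs(np.sinc(f / fs)))
print(droop_db)     # ≈ -3.2 dB at 20 kHz, which is what the EQ step has to correct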


Some “high end audio” DAC manufacturers choose to forgo the filter and EQ steps and output the stair-steps in the filterless NOS (non-oversampling) mode. Here is an example: Stereophile test report (see Figure 2).


Oversampling
Getting an analog filter with the brick-wall response as shown in Figure 20 isn't easy. A solution is oversampling. Here is a conceptual explanation of how oversampling works:
If our sampling frequency is fs, the sampling theorem says that our usable signal frequency bandwidth is 0 to fs/2, and we'll have images starting at fs/2. To eliminate the images, we'll need a perfect brick-wall filter that cuts everything off above fs/2 but doesn't affect anything below fs/2, and we have to do this in the analog domain after the generation of the stair-steps – not so easy.
What if we restrict our original signal bandwidth to, for example, 0 to fs/4? Now, the lowest frequency image is at fs - fs/4 = 3/4 fs. There is nothing between fs/4 and 3/4 fs, and our analog filter can transition from flat (or whatever EQ is required to counter the sinc attenuation due to the stair-steps) at fs/4 to full attenuation at 3/4 fs, a much more doable task. This is 2x oversampling. By “oversampling” we mean we are sampling at 2x the theoretically required sampling frequency. We can make it even easier for the post analog filter if we use larger oversampling ratios, such as 4x, 8x, or higher.
Since we already have the digital samples, we aren't going back to re-record them at higher sampling rates. However, we can use interpolation to generate arbitrarily more samples in between the existing ones, which is in effect oversampling.

Has anyone noticed that the DAC digital filters we have been talking about so much lately haven't made their appearance yet? Here is where they show up – they are used for the oversampling interpolation. The method in post #1 is rather computationally intensive. If we oversample by integer multiples, there is a simple way.

We just add zeros between the samples, and pass the samples through a low pass (interpolation) filter, all while we are still in the digital domain. Figures 22 show this process for 4x oversampling. The left figure shows the original samples with 3 zeros inserted between each sample pair. The right figure shows the convolution process; in this case a sinc filter kernel is used. Since the zero samples do not contribute, they are skipped in the convolution calculations.

Figures 22 only show 6 samples in the summation. The actual summation involves more than 6 samples, and the number depends on the width of the filter kernel, which we usually refer to as the number of filter taps.
Fig 22 oversample.png
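Here is a minimal scipy sketch of this zero-stuffing approach for 4x oversampling. The filter length, cutoff, and test signal are illustrative choices, not anyone's production design:

Code:
import numpy as np
from scipy import signal

fs = 44100
L = 4                                         # 4x oversampling
n = np.arange(1024)
x = np.sin(2 * np.pi * 1000 * n / fs)         # made-up 1 kHz input

# Insert L-1 zeros between samples; this creates images that the interpolation filter must remove.
up = np.zeros(len(x) * L)
up[::L] = x

# Low pass (interpolation) FIR designed at the new rate L*fs: pass the audio band,
# attenuate the images above the original fs/2. The gain of L makes up for the inserted zeros.
taps = L * signal.firwin(255, 21000, fs=L * fs)
y = signal.lfilter(taps, 1.0, up)             # oversampled output at 176.4 kHz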


Note that the requirement of the oversampling filter is the same as the anti-imaging filter in the non-oversampled case – a steep brick-wall. This is where the advantage of DSP is clear. It is much much easier to construct a very steep slope filter in the digital domain than in the analog domain. The oversampling operation replaces the requirement of a very difficult to realize brick-wall analog filter with a digital one that is much easier to achieve.

Audio DACs have almost entirely moved to sigma-delta (ΣΔ) modulation for the conversion process. For more details, please refer to this thread.
 
Last edited:

abdo123

Master Contributor
Forum Donor
Joined
Nov 15, 2020
Messages
7,446
Likes
7,955
Location
Brussels, Belgium
I think you should start with the concept of the Fast Fourier Transform first.

I think the fact that any analog AC current can be represented (digitally) with a collection of pure sine waves is very important for understanding how digital audio works, or why it works the way it does.

If you need me to delete this post to reserve more posts let me know.
 
Last edited:

antcollinet

Master Contributor
Forum Donor
Joined
Sep 4, 2021
Messages
7,708
Likes
13,001
Location
UK/Cheshire
Brilliant - just what I was considering I wanted to understand better. Looking forward to the next posts. Also happy to delete this post to get it out of the way of any subsequent posts of yours.
 
OP
NTK

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,713
Likes
6,001
Location
US East
Brilliant - just what I was considering I wanted to understand better. Looking forward to the next posts. Also happy to delete this post to get it out of the way of any subsequent posts of yours.
I haven't started on post #4 yet, so it may take another day. The first 4 posts will cover what I have intended. Post #4 will be on D/A conversion, first using zeroth order hold (sample and hold, which results in the stair-step waveform), and then how to recover the analog signal from the stair steps. I don't know nearly enough about sigma delta modulation, so I'm not going to write about it :) .
 
Last edited:
OP
NTK

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,713
Likes
6,001
Location
US East
I think you should start with the concept of the Fast Fourier Transform first.

I think the fact that any analog AC current can be represented (digitally) with a collection of pure sine waves is very important for understanding how digital audio works, or why it works the way it does.

If you need me to delete this post to reserve more posts let me know.
Don't know how to tie in FFT with the topics I have in mind. Much of what I wrote is in the continuous time domain, and the Fourier transform I used in post #3 was the continuous time version.

My intent was to explain that to (re)convert a discrete digital signal to a continuous analog signal, effectively what one needs to do is "low pass filter" the digital signal, with the cutoff/corner frequency at the Nyquist frequency. Something in me is quite annoyed by people maligning the sinc function, and that makes me want to "restore its good name".

I did mention that we needed the Fourier theorem which states that all real life signals can be decomposed into a sine and cosine series, but I didn't elaborate beyond that.

Maybe when people raise questions we can get deeper into FT and FFT.
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,402
Likes
18,360
Location
Netherlands
I haven't started on post #4 yet, so it may take another day. The first 4 posts will cover what I have intended. Post #4 will be on D/A conversion, first using zeroth order hold (sample and hold, which results in the stair-step waveform), and then how to recover the analog signal from the stair steps. I don't know nearly enough about sigma delta modulation, so I'm not going to write about it :) .
Please then also add why sample and hold is not a good digital representation (even after filtering), at least not if you use the normal sampled values. One can actually improve on this a bit though.
 

dc655321

Major Contributor
Joined
Mar 4, 2018
Messages
1,597
Likes
2,235
However, we can easily see that we can use this method for (integer or non-integer multiple) sampling rate conversion.

Indeed, sinc interpolation is often the basis for resampling to arbitrary rates: convolve a digital signal with an appropriate sinc to produce an analog/continuous signal, then resample that at whatever rate desired with a pulse train. All without leaving the computational domain.

BTW are you trying to take Monty’s job?!? :D
 

LTig

Master Contributor
Forum Donor
Joined
Feb 27, 2019
Messages
5,833
Likes
9,573
Location
Europe
The sinc function, when analyzed in the frequency domain, can be shown to be an ideal brick-wall low pass filter (please consult a standard textbook). Therefore, it is the ideal choice for digital reconstruction and interpolation to recover the original continuous time signal. All the talk about the sinc function causing “pre-ringing” is just myth propagated by people who do not understand the theoretical basis of digital sampling worked out years ago by Nyquist, Shannon, Whittaker, and others.
Well said. Thanks for the explanations, I've learned something new despite having knowledge about sampling.
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,894
Likes
16,710
Location
Monument, CO
Nice job @NTK and a good complement/extension of the basic series of threads I created so long ago. Mine focused on the frequency domain so it is really great to have the time domain covered so well! I also have a very good idea of the number of hours it takes to put together a thread like this (it is a LOT of work, at least for my little pea brain).

Thank you! - Don

p.s. For my threads, I just click on the "Report" link at the bottom left and self report to the moderation team with a plea to move the thread to the appropriate place. Sometimes that's the Audio Reference Library, sometimes the dumpster... :)
 
OP
NTK

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,713
Likes
6,001
Location
US East
Thread Stickied and moved to Audio Reference Library. Nice work @NTK
Thank you!
I also have a very good idea of the number of hours it takes to put together a thread like this (it is a LOT of work, at least for my little pea brain).
You don't know the advantage that I have right now! LOL. I am travelling to Hong Kong and landed a few days ago -- which means I am locked up in a quarantine hotel room and have nothing to do for 7 days. (Today is my last full day, hooray!)

To add insult to injury, these are the 3 meals a day since I checked-in. WARNING: This is for everyone's amusement only, NO COVID TALK PLEASE!
BTW are you trying to take Monty’s job?!? :D
Yes! I am after that pot of gold of zero. I want at least 25% of nothing :D
 

Attachments

  • Prison Food.pdf
    1.5 MB · Views: 191

AlephAlpha001

Member
Joined
Mar 12, 2022
Messages
94
Likes
175
Location
Hong Kong
Thank you!

You don't know the advantage that I have right now! LOL. I am travelling to Hong Kong and landed a few days ago -- which means I am locked up in a quarantine hotel room and have nothing to do for 7 days. (Today is my last full day, hooray!)

To add insult to injury, these are the 3 meals a day since I checked-in. WARNING: This is for everyone's amusement only, NO COVID TALK PLEASE!

Yes! I am after that pot of gold of zero. I want at least 25% of nothing :D
Let me just say after having been out to the supermarket in the cold windy rain an hour ago that you're not missing much by being stuck in the quarantine hotel today. Now the food... that's another story.
 
  • Like
Reactions: NTK

LTig

Master Contributor
Forum Donor
Joined
Feb 27, 2019
Messages
5,833
Likes
9,573
Location
Europe
To add insult to injury, these are the 3 meals a day since I checked-in. WARNING: This is for everyone's amusement only, NO COVID TALK PLEASE!
Did you lose or gain weight? :p
 

txbdan

Active Member
Joined
Apr 21, 2020
Messages
213
Likes
199
  • The discrete time signal to continuous time signal reconstruction method shown here cannot be practically implemented in electrical circuits, and therefore can't be used for an actual D to A converter. However, we can easily see that we can use this method for (integer or non-integer multiple) sampling rate conversion. Most over-sampling D/A converters over-sample in integer multiples of the sampling frequency. They use simpler and more efficient methods.

How does an actual DAC chip work? I've seen simple DAC circuits using resistors summed with an opamp to convert digital bits into voltages, but that wouldn't do any interpolation. Can you discuss the actual electrical circuits that implement interpolation on-chip?
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,894
Likes
16,710
Location
Monument, CO
Most audio DACs these days are delta-sigma types and interpolation is part of the digital logic circuits. I have designed ADCs that used (analog circuit, signal) interpolation to reduce the comparator count, but cannot recall doing anything similar for a DAC (*). It was all digital oversampling and filtering (interpolation) before the analog output. You could do the same by oversampling a conventional DAC (R2R or whatever) and applying a digital filter before the actual digital-to-analog conversion. That is actually what I did in the primordial past for a high-speed DAC.

(*) Edit: There is a voltage-mode DAC architecture that uses "cascaded" resistor ladders, with the first coarse ladder feeding a second that "interpolates" the levels (fine steps) between the taps of the coarse ladder. This greatly reduces the resistor count compared to a full-unary design (one resistor per step, ~2^N resistors for an N-bit DAC), but not as much as a pure R2R design (or most mixed R2R/unary designs). The trade is extra switches and latency, plus potential issues with bandwidth and errors due to moving the second set of resistor taps up and down the ladder. It's easier to show in pictures than to explain, but the bottom line is that this interpolates to obtain smaller steps from larger steps, yet the output before the filter is still steps. I used the scheme for some biasing DACs but not in the high-speed signal path (that I recall).
 
Last edited:

txbdan

Active Member
Joined
Apr 21, 2020
Messages
213
Likes
199
But if each bit is being mapped to a discrete voltage, isn't that in fact a stairstep signal?

Monty's video showed the post-DAC signal as a perfect sine wave with no stair stepping. I think he even said that the smoothness isn't even due to filtering a stair step. He made it sound like the DAC chip can literally somehow fit the correct curve to the sample points and output it smoothly.
 