
Aliasing, imaging and upsampling: a real example.

bennetng

Major Contributor
Joined
Nov 15, 2017
Messages
1,634
Likes
1,692
This is just the sort of thing I need to try and understand this stuff.

I can understand that we need to avoid aliases during the ADC process, because they not only pollute the audible range, but will also cause IMD. When it comes to potential images created during the DAC process, if they only occur *above* Nyquist, does that matter? They will waste a bit of power further down the chain, but will be inaudible anyway. I thought that a proper brick wall filter was essential in the DAC, but am I wrong to think that?
Images can induce IMD as well. Like this:
The first screenshot shows the Realtek's original filter; the second and third show the filters from the Windows resampler. There are two signals, a tone and white noise, and in the first screenshot the tone creates an IMD product at 2kHz. But no reasonable music file contains a high-amplitude single tone at 21 to 23kHz, and the IMD product at 2kHz is 100dB below the original tone, so it is fairly meaningless. You can also see that the tone seems to have a higher amplitude than the white noise, but if you look at the waveform view in Audition, the white noise is not 40dB quieter than the tone. White noise spreads its energy evenly across the whole spectrum, whereas a tone concentrates all of its energy at a single frequency. Many people don't know how to read these plots; for example, the white noise plots by Amir are often aligned to 0dB, which is just a graphical setting and has nothing to do with the actual noise amplitude.
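The tone-versus-noise point is easy to check numerically. This is a minimal sketch (assuming NumPy, with made-up levels rather than the actual Realtek ones): a sine and white noise of roughly equal RMS show very different peak levels on an FFT, because the noise energy is spread over tens of thousands of bins while the tone sits in one.

```python
import numpy as np

fs = 48000               # sample rate, Hz
n = 1 << 16              # FFT length
t = np.arange(n) / fs

tone = 0.1 * np.sin(2 * np.pi * 1500 * t)                  # -20 dBFS sine, bin-centred
noise = 0.1 * np.random.default_rng(0).standard_normal(n)  # similar RMS white noise

def fft_peak_db(x):
    # Hann-windowed spectrum, scaled so a bin-centred sine reads its amplitude
    spec = np.abs(np.fft.rfft(x * np.hanning(n))) / (n / 4)
    return 20 * np.log10(spec.max())

# Roughly equal RMS, yet the noise's FFT peak sits tens of dB below the tone's,
# because its energy is spread over ~32k bins instead of one.
print(fft_peak_db(tone))    # close to -20 dB
print(fft_peak_db(noise))   # far lower
```

This is the same effect as the 0dB alignment issue above: where a spectrum plot appears to sit depends on FFT length and scaling, not only on signal amplitude.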

Now read another post:
You can see that the steepest filters (apodizing and brickwall) have the worst passband ripple (frequency response below 20kHz). ESS's linear phase fast roll off, brickwall and apodizing filters are basically the same: they are all linear phase and have the same group delay (latency). The differences are passband ripple, attenuation depth and steepness; improving one of these parameters makes the other two worse. The ESS linear phase fast roll off filter, for example, has the flattest passband below 20kHz and the deepest attenuation (120dB); the trade-off is that it only reaches full attenuation beyond 24kHz. Looking at the noise spectrum you might think all the filters have only 110dB of attenuation, but that is not the case; see the Realtek examples in my previous post. The 120dB attenuation can be identified by using a single tone instead of noise, and I showed both. Stereophile shows similar measurements as well, although with a 19.1kHz tone, a lower frequency that is less likely to induce strong imaging.
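The trade-off described above can be sketched with SciPy's equiripple designer standing in for the (proprietary) ESS filters: with the tap count fixed, which fixes the group delay of a linear-phase FIR, a steeper transition band buys less stopband attenuation.

```python
import numpy as np
from scipy.signal import remez, freqz

fs = 96000       # illustrative rate; real DAC filters run oversampled
numtaps = 63     # fixed length => fixed latency of (numtaps - 1) / 2 samples

def stopband_atten_db(pass_edge, stop_edge):
    # Equiripple linear-phase low-pass with the given transition band
    taps = remez(numtaps, [0, pass_edge, stop_edge, fs / 2], [1, 0], fs=fs)
    w, h = freqz(taps, worN=8192, fs=fs)
    return -20 * np.log10(np.abs(h[w >= stop_edge]).max())

slow = stopband_atten_db(20000, 28000)   # wide transition band
steep = stopband_atten_db(20000, 24000)  # "brickwall"-like, half the width
print(slow, steep)   # the gentler filter attenuates tens of dB deeper
```

The numbers are illustrative only, but the direction of the trade-off is exactly the one in the measurements: at fixed latency, steepness is paid for in attenuation (or ripple).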

Here is an experiment you can try. I mean really download the files, use the software mentioned and try it out; don't just read, otherwise it is not possible to understand how these trade-offs are made:
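As a self-contained taste of the imaging side of such an experiment (a sketch assuming NumPy, not the linked files): zero-stuffing a 44.1kHz tone to 88.2kHz creates an image at fs − f, which the resampler's low-pass filter is supposed to remove.

```python
import numpy as np

fs = 44100
n = 4410                              # 0.1 s, an integer number of cycles
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10000 * t)     # 10 kHz tone

# 2x upsampling, step 1: insert a zero between samples (rate is now 88.2 kHz)
up = np.zeros(2 * n)
up[::2] = x

# The spectrum now holds the original tone AND a mirror image at 44100 - 10000 Hz;
# a real resampler would follow with a low-pass filter to remove that image.
spec = np.abs(np.fft.rfft(up))
freqs = np.fft.rfftfreq(2 * n, d=1 / (2 * fs))
peaks = freqs[spec > 0.5 * spec.max()]
print(peaks)   # the 10 kHz tone plus an image near 34.1 kHz
```

How completely that image is removed, and at what cost in passband ripple and latency, is precisely what the different filter choices trade against each other.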

It is also important and beneficial to see measurement data from other sources as well; don't just rely on one or two websites. Here are some examples of third-party measurements:
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,281
Location
Oxford, England
An upsampling example?

 

AnalogSteph

Major Contributor
Joined
Nov 6, 2018
Messages
3,338
Likes
3,278
Location
.de
An upsampling example?
Not just, no.
Preserve Details 2.0 uses advanced, "deep learning" artificial intelligence to detect and maintain important image details without oversharpening anything else.
So it's basically upsampling plus AI enhancement. You obviously cannot recreate missing frequencies, but given adequate training data, you can at least take a good guess at what they would have been.

If you just want upsampling, you might use a Lanczos algorithm, as implemented by XnView for example.
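For reference, the Lanczos kernel is just a windowed sinc. A 1-D sketch, assuming NumPy (XnView's 2-D implementation will differ in detail):

```python
import numpy as np

def lanczos(x, a=3):
    """Lanczos-a kernel: sinc(x) tapered by a wider sinc window."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def upsample_lanczos(samples, factor, a=3):
    """Integer-factor 1-D upsampling (O(n^2) demo, not production code)."""
    n = len(samples)
    pos = np.arange(n * factor) / factor      # output positions in input units
    weights = lanczos(pos[:, None] - np.arange(n)[None, :], a)
    return weights @ samples

x = np.sin(np.linspace(0, 2 * np.pi, 32, endpoint=False))
y = upsample_lanczos(x, 4)
# Original samples are reproduced exactly; in-between values are interpolated.
```

Like the audio case, this only interpolates band-limited content; it cannot invent detail above the original Nyquist limit, which is exactly the gap the AI-based tools try to fill.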

Imagine you could use AI to enhance a crummy telephone recording to sound like a full-spectrum voice. That would be neat. Or spice up old 78s or wax cylinders.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,281
Location
Oxford, England
Not just, no.

So it's basically upsampling plus AI enhancement. You obviously cannot recreate missing frequencies, but given adequate training data, you can at least take a good guess at what they would have been.

If you just want upsampling, you might use a Lanczos algorithm, as implemented by XnView for example.

Imagine you could use AI to enhance a crummy telephone recording to sound like a full-spectrum voice. That would be neat. Or spice up old 78s or wax cylinders.
Back in the day there was a Photoshop plug-in called Fractal that increased the resolution of a photo, but it was a dumb plug-in. I also remember using a set of sharpening actions which avoided areas of low contrast and sharpened only edges; I think the Smart Sharpen tool, which appeared some years later, uses that same principle.

But that was more of a visual illustration that an image can look better when upsampled.
 