RandomEar
INTRO
I've recently encountered a couple of posts about pre-ringing in linear phase reconstruction filters in DACs. Most claim the usual: that they can definitely hear the difference between filters, that the evil pre-ringing makes linear phase much worse, and so on. These impressions come from uncontrolled, sighted listening tests, of course.
In general, the members stating this appear to have misunderstood important aspects of the topic, and it’s sometimes apparent that they are not very familiar with the frequency domain, FFTs and related technical details. One major problem is that there's a lot of bullshit about DAC filters on the net, perpetuated by manufacturer marketing departments, dealers and misinformed or mistaken reviewers, but also by regular people on forums. There's also some good stuff, like [1, 2, 3, 4, 5]. A lot of the good stuff focuses on frequency domain analysis, impulse responses and "illegal" signals like impulses, square waves or clipped waveforms. Archimago specifically also investigated real music upsampled using foobar2000 + SoX [2]. On ASR, I have also found one practical comparison using an ADC to capture DAC outputs.
For engineers, looking at frequency and impulse response plots is typically enough for an informed decision. But most people are not engineers. And without the specific knowledge about what a frequency response tells you, how to interpret impulse responses and how a Fourier series and the Nyquist frequency are related to all of this, that information might not be helpful or even misleading to non-engineer readers.
This post therefore focuses on time-domain analysis and real audio samples. It will likely not bring any surprises to those familiar with the math behind audio, but is hopefully insightful for those who are not. I also tried to avoid overly technical descriptions in the important parts to keep this write-up helpful to everybody.
I'd like to point out that I'm not a DSP expert, but some fellow forum members definitely are. If you happen to find any errors in this post, feel free to point them out politely.
FILTER DESIGN
The technical stuff. This is mostly for people familiar with the math. If you are not one of 'em, feel free to just skim this section.
The filters are designed based on parameters (passband freq, stopband freq, etc.) from the attached ESS ES9039Q2M data sheet. I've made some effort to get them reasonably close to the originals, but for a multitude of reasons, they are not identical. The design process was as follows:
- All filters are created as linear phase FIR filters using the Parks-McClellan design method
- Minimum phase filters are derived from the linear phase design using the cepstrum method
- Fast filters use an order of 512 (also referred to as 512 "taps")
- Slow filters use an order of 128
- The passband ripple is <0.0005 dB for all filters
- Fast filters offer a stopband attenuation of 100 dB or better
- Slow filters offer a stopband attenuation of 90 dB or better
- Fast filters are designed with an attenuation of less than 0.01 dB @ 20 kHz
- Slow filters are designed with an attenuation of 2.8 dB @ 19 kHz, which results in 4.3 dB @ 20 kHz
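The recipe above can be sketched in a few lines of SciPy. Note that the band edges, the stopband weight and the order-128 "slow"-style variant shown here are illustrative stand-ins, not the exact ESS parameters:

```python
import numpy as np
from scipy import signal

fs = 352_800  # the 8x oversampled rate the reconstruction filter runs at

# Linear phase FIR via Parks-McClellan (signal.remez).
# Band edges and the stopband weight are rough illustrations only.
taps_lin = signal.remez(
    numtaps=129,                        # order 128, odd length -> type I
    bands=[0, 19_000, 22_050, fs / 2],  # passband / stopband edges in Hz
    desired=[1, 0],
    weight=[1, 100],                    # emphasize stopband attenuation
    fs=fs,
)

# Minimum phase counterpart, derived via the cepstrum-based
# "homomorphic" method. The result has roughly half as many taps.
taps_min = signal.minimum_phase(taps_lin, method="homomorphic")
```

The symmetric impulse response of `taps_lin` is what makes it linear phase; `minimum_phase` gives up that symmetry in exchange for a much shorter delay.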
The plots show that the filters are not identical to the ESS ones, but qualitatively similar. The fast filters are essentially flat in the audible band up to 20 kHz. For the slow filters, the decline in the frequency response starts around 15 kHz (-0.2 dB, followed by -0.5 dB @ 16 kHz) for input signals at 44.1 kHz. This decline could be audible, depending on the content and the listening level. However, if you are older than about 40, this is likely not a concern for you anymore [6].
The filter delays are 726 µs (linear fast), 181 µs (linear slow), 57 µs (minimum fast) and 43 µs (minimum slow). These numbers are lower than the delays given in the ESS data sheet, but from my understanding the ones from the data sheet represent the total delay through the DAC pipeline, including more than just the filters.
Now that our filters are ready, we need a data pipeline to use them in. It looks like this:
- Read real audio data from a file (44.1 kHz bit perfect, uncompressed CD rip) or generate a synthetic signal
- For real audio: Convert to mono by dropping one channel (reduces clutter in plots)
- Apply 8x upsampling to the audio by inserting zeros between samples, generating a signal with an effective sampling rate of 352.8 kHz
- Apply one of the specified reconstruction filters to the upsampled audio signal
- Do some analysis, like measuring the filter delay
- Plot the results
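The zero-stuffing and filtering steps of that pipeline can be sketched like this (the `firwin` lowpass here is a generic stand-in for the real reconstruction filters):

```python
import numpy as np
from scipy import signal

def reconstruct(audio, taps, factor=8):
    """Upsample by zero insertion, then apply the reconstruction filter."""
    upsampled = np.zeros(len(audio) * factor)
    upsampled[::factor] = audio  # keep every original sample, zeros in between
    # Multiply the taps by the factor to compensate for the energy lost
    # to zero insertion, keeping the passband gain at 1.
    return signal.lfilter(taps * factor, 1.0, upsampled)

# Example: 10 ms of a 1 kHz sine at 44.1 kHz, pushed through a stand-in
# lowpass that cuts off at the input signal's Nyquist frequency.
x = np.sin(2 * np.pi * 1_000 * np.arange(441) / 44_100)
taps = signal.firwin(65, 22_050, fs=352_800)
y = reconstruct(x, taps)
```

The output `y` trails the input by the filter's group delay, which is where the delay numbers above come from.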
SYNTHETIC DATA
I know, I promised to look at real music. But let's start out with a look at synthetic data constructed from a mix of pure sine waves. We sum up four different sines and add amplitude modulation to the highest one to make the signal a bit more interesting. The following plot shows 1 ms of two signals with this sine mix: our high-res "ground truth" and the "downsampled" data points as they would be stored on a CD or in a typical file or stream.
The ground truth is what would be captured by a microphone in the recording studio or concert hall: A continuous signal with very high time and amplitude resolution. This is also close to how the signal looks throughout the mastering process, assuming it is kept at a high sample rate and bit depth. The downsampled signal is what is provided to us as buyers and what we feed into our DACs. After upsampling and applying the reconstruction filter, we want to arrive back as close to the ground truth as possible at the analog output of our DAC.
Note: The rest of this post focuses on the sampling rate and ignores the bit depth, floating vs. fixed point formats and quantization errors. You can forget about those for now.
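For reference, here is a sketch of how such a test signal can be generated. The exact frequencies and amplitudes are my assumptions for illustration, not necessarily the ones behind the plots:

```python
import numpy as np

fs_hi = 352_800                            # "ground truth" sample rate
t = np.arange(int(fs_hi * 0.001)) / fs_hi  # 1 ms of signal

# Four sines; the 16 kHz component gets some amplitude modulation.
# All content (including the AM sidebands) stays below 22.05 kHz,
# so picking every 8th point is a valid way to get the 44.1 kHz version.
ground_truth = (
      0.40 * np.sin(2 * np.pi * 400 * t)
    + 0.30 * np.sin(2 * np.pi * 1_500 * t)
    + 0.20 * np.sin(2 * np.pi * 5_000 * t)
    + 0.10 * (0.5 + 0.5 * np.sin(2 * np.pi * 700 * t))
           * np.sin(2 * np.pi * 16_000 * t)
)
downsampled = ground_truth[::8]            # the "CD" data points
```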
A couple of things to note while looking at these signals:
- The downsampled data points are part of the ground truth signal
- There are peaks and valleys in the ground truth in-between data points of the downsampled signal which are higher/lower than the closest downsampled data points. Or in other words: A straight line connecting the downsampled data points would not be a good representation of the ground truth signal.
- Since the sampling rate of our downsampled data stream is more than twice the maximum sine frequency contained in it (44.1 kHz vs. 16 kHz), a faithful reconstruction of the ground truth signal is theoretically possible (see Nyquist frequency, we will get back to this)
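The last point can be checked numerically: Whittaker-Shannon (sinc) interpolation recovers a 16 kHz sine from its 44.1 kHz samples even halfway between two sample points. A small sketch (the finite signal length means there is a small truncation error):

```python
import numpy as np

fs = 44_100
n = np.arange(256)
samples = np.sin(2 * np.pi * 16_000 * n / fs)  # 16 kHz < fs/2 = 22.05 kHz

def sinc_interp(samples, t):
    """Evaluate the Whittaker-Shannon reconstruction at time t,
    measured in units of the sample period."""
    return float(np.sum(samples * np.sinc(t - np.arange(len(samples)))))

mid = sinc_interp(samples, 127.5)                # halfway between two samples
ideal = np.sin(2 * np.pi * 16_000 * 127.5 / fs)  # the ground truth value
```

With an infinitely long signal, `mid` and `ideal` would match exactly; here they agree to within the truncation error.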
It’s apparent that neither of our reconstructions is perfect, but at least one of them is pretty close. The reason they are not perfect is that the filters don’t totally suppress all signal content beyond the Nyquist frequency of 22.05 kHz, and they also attenuate some content below that threshold. In addition, the minimum phase filter delays signal components of different frequencies by different amounts. For filters with a finite number of taps and signals of finite length, you can’t avoid all of these deficiencies at once. All real filters are a compromise.
Looking at the graph a little closer, we can make out some differences between the filters: While the linear phase one follows the ground truth signal closely and deviations are hard to spot, the minimum phase filter is sometimes very close to the ground truth and at other times pretty far away from it. We can also see that even though there's high frequency content at 16 kHz in our signal, there's no hint of pre-ringing visible in this example.
And just for fun, let’s have a look at what the unfiltered, non-oversampled (NOS / sample and hold) output would look like:
It should be pretty clear that NOS is not a good approximation of our ground truth signal at all. Contrary to some audiophile beliefs, NOS is the polar opposite of being “true to the original music”. Unless you’re doing some scientific experiments and explicitly need a square wave output or you sufficiently upsample and filter your audio before sending it to your DAC, don’t use NOS mode.
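For completeness, the NOS "staircase" is just a zero-order hold, i.e. each sample value repeated until the next one arrives:

```python
import numpy as np

def nos_output(samples, factor=8):
    """Zero-order hold on the 8x grid: the staircase a non-oversampling
    DAC effectively puts out, with no reconstruction filter at all."""
    return np.repeat(samples, factor)
```

Plotting `nos_output` against the ground truth makes the staircase obvious.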
After this excursion into pure tones, let's now switch to actual music.
REAL AUDIO DATA
From now on, the downsampled data will be our "original" audio signal straight from the CD, from which we try to reconstruct something close to the now unknown ground truth. Let's begin by comparing all four reconstruction filters on a 10 ms sample from Eurythmics - Thorn In My Side (plot timestamp “0” starts at 9.445 s in the song):
You can see the different filter delays, which make it a bit hard to compare the filters. Let's zoom in more to the left side and compensate for the different delays by aligning the signals:
That shows us that all filters deliver pretty similar results on this part of the song. This is actually how most of the audio signal looks, but we will go on the hunt for the critical sections. The minimum phase filters appear to have a bit more overshoot – for example around the 0.6 ms mark. We can punch in even closer and look at the small region from 0.45 to 0.65 ms:
The linear phase fast and slow filters perform very similarly in this example and are on average much closer to the data points of the original signal than the minimum phase filters. The latter appear to “swing” more in between samples. There is also a bigger difference between the fast and slow variants within the minimum phase pair. This audio snippet wasn't too hard on our filters. Let’s take a look at a slightly more interesting section around 7.4 ms:
There’s a pretty steep gradient in the center of this plot, which is worth taking a closer look at. Let’s zoom in:
All filters appear to “wobble around” somewhat between the samples in front of the gradient (7.3-7.4 ms). It’s important to remember that the ground truth between two original audio samples is rarely a straight line. Consequently, the reconstructions all look plausible at first glance. Both minimum phase filters do overshoot towards the end of the gradient around 7.45 ms, though.
The section in front of this gradient is especially tricky for our filters, because all four original samples between 7.3 and 7.4 ms have almost the same amplitude. If we take a closer look at the linear phase reconstruction in that region, we can see that the curve completes one full (inverted) sine wave between samples one and three of the original audio. At 44.1 kHz, the section between three sample points is exactly 45.35 µs long, corresponding to a wave with a frequency of 22.05 kHz. Something we will keep in mind for later. For the minimum phase filter, the curve takes about 50% longer to complete a full wave in this region, corresponding to roughly 14.7 kHz.
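The arithmetic behind those numbers, for anyone who wants to check it:

```python
fs = 44_100                     # CD sampling rate in Hz
period = 2 / fs                 # two sample intervals -> one full wave: 45.35 µs
f_linear = 1 / period           # 22 050 Hz, exactly the Nyquist frequency
f_minimum = 1 / (1.5 * period)  # ~50% longer period -> 14 700 Hz
```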
Overall, Thorn In My Side didn’t pose much of a challenge to our reconstruction filters. Nonetheless, it allowed us to discover some distinct differences between linear and minimum phase filters and gave us a rough idea of how they perform. Let's now switch to a more difficult track: An excerpt from Michael Jackson – Beat It (timestamp “0” = 72.38 s):
Now that is much more interesting! There is a big dip around 7.5 ms, and some of our filters produce quite a bit of ringing trying to cope with it:
Some also reach the clipping threshold at an amplitude of -1.0. It is clear from the plot that all our filters do ring. But the minimum phase ones perform much worse in this case. Let’s take a closer look:
Both linear phase filters stay close to the original samples, but there is a bit of a depression at the foot of the gradient. The minimum phase filters “go wild” here, with the fast variety even reaching the clipping threshold just after the gradient. Clipping is always bad, because it introduces distortion and high frequency noise into the output. If we compare the relative change in signal amplitude over the gradient from 7.45 – 7.55 ms, we can calculate that the fast linear phase filter overshoots the original signal by 6.3%, while the fast minimum phase filter does so by 22.3% – more than triple the deviation.
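The overshoot figure can be defined as the excursion past the step target, relative to the step height. Here is a sketch using a generic truncated-sinc lowpass as a stand-in (not the actual DAC filters, so the exact percentage differs from the numbers above):

```python
import numpy as np
from scipy import signal

def overshoot_pct(y, target, step):
    """How far the reconstruction undershoots `target` on a falling
    edge, as a percentage of the step height."""
    return 100.0 * max(0.0, target - y.min()) / step

# Falling edge from 1.0 to 0.0, filtered with a rectangular-windowed
# (truncated) sinc: this exhibits the classic ~9% Gibbs overshoot.
x = np.concatenate([np.ones(200), np.zeros(200)])
h = signal.firwin(129, 0.125, window="boxcar")
y = np.convolve(x, h, mode="same")
pct = overshoot_pct(y, target=0.0, step=1.0)   # somewhere around 9%
```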
If we take a look at the ringing period again, it comes out to two samples for the two linear phase filters, which corresponds to 22.05 kHz, as seen above. This is the Nyquist frequency for discrete signals with a 44.1 kHz sampling rate – the highest frequency that can be reproduced faithfully (without aliasing). It is also well outside of the audible range for anybody except young children, maybe a handful of lucky teenagers [6] and your dog. Apart from the fact that it is a mathematically valid reconstruction of the signal, the “evil ringing” in that section is therefore also inaudible for the vast majority of listeners. For the minimum phase filter, the ringing period after the gradient comes out to about 2.5 samples or 56.7 µs, which is equivalent to 17.6 kHz and could potentially be audible for young-ish listeners or those with excellent HF hearing up to maybe 35 years of age.
Small note: Our examples work with CD-quality music (44.1 kHz). For 48 kHz material, the potential ringing of linear phase filters will be inaudible for everybody and with "high res audio" (88.2 kHz +), you don't need to worry about ringing at all, regardless of the selected filter – unless you are a bat.
Back to the time domain: In our latest audio snippet, we are looking at a drop in amplitude. A rising edge might look totally different, right? Would be nice to have a direct comparison, wouldn’t it? Luckily, we don’t need to search hours of audio to find a gradient comparable to our above example. We can just flip the original track and process it again using the same settings. The output then looks like this (you can ignore the different runtime, what was 10 ms before is 0 ms now and vice versa):
As we can see, the linear phase filters do not care at all in which direction they are applied to the source material. Same result as before, same limited amount of ringing, no surprises. This is due to their symmetrical impulse response and it is a distinct advantage of this type of filter.
For the minimum phase filters, the results look much different: Gone are the heavy ringing and clipping. We are left with a mostly smooth reconstruction of the signal, albeit with significant overshoot at the top end of the gradient. Clearly, minimum phase filters are not symmetrical, which we can also see in their impulse response. As in our non-flipped example, the relative amplitude error over the gradient is also higher for the minimum phase filters compared to the linear phase ones.
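This symmetry argument is easy to verify numerically. Filtering a reversed signal and flipping the result back is mathematically the same as filtering with the reversed filter, so a symmetric (linear phase) FIR is immune to flipping while a minimum phase one is not. A sketch with generic stand-in filters:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000)  # stand-in for the audio track

h_lin = signal.firwin(65, 0.2)  # symmetric taps -> linear phase
h_min = signal.minimum_phase(h_lin, method="homomorphic")

def filter_flipped(h, x):
    """Filter the reversed track, then flip the result back."""
    return np.convolve(x[::-1], h)[::-1]

# Linear phase: identical output whether the track is flipped or not.
same = np.allclose(np.convolve(x, h_lin), filter_flipped(h_lin, x))
# Minimum phase: flipping changes the result.
different = not np.allclose(np.convolve(x, h_min), filter_flipped(h_min, x))
```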
There’s another way of looking at filter symmetry: Our linear phase filters put a lot of weight on the present and smaller, equal weights on the past and the future of the original signal when generating their output. This symmetry gives them the advantage that rising and falling amplitude signals are processed equally. In contrast, our minimum phase filters put some weight on the present, some weight on the past and no weight at all on the future. This asymmetry gives them the advantage of low latency, but comes with some disadvantages in other areas.
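You can see this weighting directly in the taps: a linear phase filter's largest coefficient sits at the center, a minimum phase filter's near the start. Again with generic stand-in filters, not the actual DAC ones:

```python
import numpy as np
from scipy import signal

h_lin = signal.firwin(65, 0.2)  # illustrative symmetric (linear phase) lowpass
h_min = signal.minimum_phase(h_lin, method="homomorphic")

# The middle tap dominates: past and future weighted equally.
center_lin = int(np.argmax(np.abs(h_lin)))
# The weight is concentrated near tap 0: present and past only.
center_min = int(np.argmax(np.abs(h_min)))
```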
This concludes our investigation into real music snippets. It’s worth keeping in mind that I deliberately selected difficult sections of the two songs presented in this post. For the majority of these tracks, the differences between the reconstruction filters are less pronounced than presented here.
What does all this mean for the end-boss of all audiophiles: Pre-ringing?
- We have seen in our theoretical investigation, that the ringing some see as a defect isn’t one per-se: It is a valid reconstruction of the ground truth signal in-between the stored samples. This is true regardless of the position of said ringing relative to its trigger (like a steep gradient) – it can appear before or after it.
- We have seen that the ringing frequency for the filter type most often criticized by audiophiles – fast linear phase – is about equal to the Nyquist frequency and thereby inaudible for the vast majority of humans if 44.1 kHz source material is played. For higher sampling rates starting at 48 kHz, it is inaudible for all humans.
- We have seen that our DAC-like minimum phase filters trade inaudible pre-ringing for lower delay and potentially audible post-ringing, which is also higher in amplitude compared to that of our linear phase filters.
VERDICT & TLDR
In conclusion, pre-ringing is not an effect of concern in actual music. All reconstruction filters can produce ringing under specific circumstances, but the effect typically represents a valid reconstruction of the audio signal. For fast linear phase filters, the ringing frequency for CD-quality audio is already outside of the audible band for nearly everybody except young children and some teenagers. For minimum phase filters, the ringing is typically slightly lower in frequency and potentially audible for a good portion of listeners in case of CD-quality material. Ringing is never an audible concern for high res audio (≥88.2 kHz).
Among the options available on DACs, fast linear phase filters on average deliver the most faithful reconstruction of ANY audio signal. Their symmetrical nature means they do not care about the direction of change in the signal amplitude. Minimum phase filters on average deliver a slightly less faithful reconstruction and e.g. exhibit higher overshoot, but have other advantages like a significantly lower delay.
Impulses and square waves are not music and the impulse response from a data sheet is not intuitive to read for most people. If you are not an engineer, listen to what independent audio engineers explain and don't get scared by bullshitters trying to sell you the next even more expensive piece of equipment you don't need.
Also, don't use NOS mode. It sucks.