
Roon samplerate conversion filters minimum phase or linear phase

Yes, that's the question.

Roon samplerate conversion filters: minimum phase or linear phase?

I'm trying out the Roon system at the moment, and for some reason they offer settings where there should be none :)

Which of these filter types is the "correct" one, i.e. the one that adheres to accepted theory as closely as possible and thus performs the conversion as losslessly as possible?

I'm asking you guys at ASR so I don't get FUD served up as per the usual audiophile wisdom, but real answers instead.
 
I don't do Roon, but in general this is more of an audiophile preoccupation. The recording and mastering process doesn't usually get involved in those details, but for some reason some listeners have great concerns over it. In theory, you'd want playback to have linear phase. And even though music is typically mixed and mastered while monitoring with linear phase, and decisions about the sound have been made with linear phase, audiophiles don't like that linear phase shows pre-ringing in the output when you put an unrealistic, non-musical signal through such a DAC.

I presume there are other choices for the filters. If one of them sounds better than the default, go with it. Otherwise don't worry about it. Just my opinion. Most of all, never get into the mind game of "this sounds fine to me, but I've learned that in theory I should be doing this instead." That's how people end up spending big bucks for audiophile power cords and cable isolators. :p
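If you want to actually see what that pre-ringing fuss is about, here's a quick numpy/scipy sketch (purely my own illustration, not Roon's filters or any particular DAC's): it builds a linear-phase lowpass FIR, derives a minimum-phase counterpart with scipy, and reports how much ringing sits in front of the main peak of each impulse response.

```python
import numpy as np
from scipy.signal import firwin, minimum_phase

fs = 44100        # sample rate, Hz
cutoff = 20000    # lowpass cutoff, Hz

# Linear-phase FIR lowpass: the impulse response is symmetric, so it rings
# both before and after the main peak (the "pre-ringing" people worry about).
lin = firwin(numtaps=255, cutoff=cutoff, fs=fs)

# Minimum-phase counterpart: still a lowpass, but the impulse response is no
# longer symmetric -- essentially all of the ringing lands after the peak.
minp = minimum_phase(lin)

for name, h in (("linear phase", lin), ("minimum phase", minp)):
    peak = int(np.argmax(np.abs(h)))
    pre = np.max(np.abs(h[:peak])) / np.abs(h[peak]) if peak else 0.0
    print(f"{name:14s}: peak at tap {peak:3d}, "
          f"largest pre-peak sample = {100 * pre:.1f}% of the peak")
```

The numbers only describe what each filter does to an impulse, of course; whether any of it is audible on real music is the actual argument.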
 
Yes, I intend to set it once and forget about it. It's the same silly game as some DACs have going.
I don't think I'll hear anything different unless I pick something really wrong. :)
But it's not about taste; this has a mathematical solution and should not really be a fiddle option. What is the optimal solution usually called again?

I've set it up so sample rate conversion is rarely used, only when the speakers can't handle the rate. So it's not even used 99% of the time.

Some Roon users convert everything to DSD or something?? I have DSP speakers, so there is another layer of processing afterwards, so I would not bother.
 
Reading between the lines in their doc, "precision linear phase" seems to be the "normal" filter :)

They are invested in the apodising filter FUD btw, hence the not exactly clear documentation on their site…

Edit: the default is a precision minimum phase filter.
 
Here is a test that @Archimago ran. Interpret the results as you like but my take is it doesn't matter enough to worry about it. I use sharp linear phase and forget about it.

But it's not about taste; this has a mathematical solution and should not really be a fiddle option. What is the optimal solution usually called again?
Google's AI says that linear phase is not required in the theorem but as a practical matter is required to preserve the waveform. The math folks on here could probably say if that is correct or not regarding what the theorem requires.

No, the Nyquist-Shannon sampling theorem itself does not require a linear phase; the core concept of the theorem is about the sampling rate needed to accurately capture a signal's information based on its bandwidth, regardless of the phase characteristics of the signal or the sampling process.

Key points to remember:

  • Focus on frequency content:
    The sampling theorem primarily focuses on the frequency components of a signal and the minimum sampling rate required to avoid aliasing, not the phase relationships between frequencies.
  • Linear phase for signal fidelity:
    While the sampling theorem itself doesn't mandate linear phase, in practical applications, especially when reconstructing a signal from samples, a linear phase filter is often used to preserve the signal's waveform shape by ensuring all frequencies experience the same time delay.
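To put a number on that "same time delay for all frequencies" point, here's a small scipy sketch (my own example, not tied to any player or DAC): it measures the group delay of a linear-phase FIR lowpass and of a minimum-phase counterpart across the passband.

```python
import numpy as np
from scipy.signal import firwin, minimum_phase, group_delay

fs = 44100
lin = firwin(numtaps=101, cutoff=18000, fs=fs)   # linear-phase lowpass
minp = minimum_phase(lin)                        # minimum-phase counterpart

for name, h in (("linear phase", lin), ("minimum phase", minp)):
    w, gd = group_delay((h, [1.0]), w=512, fs=fs)
    band = w < 15000                             # stay well inside the passband
    print(f"{name:14s}: group delay {gd[band].min():6.2f} .. {gd[band].max():6.2f} samples")

# Linear phase: a constant (numtaps - 1) / 2 = 50 samples at every frequency,
# i.e. the whole waveform just arrives a fixed bit later, shape preserved.
# Minimum phase: the delay varies with frequency, which is exactly the
# waveform-shape change the quote above is talking about.
```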
 
Yes, because the sampling theorem doesn't address filtering, per se. The sampling theorem basically defines sampling, and describes the results.

The sampling theorem describes what happens when we convert a continuous signal to discrete time using a fixed sample period. A result of the analysis is that data can only be preserved if the continuous signal has all frequency components below half the sample rate. (Note that the sampling theorem doesn't address "digital", only discrete time, but digital is our most convenient storage for discrete time signals.)

Note that it doesn't say anything about filtering, it just says the components need to be below half the sample rate. If you sample a 2 kHz sine wave at 20 kHz sample rate, no filter is needed. But as a practical matter, we build filters into our ADCs to ensure that an arbitrary signal complies. The primary objective is to ensure the limited bandwidth, but what kind of filter to use is an implementation detail.
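To make that concrete, here's a tiny numpy sketch (my own toy example): sample a 2 kHz sine at 20 kHz with no filter at all, then rebuild points between the samples with the sinc interpolation the sampling theorem prescribes.

```python
import numpy as np

fs = 20000.0                 # sample rate, Hz
f0 = 2000.0                  # sine frequency, comfortably below fs/2 = 10 kHz
n = np.arange(400)           # sample indices
x = np.sin(2 * np.pi * f0 * n / fs)      # the stored samples -- no filter used

# Ideal reconstruction from the sampling theorem: x(t) = sum_n x[n] * sinc(fs*t - n).
# Evaluate it at points between the samples, well inside the record so the
# truncated sum is a fair stand-in for the infinite one.
t = (np.arange(100) + 150.37) / fs       # off-grid times around samples 150..250
recon = np.array([np.dot(x, np.sinc(fs * ti - n)) for ti in t])
truth = np.sin(2 * np.pi * f0 * t)

print("max reconstruction error:", np.max(np.abs(recon - truth)))
# The error is small and comes only from truncating the sinc sum, not from any
# missing anti-alias filter -- the 2 kHz sine had nothing above fs/2 to begin with.
```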

So, we could model the results of the analog to digital conversion step by listening to a continuous signal through the equivalent filter used in the ADC.

Regarding the Roon question, we're concerned only with playback of a previously digitized signal:

A consequence of sampling is that the continuous signal has been modulated, resulting in frequency shifted images of the original signal—"sidebands" for someone used to AM radio (AM radio and sampling are closely related). So, it's an apparent requirement that we need to get rid of those images/sidebands to get back to continuous time.

And again, the requirement here is just "get rid of them". The portion of the spectrum of interest (the audio band) lies below half the sample rate (fun fact: how much below? any amount at all...), so the requirement is to get rid of everything from half the sample rate upwards.

Similar to the ADC stage, we could model the results of a DAC by using the DAC's filter on the continuous signal...but for a fair test we'd have to recreate that first sideband by amplitude modulating with a sine at the sampling frequency before filtering.
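A cheap way to "see" those images without any analog gear is a numpy sketch like this (my own stand-in for the impulse train, not how a real DAC is built): zero-stuff a sampled 1 kHz tone onto a 4x finer time grid and look at the spectrum.

```python
import numpy as np

fs = 44100                        # original sample rate, Hz
f0 = 1000.0                       # a 1 kHz test tone
L = 4                             # pretend "continuous" grid: 4 x fs

n = np.arange(4410)                              # exactly 100 cycles of the tone
x = np.sin(2 * np.pi * f0 * n / fs)

# Zero-stuffing: put each sample on a 4x faster time grid with zeros in between,
# a crude stand-in for the impulse train behind a DAC before any reconstruction
# filtering has been applied.
up = np.zeros(len(x) * L)
up[::L] = x

spec = np.abs(np.fft.rfft(up))
freqs = np.fft.rfftfreq(len(up), d=1.0 / (fs * L))   # 10 Hz bins

ref = spec[np.argmin(np.abs(freqs - f0))]            # level of the wanted 1 kHz tone
for f in (f0, fs - f0, fs + f0, 2 * fs - f0):        # tone plus images around fs and 2*fs
    level = spec[np.argmin(np.abs(freqs - f))]
    print(f"{f:8.0f} Hz : {20 * np.log10(level / ref):5.1f} dB")

# All four lines print 0.0 dB: the images around 44.1 kHz and 88.2 kHz are just as
# strong as the wanted tone until a reconstruction (anti-imaging) filter removes them.
```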

Neither the ADC filter nor the DAC filter has specs dictated by theory. Obviously, we'd want a perfect lowpass filter in both cases: an instantaneous transition band, rejection to zero output in the stop band, and no effect on the signal in the pass band. That's impossible, and it would delay the output forever, but we don't need it to be perfect. The stop band rejection just has to be low enough that you can't hear it (the fact our hearing drops off up there definitely makes that part easy; we still need to do well so we aren't pumping ultrasonics into potential non-linearities...). The steepness only has to be enough to retain the audio band. If we call that 0-20 kHz, then we have a little space before half the sample rate at 44.1k, for instance, a little more at 48k, and at 96k we have lots of room and can decide whether we want to extend our audio band higher.
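To put rough numbers on "how steep", here's a back-of-the-envelope estimate using scipy's Kaiser-window formula (my own sketch; real oversampling converters and resamplers work at much higher internal rates and with fancier design methods): keep 0-20 kHz, be about 100 dB down by half the sample rate, and see how long a linear-phase FIR that takes at each rate.

```python
from scipy.signal import kaiserord

def taps_needed(fs, pass_edge=20000.0, atten_db=100.0):
    """Kaiser-window estimate of the FIR length for a lowpass whose transition
    band runs from pass_edge up to fs/2."""
    width = fs / 2 - pass_edge                      # transition band, Hz
    numtaps, _beta = kaiserord(atten_db, width / (fs / 2))
    return numtaps                                  # scipy.signal.firwin could then build it

for fs in (44100, 48000, 96000):
    n = taps_needed(fs)
    print(f"fs = {fs:6d} Hz -> roughly {n:4d} taps "
          f"(~{(n - 1) / 2 / fs * 1000:.2f} ms of group delay if linear phase)")

# At 44.1 kHz the 20 kHz .. 22.05 kHz transition is narrow, so the filter is long;
# at 96 kHz the same stopband rejection needs far fewer taps, and there is room
# to push the passband higher if you want to.
```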

Lastly, and this is probably the aspect that most people are paranoid about, the choice of filter may have consequences near the top of the audio band. For instance, it may have phase shift as it approaches 20 kHz. This is always true of minimum-phase filters. It's not a property of linear-phase filters, but some people have concerns/paranoia about things like pre-ringing, and that's why some audiophile DAC makers give choices. (It's pretty telling that professional converters for the recording industry usually lack such features. The device makers generally just make decisions.)

Personally, I don't think pre-ringing is an issue, but I'll leave that for another discussion. And as far as phase shift at the top of the audio band goes, that's a property of analog filters too, and people don't seem to have a problem with all the analog filtering that occurred in the analog mixing of the classics everyone loves. But then again, those filters are not so steep there, so pick your paranoia. :)
 
Yep, the pre-ringing should not affect music signals; it appears with the test pulse AFAIK and should not affect properly sampled and bandwidth-limited music content. Much FUD is written about this.
I think Roon inherited this obsession from their former parent company Meridian, who think apodising filters are the bee's knees :)
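That's easy to sanity-check numerically, by the way. Here's a rough sketch (my own, not a model of Roon's or Meridian's filters): push a raw single-sample impulse and a properly band-limited click through the same linear-phase lowpass and compare each output against a plain delayed copy of its input.

```python
import numpy as np
from scipy.signal import firwin

fs = 44100
h = firwin(255, 20000, fs=fs)        # linear-phase "reconstruction"-style lowpass
delay = (len(h) - 1) // 2            # its constant group delay, in samples

def error_vs_pure_delay(sig):
    """How much does filtering change sig, compared with just delaying it?"""
    out = np.convolve(sig, h)
    ref = np.concatenate([np.zeros(delay), sig, np.zeros(len(h) - 1 - delay)])
    return np.max(np.abs(out - ref)) / np.max(np.abs(sig))

# A raw single-sample impulse: NOT band-limited, energy all the way up to Nyquist.
impulse = np.zeros(1024)
impulse[512] = 1.0

# A band-limited "click": no content above ~18 kHz, like anything that came
# through a properly low-passed ADC in the first place.
click = np.zeros(1024)
click[256:256 + 255] = firwin(255, 18000, fs=fs)

print("raw impulse :", error_vs_pure_delay(impulse))  # clearly altered: this is where the ringing shows up
print("band-limited:", error_vs_pure_delay(click))    # barely altered: the filter is ~ a pure delay here
```

The raw impulse gets visibly smeared, which is what the scary-looking plots show; the band-limited click comes out essentially untouched apart from the fixed delay.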
 