
DAC measurements using DeltaWave

Sure. But guessing at the effects of ADC filters by applying different types of filters to reduce RMS null is just that, guessing.

Well, measuring as well. I'd start with a 6 dB/octave minimum-phase high-pass filter at, say, 0.06 Hz and then go up in 0.001 Hz increments, taking measurements in DW as I go. If, and only if, there happens to be a particularly deep null in the audio band at one particular setting, then that's the filter I'd keep. This filtered ref file would become the standard one used for all null measurements.

If the nulls do not improve in the audio band, I wouldn't use any filter.

I don't see why this wouldn't be valid.
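That sweep can be prototyped offline. Here's a minimal sketch, assuming Python with scipy, a white-noise stand-in for the reference, and a made-up 0.0652 Hz "ADC coupling" filter; all values are illustrative, not DeltaWave's internals:

```python
# Illustrative sketch only: sweep candidate 6 dB/octave minimum-phase
# high-pass cutoffs (1st-order Butterworth) applied to the reference,
# and keep the cutoff giving the deepest RMS null against a "capture"
# whose coupling filter we pretend not to know. All values made up.
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100
ref = np.random.default_rng(0).standard_normal(fs * 10)   # 10 s stand-in

def hp(x, fc):
    # 1st-order Butterworth = 6 dB/octave, minimum phase
    b, a = butter(1, fc, btype='highpass', fs=fs)
    return lfilter(b, a, x)

capture = hp(ref, 0.0652)            # simulated ADC coupling filter

best = None
for fc in np.arange(0.060, 0.0705, 0.001):
    delta = capture - hp(ref, fc)
    null_db = 20 * np.log10(np.sqrt(np.mean(delta**2)) /
                            np.sqrt(np.mean(ref**2)))
    if best is None or null_db < best[1]:
        best = (fc, null_db)

print(f"deepest null at {best[0]:.3f} Hz, {best[1]:.0f} dB")
```

In DeltaWave itself this would correspond to stepping the high-pass frequency manually and logging the RMS null at each setting.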
 
Keep in mind that FFT based measurements do not fully characterize the time domain for non-repeating signals.
Fast Fourier analysis makes the assumption that the sampled time waveform repeats an infinite number of times.

Fourier analysis requires the assumption that the sampled time waveform is periodic and time-invariant at the non-localized level.

For most music, that is a poor assumption. It has little ability to differentiate localized events and is really an overall “average” over a block of time.

Nor is the inverse transform of the FFT back to the time domain (be it real and complex, magnitude or phase, etc.) deterministic.

This is one reason why more sophisticated analysis tools such as waterfall, JTFA and wavelet were developed.
Your understanding of FT needs some work. If you're anywhere near western NY, you're invited to sit in on one of my classes where FT spectroscopy is analyzed in detail.
 
Well, measuring as well. I'd start at, say, 0.06Hz and then go up in 0.001Hz increments, taking measurements in DW as I go. If, and only if, there happens to be a particularly deep null in the audio band at one particular setting, then that's the filter I'd keep. This filtered ref file would become the standard one used for all null measurements.

I don't see why this wouldn't be valid.

Because you're measuring using a single, average value of the RMS null across millions of samples. You could, for example, introduce very large occasional errors (possibly even audible) due to the filter you pick that, on average, improves the null. I've seen this happen with real loopbacks. Also, you'd need to validate your "guess" using multiple loopback recordings of different material and different lengths to make sure your filter works across the board. You're welcome to experiment with this, of course, but I don't see how I can add this functionality to DW without introducing too many possible error scenarios.
 
Because you're measuring using a single, average value of the RMS null across millions of samples.

The 'league table' consists of the following measurements:
- RMS Δ (dBA)
- RMS PK Metric (dBFS)
- RMS of Δ spectra (dB)

I also show the charts.

You could, for example, introduce very large occasional errors (possibly even audible) due to the filter you pick that, on average, improves the null. I've seen this happen with real loopbacks. Also, you'd need to validate your "guess" using multiple loopback recordings of different material and different lengths to make sure your filter works across the board.

Happy to do all this, time permitting. Still easier than bypassing the caps in the ADC ;).

You're welcome to experiment with this, of course, but I don't see how I can add this functionality to DW without introducing too many possible error scenarios.

Sure, understood.
 
Your understanding of FT needs some work. If you're anywhere near western NY, you're invited to sit in on one of my classes where FT spectroscopy is analyzed in detail.

Windowed FFTs and STFT have been in use for as many years as audio has been around (applied extensively in DeltaWave, of course). Wavelets are not particularly good at time analysis; they simply compute the result at different scales, which helps with time-boxing a specific frequency. Just like STFT does (chart attached).
 
As much as I hate to remove new functionality, I decided to not add the DC filter to DeltaWave, after all. Instead, I extended the available frequencies for high-pass and low-pass filters to include frequencies below 8Hz, and also made it possible to enter fractional frequencies by typing them directly into the frequency selector. If you want, you can enter 0.0712345 as the desired filter frequency :)

https://app.box.com/s/h2u20jzre4bake02n5aj40kc5r2woaa0

This is more in line with my "do no harm" philosophy than the DC filter in v2.1.14. For example, here's an HP filter at 0.1 Hz, applied to a white noise comparison file (chart attached).
 
Your understanding of FT needs some work. If you're anywhere near western NY, you're invited to sit in on one of my classes where FT spectroscopy is analyzed in detail.
I notice you failed to respond with specifics and instead made an ad hominem attack.

Now, I do not know much about spectroscopy equipment, but on a first-principles basis I am incredulous that it is sampling the same photon traveling through the optical medium in a similar manner.

Take, for example, the above-discussed 0.06 Hz filter and its resultant effect on the time waveform. During a single period it is being sampled at 44.1 kS/s, or 735 kS per period.
What are the equivalent sampling requirements for a photon of light? Let's be charitable and pick a longer wavelength of the visible spectrum, 750 nm. Its frequency is around 400 THz. To achieve a similar number of samples per period would require a sampling frequency of around 300 EHz. Now, if you have a spectrometer or any type of equipment which can sample the momentum of a single photon at that rate, I will gladly visit. But I doubt I will, because old Mr Heisenberg would get involved somewhere between the very first sample and the second, though Michelson may show up.

What does all that mumbo jumbo mean in practical terms? The spectroscopy apparatus is in all probability sampling over an extremely large ensemble of photons. But yes, if one were to repeat the above 30 seconds enough times, then the assumptions required for Fourier analysis would be strictly met, with virtually no deviation between actual and theoretical.
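For what it's worth, the arithmetic checks out; here is the calculation, using the 0.06 Hz figure, which is what the 735 kS-per-period count corresponds to at 44.1 kS/s:

```python
# Sanity-checking the numbers: samples per period of a 0.06 Hz signal at
# 44.1 kS/s, and the sample rate needed for the same count per period of
# 750 nm light (~400 THz).
c = 299_792_458.0                      # speed of light, m/s
samples_per_period = 44_100 / 0.06     # 735,000 samples per period
f_photon = c / 750e-9                  # ~4.0e14 Hz (400 THz)
fs_equiv = samples_per_period * f_photon
print(f"{fs_equiv:.1e} Hz")            # ~2.9e20 Hz, i.e. ~300 EHz
```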
 
As much as I hate to remove new functionality, I decided to not add the DC filter to DeltaWave, after all. Instead, I extended the available frequencies for high-pass and low-pass filters to include frequencies below 8Hz, and also made it possible to enter fractional frequencies by typing them directly into the frequency selector. If you want, you can enter 0.0712345 as the desired filter frequency :)

https://app.box.com/s/h2u20jzre4bake02n5aj40kc5r2woaa0

This is more in line with my "do no harm" philosophy than the DC filter in v2.1.14. For example, here's an HP filter at 0.1 Hz, applied to a white noise comparison file (chart attached).
Out of curiosity, does the numerical method for the Fourier transform apply any zero padding or end correction?
I remember back in the days of MLSSA, doing so made significant differences in the results.
 
Out of curiosity, does the numerical method for the Fourier transform apply any zero padding or end correction?
I remember back in the days of MLSSA, doing so made significant differences in the results.

For convolution with a large FIR filter, DW uses an overlap-add method that requires a minimum of n + m - 1 size for the FFT. In effect, the data (size n) is padded with at least m - 1 zeros, where m is the size of the filter.
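A minimal numpy illustration of that padding requirement (not DeltaWave's actual code): without at least n + m - 1 points, multiplying the FFTs would wrap the convolution circularly.

```python
# Linear convolution via FFT needs at least n + m - 1 points; passing a
# size to rfft zero-pads the inputs, which prevents circular wrap-around.
import numpy as np

n, m = 1000, 101
rng = np.random.default_rng(1)
x = rng.standard_normal(n)          # data block
h = rng.standard_normal(m)          # FIR filter

size = n + m - 1                    # minimum FFT size for linear convolution
y_fft = np.fft.irfft(np.fft.rfft(x, size) * np.fft.rfft(h, size), size)
y_ref = np.convolve(x, h)           # direct linear convolution, length n+m-1

print(np.allclose(y_fft, y_ref))    # True: results match to rounding error
```

Overlap-add then does this block by block and sums the overlapping tails.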
 
Windowed FFTs and STFT have been in use for as many years as audio has been around (applied extensively in DeltaWave, of course). Wavelets are not particularly good at time analysis; they simply compute the result at different scales, which helps with time-boxing a specific frequency. Just like STFT does (chart attached).
Sure, there are always various pros and cons to the various analysis methods. I tend to have more interest in localized behavior, as that is more similar to how we experience sound.
And _if_ there are audible differences which are not reflected in the typical FFT results, I would expect they would still be measurable either in the time domain or in a localized frequency domain. And that is why I try to be careful about what information a single long-duration FFT can and cannot convey.
We used to have some fun by intentionally injecting transients which were engineered to be otherwise essentially masked.
 
For convolution with a large FIR filter, DW uses an overlap-add method that requires a minimum of n + m - 1 size for the FFT. In effect, the data (size n) is padded with at least m - 1 zeros, where m is the size of the filter.
Isn't modern technology great at automating things?
 
Fast Fourier analysis makes the assumption that the sampled time waveform repeats an infinite number of times.

Fourier analysis requires the assumption that the sampled time waveform is periodic and time-invariant at the non-localized level.

For most music, that is a poor assumption. It has little ability to differentiate localized events and is really an overall “average” over a block of time.

Nor is the inverse transform of the FFT back to the time domain (be it real and complex, magnitude or phase, etc.) deterministic.

This is one reason why more sophisticated analysis tools such as waterfall, JTFA and wavelet were developed.

Response from ChatGPT o1

It is true that a single Fourier transform (FFT) over a large time block can obscure short-lived or transient events—especially in real music, which is not periodic or time-invariant. FFT-based methods essentially provide an “overall average” in the frequency domain for the selected block size, making it difficult to pinpoint exactly when certain artifacts occur.

However, the loopback null test approach (using tools like DeltaWave) is not limited to a single, long FFT. DeltaWave first time-aligns and level-matches two complete waveforms (the reference and the re-captured signal), then subtracts them sample by sample in the time domain. The resulting difference waveform (Δ) inherently reflects any transient or time-dependent inaccuracies the device introduced.

If you want to dig further into transient-specific details, you can apply more localized analyses (like short-time FFT, wavelet transforms, or waterfall plots) to the difference waveform itself. These methods indeed offer finer time resolution, ensuring you’re not “averaging away” localized events. So while standard FFTs remain extremely useful for identifying harmonic distortion or noise floors in steady-state signals, the combination of time-domain subtraction plus optional time-frequency analysis of the Δ file provides a way to capture transient behaviors that a single, full-span FFT might miss.

Thus, even though Fourier-based measurement is often described as “assuming periodicity,” nothing in the null test requires you to rely on that assumption for capturing device-induced time-domain colorations. Subtracting the entire time waveform from the reference is already a more direct way to see any differences—transient or otherwise—and we can still apply advanced tools to the difference file if needed.
 
Response from ChatGPT o1

It is true that a single Fourier transform (FFT) over a large time block can obscure short-lived or transient events—especially in real music, which is not periodic or time-invariant. FFT-based methods essentially provide an “overall average” in the frequency domain for the selected block size, making it difficult to pinpoint exactly when certain artifacts occur.

However, the loopback null test approach (using tools like DeltaWave) is not limited to a single, long FFT. DeltaWave first time-aligns and level-matches two complete waveforms (the reference and the re-captured signal), then subtracts them sample by sample in the time domain. The resulting difference waveform (Δ) inherently reflects any transient or time-dependent inaccuracies the device introduced.

If you want to dig further into transient-specific details, you can apply more localized analyses (like short-time FFT, wavelet transforms, or waterfall plots) to the difference waveform itself. These methods indeed offer finer time resolution, ensuring you’re not “averaging away” localized events. So while standard FFTs remain extremely useful for identifying harmonic distortion or noise floors in steady-state signals, the combination of time-domain subtraction plus optional time-frequency analysis of the Δ file provides a way to capture transient behaviors that a single, full-span FFT might miss.

Thus, even though Fourier-based measurement is often described as “assuming periodicity,” nothing in the null test requires you to rely on that assumption for capturing device-induced time-domain colorations. Subtracting the entire time waveform from the reference is already a more direct way to see any differences—transient or otherwise—and we can still apply advanced tools to the difference file if needed.
ChatGPT gets a lot of stuff wrong because it scrapes things from both good and ignorant sources without being able to tell one from the other. Any finite block of signal can be treated as periodic. And any continuous and finite function can be interchangeably transformed between time and frequency domain with no loss of information.
 
ChatGPT gets a lot of stuff wrong because it scrapes things from both good and ignorant sources without being able to tell one from the other. Any finite block of signal can be treated as periodic. And any continuous and finite function can be interchangeably transformed between time and frequency domain with no loss of information.

Response from ChatGPT o1

I appreciate the feedback, and it’s certainly true that:

  1. Any finite block of a signal can be treated as periodic for the purpose of performing an FFT (that’s how windowing works).
  2. In principle, a continuous, finite-length function can be fully represented in either the time or frequency domain without loss of information—if you keep track of both magnitude and phase across the entire spectrum.
However, the issue isn’t whether the transform is theoretically complete—it’s how that information is distributed, and whether a single long FFT meaningfully captures localized time events. When music signals vary quickly, a short-lived transient gets “smeared” across the FFT block, so if you’re only examining the overall frequency bins for one large window, it can be hard to pinpoint when that transient occurred or what impact it might have in the time domain.

This is why we often move beyond a single long FFT and use:

  • Short-time (or sliding) FFTs,
  • Wavelet transforms, or
  • Time-domain subtraction (like a null test, then analyzing the residual signal)
to retain both frequency and timing details.

None of this contradicts the fact that time and frequency representations are mathematically equivalent in an ideal sense. It’s simply about choosing a method that highlights where and when small differences occur—rather than integrating or averaging them over one large block.
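The smearing point can be demonstrated in a few lines, assuming Python/numpy, a unit impulse standing in for the "transient", and an arbitrary 512-sample frame size:

```python
# A single click in 1 s of silence. One long FFT spreads its energy
# evenly across all bins (timing survives only in the phase); a
# short-time FFT shows which frame the click occurred in.
import numpy as np

fs = 8000
x = np.zeros(fs)
x[3000] = 1.0                        # transient at t = 0.375 s

# one long FFT: flat magnitude spectrum, no obvious time information
long_mag = np.abs(np.fft.rfft(x))
print(long_mag.std() < 1e-12)        # True: every bin, same magnitude

# short-time FFT: energy appears only in the frame containing the click
frame = 512
frames = x[:len(x) // frame * frame].reshape(-1, frame)
energy = (np.abs(np.fft.rfft(frames, axis=1))**2).sum(axis=1)
print(int(energy.argmax()))          # 5, i.e. 3000 // 512
```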
 
I had a thought in the gym this morning - without ChatGPT ;).

What if I apply a high-pass filter (6 dB/octave, minimum-phase, at ~0.07 Hz) offline to my reference file, to create a new 'compensated reference file', Ref_c. I'll then use Ref_c as the input to the DA/AD chain, and also as the reference in DeltaWave.

The idea is that in using Ref_c, the RME’s own coupling filter should have little or nothing to do below 10 Hz.

Thoughts?
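For concreteness, a rough offline sketch of the Ref_c idea, assuming scipy, a white-noise stand-in for the reference file, and a 1st-order Butterworth as the 6 dB/octave minimum-phase filter:

```python
# Hypothetical Ref_c construction: bake a ~0.07 Hz minimum-phase high-pass
# into the reference once, offline. A second pass through the same filter
# (standing in for the ADC's coupling filter) then changes almost nothing.
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100
ref = np.random.default_rng(3).standard_normal(fs * 5)   # stand-in for Ref

b, a = butter(1, 0.07, btype='highpass', fs=fs)
ref_c = lfilter(b, a, ref)           # Ref_c: causal, minimum-phase filtering

# residual of filtering Ref_c again: should be tiny, which is the point
residual = ref_c - lfilter(b, a, ref_c)
rel_rms = np.sqrt(np.mean(residual**2)) / np.sqrt(np.mean(ref**2))
```

Ref_c would then be written out and used both as the DAC input and as the DeltaWave reference; whether the RME's actual coupling corner is close enough to 0.07 Hz is exactly what the null sweep would have to establish.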
 
However, the issue isn’t whether the transform is theoretically complete—it’s how that information is distributed, and whether a single long FFT meaningfully captures localized time events. When music signals vary quickly, a short-lived transient gets “smeared” across the FFT block, so if you’re only examining the overall frequency bins for one large window, it can be hard to pinpoint when that transient occurred or what impact it might have in the time domain.
Bollocks. Now the bot is getting Shannon-Nyquist wrong and stirring it into the pot.

Automated Brandolini.
 