I want the effects of the DACs' filters to be reflected in the nulls.
Sure. But guessing at the effects of ADC filters by applying different types of filters to reduce the RMS null is just that: guessing.
Keep in mind that FFT based measurements do not fully characterize the time domain for non-repeating signals.
Your understanding of FT needs some work. If you're anywhere near western NY, you're invited to sit in on one of my classes where FT spectroscopy is analyzed in detail.

Fast Fourier analysis makes the assumption that the sampled time waveform repeats an infinite number of times.
Fourier analysis requires the assumption that the sampled time waveform is periodic and time-invariant at the non-localized level.
For most music, that is a poor assumption. It has little ability to differentiate localized events and is really an overall “average” over a block of time.
Nor is the inverse transform of the FFT back to the time domain (be it real or complex, magnitude or phase, etc.) deterministic.
This is one reason why more sophisticated analysis tools such as waterfall plots, JTFA, and wavelets were developed.
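The periodicity assumption has one concrete, easy-to-show consequence: when a block does not contain a whole number of cycles, the FFT's implicit periodic extension has a discontinuity and energy leaks into distant bins unless a window is applied. A quick numpy sketch, purely illustrative (the sample rate and tone frequency are arbitrary):

```python
import numpy as np

fs = 1000                      # sample rate, Hz (toy example)
n = 1000                       # one-second block
t = np.arange(n) / fs

# A 100.5 Hz tone does not complete a whole number of cycles in the
# block, so the FFT's implied periodic repetition is discontinuous.
x = np.sin(2 * np.pi * 100.5 * t)

spec_rect = np.abs(np.fft.rfft(x)) / n              # no window
spec_hann = np.abs(np.fft.rfft(x * np.hanning(n))) / n

# Leakage far from the tone (bins 300+ are ~200 Hz away): the
# un-windowed spectrum is orders of magnitude worse than the Hann one.
far_rect = spec_rect[300:].max()
far_hann = spec_hann[300:].max()
print(far_rect > 100 * far_hann)   # True
```

This is also why windowed analysis (and STFT) is standard practice rather than a single bare FFT over an arbitrary block.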
Well, measuring as well. I'd start at, say, 0.06Hz and then go up in 0.001Hz increments, taking measurements in DW as I go. If, and only if, there happens to be a particularly deep null in the audio band at one particular setting, then that's the filter I'd keep. This filtered ref file would become the standard one used for all null measurements.
I don't see why this wouldn't be valid.
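The sweep being proposed can be sketched in a few lines. This is a hypothetical illustration only: the one-pole filter, the `rms_null_db` helper, and the synthetic "loopback" are my stand-ins, not DeltaWave internals or anyone's actual measurement chain.

```python
import numpy as np

def one_pole_hp(x, fc, fs):
    """First-order (RC-style) high-pass; fc is the corner in Hz.
    A stand-in for whatever filter shape would actually be used."""
    a = np.exp(-2 * np.pi * fc / fs)
    y = np.empty_like(x)
    prev_x = prev_y = 0.0
    for i, xi in enumerate(x):
        y[i] = a * (prev_y + xi - prev_x)
        prev_x, prev_y = xi, y[i]
    return y

def rms_null_db(ref, dut):
    """Single overall RMS null of the difference, in dB."""
    d = ref - dut
    return 20 * np.log10(np.sqrt(np.mean(d**2)) / np.sqrt(np.mean(ref**2)))

fs = 48000
rng = np.random.default_rng(0)
ref = rng.standard_normal(fs)                 # placeholder reference file
# Pretend the loopback applied a 0.08 Hz high-pass, plus a noise floor:
dut = one_pole_hp(ref, 0.08, fs) + 1e-6 * rng.standard_normal(fs)

# Sweep candidate corner frequencies and keep the deepest null.
best = min((rms_null_db(one_pole_hp(ref, fc, fs), dut), fc)
           for fc in np.arange(0.06, 0.101, 0.005))
print(best)   # deepest null lands near fc = 0.08
```

Note that the sweep finds the right corner here only because the synthetic "device" really is a filter of the assumed shape, which is exactly the objection raised in the next reply.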
Because you're measuring using a single, average value of the RMS null across millions of samples.
You could, for example, introduce very large occasional errors (possibly even audible) due to the filter you pick that, on average, improves the null. I've seen this happen with real loopbacks. Also, you'd need to validate your "guess" using multiple loopback recordings of different material and different lengths to make sure your filter works across the board.
You're welcome to experiment with this, of course, but I don't see how I can add this functionality to DW without introducing too many possible error scenarios.
Your understanding of FT needs some work. If you're anywhere near western NY, you're invited to sit in on one of my classes where FT spectroscopy is analyzed in detail.
I notice you failed to respond in specifics and instead made an ad-hominem attack.
As much as I hate to remove new functionality, I decided to not add the DC filter to DeltaWave, after all. Instead, I extended the available frequencies for high-pass and low-pass filters to include frequencies below 8Hz, and also made it possible to enter fractional frequencies by typing them directly into the frequency selector. If you want, you can enter 0.0712345 as the desired filter frequency.
https://app.box.com/s/h2u20jzre4bake02n5aj40kc5r2woaa0
This is more in line with my "do no harm" philosophy than the DC filter in v2.1.14. For example, here's an HP filter at 0.1Hz, applied to a white noise comparison file:
View attachment 420770
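For a sense of why a 0.1Hz high-pass is "do no harm": even the gentlest first-order filter with that corner is essentially flat across the audio band. A small numpy check (the first-order magnitude response is my assumption for illustration; DeltaWave's actual filter may be steeper, which only moves the corner transition, not the audio band):

```python
import numpy as np

fc = 0.1                             # high-pass corner, Hz
f = np.array([1.0, 20.0, 1000.0])    # frequencies of interest, Hz

# First-order high-pass magnitude response: |H| = f / sqrt(f^2 + fc^2)
atten_db = 20 * np.log10(f / np.sqrt(f**2 + fc**2))
print(atten_db)   # attenuation at 1 Hz, 20 Hz, 1 kHz: all tiny
```

At 20 Hz the loss works out to roughly a ten-thousandth of a dB, far below anything measurable in a null test, let alone audible.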
Out of curiosity, does the numerical method for the Fourier transform apply any zero padding or end correction?
I remember back in the days of MLSSA that doing so made significant differences in the results.
Sure, there are always various pros and cons to the various analysis methods. I tend to have more interest in the localized behavior, as that is more similar to how we experience sound.

Windowed FFTs and STFT have been in use for as many years as audio has been around (applied extensively in DeltaWave, of course). Wavelets are not particularly good at time analysis; they simply compute the result at different scales, which helps with time-boxing a specific frequency. Just like STFT does:
View attachment 420761
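The time localization a windowed STFT buys is easy to show with hand-rolled frames (no library STFT, purely for illustration): a short click buried in a tone shows up only in the frames that contain it.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                  # one second of signal
x = np.sin(2 * np.pi * 440 * t)
x[4000:4008] += 0.5                     # a short click at t = 0.5 s

# Plain STFT: slide a 256-sample Hann window and FFT each frame.
win, hop = np.hanning(256), 128
frames = [np.abs(np.fft.rfft(x[i:i + 256] * win))
          for i in range(0, len(x) - 256, hop)]
S = np.array(frames)                    # shape: (n_frames, n_bins)

# Broadband energy away from the 440 Hz tone peaks only in the frames
# containing the click, i.e. the event is localized in time.
hi = S[:, 50:].sum(axis=1)              # bins above ~1.5 kHz
print(hi.argmax() * hop / fs)           # close to 0.5 s
```

A single FFT over the whole second would show the same click only as a slightly raised broadband floor, with no indication of when it happened.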
I was very specific: the parts I quoted back to you are incorrect.
Isn't modern technology great at automating things?

For convolution with a large FIR filter, DW uses an overlap-add method that requires a minimum FFT size of n + m - 1. In effect, the data (size n) is padded with at least m - 1 zeros, where m is the size of the filter.
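The n + m - 1 rule is easy to demonstrate with a minimal overlap-add sketch. This is the textbook version of the method, not DeltaWave's actual code:

```python
import numpy as np

def overlap_add(x, h, block=256):
    """Overlap-add FIR convolution: each data block of size `block` is
    zero-padded to block + m - 1 points (m = filter length) so the
    circular convolution of each block equals the linear one, and the
    m - 1 sample tails are summed into the next block's region."""
    m = len(h)
    size = block + m - 1                 # minimum FFT size per block
    H = np.fft.rfft(h, size)
    y = np.zeros(len(x) + m - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]     # last block may be shorter
        y_seg = np.fft.irfft(np.fft.rfft(seg, size) * H, size)
        y[start:start + len(seg) + m - 1] += y_seg[:len(seg) + m - 1]
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)            # data (size n)
h = rng.standard_normal(101)             # FIR filter (size m)
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))  # True
```

With less padding than m - 1 zeros per block, the circular wrap-around would corrupt the start of each block, which is the "end correction" question in a nutshell.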
Response from ChatGPT o1:
It is true that a single Fourier transform (FFT) over a large time block can obscure short-lived or transient events—especially in real music, which is not periodic or time-invariant. FFT-based methods essentially provide an “overall average” in the frequency domain for the selected block size, making it difficult to pinpoint exactly when certain artifacts occur.
However, the loopback null test approach (using tools like DeltaWave) is not limited to a single, long FFT. DeltaWave first time-aligns and level-matches two complete waveforms (the reference and the re-captured signal), then subtracts them sample by sample in the time domain. The resulting difference waveform (Δ) inherently reflects any transient or time-dependent inaccuracies the device introduced.
If you want to dig further into transient-specific details, you can apply more localized analyses (like short-time FFT, wavelet transforms, or waterfall plots) to the difference waveform itself. These methods indeed offer finer time resolution, ensuring you’re not “averaging away” localized events. So while standard FFTs remain extremely useful for identifying harmonic distortion or noise floors in steady-state signals, the combination of time-domain subtraction plus optional time-frequency analysis of the Δ file provides a way to capture transient behaviors that a single, full-span FFT might miss.
Thus, even though Fourier-based measurement is often described as “assuming periodicity,” nothing in the null test requires you to rely on that assumption for capturing device-induced time-domain colorations. Subtracting the entire time waveform from the reference is already a more direct way to see any differences—transient or otherwise—and we can still apply advanced tools to the difference file if needed.
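The subtraction step described above can be sketched as follows. This is a toy version that level-matches with a single best-fit gain and subtracts sample by sample; a real tool like DeltaWave also corrects clock drift, sub-sample offsets, and so on, none of which is attempted here.

```python
import numpy as np

def null_depth_db(ref, cap):
    """Level-match the capture to the reference with a least-squares
    gain, subtract in the time domain, and report the RMS of the
    difference relative to the reference, in dB."""
    g = np.dot(cap, ref) / np.dot(cap, cap)   # best-fit gain
    delta = ref - g * cap
    return 20 * np.log10(np.sqrt(np.mean(delta**2)) /
                         np.sqrt(np.mean(ref**2)))

rng = np.random.default_rng(0)
ref = rng.standard_normal(48000)                      # reference signal
cap = 0.5 * ref + 1e-4 * rng.standard_normal(48000)   # gain error + noise

print(null_depth_db(ref, cap))   # the added noise floor, around -74 dB
```

The point of the quoted argument is that `delta` is a full time-domain waveform, so any transient error the device adds survives in it and can then be examined with STFT or anything else.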
ChatGPT gets a lot of stuff wrong because it scrapes things from both good and ignorant sources without being able to tell one from the other. Any finite block of signal can be treated as periodic. And any continuous and finite function can be interchangeably transformed between time and frequency domain with no loss of information.
Bollocks. Now the bot is getting Shannon-Nyquist wrong and stirring it into the pot.

However, the issue isn’t whether the transform is theoretically complete—it’s how that information is distributed, and whether a single long FFT meaningfully captures localized time events. When music signals vary quickly, a short-lived transient gets “smeared” across the FFT block, so if you’re only examining the overall frequency bins for one large window, it can be hard to pinpoint when that transient occurred or what impact it might have in the time domain.
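The "smearing" point can be made precise: in a single full-span FFT, a transient's timing lives entirely in the phase; the magnitude spectrum is identical no matter when the click occurs. A numpy illustration:

```python
import numpy as np

fs = 8000
x1 = np.zeros(fs); x1[2400] = 1.0       # click at t = 0.3 s
x2 = np.zeros(fs); x2[5600] = 1.0       # same click at t = 0.7 s

m1 = np.abs(np.fft.rfft(x1))
m2 = np.abs(np.fft.rfft(x2))

# The magnitude spectra are bin-for-bin identical: the usual magnitude
# plot says nothing about *when* the transient happened.
print(np.allclose(m1, m2))              # True
```

So both sides are right about different things: no information is lost by the transform itself, but the familiar magnitude-only view of one long FFT does discard the timing, which is why STFT/waterfall views exist.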