
Beta-test: DeltaWave Null Comparison software

OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,741
Likes
10,484
Location
North-East
I've been looking for a way to objectively evaluate sound quality all my life.
I think this software is a perfect comparison tool that integrates all elements of all verification methods.
However, my knowledge is limited and I am not able to fully master the software.
I want to use this software to quantify how much the original source (FLAC format) deteriorates, compared to the original, when it is encoded into various formats.
Judging from past logs, the simplest way to compare the match rate with the original seems to be "RMS of the difference of spectra": the more negative the value, the higher the match rate.
Do you agree?
Also, is the 60 day usage period for this software still valid?

There's no trial period -- the software is free for as long as you want to use it. Donate only if you feel you must, I'm not demanding donations :)

RMS of the difference of spectra measures the difference between two spectral averages of the two waveforms. It's enough to judge whether there's a significant deviation in frequency response only. Use RMS Null difference for an overall measure of "equality" that includes timing, amplitude and frequency information. If you want a measure of the "audibility difference" between two files, take a look at PK Metric in DeltaWave.
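To illustrate the distinction, here is a rough Python sketch (my own illustration, not DeltaWave's actual code; the windowing and normalization are assumptions), for two already time- and level-aligned NumPy arrays `ref` and `cmp`:

```python
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def avg_spectrum(x, n_fft=16384):
    """Average magnitude spectrum over consecutive windowed frames."""
    frames = x[: (len(x) // n_fft) * n_fft].reshape(-1, n_fft)
    return np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)).mean(axis=0)

def spectra_diff_rms_db(ref, cmp):
    """RMS of the difference of the two averaged spectra, in dB relative to the
    reference spectrum. Timing errors largely average out here, so this mostly
    reflects frequency-response deviations."""
    s_ref, s_cmp = avg_spectrum(ref), avg_spectrum(cmp)
    return 20 * np.log10(rms(s_ref - s_cmp) / rms(s_ref))

def null_rms_db(ref, cmp):
    """RMS of the sample-by-sample residual, in dB relative to the reference.
    Sensitive to timing, amplitude and frequency differences all at once;
    more negative means a deeper null, i.e. a closer match."""
    return 20 * np.log10(rms(ref - cmp) / rms(ref))
```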
 

sumeragi

New Member
Joined
Aug 27, 2023
Messages
4
Likes
1
I think the software is still free. You can make a donation to Paul.

Yes, a null of -80 dB is a closer match than -50 dB, for instance.
Thank you for your prompt reply.
Looking at the [Difference (rms)] value, you can judge the approximate difference from the original.
As far as I know, no tool existed in the past that could quantify sound quality.
DeltaWave is a great tool!
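To put those dB figures in perspective, here's plain arithmetic on the null depth (nothing DeltaWave-specific):

```python
# Null depth in dB -> linear size of the residual relative to the reference signal.
for null_db in (-50, -80):
    ratio = 10 ** (-null_db / 20)
    print(f"{null_db} dB null -> residual is ~{ratio:.0f}x smaller than the signal")
# -50 dB -> ~316x smaller, -80 dB -> ~10000x smaller, i.e. a much closer match
```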
 

sumeragi

New Member
Joined
Aug 27, 2023
Messages
4
Likes
1
There's no trial period -- the software is free for as long as you want to use it. Donate only if you feel you must, I'm not demanding donations :)

RMS of the difference of spectra measures the difference between two spectral averages of the two waveforms. It's enough to judge whether there's a significant deviation in frequency response only. Use RMS Null difference for an overall measure of "equality" that includes timing, amplitude and frequency information. If you want a measure of the "audibility difference" between two files, take a look at PK Metric in DeltaWave.
Thank you for your prompt reply.
By looking at the value of [Correlated Null], you can judge the approximate difference from the original.
I'm surprised to hear that this amazing tool is free!
It seems it hasn't been updated since the latest version was released in December 2022, but it has more than enough functionality for now.
 

Rantapossu

Addicted to Fun and Learning
Joined
Jul 21, 2022
Messages
528
Likes
377
It seems it hasn't been updated since the latest version was released in December 2022, but it has more than enough functionality for now.

There's a test version of V2.0.9 available (March 25th, 2023), but @pkane hasn't released it officially yet.

I've used that version exclusively and it has worked flawlessly.

Here's the link:

 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,741
Likes
10,484
Location
North-East
Thank you for your prompt reply.
By looking at the value of [Correlated Null], you can judge the approximate difference from the original.
I'm surprised to hear that this amazing tool is free!
It seems it hasn't been updated since the latest version was released in December 2022, but it has more than enough functionality for now.
There's been one update since December 2022, as @Rantapossu pointed out, that I've not made official yet simply because it was a very minor enhancement. If you have ideas or suggestions on what to add or improve, please post them.
 

sumeragi

New Member
Joined
Aug 27, 2023
Messages
4
Likes
1
There's been one update since December 2022, as @Rantapossu pointed out, that I've not made official yet simply because it was a very minor enhancement. If you have ideas or suggestions on what to add or improve, please post them.
Sorry for the persistent question.
If I want to check the match rate simply, which one should I focus on, [Difference (rms)] or [Correlated Null Depth]?
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,741
Likes
10,484
Location
North-East
Sorry for the persistent question.
If I want to check the match rate simply, which one should I focus on, [Difference (rms)] or [Correlated Null Depth]?
Difference (rms). Correlated null depth is a value that can help pinpoint large timing differences, but Difference (rms) includes all the differences, including timing.
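As a toy illustration of that point (my own example, not DeltaWave internals): a one-sample timing shift alone already limits the plain RMS null, while cross-correlation still identifies it as purely a timing offset.

```python
import numpy as np

fs = 48000
n = 4800                                  # 0.1 s of signal is enough for the demo
t = np.arange(n) / fs
ref = np.sin(2 * np.pi * 1000 * t)
cmp = np.roll(ref, 1)                     # same signal, delayed by one sample

# Straight RMS difference: a single-sample shift already limits the null to about -18 dB.
delta = ref - cmp
print(20 * np.log10(np.sqrt(np.mean(delta**2)) / np.sqrt(np.mean(ref**2))))

# Cross-correlation still recovers the offset, so a correlation-based figure can
# say "this is mostly a timing difference" rather than a level or FR difference.
lag = np.argmax(np.correlate(cmp, ref, mode="full")) - (n - 1)
print(lag)                                # 1 sample
```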
 

introC

New Member
Joined
Aug 26, 2023
Messages
2
Likes
0
Hi, I've been using DeltaWave for various tasks, and it has been really helpful so far. I would like to report the problems that I encountered with the clock drift processing/matching:

1) The frequency response will gradually degrade at higher frequencies, depending on the clock drift value.

With a clock drift value of -144.145211926878 ppm, this is the difference in the spectrum:
Snipaste_2023-09-09_00-46-26.png


With lower ppm values, the processed file still has degraded high frequencies, but it's less noticeable. Level EQ isn't a solution for me, since it would be applied to the clock-drift-corrected result and would just be a way to force a match to the reference spectrum.

I'll note that this value isn't large enough to explain this degradation in frequency response. Also, I was able to manually stretch the clock-drifted file and align it to the reference in Reaper DAW (with the `Preserve pitch when changing rate` checkbox disabled on the take), and the result was really close to the DeltaWave result, but the Reaper export didn't have this degraded frequency response, only a lowpass filter somewhere around 21 kHz. (And I prefer DeltaWave's auto match for this task, since it's quite bothersome to do it manually.)

2) Sometimes the clock drift match needs a second pass to make a full match (probably because larger values have less precision in the float data type).

To get a final correct match, I sometimes needed to do a second pass, feeding in the exported reference and comparison. With the first pass the value is -144.145211926878 ppm, and with the second pass it is 0.0208148945337468 ppm, at which point it is finally fully matched.

3) When correcting clock drift at a high sample rate, artifacts are gradually introduced into the audible range over time.

I was trying to shift the high-frequency degradation by upsampling the input, and while it did kind of work, I then noticed a strange artifact that gets louder gradually over time:
Snipaste_2023-09-09_11-00-00.png

This is the delta between the same clock-drift-processed file (the -144.145211926878 value) processed at the original 44100 Hz sample rate and at 176400 Hz. It's more noticeable after the 4:20 mark.

Here is the same delta in the Delta Spectrogram:
Snipaste_2023-09-09_10-56-46.png



That's what I wanted to report on the clock drift issues I noticed. And I hope I didn't say something stupid :D
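For readers following along, this is roughly what a linear clock-drift correction amounts to, and how two sequential corrections compose. It's a bare-bones sketch using linear interpolation, not DeltaWave's actual resampler, and the sign convention is arbitrary:

```python
import numpy as np

def correct_drift(x, ppm):
    """Resample x by a factor of (1 + ppm * 1e-6) using linear interpolation.
    A real implementation would use a proper band-limited resampler."""
    factor = 1.0 + ppm * 1e-6
    n_out = int(round(len(x) / factor))
    positions = np.arange(n_out) * factor        # fractional read positions in x
    return np.interp(positions, np.arange(len(x)), x)

# Two sequential corrections compose multiplicatively, not additively:
p1, p2 = -144.145211926878, 0.0208148945337468
combined_ppm = ((1 + p1 * 1e-6) * (1 + p2 * 1e-6) - 1) * 1e6
print(combined_ppm)     # = p1 + p2 + p1*p2*1e-6 (the cross term is only ~3e-6 ppm here)
```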
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,741
Likes
10,484
Location
North-East
Hi, I've been using DeltaWave for various tasks, and it has been really helpful so far. I would like to report the problems that I encountered with the clock drift processing/matching:

1) The frequency response will gradually degrade at higher frequencies, depending on the clock drift value.

With a clock drift value of -144.145211926878 ppm, this is the difference in the spectrum:
View attachment 310841

With lower ppm values, the processed file still has degraded high frequencies, but it's less noticeable. Level EQ isn't a solution for me, since it would be applied to the clock-drift-corrected result and would just be a way to force a match to the reference spectrum.

I'll note that this value isn't large enough to explain this degradation in frequency response. Also, I was able to manually stretch the clock-drifted file and align it to the reference in Reaper DAW (with the `Preserve pitch when changing rate` checkbox disabled on the take), and the result was really close to the DeltaWave result, but the Reaper export didn't have this degraded frequency response, only a lowpass filter somewhere around 21 kHz. (And I prefer DeltaWave's auto match for this task, since it's quite bothersome to do it manually.)

2) Sometimes the clock drift match needs a second pass to make a full match (probably because larger values have less precision in the float data type).

To get a final correct match, I sometimes needed to do a second pass, feeding in the exported reference and comparison. With the first pass the value is -144.145211926878 ppm, and with the second pass it is 0.0208148945337468 ppm, at which point it is finally fully matched.

3) When correcting clock drift at a high sample rate, artifacts are gradually introduced into the audible range over time.

I was trying to shift the high-frequency degradation by upsampling the input, and while it did kind of work, I then noticed a strange artifact that gets louder gradually over time:
View attachment 310845
This is the delta between the same clock-drift-processed file (the -144.145211926878 value) processed at the original 44100 Hz sample rate and at 176400 Hz. It's more noticeable after the 4:20 mark.

Here is the same delta in the Delta Spectrogram:
View attachment 310846


That's what I wanted to report on the clock drift issues I noticed. And I hope I didn't say something stupid :D

Clock Drift correction is a resampling operation that absolutely shifts higher frequencies. In effect, it's a change in the sampling frequency of the waveform. The frequency response doesn't degrade, but higher frequencies will have a gradual slope to them due to resampling.

What length files are you using? Remember, DW must determine clock drift to a very high precision from the noisy data. The more samples you feed into the drift determination routine, the more accurate it'll be. But jitter and other sources of noise can still introduce uncertainty into the drift calculation. If very high precision to -140dB or better is required, you should think about synchronizing the clocks to eliminate drift. Unfortunately, jitter will still remain, and that can't (and shouldn't) be corrected by DW.
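As an aside, here's a crude two-point sketch (not how DeltaWave determines drift) of why longer files help: the same whole-sample alignment uncertainty at each end turns into a smaller ppm error when the ends are farther apart.

```python
import numpy as np

def lag_samples(a, b):
    """Integer lag of b relative to a via brute-force cross-correlation (toy version)."""
    return np.argmax(np.correlate(b, a, mode="full")) - (len(a) - 1)

def crude_drift_ppm(ref, cmp, fs, win_seconds=0.25):
    """Estimate linear clock drift from the change in offset between a window
    at the start and a window at the end of the two recordings."""
    n = int(win_seconds * fs)
    lag_start = lag_samples(ref[:n], cmp[:n])
    lag_end = lag_samples(ref[-n:], cmp[-n:])
    span = len(ref) - n                          # samples between the two windows
    return (lag_end - lag_start) / span * 1e6

# With whole-sample lags, the resolution is about 1e6 / span ppm:
# a 7-minute 44.1 kHz file gives ~0.05 ppm steps, a 30-second file only ~0.8 ppm.
```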
 

introC

New Member
Joined
Aug 26, 2023
Messages
2
Likes
0
Clock Drift correction is a resampling operation that absolutely shifts higher frequencies. In effect, it's a change in the sampling frequency of the waveform. The frequency response doesn't degrade, but higher frequencies will have a gradual slope to them due to resampling.

What length files are you using? Remember, DW must determine clock drift to a very high precision from the noisy data. The more samples you feed into the drift determination routine, the more accurate it'll be. But jitter and other sources of noise can still introduce uncertainty into the drift calculation. If very high precision to -140dB or better is required, you should think about synchronizing the clocks to eliminate drift. Unfortunately, jitter will still remain, and that can't (and shouldn't) be corrected by DW.
Can I somehow prevent this slope from occurring? One idea I have is to apply the difference in frequency response that it produces to compensate for the slope after processing (or would Linear EQ after processing actually be better?). However, there has to be a better way, like increasing the filter steepness of the resampler, right?

The file I'm correcting (shown in the example above) is 7 minutes long. It's a drum and bass track by Pendulum called "Midnight Runner" from the album "In Silico." The only noticeable difference in noise I can observe is the noise-shaped dithering at the high frequencies, while the rest of the noise floor looks identical to the reference.

About the need for a second pass to achieve a fuller or better match: if I pass the sum of the ppm values from the two passes to Manual Correction, like -144.124484792602 + 0.0212234087917026 = -144.1032613838102974, the Manual Correction history rounds this value to -144.10326138381, and the result is worse than the second-pass match. The DW report even changes the drift computation quality from Good to Excellent for the second pass. However, the reason such a precise value might be necessary to get the best overall null is that the comparison recording's clock drift isn't linear. I know there is a Non-linear drift correction checkbox in the DW settings, but sadly it doesn't correct the non-linearity in my testing and still requires a second pass for a final match.

On a side note, the pack of recordings I'm trying to correct consists of instrumental versions of the album "In Silico" by Pendulum. I'm particularly focused on the track "Midnight Runner", since its instrumental export is the same as the release (the reference), and like all the instrumental recordings in the pack it has non-linear clock drift, mismatched loudness between the left and right channels, and probably differences in phase and frequency response. As far as I can tell, only the clock drift differs between recordings, while the other differences seem to be constant. Since they're constant, to minimize the error when creating correction presets for loudness, frequency response, and phase, I need to obtain the best null for the aligned "Midnight Runner", which will then be corrected with those presets. However, the gradual slope at high frequencies after clock-drift correction messes up the frequency response correction preset, and I would ideally like to avoid that.
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,741
Likes
10,484
Location
North-East
Can I somehow prevent this slope from occurring? One idea I have is to apply the difference in frequency response that it produces to compensate for the slope after processing (or would Linear EQ after processing actually be better?). However, there has to be a better way, like increasing the filter steepness of the resampler, right?

The file I'm correcting (shown in the example above) is 7 minutes long. It's a drum and bass track by Pendulum called "Midnight Runner" from the album "In Silico." The only noticeable difference in noise I can observe is the noise-shaped dithering at the high frequencies, while the rest of the noise floor looks identical to the reference.

About the need for a second pass to achieve a fuller or better match: if I pass the sum of the ppm values from the two passes to Manual Correction, like -144.124484792602 + 0.0212234087917026 = -144.1032613838102974, the Manual Correction history rounds this value to -144.10326138381, and the result is worse than the second-pass match. The DW report even changes the drift computation quality from Good to Excellent for the second pass. However, the reason such a precise value might be necessary to get the best overall null is that the comparison recording's clock drift isn't linear. I know there is a Non-linear drift correction checkbox in the DW settings, but sadly it doesn't correct the non-linearity in my testing and still requires a second pass for a final match.

On a side note, the pack of recordings I'm trying to correct consists of instrumental versions of the album "In Silico" by Pendulum. I'm particularly focused on the track "Midnight Runner", since its instrumental export is the same as the release (the reference), and like all the instrumental recordings in the pack it has non-linear clock drift, mismatched loudness between the left and right channels, and probably differences in phase and frequency response. As far as I can tell, only the clock drift differs between recordings, while the other differences seem to be constant. Since they're constant, to minimize the error when creating correction presets for loudness, frequency response, and phase, I need to obtain the best null for the aligned "Midnight Runner", which will then be corrected with those presets. However, the gradual slope at high frequencies after clock-drift correction messes up the frequency response correction preset, and I would ideally like to avoid that.
The difference introduced by clock drift is real and baked into the recording. It's a shift in frequencies due to the fact that the time interval, as reproduced by the DAC, is not the same as the time interval recorded by the ADC. Eliminating this would require precise adjustment of the slope of the frequency response of the measurement, but since we don't know which component caused the difference, such correction may mask an actual error. Non-linear level correction in DW will do this, but this is not very useful for measurements since it can correct for real errors in the DUT. For this reason, I do recommend using a synchronized clock from DAC to ADC, if at all possible.

And just philosophically speaking, correcting errors isn't what DW is designed for -- it's designed for measuring differences between recordings. By eliminating real differences (such as non-linear frequency and phase effects) you will affect the measurement. Non-linear correction is there primarily to check if phase or amplitude differences are causing the larger error. Non-linear corrections will only work for errors that are predictable, in other words, repeated throughout the measurement. Random or non-periodic amplitude errors, random jitter, etc., can't be corrected by this function.
 

matpowel

Member
Joined
Oct 23, 2023
Messages
11
Likes
3
Hey @pkane, thanks for a great tool here. AudioDiffMaker badly needed this kind of next-gen development, and the subsample and phase-type alignment alone is worth a donation, so I'll send something your way!

I have a couple of questions though:
  1. I see the channel options for each file are "L", "R", "L+R" and "Stereo" but I can't see any documentation anywhere for them. L and R are obvious, I'm guessing L+R is either an averaging of the two or a processing of the summed signal, but what is stereo? Interestingly, for the files I tested "stereo" results are way worse than any of the other 3 (which are mostly comparable). Like if L alone is -60, Stereo might be -30 for straight RMS dB, with PK Metric results being similar. Are these options documented somewhere? If not it might be worth a very simple description of each under that section of the docs?
  2. What is the "RMS of the difference of spectra" in the Results tab? It's in the "DF Metric" section of the results but it's unclear from anything I can read what it is.
  3. What is the metric that you're using for the revised list of results you've posted using files from the GS page? You post the GS readings from Didier, then the standard RMS diff from DW, then a couple of other metrics like PK Metric. Which numbers are they from the Results tab? Maybe the PK Metric RMS and DF Metric Median?
Separately, I have to say I'm really struggling to be convinced by the PK Metric. I read your descriptions on this site and wherever else I could find them, and it sounded promising, but in practice it doesn't seem to do as good a job as a straight diff with the various alignments/gain matching etc. As an example, I took the Original.wav from the Gearspace test, applied an EQ high cut of 5 dB in Pro-Q3 at 15 kHz and 24 dB/octave, and ran that into Softube Tape with maximum type A distortion. The result is very obviously different to the ear, and the standard diff/null gives -38 dB, whereas the PK Metric RMS is around -53 dB (dBFS and dBr both about the same). Interestingly, the standard diff in dBA is around -50. I chose those particular operations because they mimic what I "hear" when I listen to poor-quality converters (like the UAD Apollo etc.): a noticeable degradation at the top end and lower-sounding fidelity resulting in a slightly saturated sound. For this reason I think the standard RMS in both dBr and dBA, plus perhaps Correlated Null, are more useful in measuring real-world converter results, and probably also most other "gear testing". However, I'm completely open to something I'm missing in the methodology here. Note that when removing the tape saturation, the PK Metric jumps back to slightly better (a lower negative value, I believe it was -61 vs -57) than the standard diff, which would make it slightly better at predicting that the two files very obviously sound different to anyone with half-decent ears, because of the high cut going on.

Nevertheless, I'll be using this tool a ton as I'm just getting back into the studio after years away and rebuilding the core gear. One of the first things I've used it for is putting hard numbers alongside "ear tests" of gear, like how the Flock Patch compares to a standard Switchcraft TT bay or similar, as well as my monitoring chain and various converters. Scientific measurements aren't particularly useful in much of the music world, but when getting the last few % of performance out of the studio it's awesome to have objective tools for items like converters and patching :)

Matt
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,741
Likes
10,484
Location
North-East
Hey @pkane, thanks for a great tool here. AudioDiffMaker badly needed this kind of next-gen development, and the subsample and phase-type alignment alone is worth a donation, so I'll send something your way!

I have a couple of questions though:
  1. I see the channel options for each file are "L", "R", "L+R" and "Stereo" but I can't see any documentation anywhere for them. L and R are obvious, I'm guessing L+R is either an averaging of the two or a processing of the summed signal, but what is stereo? Interestingly, for the files I tested "stereo" results are way worse than any of the other 3 (which are mostly comparable). Like if L alone is -60, Stereo might be -30 for straight RMS dB, with PK Metric results being similar. Are these options documented somewhere? If not it might be worth a very simple description of each under that section of the docs?
  2. What is the "RMS of the difference of spectra" in the Results tab? It's in the "DF Metric" section of the results but it's unclear from anything I can read what it is.
  3. What is the metric that you're using for the revised list of results you've posted using files from the GS page? You post the GS readings from Didier, then the standard RMS diff from DW, then a couple of other metrics like PK Metric. Which numbers are they from the Results tab? Maybe the PK Metric RMS and DF Metric Median?
Separately, I have to say I'm really struggling to be convinced by the PK Metric. I read your descriptions on this site and wherever else I could find them, and it sounded promising, but in practice it doesn't seem to do as good a job as a straight diff with the various alignments/gain matching etc. As an example, I took the Original.wav from the Gearspace test, applied an EQ high cut of 5 dB in Pro-Q3 at 15 kHz and 24 dB/octave, and ran that into Softube Tape with maximum type A distortion. The result is very obviously different to the ear, and the standard diff/null gives -38 dB, whereas the PK Metric RMS is around -53 dB (dBFS and dBr both about the same). Interestingly, the standard diff in dBA is around -50. I chose those particular operations because they mimic what I "hear" when I listen to poor-quality converters (like the UAD Apollo etc.): a noticeable degradation at the top end and lower-sounding fidelity resulting in a slightly saturated sound. For this reason I think the standard RMS in both dBr and dBA, plus perhaps Correlated Null, are more useful in measuring real-world converter results, and probably also most other "gear testing". However, I'm completely open to something I'm missing in the methodology here. Note that when removing the tape saturation, the PK Metric jumps back to slightly better (a lower negative value, I believe it was -61 vs -57) than the standard diff, which would make it slightly better at predicting that the two files very obviously sound different to anyone with half-decent ears, because of the high cut going on.

Nevertheless, I'll be using this tool a ton as I'm just getting back into the studio after years away and rebuilding the core gear. One of the first things I've used it for is putting hard numbers alongside "ear tests" of gear, like how the Flock Patch compares to a standard Switchcraft TT bay or similar, as well as my monitoring chain and various converters. Scientific measurements aren't particularly useful in much of the music world, but when getting the last few % of performance out of the studio it's awesome to have objective tools for items like converters and patching :)

Matt
  • I see the channel options for each file are "L", "R", "L+R" and "Stereo" but I can't see any documentation anywhere for them.
L+R computes the average of the two channels. Useful in certain situations to reduce noise if both channels record the same input.
Stereo uses the left channel to measure, but applies the same level, drift, and offset to the right channel of the comparison waveform. This gives you a matched stereo recording that you can listen to or compare to the reference (say, in an ABX test). (A rough sketch of these two options is included after these answers.)

  • What is the "RMS of the difference of spectra" in the Results tab? It's in the "DF Metric" section of the results but it's unclear from anything I can read what it is.
Root-mean-square of the difference of the two spectral lines (reference and comparison). Basically tells you how large the error is between the two spectra across the full frequency range. Not an audibility metric, in any sense, but lets you judge how close or far away the two spectra are from each other.
  • PK Metric RMS is around -53 dB (dBFS and dBr both about the same). Interestingly, the standard diff in dBA is around -50.
That just tells you that PK Metric, in this case, is primarily dominated by the equal loudness curve. dBA is an approximation of that curve, so it's not surprising both give similar results. And -53dB isn't inaudible, by any means. Also, a single number, even PKMetric RMS value, is still not sufficient to show all the possible ways that two waveforms can be different. For better approximation you'll need to look at the actual PKMetric plot.
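For what it's worth, here's a rough sketch of how the L+R and Stereo options could be read, based only on the description above; the helper functions and the simplistic gain/offset matching are mine, not DeltaWave's:

```python
import numpy as np

def estimate_match(ref, cmp):
    """Toy stand-in for DeltaWave's matching: a gain and an integer offset.
    (The real matching also handles drift, sub-sample offsets, etc.)"""
    gain = np.sqrt(np.mean(ref ** 2) / np.mean(cmp ** 2))
    lag = np.argmax(np.correlate(cmp, ref, mode="full")) - (len(ref) - 1)  # O(n^2), toy only
    return gain, lag

def apply_match(x, gain, lag):
    return np.roll(x * gain, -lag)

def prepare(ref_lr, cmp_lr, mode="L"):
    """ref_lr, cmp_lr: float arrays of shape (n, 2). Returns (reference, matched comparison)."""
    if mode == "L+R":                            # average the two channels, then match
        ref, cmp = ref_lr.mean(axis=1), cmp_lr.mean(axis=1)
        gain, lag = estimate_match(ref, cmp)
        return ref, apply_match(cmp, gain, lag)
    if mode == "Stereo":                         # measure on the left, correct both channels
        gain, lag = estimate_match(ref_lr[:, 0], cmp_lr[:, 0])
        matched = np.column_stack([apply_match(cmp_lr[:, c], gain, lag) for c in (0, 1)])
        return ref_lr, matched
    col = 0 if mode == "L" else 1                # plain single-channel comparison
    gain, lag = estimate_match(ref_lr[:, col], cmp_lr[:, col])
    return ref_lr[:, col], apply_match(cmp_lr[:, col], gain, lag)
```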
 

Grooved

Addicted to Fun and Learning
Joined
Feb 26, 2021
Messages
682
Likes
441
Interestingly, for the files I tested "stereo" results are way worse than any of the other 3 (which are mostly comparable). Like if L alone is -60, Stereo might be -30 for straight RMS dB
Hi, I'm very surprised, because I've used DeltaWave a lot, and even if I didn't always check Stereo, L and R, each time I did all three tests, Stereo gave me a result matching the worst result of either L or R.
It was always the same as one of L or R, always; I never got a result different from both, so certainly not Stereo at -30 dB with L or R at -60 dB.
 

matpowel

Member
Joined
Oct 23, 2023
Messages
11
Likes
3
  • PK Metric RMS is around -53 dB (dBFS and dBr both about the same). Interestingly, the standard diff in dBA is around -50.
That just tells you that PK Metric, in this case, is primarily dominated by the equal loudness curve. dBA is an approximation of that curve, so it's not surprising both give similar results. And -53dB isn't inaudible, by any means. Also, a single number, even PKMetric RMS value, is still not sufficient to show all the possible ways that two waveforms can be different. For better approximation you'll need to look at the actual PKMetric plot.
Thanks for the response.

Regarding the PK Metric plot comment, what should I be looking for in this plot to compare the "accuracy" of a signal against the original? I've been using the various plots and many are useful (e.g. the delta of spectrum shows variation by frequency), but all I really took from this PK Metric plot was that the perceived difference is fairly consistent over time relative to the signal (dBr).

1698164279521.png
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,741
Likes
10,484
Location
North-East
Thanks for the response.

Regarding the PK Metric plot comment, what should I be looking for in this plot to compare the "accuracy" of a signal against the original? I've been using the various plots and many are useful (e.g. the delta of spectrum shows variation by frequency), but all I really took from this PK Metric plot was that the perceived difference is fairly consistent over time relative to the signal (dBr).

View attachment 320936

The peaks represent the most audible portions of the null based on the perceptual weighting that's used by PK Metric. If you were to listen for differences between the two recordings, say in an ABX test, or listen to the null / delta file, you'd want to first focus on the portions that have the highest sustained peaks in this plot, as these are likely to be more audible than the rest.
 

matpowel

Member
Joined
Oct 23, 2023
Messages
11
Likes
3
Hi, I'm very surprised, because I've used DeltaWave a lot, and even if I didn't always check Stereo, L and R, each time I did all three tests, Stereo gave me a result matching the worst result of either L or R.
It was always the same as one of L or R, always; I never got a result different from both, so certainly not Stereo at -30 dB with L or R at -60 dB.
Just for reference, I re-ran the same comparison I pasted for pkane above, but in Stereo. The difference wasn't as big as I've seen before, but it was still very significant: PK Metric and RMS diff about 10 dB "worse", and more than 20 dBA "worse". Remember, this is just the exact same file with two plugins applied, but this matches the results I've seen when testing loopback-type captures.

The methodology I used was to add the two plugins to the Logic Pro channel and "bounce in place" to generate a new file. The reference is .wav and the comparison .aiff but I can't imagine that makes a difference as neither should be compressed.

1698164886010.png
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,741
Likes
10,484
Location
North-East
Just for reference, I re-ran the same comparison I pasted for pkane above, but in Stereo. The difference wasn't as big as I've seen before, but it was still very significant: PK Metric and RMS diff about 10 dB "worse", and more than 20 dBA "worse". Remember, this is just the exact same file with two plugins applied, but this matches the results I've seen when testing loopback-type captures.

The methodology I used was to add the two plugins to the Logic Pro channel and "bounce in place" to generate a new file. The reference is .wav and the comparison .aiff but I can't imagine that makes a difference as neither should be compressed.

View attachment 320942
There may be a mismatch in your setup, or something else going on. The huge clock drift detected and the poor fit quality are indicators that something isn't right. I don't know what plugins you've added, but they are messing up the signal considerably :)
 

matpowel

Member
Joined
Oct 23, 2023
Messages
11
Likes
3
There may be a mismatch in your setup, or something else going on. The huge clock drift detected and the poor fit quality are indicators that something isn't right. I don't know what plugins you've added, but they are messing up the signal considerably :)

I mean, the same result happens whether it's a loopback or even just applying a Pro-Q3 filter or anything else useful.

So for example, take the GS original WAV, apply a single Pro-Q3 instance (probably the single most used and respected EQ plugin in the music production world right now) with a 5 dB (24 dB/octave) cut at 15 kHz, and compare:

L - diff -59.02dB [-63.42dBA], PKMetric RMS=-56.8dBr
Stereo - diff -34.24dB [-34.19dBA], PKMetric RMS=-43.3dBr
 

Grooved

Addicted to Fun and Learning
Joined
Feb 26, 2021
Messages
682
Likes
441
I mean, the same result happens whether it's a loopback or even just applying a Pro-Q3 filter or anything else useful.

So for example, take the GS original WAV, apply a single Pro-Q3 instance (probably the single most used and respected EQ plugin in the music production world right now) with a 5 dB (24 dB/octave) cut at 15 kHz, and compare:

L - diff -59.02dB [-63.42dBA], PKMetric RMS=-56.8dBr
Stereo - diff -34.24dB [-34.19dBA], PKMetric RMS=-43.3dBr
Did you test the R channel too?
 