
Beta-test: DeltaWave Null Comparison software

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,792
Likes
6,257
Location
Berlin, Germany
1612051948948.png

Updated: the Nonlin params were not visible.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,792
Likes
37,693
I don't know if this relates to what KSTR is doing. Earlier, while messing about with DW, I found that Non-linear calibration seems to work out better if I set the EQ threshold rather low. I use -160 dB, as you may have noticed in my earlier screenshots. I've not gone as low as -500 dB, so I don't know how that would change anything. But -160 dB did seem to give more reliable results on some files than even -144 dB.
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,724
Likes
10,418
Location
North-East
Based on a lot of good feedback, DeltaWave version 1.0.58 is now available.

Changes in 1.0.58b
  • Added: Separate setting for sub-sample alignment, independent of drift correction
  • Added: Additional values for phase and non-linear threshold in settings
  • Changed: setting changes are saved immediately when exiting settings window, not when exiting DeltaWave as before
  • Fixed: group-delay/phase trend plot should produce a better curve fit to the phase plot
  • Fixed: phase limit setting now works as expected, was ignored previously
  • Added: (experimental) 400ms window filtered error signal in PK Metric plot. Hold down Ctrl key while mousing over the main plot to update.
  • Fixed: File end trim/take settings are now enforced up to the sample. Previously could vary based on file buffer size.
Subsample setting (default=checked):
1612209283183.png

If Correct clock drift is selected, sub-sample alignment will be performed regardless of whether the Subsample offset option is selected.
Unselecting both will skip clock-drift correction and leave sample alignment accurate only to +/- 1 sample.
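For intuition, sub-sample alignment is commonly done by locating the cross-correlation peak between the two files and refining the integer lag with parabolic interpolation. A minimal numpy sketch of that general technique (an illustration only, not DeltaWave's actual implementation; the function name is an assumption):

```python
import numpy as np

def subsample_offset(ref, comp):
    """Estimate the (possibly fractional) delay of `comp` relative to `ref`
    via FFT cross-correlation plus parabolic peak interpolation.
    Sketch of the general technique, not DeltaWave's code."""
    n = len(ref) + len(comp) - 1
    nfft = 1 << (n - 1).bit_length()
    # circular cross-correlation: peak index gives the integer lag
    xc = np.fft.irfft(np.fft.rfft(comp, nfft) * np.conj(np.fft.rfft(ref, nfft)), nfft)
    a = np.abs(xc)
    k = int(np.argmax(a))
    y0, y1, y2 = a[k - 1], a[k], a[(k + 1) % nfft]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # parabola vertex around the peak
    lag = k if k <= nfft // 2 else k - nfft        # unwrap negative lags
    return lag + frac
```

A UI would then presumably apply the integer part always and the fractional part only when sub-sample alignment is enabled, which would match the +/- 1 sample bound described above.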
 
Last edited:

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,792
Likes
6,257
Location
Berlin, Germany
I've continued fiddling around with my synthesized test case, which was constructed to be representative of a typical "original vs. analog loopback recording" scenario, where Non-linear Correction must always be used for any meaningful results. Such recordings will usually be sample-synced, so there is no need for constant clock-drift correction (which is actually a constant time-stretching factor applied), resampling, or nonlinear clock-drift correction (which can be thought of as dynamic time-stretching/compression).
Without Non-linear Correction enabled, the linear transfer-function differences (the different complex-valued, i.e. magnitude and phase, frequency responses of the reference and the comparison) hugely dominate the residual, by first principles. This assumes we want to look at errors like distortion, jitter, and correlated noise, and are not interested in exposing the minor frequency-response changes. So this is application-dependent, of course.

The goal was to find settings that give a residual audibly and technically closest to the real error signal, which is exactly known because of the synthesis. This also implies it will give the closest-to-real values for the RMS of the residual and any other metrics.

Further, I wanted to optimize it so that any processing artifacts, which are unavoidable due to the way Nonlinear Correction works, can be readily identified and not too easily mistaken for real error (for example, when the error signal really does contain "idle tones" like 8kHz USB packet noise, clock frequencies folding down, SMPS switching frequencies bleeding in, RF demodulation, etc.).

Those settings are, for a 44.1 kHz sample rate:
  • Filter 1: for Ref and Comp, HP@start/end (pre- and post-processing), 16 Hz or 20 Hz. This is essential, as otherwise there is an ultra-low-frequency rumble in the residual (when the source signal is wide-band and high-energy), basically a fast DC drift, spoiling it.
  • Filter 2: for Ref and Comp, LP@start/end, at a frequency slightly lower than Nyquist (fs/2); I've used 18 kHz. Without it there are clipping warnings, and it helps clean the residual of irrelevant HF content, also for the residual waveform display.
  • Filter creation settings: FFT size >= 128k; lower settings make the "subsonic" Filter 1 less effective (shallow filter slope). Filter type (IIR/FIR) and window function turned out to have little influence, if any, with large FFT sizes; I've selected FIR and Hann. Filter type etc. has a significant impact when there are dropouts (zeroed sections) or other gross errors in the recording, giving a huge residual at that moment, and then we see the ringing and settling-time characteristics of the filters.
  • Match Gain, Remove DC and Subsample Offset all set to ON.
Non-linear Calibration settings:
  • Level EQ and Phase EQ both set to ON. We need to correct for the magnitude and phase contributions of the frequency-response differences we want to undo here. Non-linear Drift Correction set to OFF.
  • FFT Size of 128k, together with the Hann Window Function, gave the best results: artifacts (idle tones) are low but can be told apart from real errors in that they change character suddenly at FFT block boundaries. A 128k FFT gives about 3 seconds of block size at a 44.1 kHz sample rate. The Hann window creates clearly audible and visible block boundaries while not spoiling the residual with glitches at these transitions (as happens with rectangular/Dirichlet). Other windows, like low-valued Kaiser types, give strange level-modulation effects, whereas the low-leakage types, like higher-valued Cosine etc., smooth out the block boundaries too much, risking false estimation of the true error.
  • EQ set to OFF (0/all), or at least to values outside the range defined by Filter 1 and Filter 2. Otherwise compensation must fail by design, as parts of the frequency range are not corrected and hence give the same large difference as if the whole thing were switched off. Actually, because of the filter ringing, the artifacts are even stronger.
  • EQ Threshold best set to OFF (-500 dB). Otherwise the residual can quickly be spoiled by some frequencies not being corrected (similar to a wrong EQ setting, above). There is no strict relation to the bit depth of the reference file, so it is not correct to set this to -96 dB for a 16-bit file. The reason is that dithering allows for lower levels at a specific frequency, and it also depends on how much of that frequency is contained within one FFT block, which spans a larger time range. In my test case (16-bit source) the setting needed to be lower than -140 dB to avoid additional artifacts. So with a 24-bit source an even lower setting would be required. This all depends heavily on the actual source content.
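The kind of block-wise level and phase EQ discussed above can be sketched as follows: estimate the complex transfer function Comp/Ref per Hann-windowed FFT block, smooth it across frequency bins so only the slowly varying (linear) response survives, leave bins below the threshold uncorrected, and subtract the EQ'd reference from the comparison. This is a hypothetical illustration of the principle with assumed parameter names, not DeltaWave's implementation (the thread uses 128k blocks; a smaller block is used here for brevity):

```python
import numpy as np

def block_eq_residual(ref, comp, nfft=8192, thresh_db=-140.0, smooth=65):
    """Residual after per-block level and phase EQ (sketch of the principle).
    H = Comp/Ref is estimated per Hann block, smoothed across frequency so
    only the slowly varying linear response is removed, and quiet bins
    (below thresh_db) are left uncorrected, as with the EQ Threshold."""
    win = np.hanning(nfft)
    hop = nfft // 2                                      # 50% overlap
    n = min(len(ref), len(comp))
    out = np.zeros(n)
    wsum = np.zeros(n)
    floor = 10.0 ** (thresh_db / 20.0) * np.sqrt(nfft)   # rough per-bin floor
    kern = np.ones(smooth) / smooth
    for s in range(0, n - nfft + 1, hop):
        R = np.fft.rfft(win * ref[s:s + nfft])
        C = np.fft.rfft(win * comp[s:s + nfft])
        loud = np.abs(R) > floor
        H = np.where(loud, C / np.where(loud, R, 1.0), 1.0)
        # normalized moving average keeps only slow trends in H
        ksum = np.convolve(np.ones(len(H)), kern, mode="same")
        H = np.convolve(H, kern, mode="same") / ksum
        out[s:s + nfft] += win * comp[s:s + nfft] - np.fft.irfft(H * R, nfft)
        wsum[s:s + nfft] += win
    return out / np.maximum(wsum, 1e-12)
```

With a purely linear difference (e.g. a gain change), such a pass nulls almost completely; anything that varies faster than the smoothing, such as distortion, survives into the residual. The window choice trades off exactly the block-boundary behavior described above.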
 
Last edited:

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,792
Likes
6,257
Location
Berlin, Germany
Stereo mode for both ref and comp apparently obtains matching parameters only for the Left channels, which are then "blindly" applied to the Right channels. Any minor channel imbalance therefore does not yield optimal correction for the R channel. Separate complete passes for L and R would be needed (with all the consequences for display etc.). So, at the moment, one should do those separate passes manually... and perhaps remove the stereo setting altogether.
Also, the channel selection list should only display and use the channels actually present (which could be more than two, for example for surround data). So maybe a list like channel 1 ... channel N, plus an average (mono sum).
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,792
Likes
6,257
Location
Berlin, Germany
Further musings:
When an externally measured "delta IR" is available as a correction kernel, an additional option to load and apply it might be useful. To avoid IR inversion/division problems, one might better supply one IR for each of reference and compare; applied criss-cross, this would cancel the total effect of the LTI systems. It would effectively create the same combined (A*B) LTI response for both files, but that is only a minor drawback.
[snip]
Maybe I'm just brain-farting here; basically, I admit to just being lazy and trying to off-load the convolution into DW rather than doing it myself directly on the input files...
Following up: this seems to work out great in a first simple test, original vs. recording (an actual recording, this time):

1 - Record the transfer function of my interface in loopback, obtaining an impulse response (with REW): a long sweep and heavy 8x averaging to reduce noise and gain-drift error (which translates into frequency-response error), for a very clean and realistic IR.
2 - Convolve the original file with that IR (in Audition) --> reference for DW.
3 - Record the original file in loopback --> compare for DW.
4 - Match with Nonlin Correction OFF (other settings as described in #524) --> perfect residual (-120 dB RMS null), no artifacts; only a rather clean signal with the remaining gain-drift "pumping" effect stands out in the noise.
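Step 2 (imprinting the measured loopback IR onto the original) is plain linear convolution. A numpy sketch, assuming `original` and `ir` are mono float arrays already loaded from the WAV files (names are assumptions for illustration):

```python
import numpy as np

def convolve_ir(original, ir):
    """FFT convolution of the original file with the measured loopback
    impulse response, so the reference carries the same linear transfer
    function as the recording and it cancels in the null."""
    n = len(original) + len(ir) - 1
    nfft = 1 << (n - 1).bit_length()          # next power of two, no wraparound
    y = np.fft.irfft(np.fft.rfft(original, nfft) * np.fft.rfft(ir, nfft), nfft)
    return y[:n]                              # 'full' convolution length
```

Audition's convolution effect does the same thing internally; the point of the pre-convolved reference is that the linear transfer function then cancels in the DW match, leaving only the nonlinear error.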

The first time ever I could max out the playback gain slider, yeehah!

Now only noise and gain drift are left in the way of a really deep linear null... a lot of time-domain averaging for step 3 will be my next try...
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,792
Likes
6,257
Location
Berlin, Germany
OK, guys and girls,

I think I have achieved a real breakthrough in extracting (most of) the real error (stripped of simple linear errors) of a DA-->AD process (analog loopback recording within the same device, or with separate devices if they can be clock-synchronized). Cable distortion included, if any ;-) (I've used a short 1/4" patch cable).

I've extended the experiment from the previous post to a higher (actually quite insane) precision level by introducing block-averaging as mentioned.

The process:
  • Measure the impulse response with REW to the highest precision it is capable of, using 8 averages of a 4M log sweep from "DC" to fs/2 (22050 Hz in this case). That took 12 minutes. Export with a proper window, the longest possible one, as the AC-coupling time constant of the ADC is very long (the DAC is DC-coupled). The level was -6 dBFS, where distortion can be assumed low and the signal-to-noise ratio is high. The noise is at -140 dB or thereabouts, so already at the 24-bit (or 32-bit float) limit. The frequency-response error is minimal thanks to the long sweep and the 8 averages. After exporting the IR, a 6 dB gain is applied to get back to a unity-gain IR (close to it, as the DA-->AD gain is slightly less than unity in the RME ADI-2 Pro).
  • Convolve that IR, using Adobe Audition, with the original source file later to be used as the reference for DW. This imprints the linear transfer function on the reference so that it cancels out with the transfer function embedded in the recording.
  • Play and record the original file, some 84 takes (the file is 1:14 long, so that took 103 minutes, enough for having dinner meanwhile ;-)
  • Block-average that sequence of 84 takes to "condense" it back to a single take, yielding a signal with any non-signal-correlated error terms (including the static gain drifts) reduced by almost 20 dB. Any strictly signal-correlated error of the conversions (like nonlinear distortion, but also including signal-correlated jitter, for example) will not be reduced at all (which is the goal here). I have written a small C program that does this. This is the comparison file for DW.
  • Run DW to perform matching, with the nonlinear calibration turned off, so only a gain factor and (sub-)sample offset are calculated and applied.
It turned out that it is best to obtain the gain-correction factor first using only the quiet sections of the file (giving the best RMS null) and then apply it to the whole file. The reason is that the DA-->AD conversion showed a small gain-compression effect: louder sections have an ever so slightly lower gain than quieter sections, a compressive distortion (not to be confused with a standard signal-compressor effect). We'll see that later in the linearity plot.
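The block-averaging step is simple in principle: with N aligned takes, uncorrelated noise drops by 10*log10(N) dB (about 19 dB for 84 takes), while anything that repeats identically with the signal survives. A numpy sketch of the idea (KSTR used his own C program; the names here are assumptions):

```python
import numpy as np

def block_average(takes):
    """Average N aligned takes of the same playback. Uncorrelated error
    shrinks by 10*log10(N) dB; strictly signal-correlated error (nonlinear
    distortion, correlated jitter) is preserved, which is the goal."""
    stacked = np.stack(takes)     # shape (N, samples); takes must be sample-aligned
    return stacked.mean(axis=0)
```

The alignment requirement is why a sample-synced loopback matters: any drift between takes would smear the signal-correlated components the method is trying to preserve.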

Let's look at the DW plots.

Original Spectrum overlaid with Spectrum of Delta:
1612302338376.png

Those are a really tight match, showing no signs of the processing artifacts (idle tones, mostly) that we would normally get from using DW alone with Non-linear Calibration on.


Delta of Spectra (magnitude error):
1612302542268.png

That could certainly be rated as "zero error". Micro-dBs!


Delta Phase (phase error):
1612302648158.png

Same as above, micro-degrees.


Linearity:
1612302744498.png

Here we can see that the louder sections, using the upper 10 bits, show a level drop: an extremely small compressive distortion (note the dB scale, again). And it is as clean as it gets, down to bit 24.


Of course the most interesting data is the Delta Waveform (residual):
1612302956232.png

The matching (and residual noise) in the quiet section is breathtaking, whereas the louder sections stick out (because of the compression). We need to compare this with the original waveform to better see the effect of the less deep null in the louder section due to the compressive distortion:
1612303212419.png



While this gives some optical hints about what's going on, the real deal is listening to the residual (download the ZIP file, which also contains the input files).
  • The first thing we note is a sort of ricochet sound right at the beginning. This is the recovery from a small marker pulse I put into the file at the first sample for alignment and trimming of the raw data. I can only speculate about the root cause at the moment.
  • In the medium-loud section up to 0:23 one can hear a combination of distortion (notably in the bass), overlaid with some linear residue (undistorted music) at the more dynamic spots.
  • In the loud section from 0:25 to 0:36 the linear residue dominates the delta, though the underlying distortion can still be readily identified. Right at 0:36 a wind chime is played, and that has some real nastiness to it.
  • In the quiet part from 0:35 onward things get interesting. This is the section the static gain correction was obtained from, so we have the deepest null. It uncovers the (microscopic, mind you) distortion in such a clear way, probably like never before. I think I can also hear some time-domain effects (recovery?), not just pure static distortion...

Of course, this was just the first experiment, and I will have to check whether the results hold; maybe others will verify this, but so far it looks tremendously promising. For example, because of the heavy averaging, some errors may get lost or underestimated, which certainly needs investigation.

The approach can be extended to comparisons of like recordings rather than a match of original vs. recording. When using the same DA-->AD at approximately equal levels (and loading of the DAC's output), the error of a DUT (device under test) is fully exposed, as the DA-->AD errors will mostly cancel. DUTs could be cables (well, not that promising to find anything there ;-) and amplifiers, basically anything that has an analog input and an analog output, even when the internals are digital (containing AD-->DA internally).

I'll probably open a thread with a detailed step-by-step guide for all the processing steps, both for replication by others and because at this level of precision every neglected detail can make a huge difference in the results.
 
Last edited:
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,724
Likes
10,418
Location
North-East
Stereo mode for both ref and comp apparently obtains matching parameters only for the Left channels, which are then "blindly" applied to the Right channels. Any minor channel imbalance therefore does not yield optimal correction for the R channel. Separate complete passes for L and R would be needed (with all the consequences for display etc.). So, at the moment, one should do those separate passes manually... and perhaps remove the stereo setting altogether.
Also, the channel selection list should only display and use the channels actually present (which could be more than two, for example for surround data). So maybe a list like channel 1 ... channel N, plus an average (mono sum).

Not quite. Offset/delay and linear level corrections are computed only for the left channel and then the same corrections are applied to both. Non-linear EQ is computed and applied separately for each channel.
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,724
Likes
10,418
Location
North-East
I think I have achieved a real breakthrough in extracting (most of) the real error (stripped of simple linear errors) of a DA-->AD process (analog loopback recording within the same device, or with separate devices if they can be clock-synchronized). [...]

Delta Phase (phase error):
View attachment 110193
Same as above, micro-degrees. [...]

Here's something that may help with phase visualizations: set the phase limit under Spectrum settings to something higher, say -110dB. You'll get rid of the large random phase excursions caused by noise in frequency bins and get a much cleaner phase plot. I may just change the default to something higher to help visualize the phase delta better. Here's an example with a higher limit setting:

1612359877514.png
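The phase-limit idea amounts to masking phase values in bins whose magnitude sits below the limit, since the phase of a near-empty bin is just noise. A small numpy illustration of that principle (function and parameter names are assumptions, not DeltaWave's API):

```python
import numpy as np

def masked_phase(delta_spectrum, ref_spectrum, limit_db=-110.0):
    """Phase of the delta, but only for bins whose reference magnitude is
    above `limit_db` relative to the peak. Low-level bins, whose phase is
    dominated by noise, come back as NaN so a plot simply skips them."""
    mag = np.abs(ref_spectrum)
    mask = mag > np.max(mag) * 10.0 ** (limit_db / 20.0)
    phase = np.degrees(np.angle(delta_spectrum))
    return np.where(mask, phase, np.nan)
```

Setting the limit too high starts discarding bins that do carry signal, which is presumably the "staircase" effect KSTR reports below about -110 dB.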
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,792
Likes
6,257
Location
Berlin, Germany
^ Nice tip. It really helps a lot to clean up the display, even when zooming in to a ±0.005° range in my current test case.
-100 to -110 dB looks like a sweet spot from what I've tried. Below that I'm getting staircases ;-)
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,792
Likes
6,257
Location
Berlin, Germany
As for #527, I've done a complete new round from ground zero today and I am getting stable results (exactly the same patterns). So the process seems to be stable.

For the fun of it, I replaced the 20 cm patch cord with 5 m of junk RCA cable, including 3 adapters (two to 1/4" and one RCA-RCA female), for this try.
Then, using the cross-IR method to impress the measured FR of the second cable onto the recording of the first and vice versa, I'm getting a distortion-free residual at -130 dB RMS, pretty much as expected. I had to optimize by hand, as the optimizer maxed out, probably finding too-small improvements when varying the parameter (gain), even when restricting the FR with the filters to the range where there is some content above the average noise (30 Hz to 4 kHz).

The good news, this process variant also appears to be stable as the DA-->AD distortion seemed to fully cancel.

The bad news: no intrinsic cable "signature" found ;-) Cable A does not do anything different than cable B.

Within the validity of the experiment ...
... probably the result will be criticized by the Golden Ears, arguing "using two crap cables for comparison tells us nothing about really good cables", or "of course nothing's to be found with that much averaging", and if that doesn't stick, then at least "that lowly cheap RME is not adequate enough to expose anything" (*sigh*)

I had hoped for an even lower signal buried in the noise (which is already at least 15 dB down from the true analog RMS noise), but at this level of residual-seeking madness it's just getting silly (on a relative scale; on an absolute scale the "silliness factor" is already some 80 dB over the top ;-)
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,724
Likes
10,418
Location
North-East
^ Nice tip. It really helps a lot to clean up the display, even when zooming in to a ±0.005° range in my current test case.
-100 to -110 dB looks like a sweet spot from what I've tried. Below that I'm getting staircases ;-)

Yeah, while the phase limit setting was always meant to do this, somehow it got disconnected from the phase plot in previous versions. No matter what limit was specified, it would use all the frequency bins, no matter how small the amplitude. I fixed this in 1.0.58.
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,724
Likes
10,418
Location
North-East
DeltaWave version 1.0.59 is now available.

Changes based on feedback from @MC_RME when using simple/periodic waveforms for loopback analysis:
  • Fixed: corrected sub-sample matching behavior of simple/periodic waveforms (regression from .58)
  • Added: Automatic trim of silence at both ends of a file when using Auto-Trim option
  • Added: Automatic detection of simple/periodic waveforms. If simple waveforms option is not on, user will be asked to turn it on
  • Changed: Improved THD+N measurements for simple (single frequency) waveforms
  • Fixed: dB display in THD frequency plot for simple waveforms – used to display in scientific notation the first time it’s used
Here's an example of a simple waveform measurement (1kHz sinewave) showing harmonics and their amplitude:
1612468725064.png
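For context, THD+N of such a single-tone capture is conventionally computed from the FFT as the power of everything except the fundamental, relative to the fundamental. A textbook numpy sketch (an illustration, not DeltaWave's implementation):

```python
import numpy as np

def thd_n(x, fs, f0):
    """THD+N in dB for a single-tone capture `x` at sample rate `fs` with
    fundamental `f0`: power of everything but the fundamental, over the
    fundamental power, from a Hann-windowed FFT."""
    win = np.hanning(len(x))
    p = np.abs(np.fft.rfft(x * win)) ** 2
    k0 = int(round(f0 * len(x) / fs))
    fund = p[max(k0 - 3, 0):k0 + 4].sum()       # fundamental plus leakage bins
    total = p[1:].sum()                         # skip DC
    return 10.0 * np.log10((total - fund) / fund)   # dB; more negative = cleaner
```

Real analyzers refine this with notch filtering, bandwidth limits, and weighting, but the bin-summing form captures the harmonics-and-amplitude view shown in the plot above.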
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,792
Likes
6,257
Location
Berlin, Germany
I'm making good progress with my technique of making "micro-distortions" audible with music signals. The processing now appears to be stable and yields predictable, reproducible results. It turned out to be very effective to make the measurement sweep for the impulse-response extraction part of the signal that undergoes the block-averaging (in most tests 100x, i.e. a 20 dB reduction of uncorrelated noise/error), rather than doing it completely separately.

For example, I could expose the difference in distortion when switching the RME ADI-2 Pro from +13 dBu to +19 dBu (unbalanced) reference levels for DAC and ADC, which is basically the distortion difference from the gain switching alone (it changes the gain of the analog stages and/or even alters them, skipping or inserting a whole stage; not having the schematics, I can't tell).

When looking at IMD plots generated with REW, not much is to be seen (dBc scale, so the dBFS levels are even 12 dB lower for the selected IMD signal):
+13dBu vs +19dBu.png


Yet the difference signal I could isolate shows the distortion delta very clearly. For a visual display, see here one sine frequency zoomed in (part of the sweep mentioned above):
1612893628135.png


And the spectrum of this visible deformation of the sine:
1612893992184.png

The distortion also is readily audible in the difference file, both for the sweep and music sections.

The effect is visible in DW's linearity plot as well:
1612894031432.png

The kink at the 2.3-bit equivalent level is interesting, at least; I haven't seen that before... it could be something like an op-amp transitioning into class-B operation (speculation only, of course).

So, now it's time for a complete write-up of this .... stay tuned!
 
Last edited:

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,792
Likes
37,693
Seems now that in .59 when I access the Process menu to Load Only, it goes ahead and does a full match instead. Is .59 working that way for you guys?
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,724
Likes
10,418
Location
North-East
Seems now that in .59 when I access the Process menu to Load Only, it goes ahead and does a full match instead. Is .59 working that way for you guys?

Process->Load Only is the same as the Show button -- files are loaded, no matching is done, but statistics and charts are updated. Just quickly looking, that seems to work as expected.
 
Last edited:

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,792
Likes
6,257
Location
Berlin, Germany
[...], that seems to work as expected.
Same for me.

One other thing I noted: when you want to apply a set of parameters from the Manual Corrections page, you have to click inside the data fields (blue area) to copy them to the manual entry fields (and click Apply!). I sometimes clicked on the arrow symbol, which does nothing, and got confused... d'oh!
1612962551143.png
 
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,724
Likes
10,418
Location
North-East
Same for me.

One other thing I noted: when you want to apply a set of parameters from the Manual Corrections page, you have to click inside the data fields (blue area) to copy them to the manual entry fields (and click Apply!). I sometimes clicked on the arrow symbol, which does nothing, and got confused... d'oh!
View attachment 111689

That arrow isn't used at all, other than to indicate which row you are selecting. So, yes, click anywhere inside the actual data columns to select those settings.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,792
Likes
37,693
Process->Load Only is the same as the Show button -- files are loaded, no matching is done, but statistics and charts are updated. Just quickly looking, that seems to work as expected.
I'm having other problems. Now it gets caught in a loop and never completes. I'm wondering if I got a corrupt download. I'm going to re-download and try again.

Seems to work now; of course, I downloaded .61, as that is current. What changed between .59 and .61?
 
Last edited:
OP
pkane

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,724
Likes
10,418
Location
North-East
I'm having other problems. Now it gets caught in a loop and never completes. I'm wondering if I got a corrupt download. I'm going to re-download and try again.

Seems to work now; of course, I downloaded .61, as that is current. What changed between .59 and .61?

Release notes are usually a good place to check for what’s changed:


Changes in 1.0.61b
  • Fixed: auto-trim function under some conditions could result in extra zero samples being added to the waveform
Changes in 1.0.60b
  • Fixed: regression in 1.0.59 could cause an index out of bounds error when processing “stereo” channels with inverted absolute phase
 