
Using cross-correlation to lower the influence of the ADC in DAC measurements

Rja4000

I'm starting this thread to let @FrenchFan explain his noise-reduction technique and to allow a discussion on the subject.

One of the reference links he provided:


UPDATES
V5.40 beta 49 is now available with the cross-correlation feature.
The latest beta (51) adds support for cross-correlation processing of stereo WAV files and an option to plot the magnitude of the complex cross-correlation result.
20240817: Virtins published Multi Instrument version 3.9.11, which also includes cross-correlation FFT averaging.
In version 3.9.11.1, an option for "in-phase cross correlation" was added. With a proper signal, it now converges faster and gives results similar to REW.

Virtins also published a paper on these techniques and more.
 
I write my own scripts in Octave to better understand audio analysis,
and delving into the equations is no luxury when you want to understand.

I saw by chance that QuantAsylum offers
measurement noise reduction by cross-correlation.

Look at the beginning of the following blog:


So I looked into how to implement the function myself.
I'll leave my sources here.


Concerning the theory, the calculation is simple for me:
I have the points of the FFT, so I can easily do the
calculation indicated in the Zurich Instruments paper.

FFTcorr = FFT1 .* conj(FFT2);  # cross-spectrum of FFT1 and FFT2

FFT1 = FFT of one channel
FFT2 = FFT of the other channel
conj = complex conjugate

I can then accumulate and average;
each FFT covers a block of N samples.
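
To make this concrete, here is a minimal Octave sketch of the whole loop (an illustration only, not the attached script; function and variable names are arbitrary):

# Cross-correlation FFT averaging, minimal sketch.
# x1, x2: the two ADC channels (column vectors)
# N: block size, M: number of blocks (correlations) to average
function Sx = crossspec_avg (x1, x2, N, M)
  w  = 0.5 - 0.5*cos (2*pi*(0:N-1)'/N);  # Hann weighting window
  Sx = zeros (N, 1);
  for k = 1:M
    idx = (k-1)*N + (1:N);               # consecutive blocks of N samples
    F1  = fft (x1(idx) .* w);
    F2  = fft (x2(idx) .* w);
    Sx  = Sx + F1 .* conj (F2);          # accumulate FFT1 .* conj(FFT2)
  endfor
  Sx = Sx / M;  # average: the common signal stays, uncorrelated noise shrinks
endfunction

sqrt(abs(Sx)) then estimates the amplitude spectrum common to both channels (up to the window scaling).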

If you do an analysis with 32k samples, each correlation
block will use 32k samples.

I'll leave the deeper measurement theory to you.

From time domain to frequency domain:

Corr(g, h) <=> G(f) * H*(f)

g, h: time-domain signals
G, H: FFT of each signal


Accumulation adds the correlated signals (frequency lines) directly,
while the noises add in quadrature: noise = sqrt(noise1^2 + noise2^2).

The final average directly gives the FFT with the noise reduction.

That said, convolution and correlation are not easy to grasp;
you need pencil and paper, work through the signals, and rack your brains.

If we look at the Zurich Instruments paper, the noise reduction is
approximately 5*log10(number of correlations) dB:

10 correlations = -5 dB
100 = -10 dB
1000 = -15 dB
etc.

But be careful: this only holds when the signals are very close, in noise terms, to the ADC floor.

As a test, take one signal at 1000 Hz and another at 1000 Hz + 2000 Hz.
With a single correlation, the 2000 Hz component drops by about 100 dB.
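
A quick numerical check of both effects (my test signals, not from the attached scripts; it reuses the crossspec_avg sketch above):

# Two channels sharing a 1 kHz tone, each with its own independent noise.
fs = 48000; N = 32768; M = 100;
t  = (0:N*M-1)'/fs;
s  = 0.5*sin (2*pi*1000*t);        # common (correlated) signal
x1 = s + 1e-4*randn (N*M, 1);      # channel 1, own noise
x2 = s + 1e-4*randn (N*M, 1);      # channel 2, own noise
Sx = crossspec_avg (x1, x2, N, M);
f  = (0:N-1)*fs/N;
# The tone stays put; the plotted floor sits about 5*log10(M) = 10 dB
# below the floor of a single-channel FFT.
semilogx (f(2:N/2), 10*log10 (abs (Sx(2:N/2)))); grid on;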

To reduce the recording time I tested overlapping the measurement blocks;
it is viable up to 90% overlap, which reduces the measurement time
by a factor of 10 for the same number of correlations.
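
Only the block indexing changes for the overlapped version; a sketch, reusing x1, x2, N, M from the test above (coeffShift named after the parameter in the attached script):

coeffShift = 0.1;                       # 1 = 0% overlap, 0.1 = 90% overlap
shift = round (coeffShift * N);
w  = 0.5 - 0.5*cos (2*pi*(0:N-1)'/N);   # Hann window
Sx = zeros (N, 1);
for k = 1:M
  idx = (k-1)*shift + (1:N);            # overlapping blocks: the M blocks now
  Sx  = Sx + fft (x1(idx).*w) .* conj (fft (x2(idx).*w));
endfor                                  # span ~N*(1 + coeffShift*(M-1)) samples
Sx = Sx / M;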

I will leave here the Octave scripts: the FFT calculation with weighting window,
the THD+N calculation, and the function that computes and launches the correlation.

I am sure this will help some people, and persuade a few to write their own scripts.

I assure you, I thought I knew everything about FFTs and signal processing,
and I was wrong: while writing the software
I realized that I did not know much.

Hey, who can seriously explain to me, with proof, why the noise floor in an FFT
at a given sampling frequency increases when the number of samples decreases?

Bye, and good luck.

----------------------------------------------------------------------------------------------------

Using the scripts:

You will certainly need to install Octave; I'll let you handle that.

You will need to load the signal package. If you are on Linux,
Octave and its packages are in your distribution's repositories.

If you are on Windows or Mac, I don't know, I only use Linux.
The packages live under Octave Forge.
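
For reference, the standard commands are the same on any platform:

pkg install -forge signal   # one-time download/build from Octave Forge
pkg load signal             # load the package in each session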

clculFFTmain.m :

contains the FFT calculation + weighting window + THD+N calculation + display procedure

correl_FFT_optmized.m :

contains the correlation calculation, the display of the result,
and the THD+N calculation

nbCalcul = number of correlations
coeffShift = block shift: 1 = 0% overlap, 0.1 = 90% overlap

Replace the WAV files with yours;
the two "S2_x_noise.wav" files are used for the calculation.

The comments are in French; I think you will easily
translate them.

Be careful: I write software like a dog, so check everything.
 

Attachments

  • allSourcesCorrel.tar.zip (3.6 KB)
@FrenchFan: Thanks a lot for sharing your approach and your script.
So far I'm doing everything on Windows, but for Octave I think Linux is the right way to go. The need for Linux starts when I see a "tar" archive ;-)
Edit: I just found out that Windows PowerShell has a "tar" command :)
I wanted to set up a Linux computer anyway, but that may have to wait till autumn.

Does Octave provide its own plotting backend these days? I remember there was an interface to gnuplot back then...

I fully agree, signal processing is an interesting yet challenging thing. I studied information technology and was writing Matlab scripts to do speaker measurements with multitone stimuli more than 10 years ago, so I should be able to "climb up that hill again".

A friend of mine who employs time-domain averaging also uses something like 90% overlap for the FFTs, in order to cut down on measurement time and still get a reasonably smooth spectrum.

Back to your question:
I think it's just related to the width of the frequency bins. The smaller the FFT size, the larger the frequency bins, and thus the more noise energy falls into each individual bin, assuming the noise has a flat spectral density (e.g. in V/sqrt(Hz)).
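
A quick numerical check of that explanation (my own sketch): the same white noise analysed with two FFT sizes.

# Halving N doubles the bin width fs/N and raises each bin by ~3 dB,
# i.e. 10*log10(2).
fs = 48000;
x  = randn (fs*4, 1);          # 4 s of white noise
for N = [32768 4096]
  X = fft (x(1:N)) / N;        # amplitude-normalised FFT
  printf ("N = %5d -> mean bin level %.1f dB\n", ...
          N, 10*log10 (mean (abs (X(2:N/2)).^2)));
endfor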

When looking up the method this morning, I came across an application note from Zurich Instruments: https://www.zhinst.com/sites/default/files/zi_appnote_mfli_cross_correlation.pdf
 
I think Octave is very well supported on Windows,
even better than on Linux; it is much easier
to upgrade there than on Linux.

I don't know Windows well, but it really doesn't
matter; either one is OK.

As for plotting the curves, it is completely
integrated into Octave; that's its strength.

I'll attach a view of my measurement interface; everything is done with Octave.

The graph here is a simple semilogx(...) call and it
gets displayed. What I'm saying is a bit simplistic, but it's
almost like that.
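
Something like this, to give the idea (a self-contained toy example, names arbitrary):

fs = 48000; N = 32768;
x = 0.5*sin (2*pi*1000*(0:N-1)'/fs) + 1e-5*randn (N, 1);  # tone + noise
X = fft (x) / N;
f = (1:N/2-1) * fs / N;
semilogx (f, 20*log10 (2*abs (X(2:N/2))));  # magnitude spectrum in dB
grid on; xlabel ("Frequency (Hz)"); ylabel ("Level (dB)");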

You have an example in the attached scripts, since I plot the
FFTs there.

The GUI is also integrated into Octave; it's not Qt, but it's
enough to build a functional interface.

Octave shares 98% of its syntax with Matlab; only the
libraries differ, and even then Octave tries to align itself with Matlab's functions.

Concerning the question, the answer seems correct to me;
it took me a while to understand it.

As for Octave, I started using it 8 months ago;
I didn't know it at all, and within a week I had my first FFT
and my first calculations working.
And I'm 65 years old, imagine!!
 

Attachments

  • InterfaceFFT.png (214.6 KB)
When we take measurements with microphones for room correction, we can improve the SNR by increasing the total energy of the sweep. The total energy is the width (time) times the height (volume) of the measurement impulse. The height is limited by how loud your speakers will go before they distort, but the width can be extended for as long as your patience allows. There are two ways to extend the width: either choose a slow sweep, or take multiple sweeps and cross-correlate (average) them. For example, a 45-second sweep gives about 90 dB of noise rejection.

From what I understand from the link provided by @Rja4000, this technique lowers the noise floor at the limit of the ADC by taking multiple measurements and cross-correlating them, the idea being that the signal you are measuring is correlated while the noise is not. I am wondering whether you think this is similar to taking multiple measurements with a microphone and averaging them.
 
IMO it is rather the ADC's distortion that needs to be compensated when making precise measurements of the best DACs. Regarding noise, we can get as low as 1 nV/sqrt(Hz).
 
I am not sure that thinking of the noise as some low-level thermal noise is worthwhile.
There are trucks going down the road, planes and choppers in the sky, birdsong, and dogs growling, all coming in through the microphone too.

Those do not cross-correlate well, just as the shot noise in a resistor or transistor does not cross-correlate well.
 
Hello Keith_W

In the recording method you describe
you have only one microphone, so no correlation is possible.

Summing the measurements multiplies your fundamental signal
by the number of measurements, whereas the noise
adds in quadrature:
total noise = sqrt(SUM(noise^2)) -> sqrt(SUM(noise^2)) << SUM(noise)
which means the summed noise grows much more slowly than the
summed fundamental signal.
You then divide by the number of measurements for the average:
the signal has grown a lot but the noise much less, so you get a real
decrease in the measurement noise. But it is averaging alone, not
correlation plus averaging.

In your measurement, even if you let the number of measurements tend to infinity,
you will not be able to eliminate the noise.

With cross-correlation, on the other hand, yes.

Correlation eliminates the noise that differs between the 2 measurement channels,
but a perfect correlation would require an infinite number of samples,
and therefore infinite time, to eliminate the noise.
So in the measurement I describe we add the same averaging that you use
in your measurements: we combine correlation and averaging.
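
To illustrate the averaging part numerically (my own sketch, not @FrenchFan's script): M captures of the same tone, each with fresh noise, averaged coherently.

# Signal accumulates as M, noise as sqrt(M): SNR gain = 10*log10(M) dB.
fs = 48000; N = 4096; M = 100;
t   = (0:N-1)'/fs;
s   = 0.1*sin (2*pi*1000*t);           # repeatable signal
acc = zeros (N, 1);
for k = 1:M
  acc = acc + s + 0.01*randn (N, 1);   # new noise on every capture
endfor
avg  = acc / M;
gain = 20*log10 (0.01 / std (avg - s));  # residual noise vs original noise
printf ("SNR gain ~= %.1f dB (theory: %.1f dB)\n", gain, 10*log10 (M));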
 

Are you saying that the power can be increased to increase the SNR…
AND
that the time can be increased to increase the total energy, in order to increase the SNR?
 
There is a bit of that.
When you accumulate signals (sines, for example) you multiply
the total value by N (the signals are correlated).

When you accumulate noise, only the effective (RMS) value accumulates;
one noise can cancel another in absolute value when you sum them,
or even come out lower, depending on the phase, so only the RMS value counts.
But it's statistical; what if you're unlucky???
(The signals are uncorrelated.)

No: you need repeated sampling and averaging, which of course increases
the measurement time. If you only increase the time,
the measured noise will remain the same.

Energy:

(V1+V2)^2 = V1^2 + V2^2 + 2*V1*V2   (correlated sources -> signal)
(V1+V2)^2 = V1^2 + V2^2, since 2*V1*V2 averages to 0   (uncorrelated sources -> noise)

By accumulating V1, V2, ..., VN we see that the noise grows more slowly than
the signal, so the SNR increases.
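
This is easy to verify numerically (a quick check of my own):

# Power of a sum: correlated sources keep the cross term 2*V1*V2,
# uncorrelated sources average it away.
N = 1e6;
v = randn (N, 1);
printf ("one source:       %.2f\n", mean (v.^2));                   # ~1
printf ("correlated sum:   %.2f\n", mean ((v + v).^2));             # ~4
printf ("uncorrelated sum: %.2f\n", mean ((v + randn (N, 1)).^2));  # ~2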
 
@FrenchFan
Apart from QuantAsylum and your own script, do you know of any spectrum analyzer software that implements cross-correlation for that purpose?

Virtins Multi Instrument has some cross-correlation analysis, but I never investigated it.
As I understood it, it's more for measuring the similarity of 2 signals.

I'm still not sure I fully understand how this works.
Let me summarize what (I think) I understood:
1. You send the same signal from the DUT to 2 separate ADC inputs.
2. You run the FFT analysis independently on each ADC.
3. You compute the product of both FFTs, after conjugating one of them.
4. You average several blocks of that.
5. The expected benefit is 5 * log10([# of averaged blocks]).
This is half of what you get by averaging several ADC inputs directly in the time domain (10 * log10([# of inputs])), but you can grow the number quickly. You just need time (and a stable signal, which may be less easy).

Correct ?

If I understand the benefit for signal and distortion, I wonder how, in the end, the DUT's noise measurement remains valid. And I suppose you could not avoid the ADC distortions, which will not vary between the channels either.
I need to think about it.

I'd be very interested if you measured the D50 III balanced output with that method;
then we'd be able to compare with my results.
In unbalanced mode there are more uncontrolled differences due to noise.
 
Hi RJA4000

Concerning other software: the first link I gave
implements it in its high-frequency FFT
("Zurich Instruments").

I don't know of any other.

In fact, Virtins offers the temporal cross-correlation,
not the associated FFT; that's what I just saw.

Concerning the correlation steps:

1) OK
2) OK
3) I think you have it right; let me restate the correlation:
the correlated FFT = FFT1 * conjugate(FFT2).
The FFTs are complex quantities; we take the conjugate
of one or the other of them and then multiply.

4) OK. I compute the correlated FFTs on blocks of, for example,
32k samples, then average the correlated FFTs.

5) 5*log10(N): this is only approximately true if the
noise levels are comparable.

For example, make 2 signals:

1 - 0.5*sin(2*pi*1000*t)
2 - 0.5*sin(2*pi*1000*t) + 0.5*sin(2*pi*2000*t)

The 2000 Hz component can be considered noise;
this time the reduction for one single correlation
exceeds 120 dB. That is the purpose of the correlation:
it eliminates whatever is not common to the 2 signals.

So 5*log10(N) has to be examined closely. I checked it on
signals I created, and it remains valid in the range
of what we measure: -110 dBFS, -120 dBFS, -125 dBFS.

Concerning your remark on time-domain averaging, I agree
with you [10*log10(number of measurements)]; it seems easier
to implement, you just have to keep the measurements synchronized.

That manipulation is easy to implement under Octave;
we will see what comes out of it down around -120 dBFS of noise.

Concerning the time, it is mostly the recording that is long:
for an analysis at 48000 Hz with 32768 samples and 1000 correlations,
it takes 32768/48000*1000 ≈ 683 s, about 11.4 minutes.

We can overlap the measurement blocks by up to 90%, so we reach
1000 correlations in about 1.2 minutes; this has to be verified each time.
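
The arithmetic, for reference:

fs = 48000; N = 32768; M = 1000;
t_sequential = N*M/fs                # 682.7 s, ~11.4 minutes
t_overlap90  = (N + 0.1*N*(M-1))/fs  # 68.9 s,  ~1.15 minutes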

Concerning the stability of the signal, I recorded for a duration of
20 minutes and I see no problem.

The benefit is to lower the noise level of the ADC
and thus measure THD+N as well as possible, like the notch of the
E1DA APU but with more difficulty;
above all, it remains a "one-shot" measurement.

All the other imperfections of the ADC will remain.

Concerning THD, there is room for discussion: when we watch the FFT
dynamically, we see the distortion amplitudes move,
especially those close to the noise.
These variations can be treated as noise and are therefore
reduced by the correlation.

We can see it in the first graphs I posted: the correlated THD is
slightly better than in the signals of each individual ADC, but it is
really not much.

Concerning the measurements on the D50III, I am late,
but yes, I will measure and post the results online.

Bye
 
Hi all
Here is a measurement of the D50III jack output;
we match the manufacturer's measurement.

I still have to discuss with Ivan why the distortion
never gets below -140 dB.

bye
 

Attachments

  • D50III_Output_Jack_ADC_Left.png (61.8 KB)
  • D50III_Output_Jack_ADC_Right.png (63 KB)
  • D50III_Output_Jack_Correl.png (71.4 KB)
  • D50II_CompareWithTopping.png (381.6 KB)
Comparing different methods to reduce noise and other interference

As far as I understand (please correct me if I'm wrong;
n is the number of averaged measurements or, respectively, the number of channels acquired simultaneously):

Coherent averaging (repeated measurement with averaging in the time domain, prior to the FFT):
- will reduce all random noise (noise contained in the signal as well as noise added by the measurement) -> not suited for S/N measurement
- will reduce sporadic interference (a lorry driving by when measuring with a microphone) by 1/n

Incoherent averaging (repeated measurement with averaging in the frequency domain, after the FFT):
- will not reduce the noise level (it just reduces the variance of the measurement, giving a more reproducible and smoother plot)
- will reduce sporadic interference (a lorry driving by when measuring with a microphone) by 1/n (this number may not be quite right due to the root-mean-square nature of the FFT result)

Simultaneous measurement with n channels, using the averaged result:
- will not reduce noise contained in the signal
- will reduce random noise added by the measurement by sqrt(n) (paralleling amplifier stages -> mono mode of the E1DA ADC)

Cross-correlation:
- will not reduce noise contained in the signal
- will reduce random noise added by the measurement by 5*log10(n)
- will suppress larger disturbing signals (the lorry driving by) very effectively

None of these methods should change the level of the harmonics, so THD should remain correct.

Expected improvement for random noise on the ADC side:

[attached image: table of the expected improvement vs. number of averages]
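
(The image itself is not reproduced here; evaluating the formulas above for a few representative values gives:

n = 10: 10*log10(n) = 10 dB for n parallel channels, 5*log10(n) = 5 dB for cross-correlation
n = 100: 20 dB vs 10 dB
n = 1000: 30 dB vs 15 dB)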
 
I use Matlab, not Octave, but chances are the scripts will work in either program with slight modifications.

Correlation is a useful tool for highlighting common factors among channels, and cross-correlation is useful for suppressing differences or, if you look at the residue, for obtaining useful information about those differences. It is also useful for other things: for example, I used cross-correlation to perform pattern recognition and alignment of a serial data stream in order to verify reception accuracy. Correlation functions let me align transmitted and received patterns without having to analyze and compensate for the (generally unknown) phase/time differences between the Tx and Rx points.
 
I don't want to speak for pkane, but I wonder if he could add a cross-correlation option to Multitone if it were requested?

@pkane
 
So, you measured 126.5 dB over a 20 kHz bandwidth.
I measured 125.9 dB over 22 kHz, which works out to 126.3 dB over 20 kHz (noise scales with bandwidth: 10*log10(22/20) ≈ 0.4 dB).
And Topping measured 126.4 dB (also 20 kHz BW).
Looks like a nice confirmation.

A strange difference is the lower 5th harmonic:
in my measurement and Topping's, it seems higher than in yours.
At that level, it might be chip sample variation, though.

Wait.
Are you really measuring at -1 dBFS output?
Why?
 