
Floating-Point ADC System

With actual ADCs there are more errors and inconsistencies to deal with.
Acquiring and maintaining an accurate value for the gain difference is arguably the trickiest part of the whole operation. Even when using the two halves of a stereo ADC, so that at least the Vref is shared, there will always be a small amount of gain drift due to differing temperature coefficients and aging in the gain-determining resistors of the external circuitry. Not to mention that there may be tiny differences in frequency response, since the external analog gain paths differ.

Ideally, you want an algorithm that continuously monitors the output of both ADCs as long as neither is clipping and both are well above their respective noise floors. Starting from a built-in preset value, it would perform a sort of moving average, ideally stretching over minutes. And that is only after you have EQ'd out the frequency-response differences, hoping those are at least reasonably static.
 
In principle, you can stitch the two ADC outputs together by simply gating one and inverse-gating (ducking) the other (after compensating for the gain difference). In a DAW, you can do that with something like ReaGate:
View attachment 315642
Increasing the attack and release times helps to smooth out small errors. And this can work instantaneously on a sample-by-sample basis.
I played around with it and it works perfectly. But I have only tried it with a digital signal split into two separate tracks in a DAW. With actual ADCs there are more errors and inconsistencies to deal with.
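A minimal per-sample sketch of that gating/ducking idea (all names and thresholds here are illustrative, and the low-gain stream is assumed to be already scaled up by the measured gain ratio). Note that a causal fade like this still passes clipped samples during the transition, which is exactly the objection raised below:

```python
import numpy as np

def stitch(high, low_scaled, threshold=0.9, fade=64):
    """Crossfade from the high-gain to the low-gain stream around clipping.

    high       -- samples from the high-gain ADC (clips near full scale)
    low_scaled -- samples from the low-gain ADC, already multiplied by the
                  measured gain ratio so both streams match in level
    threshold  -- |sample| above which the high-gain stream is distrusted
    fade       -- crossfade length in samples to smooth the transition
    """
    out = np.empty_like(high)
    mix = 0.0                     # 0 = high-gain stream, 1 = low-gain stream
    step = 1.0 / fade
    for i, (h, l) in enumerate(zip(high, low_scaled)):
        # Decide from the (unclipped) low-gain stream, since the high-gain
        # stream may already be clipped at this point.
        target = 1.0 if abs(l) > threshold else 0.0
        mix += np.clip(target - mix, -step, step)   # ramp toward the target
        out[i] = (1.0 - mix) * h + mix * l
    return out
```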
I don't get it: you increase the attack and release times, yet get single-sample gating? You need a per-sample gating decision to keep clipped samples from getting past the gate.
 
Ideally, you want an algorithm that continuously monitors the output of both ADCs as long as neither is clipping and both are well above their respective noise floors. Starting from a built-in preset value, it would perform a sort of moving average, ideally stretching over minutes. And that is only after you have EQ'd out the frequency-response differences, hoping those are at least reasonably static.
That's right. What you need are an adaptive offset-difference detector and an adaptive gain-difference detector between the two (or more) input channels, to seamlessly stitch the signals together in the digital domain as they transition from one gain range to another.
The convergence speed of these detectors could (and should) be made signal-level dependent: at low levels, the offset detection runs faster (shorter average) while the gain-difference detection is paused (too noisy); at higher levels, the offset detection is halted (too much signal content, even after low-pass filtering) while the gain-difference detection runs with a faster exponential time constant.
Further optimizations include using short look-ahead delay lines, so the transition to the low-gain channel happens before the pre-ringing of the anti-alias FIR filter in the clipped high-gain channel, and taking advantage of masking effects by switching back to the high-gain channel only a couple of milliseconds after the signal level drops.
In practice, these algorithms can be tweaked to make the transitions completely inaudible, taking full advantage of the dramatic dynamic-range extension offered by multi-range conversion.
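A sketch of such level-dependent adaptive detectors; every threshold, time constant, and starting gain below is an illustrative assumption, not a value from any actual converter:

```python
import numpy as np

class RangeMatcher:
    """Track the offset and gain difference between two ADC ranges.

    Two exponential averagers with level-dependent behavior, as described
    above: the offset is learned at low levels (gain learning would be too
    noisy there), the gain ratio is learned at higher levels (where the
    offset is swamped by signal content).  All constants are illustrative.
    """
    def __init__(self, gain0=16.0, low_thr=0.01, high_thr=0.1):
        self.offset = 0.0        # offset of high-gain relative to scaled low-gain
        self.gain = gain0        # starting guess for the gain ratio
        self.low_thr = low_thr
        self.high_thr = high_thr

    def update(self, h, l):
        """Feed one sample pair: h from the high-gain ADC, l from the low-gain."""
        level = abs(l)
        if level < self.low_thr:
            # Quiet passage: adapt the offset quickly, freeze the gain estimate.
            self.offset += 1e-3 * ((h - self.gain * l) - self.offset)
        elif level > self.high_thr:
            # Loud passage: freeze the offset, adapt the gain ratio.
            ratio = (h - self.offset) / l
            self.gain += 1e-4 * (ratio - self.gain)

    def to_low_scale(self, h):
        """Map a high-gain sample into the low-gain range's scale."""
        return (h - self.offset) / self.gain
```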
 
I don't get it: you increase the attack and release times, yet get single-sample gating? You need a per-sample gating decision to keep clipped samples from getting past the gate.
Yes, you're right. It can't work in real time. I was focusing on smooth transitions and forgot about clipping...
 
You could see how it reacts to short 'Dirac' pulses in the switching range.
I measured the two impulse responses (see pictures below). The first one had an amplitude of 0.03 Vp (0.021 Vrms) and thus should only involve the high-gain ADC. The second one had an amplitude of 0.18 Vp (0.127 Vrms), which should be enough to trigger the switch from the high-gain ADC to the low-gain one.

ImpulseResponse192kHzwithLowerAmplitude.png


ImpulseResponse192kHz.png


I did not expect to see any noticeable ADC-switching effect on the impulse waveform, because the impulse is so short, and the change from sample to sample so steep, that it could hide any gain and offset discrepancies between the two ADCs. There are some minor differences in the frequency responses measured with this impulse-response method.

I then zoomed in to check whether there was any variation in the noise level before and after the impulse in the above two cases (see pictures below). As expected, the noise level does not change in the first case. In the second case, however, the change in the noise level clearly indicates the ADC switching: from the high-gain ADC before the impulse to the low-gain ADC (within and) after the impulse, and back to the high-gain ADC about 60 ms later due to the very low signal level.

ImpulseResponse192kHzwithLowerAmplitudeNoiseLevel.png


ImpulseResponse192kHzNoiseLevel.png
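That before/after noise-floor comparison can be automated by measuring RMS over short windows and flagging sudden jumps; this is only a sketch, with arbitrary window length and jump factor:

```python
import numpy as np

def window_rms(x, win):
    """RMS of consecutive non-overlapping windows of length win."""
    n = len(x) // win
    frames = x[:n * win].reshape(n, win)
    return np.sqrt((frames ** 2).mean(axis=1))

def noise_step(x, win=256, factor=2.0):
    """Return sample indices where the noise floor jumps by `factor`,
    e.g. where the recording switched from the high-gain to the
    low-gain ADC and its higher noise floor."""
    rms = window_rms(np.asarray(x, dtype=float), win)
    jumps = np.flatnonzero(rms[1:] > factor * rms[:-1]) + 1
    return jumps * win       # convert window index back to sample index
```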
 
I then zoomed in to check if there was any variation of the noise level before and after the impulse in the above two cases. [...] the change of the noise level clearly indicates the switching of ADC [...]
Ok, so in the third image you see the switching behavior through the rising noise floor? At t = 0.559 it seems to switch back.

(edit: ah, you had already explained that)
 
In order to see whether there is a step change in the waveform during the ADC switching, I performed a linear sine-wave amplitude sweep from 0.03 Vp to 0.05 Vp in 0.5 s, then back from 0.05 Vp to 0.03 Vp in the subsequent 0.5 s, with the process repeated. The results are shown below. The red line represents the amplitude envelope of the waveform, obtained by amplitude-demodulating the original waveform through the Hilbert transform. A bolder red line indicates a higher noise level in the original waveform. The upper and lower thresholds for ADC switching were measured to be 44.2 mV and 31.8 mV respectively. These two thresholds remained constant within the carrier frequency range of 300 Hz ~ 20 kHz. For carrier frequencies below 300 Hz, the upper threshold increases while the lower threshold decreases, for unknown reasons.
1kHzAmplitudeSweep30mV-50mV-withMarks.png


Upon zooming in to inspect the waveform at the point of ADC switching, I couldn't discern any step change. The waveform appeared smooth, as if there were no ADC switching. The picture below shows a close-up of the transition from the high-gain ADC to the low-gain one. The top (or bottom) of a sine wave is an ideal location to illustrate the noise level, due to its flatness. The third peak of the sine wave exhibits a noticeably higher noise level than the previous two, further confirming the ADC-switching location indicated by the red amplitude envelope. It can also be observed from the red amplitude envelope that the noise level increases gradually rather than instantly, implying a stitching DSP algorithm that fades in the new data stream and fades out the old one during the transition. The transition appears to be about 0.5 ms long.

1kHzAmplitudeSweep30mV-50mV-ZoomInAtTop.png
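The envelope extraction described above can be reproduced with an FFT-based analytic signal (the same construction `scipy.signal.hilbert` uses), sketched here with NumPy alone:

```python
import numpy as np

def envelope(x):
    """Amplitude envelope of a real signal via the analytic signal.

    The analytic signal is built by zeroing the negative-frequency half of
    the spectrum and doubling the positive half; its magnitude is the
    instantaneous amplitude, which is what the red envelope trace shows.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0                     # DC stays as-is
    if n % 2 == 0:
        h[n // 2] = 1.0            # Nyquist bin stays as-is
        h[1:n // 2] = 2.0          # positive frequencies doubled
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))
```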
 
Well-done test of the properties by means of the continuously varying signal amplitude!
You could make the residual gain mismatch and offset error visible by subtracting the (suitably scaled) original excitation signal.
Overall, the algorithm seems to be well implemented, avoiding any audible artifacts.
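That subtraction idea might look like this minimal sketch, which least-squares-fits the scale of the (time-aligned) excitation signal and returns the residual:

```python
import numpy as np

def residual(recorded, reference):
    """Subtract the best least-squares-scaled reference from the recording.

    What remains is noise plus any gain-mismatch/offset artifacts around the
    range transitions, which would otherwise hide under the excitation tone.
    Assumes the two signals are already time-aligned.
    """
    recorded = np.asarray(recorded, dtype=float)
    reference = np.asarray(reference, dtype=float)
    scale = np.dot(recorded, reference) / np.dot(reference, reference)
    return recorded - scale * reference
```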
 