
Amplifier distortion testing using a modestly priced audio interface

With that out of the way, I'll be making a lowpass filter for measuring the V3 Mono. But I'm curious to know if it's really necessary as long as I'm only going to use it for 1W/5W testing. I've already run tests of the V3 with a direct probe, without any damage and with what appear to be good results.
The filter is essential to guarantee that RF frequencies do not cause some sort of slew-rate error when measuring switching amplifiers.
 
What is the slew rate impact? How is it manifested in the measurements?
 
What is the slew rate impact? How is it manifested in the measurements?
In post #35 I included a link to the AES17-2020 technical standard, which explains why it is a requirement. Also, in #36 and #37 more information was provided by other posters.

The test gear is designed to measure audio - i.e. things humans can hear. It's not expecting high levels of radio frequency to be mixed in with the audio. A classic preamplifier or class AB amplifier does not have radio frequency mixed in.

These RF frequencies are quite high in level and so the slope of the signal is much steeper than anything we can hear (slopes get steeper as frequencies rise). If the test gear is unable to handle such a steep slope it will behave as if the audible frequencies are distorted, so you get inaccurate, junk data in the audio frequency range, even though there's no real audible distortion there. Similarly, the test gear may just act as if you are overloading it. This is not overloading as in "danger to the circuitry - there will be a fire"! But it just means the circuits won't be linear and will give junk answers.

So you filter everything above 48kHz. That's the commonest standard.
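
If it helps to see the mechanism, here is a toy numpy sketch of my own (an illustration, not a model of any particular ADC): add a hypothetical Class D switching residual to a 1kHz tone, pass the sum through a crude slew-rate limiter, and distortion products appear inside the audio band even though the audio itself was clean.

```python
import numpy as np

fs = 1_000_000                      # simulate at 1 MHz so the RF ripple is resolved
t = np.arange(int(0.05 * fs)) / fs  # 50 ms of signal

audio = np.sin(2 * np.pi * 1_000 * t)       # 1 kHz test tone
rf = 0.3 * np.sin(2 * np.pi * 450_000 * t)  # hypothetical switching residual

# Crude slew-rate limiter: clamp the per-sample step, roughly what an
# overdriven input stage does. Allow ~10x the slew of the 1 kHz tone.
max_step = 10 * 2 * np.pi * 1_000 / fs
out = np.empty_like(audio)
out[0] = audio[0] + rf[0]
for i in range(1, len(t)):
    step = np.clip(audio[i] + rf[i] - out[i - 1], -max_step, max_step)
    out[i] = out[i - 1] + step

# The spectrum of the limited signal now contains products inside
# 20 Hz-20 kHz that the clean tone never had -- the "junk data" above.
spectrum = np.abs(np.fft.rfft(out * np.hanning(len(out))))
freqs = np.fft.rfftfreq(len(out), 1 / fs)
```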
 
In post #35 I included a link to the AES17-2020 technical standard, which explains why it is a requirement. Also, in #36 and #37 more information was provided by other posters.
Yes, I read through what I could access. The AES paper is $100 for non-members, so unavailable to me. The filter I will make is the one listed in the TI paper; I have the parts on hand now. It's a rather simple filter compared to the more elaborate one described in another paper to approximate the AP filter. The more elaborate ones shown are beyond my needs, so my hope is that the TI filter will be sufficient for my purposes.
These RF frequencies are quite high in level and so the slope of the signal is much steeper than anything we can hear (slopes get steeper as frequencies rise). If the test gear is unable to handle such a steep slope it will behave as if the audible frequencies are distorted, so you get inaccurate, junk data in the audio frequency range, even though there's no real audible distortion there.
I was curious to know if there's some obvious impact that would be apparent in the measurements. Evidently not, just junk data, although the measurements I have made in brief testing don't appear to show severe distortion. It will be interesting to compare the raw to the filtered measurements when I get to that point.
Similarly, the test gear may just act as if you are overloading it. This is not overloading as in "danger to the circuitry - there will be a fire"! But it just means the circuits won't be linear and will give junk answers.
There was one comment about possible damage to the ADC input, hence my concern on that point, especially for the ADCiso.
So you filter everything above 48kHz. That's the commonest standard.
The TI filter Fc is 33.86kHz and I'm only interested in 20Hz-20kHz measurement, as I will only use a 48kHz sample rate, so this should be satisfactory; correct me if it won't be adequate. I will be using this primarily for speaker driver testing, not for low-distortion amplifiers beyond the V3 Mono I currently have, at least for now. I will want to test any future class D amps I may acquire.
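
For what it's worth, 33.86kHz is exactly what a single-pole RC gives with R = 1kΩ and C = 4.7nF, so I assume those are the TI values (worth checking against the paper). The arithmetic, plus the attenuation such a filter would give at a hypothetical 450kHz switching frequency:

```python
import math

R = 1_000    # ohms  (assumed value; verify against the TI paper)
C = 4.7e-9   # farads

fc = 1 / (2 * math.pi * R * C)
print(f"fc = {fc:.0f} Hz")   # ~33863 Hz, matching the quoted 33.86 kHz

# First-order lowpass attenuation at a hypothetical switching frequency:
f_sw = 450e3
att_db = -10 * math.log10(1 + (f_sw / fc) ** 2)
print(f"{att_db:.1f} dB at {f_sw / 1e3:.0f} kHz")   # about -22.5 dB
```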

Thanks for the input, I still have much to learn about class D equipment.

Edit: I only intended to characterize the components I planned to use, that is, the Scarlett 2i2 and the V3 Mono, to ensure that my rig was adequate for my needs. I found that I needed to learn more about distortion testing and the specific requirements and possible issues (hardware and software), especially since the 2i2 is a somewhat pedestrian audio interface. It's more challenging than I expected.
 
I was curious to know if there's some obvious impact that would be apparent in the measurements.
Yes, the noise + distortion figures will be consistently, measurably worse if the test gear is impacted by the RF or runs into slew-rate limiting. So without the filter, Class D amps measure worse than they actually perform.

Bear in mind that this is controversial with some people, who argue that you should measure Class A, A/B, D, G and H all identically, with a wideband perspective. They argue that the test-gear filter is "fake" and is needed to make Class D look good enough to be considered HiFi, whilst Class A/B posts results as good if not better without needing a test-gear filter to achieve them - almost that the filter gives a rosy-eyed view of Class D, and that in reality it's distorted and noisy and should not be treated with much respect. I see what they mean, but harmonic distortion of a fundamental above 6.7 or 10kHz is academic, and thanks to Fletcher-Munson and masking we can't hear much noise above 7kHz in the presence of normal music. So I don't mind if the test gear is only testing noise and distortion to, say, 24kHz.
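
The 6.7 and 10kHz figures are just the highest fundamentals whose third and second harmonics, respectively, still land at or below 20kHz:

```python
# Highest fundamental whose nth harmonic still falls within a 20 kHz band
band_top = 20_000
for n in (2, 3):
    print(f"H{n}: fundamentals up to {band_top / n:.0f} Hz stay in-band")
# H2: 10000 Hz, H3: 6667 Hz -- hence "above 6.7 or 10kHz is academic"
```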

although the measurements I have made in brief testing don't appear to show severe distortion. It will be interesting to compare the raw to the filtered measurements when I get to that point.
You may be lucky and your rig is unperturbed by the RF embedded in the signal. Certainly, I've read that APs definitely need the filter.

But it's also worth noting that in many ways, AES is one of the primary authorities in audio, if not THE primary authority (and yes, I know all about "Argument from Authority" and critical thinking). Many day-to-day audio standards and practices have been created by them. If their guidance is to use a filter, I'd go with it.

There was one comment about possible damage to the ADC input, hence my concern on that point, especially for the ADCiso.
Perhaps there's a risk; I'm afraid I don't know. It doesn't hurt to be cautious. If it's been OK at 1 to 5W, that doesn't mean it will stay safe at 200W.
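
To put rough numbers on it: 200W into 8 ohms is 40V RMS, so the probe's attenuation has to bring that below the ADC's full-scale input. A quick sketch (the 6.7V full scale is just the ADCiso setting mentioned in this thread; substitute your own):

```python
import math

def required_attenuation_db(p_watts, load_ohms, v_fullscale, headroom_db=1.0):
    """Minimum probe attenuation keeping the amp output below ADC full scale."""
    v_amp = math.sqrt(p_watts * load_ohms)   # RMS volts at the amp terminals
    return 20 * math.log10(v_amp / v_fullscale) + headroom_db

# 200 W into 8 ohms (40 V RMS) against an assumed 6.7 V full-scale input:
print(f"{required_attenuation_db(200, 8, 6.7):.1f} dB")   # ~16.5 dB
```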

The TI filter Fc is 33.86kHz
Sounds reasonable to me
 
I'm finishing up testing and have a lot to update. First, I made the balanced lowpass filter for the V3 Mono probe. The difference was minimal.

I compared it to a direct connection for 5W into 8ohms. These used the 2i2 for output and input.
V3 Mono with and without Lowpass Filter.jpg

I tested the filter for 48kHz and 96kHz.
V3 Mono 48kHz vs 96kHz with Lowpass Filter.jpg

I also bypassed the Monitor1, with the feedback probe connected directly to the 2i2 input. No significant change.
 
Separate from tests of rig efficacy, I got curious about the V3 Mono temperature and its impact on V3 distortion. The fan that supported the V3 for cooling has three speeds via a switch. There was some measured noise from it, though not significant. The fan made a big difference in the temperature of the top plate of the V3.

I also have an old fanless CPU cooler, which I tested with the V3 resting on its top face. No thermal paste, and it's inverted from its CPU orientation, but it has a large contact area. The cooler alone was surprisingly effective, but not as good as the V3 supported by just the fan. Then I leaned the fan against the tilted V3 for better flow through the cooler. The result is amazing. With nothing, the V3 top plate temperature goes above 103 degrees F. With the heat sink alone it stabilizes at 97.7 deg. With the heat sink plus the fan at low speed it stabilizes at 78.6 deg. The latter was a real surprise.
V3 Mono Rig.jpg
V3 Mono 5W into 8 ohms Heat Sink with Fan vs No Fan vs Side Fan Slow Speed -12.27dBFS 2i2 Inpu...jpg

Roughly a 12dB difference in H2, strictly due to temperature.
 
While testing the V3 with the setting at what I felt to be the "sweet spot" at 10dB gain, I got curious about gain. So I raised it to 12dB in the 2i2, changing nothing else. To my surprise the results were better. That set me onto a whole set of tests at 12dB gain - I tried other settings too, but they all failed. This meant I had to go through the whole process of determining the best "sweet spot" again. I say best because every gain setting results in a different S-THD. Maybe more so with the 2i2: its output distortion varies rather dramatically between HD components. The sweet spot is at best a compromise across all HD components. I've seen a 20dB change in an HD component (HD2 in this case) with a relatively small change of the input to the 2i2.

I ran the S-THD loopback for 12dB gain in the 2i2, then tested several possible points as the sweet spot, these I captured.
V3 Mono 5W into 8 ohms Heat Sink High Speed Fan2i2 Gain 12dB Monitor1 Variations.jpg


Ultimately I selected -11.50dBFS. There wasn't much difference in HD values for normal vs coherent.
V3 Mono 5W 2i2 12dB Gain REW 48kHz 256kFFT Output -18.48dBFS  Input -11.50dBFS Normal vs Coher...jpg
 
Here's an example of the sensitivity of the input to different voltages. The only change between these two measurements was the setting of the Monitor1 for the 2i2 input. This I did while still using 2i2 input gain at 10dB. The HD2 difference is almost 20dB. I've settled on 12dB and saw no need to re-run this. This is the problem that could occur with switch-selectable probe ratios: there's no way to know if those ratios land on optimal (sweet spot) input points. Again, this is for the 2i2, which has rather large HD component swings with differing input voltages.

Edit: Not the correct graph. I'll fix this.
Fixed:
Demonstrates the hazard of random input voltage selection.jpg


This shows what happens when only the gain is changed, in this case 10dB vs 12dB: the sweet spot changes. This is what led me to select 12dB 2i2 input gain for all future testing.
Demonstrates the hazard of random input voltage selection.jpg
 
On to my disappointment: the E1DA ADCiso. My thought was that the 2i2 output may be a limiting factor, but that the E1DA should provide a level of improvement. That has not been the case in my testing. When I started, it seemed that I had to set the feedback to maximum; dropping the voltage with the Monitor1 continually worsened the results. The ADCiso switches were set for 6.7V. My testing was for the V3 at 5W into 8 ohms, 6.32V, so I could even connect the probe directly to the V3 through the filter, which I did. No improvement. I tried a number of random settings of the Monitor1 to examine the response, but that was shooting in the dark.
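
For reference, the 6.32V figure is just √(P·R), and against the 6.7V full-scale setting that leaves only about 0.5dB of headroom at the ADC:

```python
import math

def rms_for_power(p_watts, load_ohms):
    """RMS voltage that delivers p_watts into load_ohms."""
    return math.sqrt(p_watts * load_ohms)

def dbfs(v_rms, v_fullscale):
    """Level of v_rms relative to the converter's full-scale voltage."""
    return 20 * math.log10(v_rms / v_fullscale)

v = rms_for_power(5, 8)             # 6.32 V RMS for 5 W into 8 ohms
print(f"{dbfs(v, 6.7):.2f} dBFS")   # about -0.50 dBFS on the 6.7 V range
```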

It then occurred to me that I could run a stepped THD in REW by setting the upper step setting to -18.48dBFS, the 5W point, without over-driving the ADC. This is that test, the input graph for the ADC.
V3 Mono into 8ohms E1DA Cosmos ADCiso.jpg


What I don't know is how much of this randomness is due to the 2i2 output. I had hoped to see smoother distortion curves.

Here's the biggest reason for my disappointment. This is an overlay of the V3 Mono 5W measurement for the Scarlett 2i2 vs the E1DA ADCiso. It may be because the 2i2 test uses the same hardware (clock) for input and output, whereas the ADCiso test uses the 2i2 for output only. I don't know how much difference that makes, but unless I've done something wrong, the ADCiso results are actually worse than the pure 2i2. Maybe some of you can explain why that is, or what I may have done wrong. Maybe the 2i2 output noise level is to blame.
V3 Mono Selected Sweet Spot Points for E1DA ADCiso vs Scarlett 2i2 Gen 4.jpg

The ADC input was -30.52dBFS, but I found no higher input points that were a significant improvement. I should probably try more tests with it, though.
 
I do plan to do that; I have three multichannel Kenwood amps (KM-X1). I replaced the output relays and set the bias and offset (I have the service manual). Despite their age they still perform well. All had zero DC offset on all channels, and bias adjustments were small except on one amp. These were THX compliant; I assume that has something to do with their quality. I bought them used on eBay when they were 20+ years old because they were in my budget at the time. One has seen nearly daily use by me for many years. My goal is to make distortion tests on all channels at some point, but that will have to wait.

Thanks for the encouragement. I value all feedback.
Good job on the measurements of your Fosi V3 Mono, but isn't it starting to get a little close to the measurements on your Kenwood KM-X1? :)
Since you have three of them, it would be interesting to see if they measure the same. From what I can see they are around thirty years old - see what the ravages of time have done to their performance, so to speak.

 
I have a Focusrite 2i2 (gen 2) and I found that noise / THD varied quite a lot depending on gain settings.

Some gain settings resulted in lower THD but higher noise and vice versa.

I have attached an Excel spreadsheet should anyone wish to graph this.

1740497720389.png
 

Attachments

  • focusrite.zip
    7 KB · Views: 22
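
If anyone prefers to plot it in Python rather than Excel, something like this should work once the zip is extracted (the file and column names below are placeholders; adjust them to whatever is actually in the sheet):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Placeholder file/column names -- adjust to the actual spreadsheet contents.
df = pd.read_excel("focusrite.xlsx")

plt.plot(df["gain_db"], df["thd_db"], label="THD")
plt.plot(df["gain_db"], df["noise_db"], label="noise")
plt.xlabel("input gain (dB)")
plt.ylabel("level (dB)")
plt.legend()
plt.show()
```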
Good job on the measurements of your Fosi V3 Mono, but isn't it starting to get a little close to the measurements on your Kenwood KM-X1? :)
Since you have three of them, it would be interesting to see if they measure the same. From what I can see they are around thirty years old - see what the ravages of time have done to their performance, so to speak.
I initially thought your point was probably due to the 2i2 output noise limiting the SINAD in my test rig, but then I looked back at Amir's V3 Mono measurements, the published V3 numbers, and those of the Kenwood in the link you provided. The V3 beats the Kenwood hands down - but then I realized I was comparing the balanced V3 to the unbalanced Kenwood. From that perspective, the Kenwood easily exceeds the V3 Mono's unbalanced specs, although there's no SNR listed for the V3 unbalanced. Even so, the Kenwood's 105dB SNR is pretty fair by today's standards. My rig won't be able to measure an accurate SNR given the 2i2 output limitation on noise, however. The Kenwood has no published info on HD components, but its THD spec is better than that of the V3, balanced or unbalanced (one number shown). It will be interesting to compare HD components, since REW can pull those out of the noise.
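
One thing to keep in mind when juggling separate SNR and THD specs: SINAD combines them as powers, so the weaker of the two dominates. A quick helper with illustrative numbers (the 105dB is the Kenwood's SNR spec; the 90dB THD is made up for the example):

```python
import math

def sinad_db(snr_db, thd_db):
    """Combine SNR and THD (both positive dB below the fundamental)
    into SINAD by summing the noise and distortion powers."""
    noise = 10 ** (-snr_db / 10)
    dist = 10 ** (-thd_db / 10)
    return -10 * math.log10(noise + dist)

print(f"{sinad_db(105, 90):.1f} dB")   # ~89.9 dB: THD dominates here
```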
 
One more curious item to report. With the 2i2/ADCiso loopback there was no external noise problem, even though this was with a Samsung charger powering the ADC. When measuring the V3 with the 2i2 only (I/O), no noise issue. But with the 2i2/ADCiso measuring the V3, the ADCiso picked up significant noise. I had to switch the ADC to a power bank to eliminate it. Odd that it was fine in the loopback but not when measuring the amp. Any thoughts as to why?

V3 Mono to Monitor1 to ADCiso Samsung Charger vs PowerBank.jpg
 
I switched to the old PC in order to set up for testing the unbalanced Kenwood amps. This will use the Delta 1010LT PCI sound card. There were two surprises. First, the 1010LT (unbalanced) sweet-spot loopback measurement is nearly equivalent to the Scarlett 2i2 balanced sweet-spot loopback measurement. Being on a different PC, I measured the 2i2 to have a comparison graph. The second surprise is that the 2i2 measures significantly worse when connected to the old PC. This seems to be due to the noise floor being much higher. I would expect there to be no difference between the (old) PC and the (old) laptop since they both use a USB connection, but that noise floor limit is constant. Below are the 2i2 measurements in question. Can anyone explain why this may be so? There's no power supply noise (60Hz here). Is there this much noise contaminating the USB in the PC?

To contrast this, the PCI sound card is excellent; from early testing, the unbalanced 1010LT is nearly the equivalent of the balanced 2i2 on the laptop.

Old laptop 2i2 measurement:

Scarlett 2i2 48kHz Output Max Monitor1 100 REW 256FFT -25.0dBFS Standard.jpg


Old PC 2i2 measurement using the same generator output:

Scarlett 2i2 Monitor1 100 Loopback 2i2 Input Gain 12dB T1650 PC REW Gen -20.0dBFS.png


I will be posting the 1010LT loopback results soon.
 
Delta 1010LT unbalanced loopback measurements. The 1010LT input is via a 10-turn potentiometer at max (0.0) for the loopback measurements; the pot will be used later to set the feedback input voltage to the sweet spot. The first thing I did was run an REW stepped THD vs level.

1010LT Unbalanced Loopback 10 Turn Pot at Max REW S-THD vs Level 0.1dB Step 48kHz 256kFFT.png


After a few tests at lower dBFS levels I expanded the scale to a more suitable range.

1010LT Unbalanced Loopback 10 Turn Pot at Max REW S-THD vs Level 0.1dB Step 48kHz 256kFFT Expa...png


Testing more selected points, I settled on -2.4dBFS as the optimal sweet spot. This has the advantage of being 1.365V (-2.72dBFS) at the input. This ought to reduce the effects of any external noise, since this will be for testing unbalanced amplifiers.
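
As a sanity check, that voltage/level pair pins down the card's full-scale input on this range, which is easy to back out:

```python
v_meas = 1.365       # volts RMS measured at the input
level_dbfs = -2.72   # level REW reported for that voltage

v_fullscale = v_meas / 10 ** (level_dbfs / 20)
print(f"{v_fullscale:.3f} V RMS full scale")   # ~1.867 V
```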

1010LT Unbalanced Loopback 10 Turn Pot at Max REW Gen -2.4dBFS 48kHz 256kFFT.png


Not bad for a sound card that came out around the year 2000.

I know that this diverges from the purpose of the thread, that is, using a modest audio interface rather than a PCI sound card, but there was interest in it, and I will be measuring more unbalanced than balanced amps. The 1010LT looks to be superior to the 2i2 for that use.
 
The second surprise is that the 2i2 measures significantly worse when connected to the old PC. This seems to be due to the noise floor being much higher. I would expect there to be no difference between the (old) PC and the (old) laptop since they both use a USB connection, but that noise floor limit is constant. Below are the 2i2 measurements in question. Can anyone explain why this may be so?
Looks an awful lot like you were stuck at 16 bits per sample. Check your REW version and I/O settings. My guess is Java I/O without exclusive mode or something.
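
For context on why 16-bit would look like a raised floor: an ideal converter's quantization noise sits about 6.02dB per bit (plus 1.76dB) below full scale, and the FFT's processing gain spreads it further down per bin. A rough sketch with the 256k FFT used in this thread:

```python
import math

def quantization_snr_db(bits):
    """Theoretical broadband SNR of an ideal converter, full-scale sine."""
    return 6.02 * bits + 1.76

fft_len = 256 * 1024
processing_gain = 10 * math.log10(fft_len / 2)   # per-bin noise reduction

for bits in (16, 24):
    floor = -(quantization_snr_db(bits) + processing_gain)
    print(f"{bits}-bit: ~{floor:.0f} dBFS per-bin floor")
# 16-bit comes out ~48 dB higher than 24-bit -- easy to spot on an FFT plot
```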

Not bad for a sound card that came out around the year 2000.
Mind you, you're using line ins and outs only. (If you want to be impressed, look at the measurements of the E-MU cards that came out in the 2004-6 time frame.) The 2i2 Gen4's limiting factor in terms of distortion is its preamp stage. If you were using the fixed-level line-in provided by the 4i4 model, that would probably yield a whole lot more impressive results.

BTW, the 1010LT has a bit of a reputation for not using the best-quality electrolytic caps, and a high-hour sample may appreciate a recap (though yours seem to give no indications of distress). Ironically, this is an issue shared with the E-MU 1820/1820M external audio dock where two 680µ/10V caps - presumably on the supply rails - like to puff up particularly often but are not the only potential trouble spot.
 
Looks an awful lot like you were stuck at 16 bits per sample. Check your REW version and I/O settings. My guess is Java I/O without exclusive mode or something.
It's not using Java for the 2i2 loopback; it uses the supplied ASIO driver. The settings control is very limited and doesn't show the sample bit size, but it is 24 bits according to some info on the web. I couldn't find it on the Focusrite site. Given that it's the most recent generation, I'd be surprised to find anything 16-bit.
Mind you, you're using line ins and outs only. (If you want to be impressed, look at the measurements of the E-MU cards that came out in the 2004-6 time frame.) The 2i2 Gen4's limiting factor in terms of distortion is its preamp stage. If you were using the fixed-level line-in provided by the 4i4 model, that would probably yield a whole lot more impressive results.
The 2i2 input is best with gain set to 12dB in my case; there's no fixed line-in, but all tests are repeatable on both machines. I expected identical response when using the ASIO driver. Both systems are Windows 10 Pro, so it's still puzzling. Maybe there's some strange issue with the USB on the PC - it's an old USB-A socket. I had not powered the 2i2 with a phone charger, so I tried that: no change. Tried a power bank: no change. So it's not a power supply issue, unless the USB-A has some noise getting through to the 2i2. I can only guess.
BTW, the 1010LT has a bit of a reputation for not using the best-quality electrolytic caps, and a high-hour sample may appreciate a recap (though yours seem to give no indications of distress). Ironically, this is an issue shared with the E-MU 1820/1820M external audio dock where two 680µ/10V caps - presumably on the supply rails - like to puff up particularly often but are not the only potential trouble spot.
I plan to replace the caps on the other board I have, the one with the failed balanced input, though there's no physical evidence of cap failure; I think you mentioned that before. The board I'm using now shows no physical signs of cap failure either - no guarantee, of course. I'm very pleased with the results of this board as it is, especially the unbalanced I/O. I'll be using that for testing many unbalanced amps.

I've also just completed testing the 2i2-to-1010LT balanced path. Also surprisingly good, but actually not as good as the 1010LT unbalanced loopback. That's using REW's 2i2 output control into a 1010LT sweet spot, so the result may be affected by the 2i2 output not being at its own sweet spot: Amir's tests showed the 2i2's ideal output at 4V, above the 1010LT input max, so some of the distortion may come from running the 2i2 off its optimum. The best 2i2-1010LT response was with the 2i2 output at about 1.2V, which puts it somewhere around 102dB on his SINAD curve. It may or may not influence the 2i2-1010LT results. The 1010LT has poorer noise than the 2i2 even in my non-AP tests, but the THD I measured on the 1010LT unbalanced loopback is 108.8dB. I would expect the balanced input to be at least that good.

Edit: Forgot to say that I'm using the latest beta release of REW.
 
It's not using Java for the 2i2 loopback; it uses the supplied ASIO driver. The settings control is very limited and doesn't show the sample bit size, but it is 24 bits according to some info on the web. I couldn't find it on the Focusrite site. Given that it's the most recent generation, I'd be surprised to find anything 16-bit.
REW should be displaying input and output bit depth in the status bar, so I'd recommend checking what it says. (If there isn't anything, check whether you aren't accidentally running a rather old version still, like pre-5.30.)
 
I hadn't noticed that, thanks. The version is 5.40 beta 70.

Interesting:

Scarlett 2i2 ASIO driver: Int32L in and out.

FlexASIO driver (2i2 out, 1010LT in): Float32L in and out.

Delta 1010LT ASIO driver: Int32L in and out.
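
For anyone wanting to double-check a device's accepted sample formats outside REW, the python-sounddevice library can probe them (the device index below is a placeholder; pick yours from the printed list):

```python
import sounddevice as sd

print(sd.query_devices())   # list devices and note your interface's index

device = 1                  # placeholder -- substitute your interface's index
for dtype in ("int16", "int32", "float32"):
    try:
        sd.check_input_settings(device=device, dtype=dtype, samplerate=48000)
        print(f"{dtype}: supported")
    except Exception as err:
        print(f"{dtype}: not supported ({err})")
```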
 