Welcome to ASR. There are many reviews of audio hardware and expert members to help answer your questions.

Amplifier distortion testing using a modestly priced audio interface

DavidR · Active Member · Joined Aug 19, 2024 · Messages: 142 · Likes: 67
If this thread should be in another sub-forum, I ask the moderators to please move it.

I've been doing a lot of testing to learn the capabilities of a test setup I will use primarily for driver testing. This is my first deep dive into distortion. I knew that I needed to determine the limits of the hardware and software to ensure that the test system was adequate. The hardware includes an audio interface, a "feedback device" I found to be critical for optimal results, and an amplifier.

The software (REW) is without doubt more than adequate, so this thread will focus on the surprising results achieved in comparison to measurements posted by amirm using his APx555 system (I think that's right). Posts to follow may be a bit long, as I want to provide as much detail as I can. The results are arguably close to those provided by the APx555, with the exception of the noise level; thus the SINAD result is limited by the audio interface (in my case). My belief is that with a higher quality (lower noise) audio interface the results can rival the APx555, within the limits of the interface and REW. Quite the claim, yes, but I think the measurements will demonstrate this.

I'll start by showing the final result with comparison to similar testing with the APx555 results reported by amirm. Later posts will provide details. The target amp is the Fosi V3 Mono. All my tests were made using the XLR I/O.

This is the 1kHz tone test result by amirm:
Fosi Audio Mono V2 amplifier measurement 1kH tone test.jpg


This is my best result:
Fosi V3 Mono 5W 8-ohm Resistive Load 48k 512kFFT 103Deg.jpg

The V3 must be allowed to get "hot". I measured the top with a laser thermometer. The V3 temperature stabilized at 103.8°F measured at the center of the top (the hottest location I found). The HD components seem to settle in quickly, but the noise floor improves as it gets hotter, up to that point. I have the V3 supported for clearance underneath for cooling. More posts to come, but it may take a while to pick which to post and to organize them in a logical manner.
 
What audio interface are you using for your measurements?
 
This is going to be long because it will cover three components being used.

The audio interface I use is the Focusrite Scarlett 2i2 Gen 4, covered in other threads. I bought it on recommendation, but some early testing made me question whether it would be adequate. I found this site and read through the review by amirm as well as other member comments. I was bothered by how much focus was on the "sweet spot", because I expected to have varying levels for tests, and the distortion outside of the sweet spot made me question it. Originally I planned to use the 2i2 only for output to the power amp, feeding the amp output to an older PCI sound card with balanced input, but that turned out to be inadequate as well. Some users have made feedback probes (voltage dividers) switchable for ratio, but that didn't appeal to me, and it still meant that most often the 2i2 input voltage would not be close to the sweet spot. Close wasn't good enough for me. What I felt I needed was a better way to attenuate the feedback from the amp output.

In reading other reviews (you can spend months trying to read through the many helpful and/or just interesting threads here) I happened upon a mention of the Behringer Monitor1 passive attenuator. It was inexpensive, has balanced I/O (a key point), and was even on sale at Amazon at the time, so I bought one to experiment with. After finding the source of a problem (detailed in my Monitor1 thread here) using the Monitor1 in the loopback with the Scarlett 2i2, the results looked promising for use in place of fixed or switchable feedback probes.

All testing with the Monitor1 in the loop was with the 2i2 output dial at maximum; amirm said that is its best THD setting, and it matched what I had found in my own 2i2 loopback tests. That is with its input gain at zero to allow for maximum 2i2 output in a THD vs Level test. I then tested various settings of the Monitor1 dial (0-100 markings) while changing the REW output level for THD tests. With the Monitor1 at 100 the 2i2 feedback input was not equal to the 2i2 output. For some reason the Monitor1 at 100 isn't really at 100%; there is a voltage drop, but this wasn't a problem for distortion tests. A calibration of the 2i2 with the Monitor1 in the loopback is identical to the 2i2 alone, only down some in level (documented in my other thread). THD sweeps showed the same 2i2 sweet spot (input voltage) as the 2i2 loopback, not surprising since the Monitor1 is a passive attenuator. I verified that THD results of the 2i2 only and the 2i2-Monitor1 were nearly identical, so I'm fairly confident that this scheme is reliable.

Best results in the 2i2 loopback were at the sweet spot, of course. My goal was testing the V3 Mono with a controlled feedback attenuation. All three units, Scarlett 2i2, Behringer Monitor1 and V3 Mono input, have balanced connections. I fed the V3 output to the Monitor1 input, then connected its output to the 2i2 input. Then came extensive testing.

Still, initial results were not impressive, certainly not adequate. Then I happened to read a thread here (unfortunately I can't find it again) in which a member tested the 2i2 for optimal results and found that the optimal input gain is around 10dB, coinciding with tests someone posted on YouTube. I tested the 2i2 input gain at 10dB and a few other gain settings, and I concur that 10dB input gain is optimal. It's also helpful that the 2i2 Gen 4 input gain setting is not on a potentiometer, so it's precise and stable once set.

I then ran tests of the 2i2-Monitor1 feedback from the V3 with the 2i2 input gain at 10dB and with various combinations of Monitor1 dial and REW output level, keeping the 2i2 input level at its sweet spot. Having a potentiometer for voltage control allows for fairly precise voltage to the 2i2 input. This was done for V3 output of 1W and 5W into an 8-ohm resistive load. Most testing was done at 5W, so I'll only present those results here.

I have a number of graphs to post that will document this, but it will take more time to go through them to post. I hope to have that all done today.
 
For those not familiar with the Focusrite Scarlett 2i2 Gen 4, amirm has a full review here. It's the basis for all of my measurements. On its own it was far from satisfactory for my needs; adding the Behringer Monitor1 in the feedback makes it far better. In fact, I suspect the Behringer might improve the performance of higher quality audio interfaces for amplifier measurement as well. These graphs are REW calibration results of the 2i2 alone and of the 2i2 with the Monitor1 in the loop at various sample rates.

The first one shows the offset due to the Monitor1 at its 100 (maximum) position, which still has some voltage drop (that surprised me), but it's apparent that it's only a level change. I believe these were all 64k FFTs.

Calibrations of Scarlett 2i2 Only and of 2i2 with Behringer Monitor1 in the Loop - Overlays - ...jpg


The second one shows an overlay where REW apparently normalizes the responses for nominal 0 level. The 2i2 and Monitor1 responses are essentially identical for any given sample rate which is what one would expect for a passive attenuator.

Calibrations of Scarlett 2i2 Only and of 2i2 with Behringer Monitor1 in the Loop - All SPL - M...jpg


One thing to note is that the Monitor1 is rated for a maximum input of 22dBu, identical to that of the Scarlett 2i2. This limits the input voltage to 9.75V, something to keep in mind. My usage is fine with that, as I'm not expecting to use the amp at high power; but if needed, a single voltage divider probe (or series resistor?) would bring it into the safe input range. I'll probably test that at some point.
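For reference, the dBu-to-volts arithmetic behind that 9.75 V figure (0 dBu is referenced to 0.7746 V RMS) can be sketched like this; the helper name is my own:

```python
def dbu_to_vrms(dbu: float) -> float:
    """Convert a level in dBu to RMS volts (0 dBu = 0.7746 V RMS)."""
    return 0.7746 * 10 ** (dbu / 20)

# The Monitor1 and the Scarlett 2i2 share a 22 dBu maximum input rating.
print(f"{dbu_to_vrms(22):.2f} V RMS")  # -> 9.75 V RMS
```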

The purpose of using the Monitor1 in the feedback loop is to provide a means to adjust the feedback voltage so that it is close to, or precisely at, the "sweet spot" of the 2i2 (or any other balanced audio interface, for that matter). My testing has shown that it works. That voltage is about 0.526V as measured. In fact, for the 2i2, hitting it is critical to get the best results for amplifier distortion testing: small changes away from the sweet spot produce increasing distortion from the 2i2 itself. It's limited, not useful for very low distortion amplifiers, but with better audio interfaces - and the testing offered by REW - adding the Monitor1 into the feedback provides a good measuring system "on the cheap".
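Putting the sweet-spot number to work: to land the 2i2 input at the measured ~0.526 V, the Monitor1 has to supply roughly the following attenuation (a quick sketch; the function name is mine, the 0.526 V and 5 W/8-ohm figures are from above):

```python
import math

SWEET_SPOT_VRMS = 0.526  # measured 2i2 sweet-spot input voltage from above

def required_attenuation_db(amp_vrms: float) -> float:
    """dB of attenuation needed to bring an amp output down to the sweet spot."""
    return 20 * math.log10(amp_vrms / SWEET_SPOT_VRMS)

# 5 W into an 8-ohm resistive load: V = sqrt(P * R) ~= 6.32 V RMS
v_5w_8ohm = math.sqrt(5 * 8)
print(f"{required_attenuation_db(v_5w_8ohm):.1f} dB")  # roughly 21.6 dB
```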

One point to emphasize. All testing can and should be done with the 2i2 output dial at maximum.

I'll have some results of measuring the Fosi V3 Mono later. All V3 Mono tests to follow are with the 2i2 input gain at +10dB.
 
For reference, this is the Scarlett 2i2 loopback response at the sweet spot, without the Monitor1.
Scarlett 2i2 Sweet Point for a Loopback -25.5dBFS.jpg

I have V3 Mono distortion tests to post. I'm going to start with the basic RTA again. The graph in post 1 above is not the 5W test; I was mistaken when I exported it. It's close to 12W, which gives a better distortion measurement than 5W. I've done some re-testing to ensure I'm getting repeatability and to replace the one above.

This is the V3 Mono at 5W. This is feedback to the Behringer Monitor1 then to the Scarlett 2i2 input, 64k FFT.
V3Mono 5W into 8 Resistive Load Scarlett 2i2 64kHz 10dB Gain Behringer Monitor1 @35 REW -18.48...jpg

This is puzzling. This is the same measurement, only with a 256k FFT. I found that simply changing to a higher FFT results in a lower noise floor, but slightly higher 2nd and 3rd harmonic components. It was consistent; I tested several times at different settings. HD components do tend to move around between measurements and while averaging, but the noise floor change at the higher sample rate seems to be consistent and repeatable. I don't think it has anything to do with having the Monitor1 in the loop.
V3Mono 5W to Behringer Monitor1 @35 to Scarlett 2i2 48kHz Ouput Max Input 10dB Gain -18.48dBFS...jpg

Here's an overlay to make it easier to see the difference. It was 48kHz vs 192kHz, not 64k (typo).
V3Mono 5W into 8 Resistive Load Scarlett 2i2 10dB Gain Behringer Monitor1 @35 REW -18.48dBFS 6...jpg

The key point in all of this is that the input to the Scarlett 2i2 can be set right at its sweet spot no matter the signal level. Of course, for signals higher than 22dBu it will be necessary to have a voltage divider probe or some other way to lower the signal ahead of the Monitor1, but even then the Monitor1 (or some other passive attenuator) can set the input to the audio interface to its sweet spot.
 
I am disappointed in one aspect, and I haven't found anything that improves it: the multitone distortion test. The REW 31-tone test at 5W is not what I expected. Higher V3 power output improves it, but the 5W result is poor. I need to do more testing to see if it's something I'm doing wrong in settings, hardware, or REW. I'll report results later.
 
The REW 31 point test at 5W is not what I expected. Higher V3 power output improves it, but the 5W is poor.
Mind posting your results?

Keep in mind that your average audio interface may not be designed to handle the output of a Class D power amp, which tends to be rich in ultrasonics that can wreak all kinds of havoc (both in the analog stages and on the ADC side - I would give 96 kHz with a 128k FFT a shot as well). Audio Precision specifically recommends using a steep lowpass filter with their gear. As an aside, HF distortion in your mic preamp is not exactly great either, so any wideband testing would show up its limits more easily.
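For anyone sketching such a filter: AP's recommended measurement filters are steep (higher-order LC) designs, but even the corner-frequency arithmetic for a simple first-order RC gives a feel for the component values involved. The values below are purely my own illustration, not an AP or Fosi recommendation:

```python
import math

def rc_cutoff_hz(r_ohms: float, c_farads: float) -> float:
    """-3 dB corner of a first-order RC lowpass: fc = 1 / (2*pi*R*C)."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

# Illustrative values: 1.6 kOhm with 1 nF puts the corner near 100 kHz,
# above the audio band but below typical Class D switching residue.
print(f"{rc_cutoff_hz(1600, 1e-9):.0f} Hz")
```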
 
Mind posting your results?

Keep in mind that your average audio interface may not be designed to handle the output of a Class D power amp, which tends to be rich in ultrasonics that can wreak all kinds of havoc (both in the analog stages and on the ADC side - I would give 96 kHz with a 128k FFT a shot as well). Audio Precision specifically recommends using a steep lowpass filter with their gear. As an aside, HF distortion in your mic preamp is not exactly great either, so any wideband testing would show up its limits more easily.
Yes, I will be posting. The problem is, again, REW, as in my other thread. I restarted REW and the measurements are now more what I expected, so I'll need to make new tests. I think it's tied to REW switching between sample rates. It may actually not be REW; it could be the ASIO driver for the Scarlett, so I can't do a process of elimination to find it. I will probably post later today. But your comment about the filter is a good point. I've read that elsewhere, but wondered if the V3 Mono is less prone to that issue given its PFFB implementation.

I was also trying 192kHz and 96kHz. REW would not complete a long average or a successful S-THD, part of what prompted me to restart it.
 
I have re-measured the V3/Monitor1/2i2 combo. These are the single-tone THD tests. I should note that I have not built the lowpass filter that might be beneficial for these tests of a Class D amp.

48kHz 256k FFT:
V3Mono 1kTone 5W into 8-ohm Monitor1 @35 2i2 48kHz SR Ouput Max Input 10dB Gain REW Gen -18.48...jpg

192kHz 256k FFT:
V3Mono 1kTone 5W into 8-ohm Behringer Monitor1 @35 Scarlett 2i2 192kHz SR Ouput Max Input 10dB...jpg

48kHz vs 192kHz 256k FFT:
V3Mono 1kTone 5W into 8-ohm Monitor1 @35 2i2 Ouput Max Input 10dB Gain REW Gen -18.48dBFS 256k...jpg


The next one is 48kHz after the V3 Mono is hot, much more than a five-minute warmup. The V3 is on supports to allow heat dissipation underneath it, but no fan (I do have a fan on order to test). If you compare this measurement with the first one above, you'll note that when it's hot the V3 has a fair bit more distortion in H2 and H3. THD+N is also a bit worse.
V3Mono 1kTone 5W into 8-ohm Monitor1 @35 2i2 48kHz SR Ouput Max Input 10dB Gain REW Gen -18.48...jpg

For comparison again, here is the proper test of mine against that by amirm:
Fosi Audio Mono V2 amplifier measurement 1kH tone test.jpg


For good measure here is an overlay of the Noise Floor from S-THD measurement for 48kHz vs 192kHz.
V3 Mono 5W to Behringer Monitor1 to Scarlett 2i2 256kFFT 48kHz vs 192kHz - Noise Floor.jpg
 
These are the MultiTone distortion tests of the V3/Monitor1/2i2 combo.

First, the test provided by amirm in his review:
Fosi Audio Mono V2 amplifier Multitone measurement.png


Now mine using the Scarlett 2i2/Behringer Monitor1.

48kHz 256k FFT:
V3Mono MultiTone 5W into 8-ohm Monitor1 @35 2i2 48kHz Ouput Max Input 10dB Gain REW Gen -18.48...jpg

192kHz 256k FFT:
V3Mono MultiTone 5W into 8-ohm Monitor1 @35 2i2 192kHz Ouput Max Input 10dB Gain REW Gen -18.4...jpg

48kHz vs 192kHz 256k FFT:
V3Mono MultiTone 5W into 8-ohm Monitor1 @35 2i2 Ouput Max Input 10dB Gain 48kHz vs 192kHz Samp...jpg

The difference is primarily due to the Noise Floor of the two.

I'm fairly satisfied with the results. They compare favorably to the APx555 results, at least for this level of amplifier distortion. A bit surprising. It's absolutely critical, though, to ensure that the input to the audio interface is as close to the sweet spot as possible, at least for interfaces of the Scarlett's class. I also have an E1DA ADCiso on order.
 
FYI, almost all of my testing is with a 4-ohm load. I see from the note in your graphs that you are using 8 ohms. That would reduce the amount of distortion.
Yes, that occurred to me. I was intent on just getting it working and checked out. I should run tests with a 4-ohm load. Plus there is the fact that it's a purely resistive load. In fact, I've had it running for a couple of hours now under load. I measured 106°F on the top panel and ran a test. That alone has driven the odd-order distortion higher.

One step at a time. I didn't plan to do this much when I started. Thanks for the encouragement.
 
The difference is primarily due to the Noise Floor of the two.
Which is more or less expected. If you use the same FFT length at 192k vs. 48k, you will have 4 times the bin width at 192k, which will capture 4 times as much noise power per bin = +6 dB.
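That bin-width argument can be written out directly (a sketch; a 256k FFT taken as 262,144 points):

```python
import math

def bin_width_hz(sample_rate: float, fft_length: int) -> float:
    """Width of one FFT bin in Hz."""
    return sample_rate / fft_length

def noise_per_bin_delta_db(fs_a: float, fs_b: float, fft_length: int) -> float:
    """Change in noise power per bin (dB) when moving from fs_a to fs_b
    at a fixed FFT length, assuming a flat noise floor."""
    return 10 * math.log10(bin_width_hz(fs_b, fft_length) /
                           bin_width_hz(fs_a, fft_length))

print(f"{noise_per_bin_delta_db(48_000, 192_000, 262_144):.2f} dB")  # ~ +6.02 dB
```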
 
I have more to report. I've done more testing, partly triggered by temperature measurements. I use common 10W sand-cast resistors in parallel for the 8-ohm load; one was 300°F! The REW gen value required for 5W into 8 ohms is -18.48dBFS. Working from my (poor) memory early on, I had entered -14.84dBFS. This turns out to be about 0.3dB below the gen level that produces the V3 Mono output (9.6V) that is close to the input limit of both the 2i2 and Monitor1 (9.75V). The V3 power out was actually about 11.5W for the measurements I provided above that I compared to those by Amir. That puts the V3 current output close: 1.12A at 4.47V (5W into 4 ohms) vs 1.2A at 9.6V (11.5W into 8 ohms). But when I set the gen appropriately for a true 5W, the 1k tone result was similar while the multitone results were significantly worse. Either the test scheme is flawed (the 2i2 the culprit) or the V3 Mono distortion changes more in the lower range than I would expect.
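The ~11.5 W figure follows directly from the 3.64 dB generator-level slip (a sketch using the numbers quoted above; the helper name is mine):

```python
import math

def power_from_offset_w(p_ref_watts: float, offset_db: float) -> float:
    """Power after raising the generator by offset_db above a reference power."""
    return p_ref_watts * 10 ** (offset_db / 10)

offset_db = -14.84 - (-18.48)          # entered level was 3.64 dB too high
p_actual = power_from_offset_w(5.0, offset_db)
v_actual = math.sqrt(p_actual * 8)     # voltage into the 8-ohm load
print(f"{p_actual:.1f} W at {v_actual:.1f} V")  # close to the ~11.5 W / 9.6 V above
```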

Testing had indicated that the 2i2/Monitor1 combo provides better distortion results at higher V3 power output while setting the Monitor1 feed to the 2i2 input at the same level (approximately -15dBFS). I'm a bit puzzled by that. The multitone distortion reported in REW for the V3 Mono is lower (better) than that of the 2i2 in loopback. I had thought that it could be no better than the inherent distortion of the 2i2, but it's completely repeatable, not a fluke, as shown in previous and subsequent measurements. So distortion at 5W is worse than at 11.5W? Is that due to Class D, or is it a deficiency in this measurement scheme?

I will provide additional measurements to demonstrate the results. I also have received the E1DA ADCiso to use for input testing. That may shed some light.
 
I found that simply changing to a higher FFT results in lower noise floor
Not really. The SNR figure remains (almost) the same.
The noise APPEARS lower because each FFT bin has a smaller width. This is perfectly normal and documented.
It will help in seeing the distortion peaks better.


By the way, there is an easy way to decrease the impact of 2i2 noise on your measurements:
this is called cross-correlation averaging.
If you use a Y cable after the Monitor1 and feed both inputs of the 2i2, you can activate that option in the distortion settings panel of REW.
(You need 2 different inputs for this to work.)
If you then allow 100 or more averages, you'll see a decrease in the ADC noise impact.
Theory says the benefit should be around 5×log10(<# averages>) dB.

You may then be limited by your 2i2 output, though.
In that case, combining both outputs (with another Y cable) will help by a few dB (2-3dB).
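The 5×log10(N) benefit can be checked with a quick synthetic-noise simulation (my own sketch, not REW data): averaging the cross-spectrum of two channels with independent noise drives the floor down, while an averaged auto-spectrum simply converges to the noise power.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_avg = 4096, 100  # FFT length and number of averages

auto = np.zeros(n // 2 + 1)
cross = np.zeros(n // 2 + 1, dtype=complex)
for _ in range(n_avg):
    # Two ADC channels seeing independent unit-variance noise (no signal).
    a = np.fft.rfft(rng.normal(0, 1, n))
    b = np.fft.rfft(rng.normal(0, 1, n))
    auto += np.abs(a) ** 2        # ordinary (auto-spectrum) averaging
    cross += a * np.conj(b)       # cross-spectrum: uncorrelated noise cancels
auto /= n_avg
cross /= n_avg

gain_db = 10 * np.log10(np.mean(auto) / np.mean(np.abs(cross)))
print(f"noise floor improvement ~ {gain_db:.1f} dB")  # theory: 5*log10(100) = 10 dB
```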
 
You can do wonders with a Scarlett 2i2.

Despite the absence of a filter at the output of the V3, the measurements are actually quite good.

I think you had a little luck connecting your V3 amp directly to the Scarlett: since it is a Class D amp with a non-symmetrical power supply, you brought 32 V/2 (about 16 V DC) to each input of the 2i2. I don't know how you connected your amp output? The input capacitors of the 2i2 must be holding that voltage well.

If I were you, I would put two safety capacitors in place before taking further measurements.

Concerning the multitone IMD measurement, we can see that the noise floor is limited by the noise floor of the Scarlett.

You can gain a few dB by increasing the level at the input of the Scarlett; maybe with your Monitor1 you can get there. If you go to "REW Gen = -11.3dB" you will gain ~6dB on your noise; at -11.3dB your peak time-domain signal will be at ~0dB.

Another way is to use cross-correlation in REW, as RJA400 said. It takes much longer, but it gets you much closer to the real noise floor. It doesn't easily get you all the way to the truth, though; I haven't managed it yet on a multitone signal.

Bravo for the analysis, thank you.
 
Not really. The SNR figure remains (almost) the same.
The noise APPEARS lower because each FFT bin has a smaller width. This is perfectly normal and documented.
It will help in seeing the distortion peaks better.


By the way, there is an easy way to decrease the impact of 2i2 noise on your measurements:
this is called cross-correlation averaging.
If you use a Y cable after the Monitor1 and feed both inputs of the 2i2, you can activate that option in the distortion settings panel of REW.
(You need 2 different inputs for this to work.)
If you then allow 100 or more averages, you'll see a decrease in the ADC noise impact.
Theory says the benefit should be around 5×log10(<# averages>) dB.

You may then be limited by your 2i2 output, though.
In that case, combining both outputs (with another Y cable) will help by a few dB (2-3dB).
Thanks for this feedback; I have a lot to learn about distortion and measuring it. I have started tests with the E1DA ADCiso, which also carries a recommendation (for reasons unique to it, I think) to feed the same mono signal to both inputs. The cable arrives today. I'll order cables to implement your suggestions as well. Initial tests with the 2i2/ADCiso combo point to the 2i2 output being a limiting factor; your suggestions should improve that.
 
I think you had a little luck connecting your V3 amp directly to the Scarlett: since it is a Class D amp with a non-symmetrical power supply, you brought 32 V/2 (about 16 V DC) to each input of the 2i2. I don't know how you connected your amp output? The input capacitors of the 2i2 must be holding that voltage well.

If I were you, I would put two safety capacitors in place before taking further measurements.
I knew it was a risk to directly connect the V3 Mono to the 2i2 input, but as long as the V3 power is controlled, it's safe enough. It just takes care. REW has the benefit of aborting any distortion test if the feedback signal exceeds a specified distortion level (1% by default, which I left unchanged). It has stopped many times during tests (even loopbacks can do that), so I felt it was safe enough without a voltage divider probe or safety caps. The use of the Monitor1 is solely to allow setting the 2i2 feedback voltage to the "sweet spot", which is the area of lowest inherent 2i2 distortion. The Monitor1 helps with safety as well, since it can be set to 0 at the start, preventing any excess voltage to the 2i2 if something is done wrong. So far so good; nothing damaging has occurred. For higher amp power tests a single voltage divider probe can be used in front of the Monitor1 to bring the voltage down, and the Monitor1 can still adjust for optimal input to the 2i2 (or any audio interface used).
 