
Focusrite Scarlett 2i2 (4th Gen) Interface Review

Rate this audio interface:

  • 1. Poor (headless panther)

    Votes: 26 19.0%
  • 2. Not terrible (postman panther)

    Votes: 73 53.3%
  • 3. Fine (happy panther)

    Votes: 36 26.3%
  • 4. Great (golfing panther)

    Votes: 2 1.5%

  • Total voters: 137
@amirm the ADC section is lacking the comparison chart, just wanted to make sure it's included :)

The problem is that it would not be relevant, since the rear XLR connectors are for microphones; the front ones are for line input.

Here is the difference, with 1.7Vrms into the rear XLR connectors, as Amir did (input gain +8dB):

[Attachment: 1732988088413.png]


And the same 1.7Vrms sent to front Line input (same input gain +8dB):

[Attachment: 1732988059010.png]


So it would need more testing to know what to compare it to. I don't know how Amir usually sets up this test (with what input voltage and what expected input attenuation, I mean).

Cheers
 
So it would need more testing to know what to compare it to. I don't know how Amir usually sets up this test (with what input voltage and what expected input attenuation, I mean).
"As much as the input can handle" and "as low as it will go", generally (within reason, of course; sometimes further attenuation is achieved in the digital domain which is obviously pointless). You should be unable to saturate the input with your Ultralite Mk5 as it "only" outputs about +20.5 dBu. It ought to be enough for a decent idea of input performance though.

Either way it appears that the line input has somewhat higher dynamic range since you're only losing 3.4 dB of SNR at 5.25 dB less resulting input level. That's odd. I would have expected everything to be going through the preamp with preceding passive (or active) attenuation for the line-in, so if anything dynamic range would be expected to be slightly worse. The specs would definitely support this as well (116 dB(A) mic vs. 115.5 dB(A) line).
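To make that arithmetic explicit, here is a quick sketch with the two deltas above plugged in (nothing here beyond the numbers already quoted):

```python
# Quick sanity check of the dynamic range comparison: if the line input only
# loses 3.4 dB of SNR while being driven 5.25 dB lower, then referred to the
# same full-scale point its dynamic range actually comes out ahead.
level_delta_db = 5.25   # line-in test level below the mic-in test level
snr_loss_db = 3.4       # SNR deficit measured at that lower level

# SNR scales dB-for-dB with signal level when the noise floor is fixed, so:
dr_advantage_db = level_delta_db - snr_loss_db
print(f"Line-in dynamic range advantage: {dr_advantage_db:.2f} dB")  # ~1.85 dB
```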
 
"As much as the input can handle" and "as low as it will go", generally (within reason, of course; sometimes further attenuation is achieved in the digital domain which is obviously pointless). You should be unable to saturate the input with your Ultralite Mk5 as it "only" outputs about +20.5 dBu. It ought to be enough for a decent idea of input performance though.
Correct, I’m not able to saturate the line in. That said, the Motu does not deliver its best performance at max level either.
But I thought there would be some sort of convention, like 2 or 4Vrms input.
Anyway, I get the feeling that the 2i2 does not much appreciate having the input gain pushed to get close to 0dBFS, whatever the input voltage. Since I don’t think that’s best practice anyway, I’d say it’s OK.
Either way it appears that the line input has somewhat higher dynamic range since you're only losing 3.4 dB of SNR at 5.25 dB less resulting input level. That's odd. I would have expected everything to be going through the preamp with preceding passive (or active) attenuation for the line-in, so if anything dynamic range would be expected to be slightly worse. The specs would definitely support this as well (116 dB(A) mic vs. 115.5 dB(A) line).
Yep, Line in has a better DR as you spotted. I’ll try to report all of that thoroughly.
 
I gave up buying any hardware products from Focusrite after they withdrew all software support for my Liquid-series stuff after only a couple of years. They had bought in the processing and didn't want to renew the license. So I don't trust them, and I don't find their products better than others at a similar price. There was also a huge delay in supporting their Thunderbolt products on Windows. Nah. No Focusrite hardware again for me.
 
I gave up buying any hardware products from Focusrite after they withdrew all software support for my Liquid-series stuff after only a couple of years. They had bought in the processing and didn't want to renew the license. So I don't trust them, and I don't find their products better than others at a similar price. There was also a huge delay in supporting their Thunderbolt products on Windows. Nah. No Focusrite hardware again for me.
That was around the same time they stopped supporting mine. The part that really got me was that, not long before that, E-MU did the same thing.
 
Focusrite Scarlett 2i2 4th Generation DAC Measurements
Let's start with our usual dashboard after setting the output volume level to 0dBFS (it goes up beyond that):
View attachment 409565
Well, this is a head-scratcher. We are miles away from the company spec of 109 dB SINAD. It is distortion-bound, so there is not much that can be done about it (or so I thought). This puts its ranking at the bottom of all interfaces tested:
View attachment 409566

By accident, I realized that if I set the volume to max (+6 dB?) but then attenuate the level in the source (analyzer output) to still get 4 volts, I get much better results:
View attachment 409567
How could this be? It is inverted: usually distortion is proportional to output level, yet here, maxing out the output lowers distortion! And how would the customer know to drop their maximum level to -2 dBFS? It makes no sense to me.

Hi Amir,

Now that we know the volume knob is analog attenuation and not so good, would you reconsider your ranking?

There are indeed two ways to get to your standard 4Vrms output for measurement: with the analog knob (not so good here), or with 2dB of digital attenuation, as you do with interfaces that have a digital knob (such as the Motu Mk5) or no volume control at all (all interfaces allow digital attenuation, though, I guess).

Because that means we would go from this chart (I reused your data and recreated the bar graph):

[Attachment: 1733217778494.png]

Focusrite 2i2 Gen4 Ranking, 4Vrms output via front Volume Knob

To this one:

[Attachment: 1733217798708.png]

Focusrite 2i2 Gen4 Ranking, 4Vrms output via digital attenuation (-2dBFS)


And that makes a big difference.

Anyway, going with max volume, we can see the ideal output is indeed 4 volts although the penalty for max volume of 5 volts is not much at all:
View attachment 409568

It would also align with the graph above, which is one of your standard measurements and uses all-digital attenuation. And from this graph we see SINAD above 100dB from 0dBu onward.

I never use the front volume control of the interface; I adjust digitally in the software (it's more precise), but not everyone does the same. So I think it's important to document the differences with the 2i2 Gen4, but I don't think it's fair to rank it based on analog rather than digital attenuation.
 
I am wary of being definitive here, as the result is circuit-dependent, but it used to be the case that reducing the level in software reduces the bit depth of the output signal. Of course, with 24-bit DACs this is not really noticeable, depending; with 16-bit it is more problematic. It is certainly possible to design devices with a separate digital attenuator so that this bit-depth reduction does not happen, but that adds cost, and I'd be wary of assumptions without explicit circuit knowledge: see e.g. -


Myself I always use the analog attenuator and keep the digital level at max. That is not the case for my monitoring system where I use a VCA fader which has the advantage of a very tight channel match.
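To put rough numbers on the bit-depth point above (a sketch only; the 12 dB attenuation is just an example):

```python
# Illustrative only: naive digital attenuation discards roughly one bit of
# signal above the quantization floor per 6.02 dB of attenuation.
def effective_bits(native_bits: int, attenuation_db: float) -> float:
    return native_bits - attenuation_db / 6.02

print(f"{effective_bits(16, 12):.1f}")  # ~14.0 bits left at 16-bit: can matter
print(f"{effective_bits(24, 12):.1f}")  # ~22.0 bits left at 24-bit: buried in analog noise
```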
 
Guess what, digital attenuation also has very tight channel matching without the distortion (and/or noise) penalty typically associated with VCAs. The 2i2 4th gen illustrates very nicely how in this day and age of preamp-grade DAC performance, an analog volume control stage can actually be detrimental. It still made sense with the old CS4272, but when using a CS43198, nope.

It's no longer 1999, and 24-bit and even 32-bit output have become commonplace. You're no longer using 16-bit samples if you can help it. It makes zero sense when DAC dynamic range is equivalent to 21-22+ bits.
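For reference, that bits-to-dB equivalence comes from the ideal-quantizer relation DR ≈ 6.02·N + 1.76 dB (the 127/133 dB inputs below are example figures, not any particular device's spec):

```python
# Convert a measured dynamic range in dB to "equivalent bits".
def bits_from_dr(dr_db: float) -> float:
    return (dr_db - 1.76) / 6.02

print(f"{bits_from_dr(127):.1f} bits")  # ~20.8 bits for a 127 dB device
print(f"{bits_from_dr(133):.1f} bits")  # ~21.8 bits for a 133 dB device
```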
 
I was considering one of these. So glad I didn’t waste my money.
 
I am wary of being definitive here, as the result is circuit-dependent, but it used to be the case that reducing the level in software reduces the bit depth of the output signal. Of course, with 24-bit DACs this is not really noticeable, depending; with 16-bit it is more problematic. It is certainly possible to design devices with a separate digital attenuator so that this bit-depth reduction does not happen, but that adds cost, and I'd be wary of assumptions without explicit circuit knowledge: see e.g. -
I got your point.

With the Focusrite 2i2 Gen4, the output volume, controlled by the front volume knob, does not appear to be digital, because it is nearly impossible to set exactly the same attenuation twice at the output. But I could be wrong, indeed.

It is true that precise analog attenuation can be obtained via digitally controlled attenuators, in which case the initial digital signal is untouched, since the attenuation happens after conversion.

But in the end, there's always the limit of the noise floor. So yes, lowering the volume in the digital domain (before conversion) reduces bit depth. But lowering the volume in the analog domain has a similar effect, since the limit is set by the noise floor (provided harmonic distortion does not get in the way): it reduces the available amplitude between the carrier and the noise floor, i.e. less resolution.

So the question becomes: which volume control mechanism offers the best performance for a desired output voltage? With the 2i2 Gen4, it's digital attenuation in the software when we want 4Vrms at the output and measure performance from a SINAD perspective. As shown by Amir, we get 94dB SINAD in one case and 109dB in the other, a massive difference.

Now, if we talk SNR, it's a different story, but that's not how ranking is done here. When lowering the volume of the 2i2 Gen4 with the knob, distortion increases and gets in the way of the SINAD calculation, which is odd. The degradation is less significant from an SNR perspective, but per my measurements I've still seen a 5dB SNR loss when lowering the volume via the knob vs. in software (still to get 4Vrms output), so that's one more reason to prefer digital attenuation with the 2i2 Gen4.

I personally am more concerned by SNR without harmonics than by distortion-driven SINAD, because the latter more easily hides itself in musical content. But my concern applies more to the input of an interface than to the output, I must say.


Myself I always use the analog attenuator and keep the digital level at max. That is not the case for my monitoring system where I use a VCA fader which has the advantage of a very tight channel match.

That anyway deserves testing to evaluate the best strategy, should one be concerned about losing a fraction of a bit/dB.
 
I am wary of being definitive here, as the result is circuit-dependent, but it used to be the case that reducing the level in software reduces the bit depth of the output signal. Of course, with 24-bit DACs this is not really noticeable, depending; with 16-bit it is more problematic. It is certainly possible to design devices with a separate digital attenuator so that this bit-depth reduction does not happen, but that adds cost, and I'd be wary of assumptions without explicit circuit knowledge: see e.g. -


Myself I always use the analog attenuator and keep the digital level at max. That is not the case for my monitoring system where I use a VCA fader which has the advantage of a very tight channel match.

AFAIK, in order to get the best SNR numbers on an analog potentiometer you still have to crank up the gain. Whenever you attenuate, you lose bit depth either way.
 
AFAIK, in order to get the best SNR numbers on an analog potentiometer you still have to crank up the gain. Whenever you attenuate, you lose bit depth either way.
This is not quite correct. It is a little more complicated. For one, with gain you also boost the noise floor of the input signal. So cranking up the gain and then attenuating would be counterproductive. OTOH, if the source is high enough in level without gain, attenuating can maintain the total dynamic range.
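A toy model of that point (all numbers invented): the source's own noise rides along with any gain you apply, so gain can never lift SNR above the source's own, while attenuating into a fixed downstream noise floor gives some of it away.

```python
import math

db = lambda ratio: 20 * math.log10(ratio)

v_sig, v_src_noise = 1.0, 10e-6   # 1 Vrms signal with a 100 dB source SNR (invented)
v_stage_noise = 5e-6              # fixed noise added by the gain/attenuation stage (invented)

for gain_db in (+12, 0, -12):
    g = 10 ** (gain_db / 20)
    out_noise = math.hypot(v_src_noise * g, v_stage_noise)  # uncorrelated RMS sum
    print(f"gain {gain_db:+3d} dB -> SNR {db(v_sig * g / out_noise):5.1f} dB")
# Prints roughly 99.9 / 99.0 / 93.0 dB: gain only approaches the source's 100 dB
# ceiling, while attenuation lets the stage's own noise eat into the result.
```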
 
There's also the issue of S/N (signal to noise) and how it is calculated and with what conditions.

Signal to noise (S/N) and signal in the presence of noise are not the same. Most test gear measures S/N (noise in the presence of signal) in DSP/FFT. The AP does that, AFAIK. Does Amir ever short the inputs of any device, like a preamp or power amp?


Traditional S/N is measured by shorting the inputs, measuring the residual noise and referencing that to the maximum/rated/test output. Shorting the inputs, especially on high gain interfaces, will result in a different S/N number than doing S/N in software.
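In code form, the traditional method boils down to this (the voltage values are examples, not measurements):

```python
import math

# Traditional S/N: measure the residual noise with the inputs shorted, then
# reference it to the rated/maximum/test output level.
v_residual = 27e-6     # Vrms with inputs shorted (example value)
v_rated = 4.0          # Vrms rated output used as the reference (example value)

print(f"S/N: {20 * math.log10(v_rated / v_residual):.1f} dB")  # ~103.4 dB here
```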

@NTTY how did you measure the baseline A/D noise in the interface? Did you ever short the inputs, or did you just wind the level pots up/down? With analog pots, winding them down to max attenuation is close to a short, but not a short. With an electronically controlled gain front end, you should consider all options.
 
I wonder if Focusrite meant THD rather than THD+N in their specs.

I have no issues getting the published -108 dBc THD from my Focusrite Scarlett Solo (Gen 3). The latest generation (Gen 4) is currently on sale for $120.

It does take a bit of fiddling to get there, though, especially for the ADC side of things. Here's my setup:

APx555B with Asio4All driver installed. Focusrite Scarlett Solo connected via USB and Asio4All driver to the APx500 interface. I cranked the volume control on the Focusrite to the max and adjusted the generator level in the APx500 software such that the Focusrite provided 1.000 V RMS as read on the AP dashboard. I then used the FFT analyzer of the APx555B to measure the harmonic spectrum at the output of the Focusrite:
[Attachment: Focusrite Scarlett Solo 3rd Gen, Harmonic Spectrum, DAC (-13.67 dBFS output -> 1.0 V RMS, 1 ...)]


That looks like -110 dBc THD to my eyes. The third harmonic is far enough down that it doesn't contribute significantly to the THD.
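For anyone following along, this is how such an FFT reading tallies up (the harmonic levels below are placeholders, not exact readings off the plot):

```python
import math

# THD is the power (RMS) sum of the harmonics relative to the fundamental.
harmonics_dbc = [-111.0, -122.0, -130.0]   # H2, H3, H4 (placeholder levels)

thd_power = sum(10 ** (h / 10) for h in harmonics_dbc)
print(f"THD: {10 * math.log10(thd_power):.1f} dBc")  # ~ -110.6 dBc
# Note how H3 at -122 dBc barely moves the total: H2 dominates.
```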

Testing the ADC is trickier. Here I used the sine generator in the APx555B to generate a clean 1.0 kHz sine wave with an amplitude of 1.0 V RMS. I adjusted the input gain on the Focusrite such that the 1.0 V RMS resulted in a sampled value of -1.0 dBFS to avoid saturating the input. In practice this is pretty easy to accomplish if all you need to do is to ensure that the input isn't clipping. Simply adjust the input gain until the VU meter indicator turns red and dial it back until it just turns yellow.

This is what I measured from the Focusrite Scarlett Solo ADC:
[Attachment: Focusrite Scarlett Solo 3rd Gen, Harmonic Spectrum, ADC (1.0 V input -> -1 dBFS, 1 kHz, 256k...)]


Keeping in mind the -1 dBFS fundamental, it looks like the THD measures about -109.5 dBc, maybe a bit worse as the harmonics could start to add up a little.

Connecting the output to the input, i.e., performing a loopback test with the DAC set to provide a 1 kHz sine wave at -13.67 dBFS, resulting in 1.0 V RMS at the output, yields this:
[Attachment: Focusrite Scarlett Solo 3rd Gen, Harmonic Spectrum, Loopback (-13.67 dBFS output -> 1.0 V RM...)]


So not quite the -108 dBc specified. But tweaking a bit further, reducing the output amplitude to -20 dBFS (478 mV RMS at the output) and readjusting the gain knob such that the ADC reads -1 dBFS, improves things further:
[Attachment: Focusrite Scarlett Solo 3rd Gen, Harmonic Spectrum, Loopback (-20 dBFS output -> 479 mV RMS,...)]


Now we're looking at about -110 dBc THD, so 2 dB better than the spec.

So I'd say Focusrite can reasonably claim -108 dBc THD for the Scarlett Solo. It would be nice if they listed their test conditions, though. Trouble is that they claim -108 dB THD+N and I don't see how they can meet that spec.

My only complaint about the Focusrite is that it's pretty noisy. I measure 27 µV RMS (unweighted, 20 Hz - 20 kHz) and 21 µV RMS (A-weighted). That's noisier than my power amps! Not fantastic for a line-level circuit. That also limits the THD+N to somewhere around -96 dB, which is not very impressive by today's standards.
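Here's the arithmetic behind that last claim, as a sketch; the floor depends entirely on the reference level you pick, which is why the operating point matters so much:

```python
import math

# Noise alone sets a floor under THD+N: floor = 20*log10(v_noise / v_reference).
v_noise = 27e-6   # Vrms, unweighted 20 Hz - 20 kHz, from above
for v_ref in (1.0, 1.55, 4.0):   # candidate reference levels (examples only)
    print(f"{v_ref:4.2f} Vrms ref -> THD+N floor {20 * math.log10(v_noise / v_ref):6.1f} dB")
# 1.00 Vrms -> ~ -91.4 dB; 1.55 Vrms -> ~ -95.2 dB; 4.00 Vrms -> ~ -103.4 dB
```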
@amirm, it would be insightful to see some noise measurements of the 2i2. Just a quick measurement (20 kHz BW, RMS, unweighted and A-weighted).

My point here is that if you want to get the best measured performance from the Focusrite Scarlett Solo and, thus, likely also from the 2i2, you will need to operate it at its sweet spot. That's the tradeoff you make when you buy a $120 audio interface. Higher end audio interfaces, including the RME ones mentioned earlier, undoubtedly deliver better performance ... but they also cost more. The RME ADI 2 DAC mentioned earlier sets you back $1300 and doesn't include an ADC... Ya think ya might get better performance for the 20 dB higher price? ;)

Tom
 
I wonder if Focusrite meant THD rather than THD+N in their specs.

Hi Tom,

It’s THD+N. Amir measured -109dB THD+N with 4Vrms output (which requires setting the test sine to -2dBFS) and with the volume knob set at max.

I measured -107.7dB under the same conditions. THD alone was around -117dBc. I previously documented this here and here.

Lowering the output with the volume knob degrades performance, so it's preferable to do it via digital attenuation.
APx555B with Asio4All driver installed. Focusrite Scarlett Solo connected via USB and Asio4All driver to the APx500 interface. I cranked the volume control on the Focusrite to the max and adjusted the generator level in the APx500 software such that the Focusrite provided 1.000 V RMS as read on the AP dashboard. I then used the FFT analyzer of the APx555B to measure the harmonic spectrum at the output of the Focusrite:

To lower the output of the Solo Gen3 to 1Vrms as you did (by attenuating the test sine from the digital generator), and because the Solo Gen3 outputs 15.5dBu, you had to run your test sine at roughly -13dBFS. With that much attenuation, you inevitably raised the noise floor relative to the signal (in dBr or dBc), and so it gets in the way of the SINAD calculation. This is why you got that impression.
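Said differently (a sketch with an invented dynamic range figure): when the analog noise floor is fixed, every dB taken off the test sine in the digital domain is a dB off the best achievable THD+N.

```python
# Noise-limited SINAD = device dynamic range + test-sine level (in dBFS).
device_dnr_db = 110.0   # hypothetical DAC dynamic range referenced to 0 dBFS
for sine_dbfs in (0, -2, -13):
    print(f"sine at {sine_dbfs:+3d} dBFS -> best-case THD+N {-(device_dnr_db + sine_dbfs):.0f} dB")
# 0 dBFS -> -110 dB; -2 dBFS -> -108 dB; -13 dBFS -> -97 dB
```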

Amir performs this test at 4Vrms, by his own convention, but ranked the 2i2 Gen4 with analog attenuation (via the volume knob) rather than with digital attenuation, where the performance is much better.
 
It’s THD+N. Amir measured -109dB THD+N with 4Vrms output (which requires setting the test sine to -2dBFS) and with the volume knob set at max.
I measured -107.7dB under the same conditions. THD alone was around -117dBc
Ah! I see that now. Thanks! Of course, you won't get the best performance if you run the ADC or DAC at 0 dBFS. The sweet spot seems to be around -4 dBFS for the DAC. This was measured with the volume control at maximum. 1 kHz test frequency, 20 kHz measurement bandwidth.
[Attachment: Focusrite Scarlett Solo 3rd Gen, THD+N vs Generator Level (1 kHz, 20 kHz BW, Max. Volume)]


Amir performs this test at 4Vrms, by his own convention
Therein lies the crux.

I just as arbitrarily picked 1 V RMS.

Thank you for your input. I really appreciate it. I'm looking at some devices for measuring distortion on the cheap, so this is good information. I'm a bit spoiled from working with the APx555 where you just plug stuff in and it works. :)

Tom
 
@NTTY how did you measure the baseline A/D noise in the interface? Did you ever short the inputs, or did you just wind the level pots up/down? With analog pots, winding them down to max attenuation is close to a short, but not a short. With an electronically controlled gain front end, you should consider all options.
Hi John, sorry for the delay to reply.

With Line in, calibration with 15dBu input and 4Vrms output (around -2dBFS output, so a bit more than 14dBu), which means an input gain of +6dB:
- Software calculation (with signal): -113.1dBr
- Input shorted: -115.5dBr

Cheers
 
Quick quick quick 'cause I never find time. It's a draft; I'll update this message when I get more time (a PC BSOD prevented more typing).

I performed measurements of the 2i2 inputs mostly using my Motu Ultralite mk5 as the generator, but also the Focusrite 2i2 itself (therefore in a loopback) as its DAC is better than its ADC.

Before measurements, I installed the Focusrite Control 2 Software.

As opposed to the output volume knob, the input gain knobs are digital and adjust the input gain in 1dB steps. It's therefore the same as doing it in the software.

This interface has only 2 inputs. Connecting an instrument like a guitar, or a line-level source (like a synthesizer), to a front input disconnects the corresponding rear XLR microphone input. So it's reasonable to have the microphone inputs at the back of the interface and the instruments at the front, since the latter are the ones we're most likely to swap often.

Amir tested the microphone inputs only, at 48kHz and with an input gain of, apparently, +9dB. So I tested at 96kHz, with different input gains and all input types, to complete the measurements.


Focusrite 2i2 Gen4 - Measurements (Line input)

I could not saturate the input, as my interface does not output more than 21dBu. So the Focusrite saturates above that, probably at the specified 22dBu (9.75Vrms).
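The dBu-to-volts conversion behind those figures (0dBu is defined as 0.775Vrms):

```python
# Convert dBu to Vrms: 0 dBu = 0.775 Vrms.
def dbu_to_vrms(dbu: float) -> float:
    return 0.775 * 10 ** (dbu / 20)

print(f"{dbu_to_vrms(22):.2f} Vrms")  # ~9.76 V, the specified saturation point
print(f"{dbu_to_vrms(21):.2f} Vrms")  # ~8.70 V, the ~21 dBu generator ceiling above
```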

The interface exhibits its best performance with an input gain of +6dB (whatever the input, by the way). Here is 4Vrms input with +6dB of gain applied:

[Attachment: 1733506378791.png]


We get a SINAD of 101.2dB in that case. Distortion is dominated by H3 on input 1 but by H2 on input 2, which is funny. If someone can hear the influence of that, using one input or the other could change the final result, and depending on the intended use, it could even be useful.

Here is the proof: with 2Vrms into input 2 and still +6dB of gain at the input, we get:

[Attachment: 1733506899966.png]


SINAD reaches nearly 102dB, which means we have consistent performance at lower input voltage. And we see that it's now H2-dominated. Distortion is very low anyway. This is very good for the main purpose of a low-cost interface.

A quick view of THD+N vs frequency, at 96kHz (instead of the 48kHz Amir used), with noise and distortion included up to 30kHz:

[Attachment: 2i2Gen4_THDNvsFreq_LineIn_4vrms.jpg]


It's better than what Amir reported, but this is the line input, not the microphone input he used (see below for the same with the microphone input).


Focusrite 2i2 Gen4 - Measurements (Microphone input)

With 2Vrms at the input and +6dB input gain, we get:

[Attachment: 1733507720834.png]


This is 100dB SINAD.

Increasing the input gain (to +8dB) to reach -1dBFS at the input does not please our interface:

[Attachment: 1733507878724.png]


We lost 4dB of SINAD (for only 2dB of additional input gain); we are just at CD audio performance, which is not too bad for a microphone input, that said.
Once again, +6dB of input gain is the comfort zone of the 2i2, especially from a distortion perspective. Try to stay as close to that as possible and adjust in your DAW to the final desired level for your mix.

Now, if I try to replicate Amir's measurement, with 1.7Vrms at the microphone input and +9dB input gain so that we get -0.14dBFS on the dashboard (as in his):

[Attachment: 1733508675843.png]


Voilà: our interface is unhappy with the high internal digital level that results from the increased input gain.

From a noise perspective, the noise increases only a little up to +12dB of input gain. After that, the more gain the more noise, and it's quasi-linear from +20dB of input gain onward (i.e. +1dB of input gain = +1dB of noise).
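A small model reproduces that shape (the two noise values below are invented, chosen only to illustrate the behaviour): a fixed converter noise floor plus an input-referred preamp noise that scales with gain.

```python
import math

adc_noise = 10e-6     # Vrms, fixed converter/output-stage noise (invented)
preamp_ein = 1e-6     # Vrms, input-referred preamp noise (invented)

prev = None
for gain_db in range(0, 41, 4):
    g = 10 ** (gain_db / 20)
    total_db = 20 * math.log10(math.hypot(adc_noise, preamp_ein * g))
    step = "" if prev is None else f"  (+{total_db - prev:.1f} dB)"
    print(f"gain {gain_db:2d} dB -> noise {total_db:6.1f} dBV{step}")
    prev = total_db
# Below ~+12 dB the fixed converter noise dominates (noise barely moves);
# above ~+20 dB the preamp term takes over and it tends to +1 dB per +1 dB.
```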

A quick view of THD+N vs frequency, at 96kHz (with the 1.7Vrms input Amir used, but at +6dB gain instead of +9dB), with noise and distortion included up to 30kHz:

[Attachment: 2i2Gen4_THDNvsFreq_LineIn_1.7vrms.jpg]


This is nearly the same performance as with the line input, so it's good.


Focusrite 2i2 Gen4 - Measurements (Instrument input)

Before I start talking about the Instrument input, I need to mention that the Focusrite 2i2 Gen4 does not appreciate at all being fed through an unbalanced cable at the input, unless... we push the "Inst" button to switch to Instrument input (on the front of the 2i2 or in the Focusrite Control 2 software).

Indeed, when using the line input with unbalanced connectors (TS cable or RCA to TS), the distortion increases a lot. Pressing the "Inst" button immediately reduces the distortion. The Instrument inputs are designed for much lower-voltage sources. A guitar, for instance, outputs a few mV, maybe 1V max, and uses a TS connector.

When the "Instrument" input is engaged, the interface automatically adjust the input gain to +7dB minimum.

So let's feed the 2i2 with 100mVrms, with the input gain adjusted to +31dB; we get:

[Attachment: 1733931872523.png]


Forget about the mains components (50Hz and 150Hz); they are due to the lack of decent galvanic isolation between the two computers I used.

We have a THD of -89dBr, and the SINAD is only 83dB because of the low signal level and the huge gain at the input (which inevitably increases the noise level too).

With a higher input level, like 250mVrms, and the input gain adjusted to +23dB:

[Attachment: 1733931916851.png]


You can see the noise is lower (-92dB) relative to the signal because the signal amplitude is higher and the input gain is lower. But the THD has increased to -78dBr. So now the SINAD (78dB) is dominated by distortion instead of noise.
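As a cross-check (assuming the -92dB figure above is the integrated noise relative to the signal), SINAD is just the power sum of noise and distortion:

```python
import math

thd_db = -78.0    # distortion relative to the signal, from above
noise_db = -92.0  # noise relative to the signal (assumed integrated figure)

sinad_db = -10 * math.log10(10 ** (thd_db / 10) + 10 ** (noise_db / 10))
print(f"SINAD: {sinad_db:.1f} dB")  # ~77.8 dB: distortion-dominated, as stated
```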

Even if this looks like high harmonic distortion (and it is, by today's standards of DAC resolution), it is not such a problem when recording an instrument, which by definition has a lot of harmonic content, much higher than this unwanted distortion. In other words, it will remain hidden.

So this instrument input does not like high-level signals. Anything playing above 0dBu (0.775Vrms) will upset it, and I'd recommend using the line input rather than the instrument input in that case. But do not use unbalanced connectors with the line input.

I hope all of this helps future users.

Cheers
 
My point here is that if you want to get the best measured performance from the Focusrite Scarlett Solo and, thus, likely also from the 2i2, you will need to operate it at its sweet spot.

Absolutely true. When I was using the two 2i2s (v2 and v3) I have here for measuring, much time was spent trimming input and output levels to find the sweet spot and then not touching the controls.
My only complaint about the Focusrite is that it's pretty noisy. I measure 27 µV RMS (unweighted, 20 Hz - 20 kHz) and 21 µV RMS (A-weighted). That's noisier than my power amps! Not fantastic for a line-level circuit. That also limits the THD+N to somewhere around -96 dB, which is not very impressive by today's standards.
@amirm, it would be insightful to see some noise measurements of the 2i2. Just a quick measurement (20 kHz BW, RMS, unweighted and A-weighted).

Input level controls/output level control positions, shorted/non-shorted inputs? SE or differential output? I can give you those values for the two 2i2s here if you clarify the conditions.
 
It's a draft; I'll update this message when I get more time (a PC BSOD prevented more typing).
Crikey, I don't think I've seen one of those pop up in ages (thankfully). Did it seem to point to any component in particular? What kind of hardware did you have again?

My only complaint about the Focusrite is that it's pretty noisy. I measure 27 µV RMS (unweighted, 20 Hz - 20 kHz) and 21 µV RMS (A-weighted). That's noisier than my power amps! Not fantastic for a line-level circuit. That also limits the THD+N to somewhere around -96 dB, which is not very impressive by today's standards.
Odd. @Julian Krause got about 108 dB(A) out of his 3rd gen Solo. Are you sure the copious ultrasonic noise isn't messing with your measurement setup?
 