But let me ask this: I'm currently reviewing an audio interface, so what specific tests would you want to see in an audio interface review? I get the feeling learning this stuff will take time, but if I know what to research I at least have a starting point.
What kind of tests you can carry out will depend on the tools at your disposal. User interface, build quality, driver stability and quirks - that sort of thing should take little more than your own senses. Roundtrip latency can be measured easily by routing just one channel of a stereo pair through the interface under test and determining the delay in your DAW of choice; you just need a good test signal with an abrupt start (maybe a snare hit or similar) or some other distinctive characteristic that is easily identified visually in waveform view.
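If you'd rather not eyeball waveforms, cross-correlating the recorded loopback against the original test signal gives you the delay in samples directly. Here's a minimal Python sketch assuming you've already played a test file through the loopback and captured the return starting at the same moment; the filenames are placeholders, and keep in mind your DAW's latency compensation may already be offsetting part of the delay:

```python
# Round-trip latency from a loopback recording, via cross-correlation.
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate_ref, ref = wavfile.read("test.wav")      # the signal you played (placeholder name)
rate_rec, rec = wavfile.read("loopback.wav")  # what came back (placeholder name)
assert rate_ref == rate_rec, "sample rates must match"

# Take one channel of each and work in float
ref = (ref[:, 0] if ref.ndim > 1 else ref).astype(np.float64)
rec = (rec[:, 0] if rec.ndim > 1 else rec).astype(np.float64)

# The lag of the correlation peak is the round-trip delay in samples
corr = correlate(rec, ref, mode="full")
lag = int(np.argmax(np.abs(corr))) - (len(ref) - 1)

print(f"round-trip latency: {lag} samples = {1000 * lag / rate_ref:.2f} ms")
```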
If the maximum output level and the line input level are close, loopback testing with RMAA and/or REW will tell you something about the combined qualities of the A/D + D/A chain. By turning down the output, you may even be able to test the mic input from minimum to maximum gain (noise will generally be excessive unless you have an analog attenuator, but it should give you a pretty good idea of preamp and ADC nonlinear distortion and frequency response).
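If you just want a quick sanity check without RMAA/REW, the core of what those tools do for distortion is straightforward: record a loopback sine, notch the fundamental out of the spectrum, and compare what's left against it. A crude sketch along those lines (the filename and the notch width of a few bins are my assumptions; the real tools do this far more carefully):

```python
# Crude THD+N estimate on a recorded loopback sine.
import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("sine.wav")            # recorded ~1 kHz loopback tone (placeholder)
x = (x[:, 0] if x.ndim > 1 else x).astype(np.float64)
x -= np.mean(x)                               # remove any DC offset
x /= np.max(np.abs(x))

win = np.hanning(len(x))
spec = np.abs(np.fft.rfft(x * win))
freqs = np.fft.rfftfreq(len(x), 1 / rate)

peak = int(np.argmax(spec))                   # fundamental bin
# Treat the fundamental as the peak bin plus a few neighbors (window leakage);
# everything else counts as distortion + noise.
notch = slice(max(peak - 4, 0), peak + 5)
total = np.sum(spec ** 2)
fund = np.sum(spec[notch] ** 2)
thdn = np.sqrt((total - fund) / fund)         # residual RMS relative to fundamental RMS

print(f"fundamental: {freqs[peak]:.1f} Hz, THD+N ~ {20 * np.log10(thdn):.1f} dB ({100 * thdn:.4f} %)")
```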
Getting an idea of mic input noise is generally quite helpful. Subjectively, a lowish-impedance dynamic mic (the Shure SM7B being a very common and notorious example) should give you a decent impression, particularly if you have a few different candidates to compare with... that should weed out any real turkeys quickly.
Ideally, though, you want to determine the input noise level over a typical 20 kHz bandwidth with a known source impedance plugged into the input (usually either a short or 150 ohms).
That takes an absolute level calibration. For this, you would first take a low-impedance output and measure its output voltage at a known level near 0 dBFS. This requires a multimeter with at least a low-voltage AC range, preferably one that is TrueRMS capable - take note of the valid frequency range, as the simple ones are generally optimized for mains frequencies. Then you would reduce the digital generator level by a known amount, enough not to overdrive the mic input (maybe around -10 dBFS in). This then allows you to reference input dBFS levels to analog voltage levels, for both signal and noise. Once you have that, you unplug your reference source and swap in your short or 150-ohm resistive noise source.
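The arithmetic for turning those readings into an equivalent input noise (EIN) figure is simple enough to script or sanity-check by hand. A small sketch with made-up placeholder numbers - substitute your own measurements, and note the gain setting must stay the same between the calibration step and the noise measurement:

```python
# EIN arithmetic from the calibration above. All values are made-up
# placeholders; plug in your own readings. Gain must not change between
# the calibration and the noise measurement.
import math

v_out = 1.228        # multimeter reading (V RMS) at a known level near 0 dBFS
pad_db = -10.0       # known digital level reduction before feeding the mic input
in_dbfs = -12.3      # dBFS level the mic input then reports

v_in = v_out * 10 ** (pad_db / 20)            # voltage actually at the mic input
v_fs = v_in / 10 ** (in_dbfs / 20)            # input voltage corresponding to 0 dBFS

noise_dbfs = -104.5  # measured noise level with the 150 ohm source, same gain
v_noise = v_fs * 10 ** (noise_dbfs / 20)      # input-referred noise voltage
ein_dbu = 20 * math.log10(v_noise / 0.7746)   # 0 dBu = 0.7746 V RMS

print(f"full scale = {v_fs:.4f} V RMS, EIN = {ein_dbu:.1f} dBu")
```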
Determining input dBFS levels can be done using rather rudimentary software tools; even Audacity will do if its meter dB range is set large enough. At 44.1 kHz, applying Audacity's A-weighting EQ curve should give decently accurate A-weighted results as-is; at higher rates, a steep 20 kHz lowpass is required in addition, as A-weighting is not defined beyond that.
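If you'd rather script the measurement instead, the analog A-weighting curve from IEC 61672 is easy to map to a digital filter via the bilinear transform. A sketch assuming a 16-bit recording of the noise floor in noise.wav (a placeholder name); as noted above, the curve's accuracy degrades toward half the sample rate, so the high-rate lowpass caveat still applies:

```python
# A-weighted noise level from a recording. The filter is the IEC 61672
# analog A-curve mapped to digital with the bilinear transform.
import numpy as np
from scipy.io import wavfile
from scipy.signal import bilinear, lfilter

def a_weighting(fs):
    # Analog prototype corner frequencies (Hz) and 1 kHz normalization
    f1, f2, f3, f4 = 20.598997, 107.65265, 737.86223, 12194.217
    A1000 = 1.9997
    num = [(2 * np.pi * f4) ** 2 * 10 ** (A1000 / 20), 0, 0, 0, 0]
    den = np.polymul([1, 4 * np.pi * f4, (2 * np.pi * f4) ** 2],
                     [1, 4 * np.pi * f1, (2 * np.pi * f1) ** 2])
    den = np.polymul(np.polymul(den, [1, 2 * np.pi * f3]), [1, 2 * np.pi * f2])
    return bilinear(num, den, fs)

rate, x = wavfile.read("noise.wav")           # placeholder filename, 16-bit assumed
x = (x[:, 0] if x.ndim > 1 else x).astype(np.float64) / 32768.0
b, a = a_weighting(rate)
xw = lfilter(b, a, x)
rms = np.sqrt(np.mean(xw ** 2))
# Referenced to a full-scale square wave; the full-scale-sine convention reads ~3 dB higher
print(f"A-weighted noise: {20 * np.log10(rms):.1f} dBFS")
```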
I suggest you take a look at @Julian Krause's channel; he's got a pretty decent set of tests he runs these days (he's also got a video where he explains how to determine input noise).
If you want to fully characterize a modern high-performance interface, you ideally want a full-blown audio analyzer like an Audio Precision, a dScope, or an R&S UPV or UPL, but those cost thousands or tens of thousands of dollars even used. The tighter your budget, the more limited you'll be and the more resourceful you'll have to get. The converter performance alone isn't actually that expensive - an RME ADI-2 Pro FS should generally be as good as any of them, for example (while still offering substantial flexibility in terms of levels), a QuantAsylum analyzer isn't a slouch either (their new QA402 should have been out by now but seems to have been delayed), and if you just need a superb DAC you don't need to spend more than a few hundred either - but it's the signal conditioning and automation that'll cost you dearly. For just a few spot checks, you can often cobble something together quite inexpensively.