I must admit I'm not the biggest fan of the NT1-A's high-frequency peak, but it might work better on singing than on spoken voice. If OP needs to squeeze in
two mics, I'd probably look at a pair of AT2035s instead ($149 a pop)... purists may turn up their noses at them for being "just" electrets, but they sound good, their level handling is insane at 148 dB SPL (1% THD), and an EIN of 12 dB SPL(A) is still more than good enough in real life. (I read
one review where the guy put an AT2035 and an NT2-A next to each other and couldn't hear a difference in noise after compensating for levels. Both were swamped by ambient noise.)
Very good point. Off-axis response is not necessarily the strength of LDCs, simply due to capsule size (18-32 mm vs. a typical 16 mm), and even a type with decidedly good off-axis coloration like a Sennheiser MK4 (or Rode NT2-A, NT-1000, Shure PGA32) will have its limits beyond 45°. How did they record orchestras with a bunch of Neumann LDCs back in the day, then? I know the very first stereo trials were still A-B, and their stereo imaging is pretty wonky especially on headphones, but by the 1960s or so things apparently were pretty much figured out.
In return, inexpensive (cardioid) pencil mics often have a distinctive "small condenser" midrange coloration, a weak low end and a peaky top end that tend to drive me nuts. (Even some of the cheapest plastic-fantastic side-address jobs like the t.bone SC300, and whatever other names it may be sold under elsewhere, manage to get at least
inoffensive, even pleasing results on voice.) Mind you, I have never had a KM184 to play with, or even an Oktava or a Rode NT5 or M3. The most practical approach for SDCs may be getting ones that are well-behaved in terms of pattern and narrow-band colorations and then EQing them in software to compensate for the low-end dropoff if need be.
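The software-side compensation amounts to a low-shelf boost. A minimal sketch using the standard RBJ Audio EQ Cookbook biquad formulas; the corner frequency and gain here (150 Hz, +4 dB) are made-up illustration values, not a recipe for any particular mic:

```python
import math

def low_shelf(fs, f0, gain_db, S=1.0):
    """RBJ cookbook low-shelf biquad; returns (b, a) normalized so a[0] == 1."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    cosw, sinw = math.cos(w0), math.sin(w0)
    alpha = sinw / 2 * math.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    b = [A * ((A + 1) - (A - 1) * cosw + 2 * math.sqrt(A) * alpha),
         2 * A * ((A - 1) - (A + 1) * cosw),
         A * ((A + 1) - (A - 1) * cosw - 2 * math.sqrt(A) * alpha)]
    a = [(A + 1) + (A - 1) * cosw + 2 * math.sqrt(A) * alpha,
         -2 * ((A - 1) + (A + 1) * cosw),
         (A + 1) + (A - 1) * cosw - 2 * math.sqrt(A) * alpha]
    return [x / a[0] for x in b], [x / a[0] for x in a]

# e.g. +4 dB below ~150 Hz at 48 kHz to offset an SDC's low-end rolloff
b, a = low_shelf(48000, 150, 4.0)
# gain at DC (z = 1) should come out at the requested +4 dB
dc_gain_db = 20 * math.log10(sum(b) / sum(a))
```

Any DAW's stock shelving EQ does the same thing internally, so in practice this is just "turn the low shelf up a few dB".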
That would be a bummer. I would hope that any reputable manufacturer would not be taking such shortcuts unless plainly justified.
Rode's spec is "@ 1 kHz, 1% THD into a 1 kΩ load" (a 1 kΩ load is actually pretty severe - many inputs are 2-3 kΩ). 1 kHz / 1% seems to be pretty much the standard. Neumann even specifies 0.5% (which, depending on whether the 2nd or 3rd harmonic is dominant, sits 3-6 dB below the 1% point, so the TLM103 would be hitting 1% between 141 and 144 dB SPL), but with a footnote saying "measured as equivalent el. input signal". Hmm. If even
they are using this "shortcut", I would be inclined to believe that getting the microphone capsule itself to distort has to be extremely hard.
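The "3-6 dB" figure follows from how harmonics scale with level: an n-th harmonic's amplitude grows as input^n, so the distortion *ratio* grows as input^(n-1), and doubling THD (0.5% → 1%) takes 20·log10(2)/(n-1) dB more level. A quick sanity check:

```python
import math

def headroom_above_half_percent(n):
    """Extra level (dB) needed to go from 0.5% to 1% THD when the n-th
    harmonic dominates: harmonic amplitude ~ input**n, so the THD ratio
    grows as input**(n-1); doubling it takes 20*log10(2)/(n-1) dB."""
    return 20 * math.log10(2) / (n - 1)

print(round(headroom_above_half_percent(2), 1))  # 2nd harmonic dominant: 6.0 dB
print(round(headroom_above_half_percent(3), 1))  # 3rd harmonic dominant: 3.0 dB
# Hence a TLM103 rated 138 dB SPL @ 0.5% THD would reach 1% around 141-144 dB SPL.
```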
If that is correct, I would sooner believe that audible distortion encountered well below rated SPL is the result of accidentally overdriving the mic preamp. Said TLM103 at 138 dB SPL would be dishing out a whopping +13 dBu - that's not what you normally consider mic level! The AT2035 at 148 dB SPL would even output +23 dBu (~32 Vpp).
For comparison: A Focusrite Scarlett 2i2 input will handle +9 dBu tops, a Behringer UMC204HD will call it quits at +3 dBu (and I think even that may be a bit of a stretch, given that the ADC in the thing will clip around -3 dBFS or something, so let's say more like 0 dBu). And that's with the gain all the way down, of course. A Mackie 402VLZ4 mixer will in fact handle up to +21 dBu in, my old Behringer Q1002USB up to +12 dBu. Not sure what you'd need to make it to +23 dBu, an Earthworks ZDT preamp?
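For what it's worth, those output levels fall straight out of the sensitivity spec. A back-of-envelope check, assuming the AT2035's published sensitivity of about -33 dBV re 1 V/Pa (94 dB SPL = 1 Pa, 0 dBu = 0.7746 V rms, sine wave assumed for the Vpp figure):

```python
import math

DBU_REF_VRMS = 0.7746  # 0 dBu = 0.7746 V rms

def mic_output_dbu(sens_dbv_per_pa, spl_db):
    """Output level in dBu for a given SPL, from sensitivity in dBV re 1 V/Pa.
    94 dB SPL corresponds to 1 Pa; dBu = dBV + 2.2 (approx)."""
    return sens_dbv_per_pa + (spl_db - 94) + 2.2

def dbu_to_vpp(dbu):
    """Peak-to-peak volts for a sine at the given dBu level."""
    return DBU_REF_VRMS * 10 ** (dbu / 20) * 2 * math.sqrt(2)

out = mic_output_dbu(-33, 148)  # AT2035 at its rated max SPL
print(round(out, 1), "dBu /", round(dbu_to_vpp(out), 1), "Vpp")
# headroom check against the max input levels quoted above
for name, max_dbu in [("Scarlett 2i2", 9), ("UMC204HD", 3), ("402VLZ4", 21)]:
    print(name, "clips" if out > max_dbu else "ok")
```

By this reckoning even the Mackie's +21 dBu input would run out of headroom a couple of dB before the mic does - which rather supports the "it's the preamp, not the capsule" theory.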