In practice, for companies with no digital correction, keeping tolerances small means demanding tight tolerances in the driver's performance, which, for those inside the industry, can be a real nightmare. The only parts of the driver we can trust to remain stable are the magnet (watching temperature if neodymium is used) and the motor structure; everything else is prone to high variability.
As an example, the cellulose used to make paper cones can never be trusted to retain its properties over time, and the manufacturing process offers little to no control over the density of the final product. Errors in Mms and damping will occur even within the same batch.
The suspension structure is another point where controlling variability is very hard. As I mentioned earlier, even the gluing process must be done incredibly accurately: when the amount of glue attaching two parts is not homogeneous, the rigidity of the whole system changes, which causes buckling, rub & buzz and, more importantly for this discussion, variations in the modal damping and in the frequencies at which the modes appear.
As for ATC, I would imagine they only select the drivers that fall inside the tolerances they accept. They produce the drivers themselves, so they decide the accuracy, but they can end up rejecting a lot of samples, and that is an added cost to the operation of the company.
For PSI, the added equalizer allows them to live with more variation in the drivers' response, correcting some of the imperfections that might appear.
For DSP-powered speakers, the driver problem doesn't bite as hard. Two options are normally used in the QC phase:
1 - You average the response of a batch of loudspeakers, while guaranteeing that nothing in the build changes over time, not even the glue model and brand used on the spider. You then correct with an FIR filter that is generic but good enough; it might let a peak or dip pass due to temperature drift or changes in Kms and Mms. This way you save time in the QC phase, because you can order the PCBs already flashed. It is a good option for large production scales.
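The batch-average idea can be sketched in a few lines of numpy. This is a minimal illustration, not any manufacturer's actual pipeline: the function name, the dB-domain averaging, and the boost cap are my assumptions. The cap matters because a generic filter is flashed onto every unit, so it must not over-correct the spread that individual samples will show around the batch average.

```python
import numpy as np

def batch_average_correction(responses, target, max_boost_db=6.0):
    """Generic magnitude correction derived from a batch of measured responses.

    responses    : array (n_units, n_freqs), magnitude response of each unit
    target       : array (n_freqs,), desired magnitude response
    max_boost_db : cap on correction gain, so unit-to-unit drift
                   (Kms, Mms, temperature) is not over-corrected
    """
    # Average in dB so no single outlier unit dominates the batch curve
    avg_db = np.mean(20.0 * np.log10(responses), axis=0)
    corr_db = 20.0 * np.log10(target) - avg_db
    # Limit the correction: one generic filter must tolerate sample spread
    corr_db = np.clip(corr_db, -max_boost_db, max_boost_db)
    return 10.0 ** (corr_db / 20.0)  # linear gain per frequency bin
```

The returned gain curve would then be turned into FIR taps (e.g. via frequency sampling) and flashed onto every PCB of the batch.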
2 - You measure each sample once in a stable environment and apply a least-squares algorithm to find the optimal filter that matches the response to the target. This approach is likely what Neumann does with the MK2 series; Genelec probably uses it inside GLM, and so does pretty much every DSP correction system. It is the most accurate way of matching pairs, because the cost function guarantees mathematically that they converge to the same target. Cheap, fast, precise.
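The per-unit least-squares fit above can be sketched directly. Again a minimal illustration under my own assumptions (function name, complex-exponential basis, stacking real and imaginary parts so the solver works in real arithmetic), not a claim about Neumann's or Genelec's actual code: we solve for FIR taps whose frequency response, multiplied by the measured driver response, best matches the target in the least-squares sense.

```python
import numpy as np

def design_matching_fir(f, h_meas, h_target, n_taps, fs):
    """Least-squares FIR so that (FIR response) * h_meas ~= h_target.

    f        : measurement frequencies in Hz (1-D array)
    h_meas   : complex measured driver response at f
    h_target : complex target response at f
    n_taps   : FIR length
    fs       : sample rate in Hz
    """
    # Desired response of the correction filter at each frequency
    d = h_target / h_meas
    # Fourier basis: row k = exp(-j*2*pi*f[k]*n/fs) over taps n
    n = np.arange(n_taps)
    A = np.exp(-2j * np.pi * np.outer(f, n) / fs)
    # Stack real and imaginary parts so lstsq solves a real problem
    A_ri = np.vstack([A.real, A.imag])
    d_ri = np.concatenate([d.real, d.imag])
    taps, *_ = np.linalg.lstsq(A_ri, d_ri, rcond=None)
    return taps
```

Because every unit is fitted against the same target, any two units coming off the line match each other as closely as the residual of this fit allows, which is exactly the pair-matching argument.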
Bang an FIR filter on top of this A-21 and it would probably be as flat as the KH150Mk2 on-axis; then, of course, we could still discuss THD, port distortion and directivity.
