Minphase problems are to be corrected with minphase solutions; it's as simple as that, for me at least.
RIAA EQs etc. are all minphase problems, which is intuitive (a quick digitization sketch follows).
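As an illustration only (not anything from the posts above), here is a minimal sketch of digitizing the RIAA playback de-emphasis curve, assuming the standard 3180/318/75 µs time constants, scipy, and an arbitrary 96 kHz rate. The poles and zeros land inside the unit circle, so the EQ -- and therefore its inverse -- stays minimum phase:

```python
import numpy as np
from scipy import signal

fs = 96000  # assumed sample rate for the sketch

# RIAA playback (de-emphasis) time constants
t1, t2, t3 = 3180e-6, 318e-6, 75e-6

# Analog prototype: H(s) = (1 + s*t2) / ((1 + s*t1)(1 + s*t3))
b_s = [t2, 1.0]
a_s = np.polymul([t1, 1.0], [t3, 1.0])

# Bilinear transform to a digital IIR; poles/zeros stay inside the unit
# circle, so the filter (and its 1/minphase inverse) remain minimum phase.
b_z, a_z = signal.bilinear(b_s, a_s, fs)

z, p, k = signal.tf2zpk(b_z, a_z)
print("max |zero| =", np.abs(z).max(), " max |pole| =", np.abs(p).max())
```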
Less intuitive is the correction of, for example, a single post echo (a later Dirac pulse slightly lower in level than the main one in the IR). The FR is the typical comb-filter pattern with finite nulls. This, too, is a minphase problem. You could simply curve-fit a bunch of minphase notch filters to the comb-filter pattern, and that fully replicates the impulse doublet. Its inversion (1/minphase remains minphase) then corrects it, as the sketch below illustrates.
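A minimal sketch of that idea, assuming scipy and made-up numbers for the echo delay and level. Rather than curve-fitting notches, it inverts the doublet directly, which gives the same minimum-phase result: the zeros of 1 + a*z^-N all lie inside the unit circle when |a| < 1, so the exact inverse is a stable, causal (minimum-phase) IIR.

```python
import numpy as np
from scipy import signal

N = 120   # echo delay in samples (hypothetical)
a = 0.5   # echo level relative to the main pulse, must be < 1

# Impulse doublet: main pulse plus a single lower-level post echo.
# As a transfer function this is H(z) = 1 + a*z^-N (minimum phase for |a| < 1).
h = np.zeros(N + 1)
h[0], h[N] = 1.0, a

# Its inverse 1/H(z) is the IIR y[n] = x[n] - a*y[n-N]; running the
# doublet through it should give back a single Dirac pulse.
x = np.zeros(1024)
x[0] = 1.0
echoed = signal.lfilter(h, [1.0], x)          # apply the doublet
corrected = signal.lfilter([1.0], h, echoed)  # apply its exact inverse

print(np.abs(corrected[1:]).max())  # ~0: the post echo is fully removed
```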
A DAC reconstruction filter is a task with no exact spec of phase gender; anything from linphase to minphase can be used, though -minphase ("maximum phase", pre-ringing only) is of course not useful. A sketch comparing the two usable extremes follows.
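Again only an illustration with assumed parameters (255 taps, 20 kHz cutoff at 44.1 kHz -- not a real DAC spec): a linear-phase reconstruction lowpass and a minimum-phase version with roughly the same magnitude, built with the standard real-cepstrum (homomorphic) folding trick. The linear-phase filter rings symmetrically around its centre tap; the minimum-phase one peaks at the start and rings only afterwards.

```python
import numpy as np
from scipy import signal

fs = 44100
h_lin = signal.firwin(255, 20000, fs=fs)  # linear-phase prototype

def to_minimum_phase(h, n_fft=8192):
    """Real-cepstrum construction of a minimum-phase FIR with
    approximately the same magnitude response as h."""
    H = np.maximum(np.abs(np.fft.fft(h, n_fft)), 1e-12)
    cep = np.fft.ifft(np.log(H)).real
    # Fold the anticausal half of the cepstrum onto the causal half.
    w = np.zeros(n_fft)
    w[0] = 1.0
    w[1:n_fft // 2] = 2.0
    w[n_fft // 2] = 1.0
    return np.fft.ifft(np.exp(np.fft.fft(w * cep))).real[:len(h)]

h_min = to_minimum_phase(h_lin)

print("linear-phase peak tap:", np.argmax(np.abs(h_lin)))   # ~127 (centre)
print("minimum-phase peak tap:", np.argmax(np.abs(h_min)))  # near 0
```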
First, I agree that, ideally, matching an analog filter with an IIR would be nice. That doesn't always work as a direct conversion, even in the case of my DolbyA band splitting with nice, wide skirts.
So, when considering the realities of bilinear conversion (or similar options), and then the alternatives -- which might include higher-order filters, non-matching frequency responses, even stranger time delays, etc. -- it is best to make an engineering decision instead of a cookbook one, if possible. Trying to follow the cookbook can put one into an impossible situation.
To 'correctly' solve the band-splitting problem with a straight 2nd-order IIR filter on a 44.1k sample-rate signal, there would have to be an upconversion to a higher rate so that the IIR filter could be kept intact as 2nd order, but then the upconversion has issues also (the sketch below shows the warping problem at the original rate). I *have* been considering some complete 'engineering' design solutions where the up/down conversion has shortcuts instead of cookbook, but there be dragons. Right now, I have a beautiful solution that works -- and any impairment associated with not being linear phase is FAR overwhelmed by the mostly INFINITELY superior DolbyA decoding abilities.
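A minimal sketch of the warping issue, with an assumed 9 kHz 2nd-order band-split lowpass as a stand-in (illustrative only, not the actual design): a straight bilinear transform at 44.1k cramps the response toward the 22.05 kHz Nyquist, while the same 2nd-order section run at 4x the rate tracks the analog skirt closely.

```python
import numpy as np
from scipy import signal

f_c = 9000.0  # hypothetical band-split corner
# 2nd-order analog lowpass prototype with the wide, gentle skirt.
b_s, a_s = signal.butter(2, 2 * np.pi * f_c, btype='low', analog=True)

f = np.linspace(100.0, 21000.0, 400)
_, H_analog = signal.freqs(b_s, a_s, 2 * np.pi * f)

for fs in (44100.0, 4 * 44100.0):
    # Straight bilinear transform at each rate. At 44.1k the response is
    # cramped toward Nyquist (forced null at 22.05 kHz); at 4x it is not.
    b_z, a_z = signal.bilinear(b_s, a_s, fs)
    _, H_dig = signal.freqz(b_z, a_z, worN=f, fs=fs)
    err = 20 * np.log10(np.abs(H_dig) / np.abs(H_analog))
    print(f"fs = {fs:8.0f} Hz: worst deviation from analog = {np.abs(err).max():5.1f} dB")
```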
I DO have an upconversion design idea that eliminates the need for a complete upconversion/downconversion infrastructure without needing Nyquist filtering -- but for now, the nice emulation of the IIR front ends by simple FIR filters with a crafted frequency response that matches the desired analog response works MUCH better than a direct IIR equivalent (a simplified sketch of that frequency-matching idea is below). (I have studied the Sony DolbyA patent very carefully; it misses a lot of practical aspects of the design, even though the practical aspects are at the essence of the patent. Using the DolbyA patent would have theoretically eliminated some of the 'close to Nyquist' matters, but it actually doesn't. It was best to totally avoid any aspect of the patent anyway.) When REALLY doing something that pushes the bounds and doing it accurately -- sometimes the cookbook gets thrown away.
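To be clear, the sketch below is not the poster's actual method -- only a simplified, magnitude-only, linear-phase variant of the general idea, with assumed parameters (the same hypothetical 9 kHz 2nd-order corner, 127 taps): sample the desired analog response on a grid up to Nyquist and fit a plain FIR to it with scipy's firwin2, so the skirt near Nyquist follows the analog target instead of the bilinear's cramped curve.

```python
import numpy as np
from scipy import signal

fs = 44100.0
f_c = 9000.0  # hypothetical band-split corner again

# Desired analog magnitude response of the 2nd-order prototype,
# sampled from 0 to Nyquist.
b_s, a_s = signal.butter(2, 2 * np.pi * f_c, btype='low', analog=True)
f = np.linspace(0.0, fs / 2, 257)
_, H = signal.freqs(b_s, a_s, 2 * np.pi * f)

# Short linear-phase FIR fitted to that magnitude; the grid values near
# Nyquist keep the analog skirt rather than forcing a null there.
h = signal.firwin2(127, f, np.abs(H), fs=fs)

# Check the match against the analog target.
w_chk, H_fir = signal.freqz(h, worN=2048, fs=fs)
_, H_ref = signal.freqs(b_s, a_s, 2 * np.pi * w_chk)
err_db = 20 * np.log10(np.maximum(np.abs(H_fir), 1e-9) /
                       np.maximum(np.abs(H_ref), 1e-9))
print("worst-case magnitude mismatch over 0..Nyquist: %.2f dB" % np.abs(err_db).max())
```

Note that this toy version is linear phase while the analog prototype is minimum phase -- which is exactly the kind of trade-off the engineering-versus-cookbook point is about.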
I think what bothers me the most is basing a choice on the idea of 'pre-ringing'. (Gibbs isn't ringing, even though the term is used in common parlance.) Basing a filter-type choice on essentially non-existent 'pre-ringing' is basically a strange decision. However, if one is worried about 'time delay' through the filter, or time skews vs. frequency (smearing of the transients), or whatever -- those are decisions based on real things.
So -- it is nice to think that 'min phase' for 'min phase' is a good cookbook truism, but a 'min phase' FIR or IIR application might not come directly from an analog filter prototype, because there are so many aspects of the decision that might point to a 'linear phase' solution for an analog (minimum-phase) filter.
Understanding the problem being solved and the currently known solution space, and then making a true engineering decision, is your best bet.
Sometimes, the known solution space isn't enough, so it is important for the engineer to expand their solution space. (I have to 'expand' my own solution space from time to time -- it is called 'learning'.)