Understanding the distinction is kinda important in digital audio though.
The filters you are going to see out in the analog world are generally IIR = minimum-phase filters, including RC and LC networks. For a given filter performance they take the shortest time for the signal to traverse the passband, but that time is not constant across frequencies (group delay variation), which is reflected in the phase response. In the digital domain, IIR filters involve feedback, which is potentially problematic as it allows computational inaccuracy (quantization error) to accumulate. Nowadays we do have the required accuracy available though.
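To illustrate the group delay variation (a rough sketch, all parameters assumed, using scipy): a 4th-order Butterworth low-pass, a classic minimum-phase IIR design, delays signals near the cutoff noticeably longer than signals deep in the passband.

```python
# Rough sketch (parameters assumed): the group delay of an IIR filter
# (here a 4th-order Butterworth low-pass) is not constant across frequency.
import numpy as np
from scipy import signal

fs = 48_000                                # sample rate in Hz (assumed)
b, a = signal.butter(4, 10_000, fs=fs)     # 10 kHz cutoff (assumed)
w, gd = signal.group_delay((b, a), fs=fs)  # group delay in samples vs. frequency

# Delay near the cutoff is noticeably longer than deep in the passband:
print(f"~100 Hz: {gd[np.searchsorted(w, 100)]:.2f} samples")
print(f"~9 kHz:  {gd[np.searchsorted(w, 9_000)]:.2f} samples")
```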
FIR = linear-phase filters always take a constant time (group delay) for signals to get through at every frequency, as they are essentially based on time delays. They were a favorite in early digital filters since quantization errors cannot build up. In the analog world, you may be familiar with comb filtering effects, which are FIR in nature. The effect is also exploited in SAW (surface acoustic wave) filters, e.g. as RF band filters in mobile phones, or as SAW resonators to build oscillators in the GHz range that is way out of reach for quartz crystals.
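A small demonstration of the constant group delay (filter parameters assumed, using scipy): a windowed-sinc FIR low-pass has symmetric taps, and its group delay sits flat at (numtaps - 1) / 2 samples across the passband.

```python
# Sketch (parameters assumed): a linear-phase FIR has symmetric taps
# and a constant group delay of (numtaps - 1) / 2 samples.
import numpy as np
from scipy import signal

fs = 48_000                                      # assumed sample rate
taps = signal.firwin(numtaps=63, cutoff=10_000, fs=fs)

# The impulse response is symmetric -> linear phase.
assert np.allclose(taps, taps[::-1])

# Group delay is (63 - 1) / 2 = 31 samples at every passband frequency.
w, gd = signal.group_delay((taps, [1.0]), fs=fs)
passband = w < 8_000
print(gd[passband].min(), gd[passband].max())    # both ~31 samples
```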
In the time domain, linear-phase FIR filters have a symmetrical impulse response, which gives rise to some peculiarities, including periodic passband ripple being linked to a sort of pre-echo effect (as discussed e.g. by Julian Dunn). Also, a complex filter invariably necessitates a long group delay, which for a fancy anti-alias or reconstruction filter at single speed can exceed a millisecond. When you ideally want <3 ms total for A/D + processing + D/A when monitoring in a recording application, that can turn into a problem.
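To put a rough number on that latency (tap count assumed purely for illustration): the group delay of a linear-phase FIR is (numtaps - 1) / 2 samples, so even a moderately steep filter at single-speed rates eats a good chunk of a 3 ms monitoring budget.

```python
# Back-of-envelope latency of a linear-phase FIR (tap count assumed).
fs = 48_000               # single-speed sample rate
numtaps = 127             # assumed length for a fairly steep filter
latency_ms = (numtaps - 1) / 2 / fs * 1e3
print(f"{latency_ms:.2f} ms")  # over a millisecond for this one filter alone
```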
FIR and IIR filters can be implemented to give the exact same magnitude response, as seen in modern ADCs and DACs (the AK557x parts come to mind). They will each have their own time-domain and phase idiosyncrasies though.
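A quick sanity check of that claim (filter choices assumed, not the actual AK557x designs): sample the magnitude response of an IIR Butterworth, then fit a linear-phase FIR to the same magnitude with scipy's firwin2. The magnitudes line up closely, while the FIR's delay is a flat (numtaps - 1) / 2 samples and the IIR's group delay varies with frequency.

```python
# Sketch (filters assumed): matching an IIR's magnitude with a linear-phase FIR.
import numpy as np
from scipy import signal

fs = 48_000
b, a = signal.butter(4, 10_000, fs=fs)        # IIR prototype (assumed)

# Sample the IIR magnitude on a grid from 0 to fs/2...
w, h = signal.freqz(b, a, worN=512, fs=fs)
freqs = np.concatenate((w, [fs / 2]))
gains = np.abs(np.concatenate((h, [h[-1]])))

# ...and design a linear-phase FIR approximating the same magnitude.
taps = signal.firwin2(255, freqs, gains, fs=fs)
_, h_fir = signal.freqz(taps, worN=w, fs=fs)

# Magnitudes match closely; the time-domain behavior does not.
print(np.max(np.abs(np.abs(h_fir) - np.abs(h))))
```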