I'll be honest and say I'm missing the point here. From your link and your short comment I can only conclude we agree. If that's the case and I'm not mistaken, I thank you for the link.
My whole point was that too much power is the No. 1 amp-related killer of speakers (assuming no faults). And I won't let it go, because if we help spread the myth that strong amps are safe, we may end up causing damage to other people's gear (and that's honestly my only reason).
That said, I would avoid equating distortion used as an effect when playing an instrument with distortion from a clipping amp during audio reproduction. Here's why:
It risks people thinking distortion is no big deal, that a distorting audio amp is just like a distortion effect on an instrument, and that when they hear a weak amp distorting they can just carry on blasting it. I think the article you linked could be written even better.
When distortion is an effect (like the fuzz pedal the article mentions), you can have a very quiet reproduction (low SPL) of the distorted sound, which means you're creating the distortion effect with no actual stress on the amp or speakers. See what I'm getting at?
- When a weak amp creates distortion, it means that amp is being pushed past its limits.
- When distortion is just a sound effect, your amp and speakers need not be stressed at all.
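To make that second bullet concrete, here's a minimal sketch in plain Python (the sample rate, clip level, and playback level are all my own hypothetical numbers): the "fuzz" is created entirely in the digital domain, and the result is then played back quietly, so the reproducing amp never comes anywhere near its own clipping point.

```python
import math

def hard_clip(x, limit):
    """Symmetric hard clipping: the classic 'fuzz' transfer curve."""
    return max(-limit, min(limit, x))

# One period of a full-scale 100 Hz tone (hypothetical 48 kHz rate).
N = 480
tone = [math.sin(2 * math.pi * k / N) for k in range(N)]

# Step 1: create the distortion EFFECT digitally (clip at 70.7% of peak).
fuzz = [hard_clip(s, 0.707) for s in tone]

# Step 2: reproduce it quietly -- scale to 10% of full scale.
quiet_fuzz = [0.1 * s for s in fuzz]

# The waveform is heavily distorted, yet its peak is only ~0.07 of full
# scale: the amp playing it back is far from ITS clipping point, and the
# speaker sees very little power.
peak = max(abs(s) for s in quiet_fuzz)
print(round(peak, 4))  # 0.0707
```

The distortion lives in the signal, not in the reproduction chain; that is the whole distinction between an effect and a stressed amp.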
It is generally a good idea to keep the creation of music separate from the reproduction of music; the rules of the two are not all interchangeable.
I am not disagreeing with you about the myth that stronger amps are "safer". I'm responding to your apparent agreement with the JBL tech note that suggests matching speakers with amps whose output power is twice the speaker's rating. JBL claims that clipping from an underpowered amp can damage tweeters in a way a beefier amp would avoid. That's the myth the Elliott Sound Products article debunks.
I used to believe it too, since it sounded so plausible. It gets repeated often, and (seemingly) reputable companies such as JBL publish tech notes promoting it. I myself had a post telling others about it until I got corrected. (My post #92 in 2020 got corrected in post #95; that reminded me I still hadn't given the poster a "like".)
The theory sounds reasonable: clipping creates high-order harmonics, so clipping must put a lot of stress on tweeters. But this actually depends on the amp's clipping behavior. If the amp goes into oscillation or does other nasty things when clipping, sure. But what if the amp clips cleanly and recovers cleanly? So I decided to run some numbers myself.
The first case I simulated is a single 100 Hz tone. The original signal has a peak amplitude of 1 and the clipped signal is limited to 0.707, meaning the clipping amp can output half the power of the unclipped amp. The (hard, clean, symmetric) clipping sprays a bunch of (odd) harmonics up into the higher frequencies. But when you look at the FFT one decade (10x) above the fundamental, the spikes are at less than 1% of the fundamental's amplitude, meaning very, very little power is sent to the tweeter.
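Here's a sketch of that first case in plain Python (my own reconstruction, not the original simulation; the sample rate and the direct Fourier projection are my choices). A 100 Hz sine is hard-clipped at 0.707, and the odd harmonic amplitudes from 900 Hz up, roughly a decade above, all come out below 1% of the fundamental:

```python
import math

N = 4800      # samples per 100 Hz period (48 kHz rate, assumed)
CLIP = 0.707  # clip level: half the unclipped power

# One period of a full-scale 100 Hz sine, hard-clipped symmetrically.
clipped = [max(-CLIP, min(CLIP, math.sin(2 * math.pi * k / N)))
           for k in range(N)]

def harmonic(signal, n):
    """Sine-phase amplitude of the n-th harmonic via Fourier projection.
    (Symmetric clipping of a sine yields only odd sine terms.)"""
    num = len(signal)
    return 2 / num * sum(s * math.sin(2 * math.pi * n * k / num)
                         for k, s in enumerate(signal))

fund = harmonic(clipped, 1)  # ~0.818: the fundamental barely shrinks
# Odd harmonics a decade above the fundamental (900 Hz .. ~10 kHz):
highs = [abs(harmonic(clipped, n)) / fund for n in range(9, 100, 2)]
print(max(highs))            # the worst spike is under 1% of the fundamental
```

The worst offender is the 9th harmonic at roughly 0.9% of the fundamental, and the envelope falls off from there, so the tweeter-band power is tiny.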
In the second case I added a 2 kHz tone at 0.5 amplitude. After clipping, the amplitude of the 2 kHz tone actually decreased because of the insufficient amp power. There are distortion products, for sure, but the energy they add is small by comparison. The power delivered to the tweeter is therefore significantly less than if the amp didn't clip, which contradicts JBL's theory that a beefier amp is "safer" (at least in this case).
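And a sketch of the second case under the same assumptions (again my reconstruction, with my own clip level and projection method): add the 2 kHz tone at 0.5 amplitude, hard-clip the composite at the same 0.707, and project out the 2 kHz component. It comes out well below the original 0.5, i.e., clipping reduced the tweeter-band content rather than adding to it:

```python
import math

N = 4800      # one 100 Hz period at an assumed 48 kHz rate
CLIP = 0.707

# 100 Hz fundamental plus a 2 kHz tone (the 20th harmonic) at 0.5
# amplitude, hard-clipped symmetrically at the same level as before.
composite = [max(-CLIP, min(CLIP,
                            math.sin(2 * math.pi * k / N)
                            + 0.5 * math.sin(2 * math.pi * 20 * k / N)))
             for k in range(N)]

def harmonic(signal, n):
    """Sine-phase amplitude of the n-th harmonic via Fourier projection."""
    num = len(signal)
    return 2 / num * sum(s * math.sin(2 * math.pi * n * k / num)
                         for k, s in enumerate(signal))

hf_after = harmonic(composite, 20)  # 2 kHz content after clipping
print(hf_after)                     # noticeably below the original 0.5
```

The clipping flattens the 2 kHz ripple wherever the composite rides near the rails, so the 2 kHz component shrinks; the distortion products it sprays elsewhere don't make up the difference.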
The first lesson for me is that there is a lot of bunk out there in audio, and nothing can be taken for granted. The second lesson is to only get amps with good clipping behavior. That's why I advocate measuring how amps clip into real loads.