If you watch closely, there is growing recognition in the industry that we need to move away from specifying amplifier output capability as "power" measured with sine waves. IMHO, the FTC method is holding back the development of more meaningful industry standards.
Speaker sensitivities are now specified as voltage sensitivity (i.e. dB SPL @ XXX V rms at 1 m). So why do we need to convert amplifier power into voltage and then calculate SPL to see whether the amp is powerful enough, instead of just being given the voltage output capability? Of course, the amp spec must also include enough information to tell us whether the amp can supply the current demanded by the speaker's load impedance.
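For illustration, here is a minimal sketch (Python, with function names of my own choosing) of the conversion that the power-based convention forces on us: turn the power rating into a voltage, then apply the voltage sensitivity spec to estimate SPL. It assumes a sensitivity referenced to 2.83 V rms at 1 m and simple free-field inverse-square falloff; real rooms add gain, so treat it as a rough estimate.

```python
import math

def amp_power_to_voltage(p_watts: float, load_ohms: float) -> float:
    """RMS voltage corresponding to a power rating into a given load: V = sqrt(P * R)."""
    return math.sqrt(p_watts * load_ohms)

def spl_from_voltage(sens_db: float, v_rms: float, v_ref: float = 2.83,
                     distance_m: float = 1.0) -> float:
    """Estimated SPL from a voltage sensitivity spec (sens_db dB SPL @ v_ref V rms, 1 m).

    Uses 20*log10 voltage scaling and free-field inverse-square distance falloff.
    """
    return sens_db + 20 * math.log10(v_rms / v_ref) - 20 * math.log10(distance_m)

# Example: a 100 W / 8 ohm rating is ~28.3 V rms; an 85 dB @ 2.83 V / 1 m
# speaker driven at that voltage would reach roughly 105 dB SPL at 1 m.
v = amp_power_to_voltage(100, 8)
print(f"{v:.1f} V rms -> {spl_from_voltage(85, v):.1f} dB SPL @ 1 m")
```

If the amp spec simply stated its maximum output voltage (plus current capability into low impedances), the first step would be unnecessary.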
Below is a clip from the ANSI/CTA-2034 standard (the standard covers more than just the spinorama). When reporting the required amplifier "power", it calls for listing the amplifier output voltage, with the implicit requirement that the amp be able to deliver the current demanded by the speaker impedance. It also accounts for a 12 dB crest factor.
You can see in the example they give (a pair of the example speakers in a typical room of 200-600 sq ft at a typical listening distance of 4 m) that driving them to the "loud" rating of 95 dB SPL full range requires 158 W into 8 ohms, i.e. a clipping threshold of at least 50 V peak -- not a handful of watts.
[Attachment 202983: excerpt from the ANSI/CTA-2034 amplifier power reporting section]
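Checking the arithmetic behind those figures (my own back-of-envelope, not text from the standard): 158 W into 8 ohms corresponds to about 35.6 V rms, whose sine peak is about 50 V, matching the stated clipping threshold; and program material with a 12 dB crest factor whose peaks just reach that threshold averages only about 20 W long-term:

```python
import math

# Figures from the CTA-2034 example quoted above.
p_sine_w = 158.0        # required "power" into 8 ohms (sine-equivalent rating)
load_ohms = 8.0
crest_factor_db = 12.0  # peak-to-rms ratio of the program material

v_rms = math.sqrt(p_sine_w * load_ohms)   # ~35.6 V rms
v_peak = v_rms * math.sqrt(2)             # ~50.3 V -> the "minimum 50 V" clipping threshold

# With a 12 dB crest factor, program peaks sit 12 dB above the program's
# rms level, so the long-term average power is far below the peak demand.
v_program_rms = v_peak / 10 ** (crest_factor_db / 20)   # ~12.6 V rms
p_avg_w = v_program_rms ** 2 / load_ohms                # ~20 W average

print(f"{v_rms:.1f} V rms, {v_peak:.1f} V peak, {p_avg_w:.1f} W long-term average")
```

That gap between the ~20 W long-term average and the 50 V peak swing is exactly why a continuous power number alone understates what the amp must actually be able to deliver, and why a voltage (clipping threshold) spec is more directly useful.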