One thing that is certain is that there's no reason to perform independent measurements of the dedicated amplifier in an active speaker. This follows logically: anything about the amplifier that matters will affect the sound coming out of the speaker, which can be measured the same way as any other speaker.
The more interesting question is the one you raise: are standalone amplifiers getting too much scrutiny?
My opinion on this is probably not as well informed as others', but my inclination is a qualified "yes". I say qualified because some of the measurements taken of amplifiers are not frivolous. Reading some of Erin's subwoofer testing the other day, it occurred to me that some of what nowadays applies in subwoofer measurements would also apply to measurements of standalone amplifiers.
Ideally, amplifiers would be measured in a way that translates directly to how the amplifier will affect the measurements of a speaker connected to it.
In my opinion, the most important measurement for any reasonably good amplifier is the signal level at the onset of clipping. With most reasonably good amplifiers, distortion is much, much lower than it is with almost any speaker. But once the output signal tries to swing beyond the supply rails and the amplifier begins to clip, the story changes in a hurry. It could almost be argued that what really matters is the rail voltage, so long as the amp doesn't overheat and go into protection when operated in a sustained manner at the onset of clipping, into a load no lower than the specified load. (And out of practical necessity, the specified load is implicitly non-reactive.) That's not to suggest this would be sufficient in 100% of cases, since there are cheap amplifiers whose distortion is audible even at levels well below clipping. But my personal take, with which others won't necessarily agree, is that with most any half-decent amplifier, the distortion at levels below clipping is so far beneath the distortion of most any speaker that you'd never notice the difference.

The analogy I sometimes use is to take a piece of paper and mark all over it at random, then throw in a couple of much smaller random marks. You don't even notice the extra marks unless you know what they look like and are trying very hard to pick them out from all the other marks.
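To make the "changes in a hurry" point concrete, here is a minimal sketch in Python. It uses idealized hard clipping at a hypothetical ±30 V rail (real amplifiers clip more gradually, but the qualitative jump is the same) and estimates THD from the FFT of a clipped sine:

```python
import numpy as np

def thd_of_clipped_sine(peak, rail, f0=1000, fs=96000, n=96000):
    """Hard-clip a sine at +/-rail and estimate THD from the FFT.

    n/fs = 1 s, so the 1 kHz tone fits an integer number of cycles
    and no windowing is needed; bin f0 is exactly the fundamental.
    """
    t = np.arange(n) / fs
    x = np.clip(peak * np.sin(2 * np.pi * f0 * t), -rail, rail)
    spectrum = np.abs(np.fft.rfft(x))
    fund = spectrum[f0]
    harmonics = spectrum[[k * f0 for k in range(2, 10)]]
    return np.sqrt(np.sum(harmonics**2)) / fund

rail = 30.0  # hypothetical supply rail, volts
for peak in (20.0, 29.9, 31.0, 35.0, 45.0):
    print(f"peak {peak:4.1f} V -> THD {100 * thd_of_clipped_sine(peak, rail):8.4f} %")
```

Below the rails the THD is essentially numerical noise; a few volts over, it is whole percentage points, far above anything a competent amplifier produces in normal operation.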
As concerns the signal level at the onset of clipping, it ideally should be specified on a logarithmic scale, i.e., in dB relative to the signal level corresponding to 1 Watt into the specified load. The specified load should be standardized, maybe at 6 Ohms. There should be a standard test to ensure that at a signal level 10% below the onset of clipping, the distortion is below a standardized threshold deemed insignificant, and another to ensure that at the onset of clipping, as stated in the manufacturer's specification, the amplifier can operate in a sustained fashion without triggering self-protection at a standard room temperature. If the manufacturers cannot agree on the allowed distortion at 10% below clipping, or on how it should be measured, it would be reasonable to have two, maybe even three standardized quality tiers, each with its own allowed distortion level and its own measurement standard for verifying that the amplifier complies with the tier the manufacturer claims.
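For concreteness, here is what that rating would look like in practice. The 6-ohm reference load and the rating scheme are just the proposal above, not any existing standard:

```python
import math

def clip_rating_db(clip_power_watts, ref_watts=1.0):
    """Clipping-onset level in dB relative to 1 W into the reference load."""
    return 10 * math.log10(clip_power_watts / ref_watts)

# Hypothetical amps rated by where they clip into the 6-ohm reference load
for watts in (50, 75, 100, 120, 200):
    print(f"clips at {watts:3d} W -> rated {clip_rating_db(watts):4.1f} dB re 1 W")
```

Note how 75 W and 120 W come out at 18.8 dB and 20.8 dB: on this scale, the difference looks as small as it actually is.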
The status quo is really pretty sad, because the manufacturers were never even able to agree on expressing amplifier power on a logarithmic scale instead of a linear one. The typical consumer believes that a 75 Watt amplifier is too wimpy and that a 120 Watt amplifier is a whole different thing. We know the difference is only about 2 dB, and that speakers vary by more than 2 dB in sensitivity. Yet that same consumer, who has a strong preference for the 120 Watt amp over the 75 Watt amp, pays little if any attention to differences in sensitivity when choosing speakers. I'm not suggesting that speaker buyers should pay closer attention to sensitivity, but that they should perhaps pay less attention to differences in amplifier power when the linear ratio is less than 2:1. The obvious fix would be for the manufacturers to stop specifying amplifier power in linear terms, but on their own they were never able to manage even that much.
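Spelling out the arithmetic behind that 2 dB figure, with sensitivity numbers that are hypothetical but typical:

```python
import math

def spl_at_power(sensitivity_db_1w_1m, watts):
    """SPL at 1 m for a given amplifier power (ignoring compression)."""
    return sensitivity_db_1w_1m + 10 * math.log10(watts)

# The 75 W -> 120 W "upgrade" buys about 2 dB...
print(10 * math.log10(120 / 75))   # ~2.0 dB
# ...while a modest 3 dB sensitivity difference between two speakers
# swamps it: an 88 dB/W/m speaker on 75 W plays louder than an
# 85 dB/W/m speaker on 120 W.
print(spl_at_power(88, 75))    # ~106.8 dB
print(spl_at_power(85, 120))   # ~105.8 dB
```

In other words, the speaker choice moves the needle more than the amplifier power rating that consumers fixate on.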