I might be wrong, but hear me out.
As I understand it, when a distortion measurement is run on the Audio Precision analyzer at the usual levels of 86 dB SPL and 96 dB SPL, that measurement is entirely relative to the frequency response of that speaker: the drive level is fixed, so the actual SPL reproduced at each frequency follows the speaker's own response.
That means that if a speaker already rolls off around 120 Hz, it will of course show lower distortion at sub-bass frequencies, artificially making it look cleaner than it really is.
I noticed this very clearly when looking at the KEF R11 Meta measurements. That speaker has a fairly pronounced bass shelf of about -6 dB. So for the range below 100 Hz, the 86 dB SPL of that distortion measurement is really only about 80 dB SPL, a level that probably any semi-decent speaker can handle. I am not using this example to pick on KEF, only to illustrate my point. (Measurement btw: https://www.audiosciencereview.com/forum/index.php?threads/kef-r11-meta-tower-speaker-review.53282/)
Wouldn't it be possible, as part of the measurement, to use the Audio Precision frequency response result to create a FIR correction that completely linearizes the speaker from 20 Hz to 20 kHz, then make another distortion measurement normalized to 86 dB SPL (for me the level closest to real-life usage) and look at the "real" distortion behaviour? That way, all speakers would finally be directly comparable regarding their distortion behaviour. A rough sketch of what I mean is below.
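Something along these lines in Python (just a sketch: the sample rate, frequency points, response values and boost cap are made-up placeholders, not the actual R11 Meta data, and scipy's firwin2 frequency-sampling design is only one possible way to build such an inverse filter):

```python
import numpy as np
from scipy.signal import firwin2, fftconvolve

fs = 48000  # playback sample rate (placeholder)

# Placeholder anechoic on-axis response: frequency points in Hz and level in dB
# relative to the midrange. These numbers are illustrative only.
freqs_hz = np.array([20,  50, 100, 200, 500, 1000, 5000, 10000, 20000])
mag_db   = np.array([-8,  -4,  -1,   0,   0,    0,    0,    -1,    -3])

# Correction gain: whatever the speaker is missing, the EQ adds back,
# so speaker + EQ measures flat.
corr_db = -mag_db
# Cap the low-frequency boost so the woofers aren't driven absurdly hard.
corr_db = np.clip(corr_db, None, 12.0)
corr_lin = 10.0 ** (corr_db / 20.0)

# Frequency-sampling FIR design: gains must be specified from 0 Hz to Nyquist.
f_grid = np.concatenate(([0.0], freqs_hz, [fs / 2]))
g_grid = np.concatenate(([corr_lin[0]], corr_lin, [corr_lin[-1]]))
fir = firwin2(4097, f_grid, g_grid, fs=fs)

def linearize(test_signal: np.ndarray) -> np.ndarray:
    """Pre-filter the distortion test sweep so each frequency actually
    reaches the microphone at the intended SPL."""
    return fftconvolve(test_signal, fir, mode="same")
```

The cap on the boost is of course the debatable part: a fully linearized measurement really does push the woofers that much harder at the bottom end, which is exactly the behaviour the current method never shows.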
Please correct me if I am taking a wrong turn with my thinking here.