Like with SINAD, it's not always possible to tell the distortion and noise apart in THD+N vs. frequency plots, so we have to go with the lenient noise threshold again.

[attachment 25303]
I believe this thread is the best place to ask this question. I'm having a discussion with an ASR member about the practice and interpretation of conventional single-tone THD+N vs. frequency measurements. In my view, whenever we make and publish measurements, we should consider the audibility of any observed anomaly. We might measure some aspect of a device for a purely technical reason, but if an anomaly stems from an inaudible aspect (for instance, a noise-shaping-induced peak of -70 dBFS at 50 kHz), the result can be misleading. Of course, people familiar with measurement procedures and settings will have no problem interpreting what lies behind each plot, but we should keep in mind that quite a few people can easily be misled by what is shown.
As for best practice in measuring THD+N vs. frequency, I believe Amir normally uses BW = 90 kHz. But when ultrasonic noise dominates THD+N and causes poor results, he measures the device again with BW = 45 kHz (see the plot attached above).
But I do not think Amir chose a BW of 90 kHz, or even 45 kHz, because he believes distortion and noise above 20 kHz are audible. He must have chosen those bandwidths because THD produced by fundamentals in the treble region cannot be measured properly otherwise, i.e., without a BW wide enough to include harmonic products above 20 kHz. Sure, really strong content above 20 kHz could destroy a tweeter, but while that is conceivable with strong fundamental tones, such damage is unlikely to come from a device's harmonics or noise unless something is defective.
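As a quick back-of-the-envelope illustration of that point (my own sketch, not anything from Amir's reviews; highest_order is a hypothetical helper), here is how many harmonic orders each analysis BW can even contain for a given fundamental:

```python
# For a fundamental f0 and an analysis bandwidth bw, the highest harmonic
# order that can still fall inside the measurement is floor(bw / f0).
def highest_order(f0_hz: float, bw_hz: float) -> int:
    return int(bw_hz // f0_hz)

for f0 in (1_000, 5_000, 10_000, 15_000):
    summary = ", ".join(
        f"BW {bw // 1000} kHz -> up to H{highest_order(f0, bw)}"
        for bw in (20_000, 45_000, 90_000)
    )
    print(f"f0 = {f0:6d} Hz: {summary}")

# A 10 kHz fundamental keeps only H2 under a 20 kHz BW but up to H9 under
# 90 kHz; for a 15 kHz fundamental, "up to H1" means no harmonics fit at
# all below 20 kHz (H1 is the fundamental itself).
```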
So, my question is: in making THD+N vs. frequency measurements of hi-fi audio devices, is it okay to use an even narrower BW, such as 20 Hz to 20 kHz? By BW I do not simply mean half of the DUT's Fs; the DUT's actual Fs can be set to anything, such as 96 or 192 kHz. With a 20 kHz BW for the THD+N calculation, THD+N in the treble region will not be technically accurate, since it excludes higher-order harmonics above 20 kHz. Still, I think it can be a better representation of the device's intended function, i.e., hi-fi audio serving human hearing.
Here's an example: [attached plot]
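To make the idea concrete, here is also a rough Python sketch of a band-limited THD+N computation. The function thdn_band_limited, the crude notch width, and the synthetic test signal (a 1 kHz tone plus an inaudible -70 dB spur at 50 kHz standing in for a noise-shaping hump) are all my own illustrative assumptions, not any analyzer's actual algorithm:

```python
import numpy as np

def thdn_band_limited(x, fs, f0, bw_lo=20.0, bw_hi=20_000.0, notch_rel=0.05):
    """THD+N ratio of a single-tone capture, integrating residual power
    only inside [bw_lo, bw_hi]. notch_rel sets the width of the band
    treated as 'fundamental' around f0 (a crude notch; real analyzers
    do much better)."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n))) ** 2   # power spectrum
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    in_band = (freqs >= bw_lo) & (freqs <= bw_hi)
    fund = np.abs(freqs - f0) <= notch_rel * f0          # bins of the test tone
    residual = spec[in_band & ~fund].sum()
    return np.sqrt(residual / spec[fund].sum())          # 20*log10(...) for dB

# Toy signal: 1 kHz tone, a -70 dB spur at 50 kHz, and a low noise floor.
fs = 192_000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 1_000 * t)
     + 10 ** (-70 / 20) * np.sin(2 * np.pi * 50_000 * t)
     + 1e-6 * rng.standard_normal(fs))

for bw in (20_000, 45_000, 90_000):
    db = 20 * np.log10(thdn_band_limited(x, fs, 1_000, bw_hi=bw))
    print(f"BW = {bw:6d} Hz -> THD+N = {db:6.1f} dB")

# The 90 kHz reading is dominated by the inaudible 50 kHz spur (about
# -70 dB), while the 20 kHz reading reflects only the audible band.
```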
Of course, I do NOT mean we should ignore everything above 20 kHz, since it can point to issues that do affect a device's audible behavior (e.g., IMD). Nor do I suggest that only one setting should always be used; multiple settings should be tried to check for anomalous behavior. But if the alternative results have been examined (and found to be fine) and only one plot is to be presented to readers for simplicity's sake, which should be the default?
I vote for the 20 kHz BW, so that THD+N vs. frequency plots remain informative even in cases where a device's (inconsequential) ultrasonic response would otherwise corrupt the THD+N figures in the audible frequency range.
I wonder what ASR members think about this topic.
EDIT: I once showed THD+N vs. frequency plots in which a 20 kHz BW was used but the sweep ran only up to 10 kHz, because the results above 10 kHz simply reflect noise levels with no harmonic components. My current practice, however, is to use a 20 kHz BW and sweep up to 20 kHz as well (even though the results in the treble range do not fully capture the harmonics).
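For illustration, here is a toy sweep in the same vein; it reuses thdn_band_limited, fs, t, and rng from the sketch above, and fake_dut is a completely made-up device model, not any real DUT:

```python
def fake_dut(f0):
    """Hypothetical DUT: 0.01% H2, 0.003% H3, plus a flat noise floor."""
    return (np.sin(2 * np.pi * f0 * t)
            + 1e-4 * np.sin(2 * np.pi * 2 * f0 * t)
            + 3e-5 * np.sin(2 * np.pi * 3 * f0 * t)
            + 1e-6 * rng.standard_normal(len(t)))

for f0 in (1_000, 2_000, 5_000, 10_000, 15_000, 20_000):
    db = 20 * np.log10(thdn_band_limited(fake_dut(f0), fs, f0))
    print(f"{f0:6d} Hz -> THD+N = {db:6.1f} dB")

# Up to about 10 kHz the reading tracks the simulated distortion (about
# -80 dB); beyond that, even H2 falls outside the 20 kHz band and the
# reading collapses to the noise floor, as described in the EDIT above.
```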