There was a discussion developing today on another thread about what is an acceptable (i.e. inaudible) level of nonlinear distortion from an electronic component. While we have a good sense of what kinds of claims are plainly ridiculous, the discussion reinforced for me just how unclear the actual answer to this question is.
This is of course partly because when the question is framed in terms of THD or IMD, it's plainly the wrong question. It's well-understood that the threshold will always depend not only upon these very blunt metrics, but also upon:
- frequency of interest
- relationship of distortion to signal (e.g. harmonic order, IM product, etc.)
- SPL
- program material
- other factors
Typically, studies of nonlinear distortion audibility thresholds have applied particular degrees and types of distortion to signals (often music excerpts), then used listening tests to establish audibility thresholds by checking whether subjects can reliably distinguish a distorted from an undistorted stimulus.
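To make the methodology concrete, here is a minimal sketch (my own illustration, not taken from any particular study) of how such a test stimulus might be generated, assuming a simple memoryless cubic nonlinearity as the controlled distortion:

```python
import numpy as np

def add_distortion(x, k3):
    """Apply a memoryless cubic nonlinearity x + k3*x^3 — a crude
    stand-in for the controlled distortion used in listening tests."""
    return x + k3 * x**3

fs = 48000
t = np.arange(fs) / fs                       # 1 second of signal
clean = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone
distorted = add_distortion(clean, k3=0.05)

# Estimate THD from the spectrum: harmonic power vs the fundamental.
spectrum = np.abs(np.fft.rfft(distorted)) / len(distorted)
fund = spectrum[1000]                        # bin spacing is 1 Hz here
harmonics = spectrum[[2000, 3000, 4000, 5000]]
thd = np.sqrt(np.sum(harmonics**2)) / fund
print(f"THD: {100 * thd:.2f}%")              # → THD: 0.31%
```

With these (arbitrary) numbers the cubic term yields roughly 0.3% THD, all of it third harmonic; real studies of course vary both the amount and the spectral character of the distortion.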
The huge problem with this is that, in most cases, the thresholds determined are actually well below the level of distortion produced by the particular reproduction system used in the test (this is unsurprising given the radically higher level of distortion transducers introduce relative to electronic devices). If the test system is incapable of producing lower levels of distortion than the thresholds being examined, we're already basically out at sea.
One way I can imagine circumventing this problem is to rephrase what we're examining in terms of masking. In other words, if a distortion component is below the masking threshold, it seems to follow that it will be inaudible.
If we look to studies of masking, we find that these actually do sidestep this key limitation of distortion audibility testing, because the masking studies generate only a primary tone or narrowband noise (the masker) and a secondary tone (maskee). Of course, there will be intermodulation products between these two signals when reproduced by loudspeaker transducers in the test setup, but these will lie well below the level of the secondary signal (maskee).
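As a quick sanity check on that claim, one can model the transducer as a weak polynomial nonlinearity and measure where the IM products land relative to the maskee. This is just a toy numerical sketch with arbitrarily chosen coefficients, not a model of any real loudspeaker:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
f1, f2 = 1000, 1200                          # masker and maskee (Hz)
masker = 0.5 * np.sin(2 * np.pi * f1 * t)
maskee = 0.05 * np.sin(2 * np.pi * f2 * t)   # 20 dB below the masker

# Crude stand-in for transducer nonlinearity (coefficients arbitrary):
x = masker + maskee
y = x + 0.01 * x**2 + 0.01 * x**3

spec = np.abs(np.fft.rfft(y)) / len(y)       # bin spacing is 1 Hz here

def level_re_maskee(f):
    """Level of the component at f, in dB relative to the maskee."""
    return 20 * np.log10(spec[f] / spec[f2])

# Second- and third-order intermodulation products:
for f in (f2 - f1, f1 + f2, 2 * f1 - f2, 2 * f1 + f2):
    print(f"IM product at {f} Hz: {level_re_maskee(f):.1f} dB re maskee")
```

With these coefficients every IM product comes out more than 40 dB below the maskee, consistent with the point that the test setup's own intermodulation sits well under the secondary signal.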
In other words, rephrasing nonlinear distortion audibility in terms of masking enables us to isolate just the fundamental(s) and the distortion components of interest (harmonics and IM products), freeing them from the limitations inherent in a test system that is asked to reproduce controlled degrees/types of nonlinear distortion on the false (but unavoidable) assumption that it will not itself introduce orders of magnitude more.
Anyway, this is obviously to some extent a hand-waving exercise on my part, but it nevertheless seems reasonable to speculate, on the basis of what we know about masking thresholds, that a distortion component should be inaudible if it falls below the masking threshold. The key to determining better nonlinear distortion audibility thresholds may well therefore lie in examining the problem from this angle.
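The proposed criterion itself reduces to a trivial comparison. The sketch below uses a toy masking curve with placeholder numbers (a fixed offset below the masker and a fixed roll-off per octave above it); a real implementation would substitute measured masking data, which varies with level, frequency and masker type:

```python
import math

def masked_threshold_db(f_component, f_masker, masker_level_db,
                        slope_db_per_octave=10.0, offset_db=-20.0):
    """Toy masking model (placeholder numbers, not psychoacoustic data):
    the masked threshold sits offset_db below the masker level at the
    masker frequency and falls by slope_db_per_octave above it."""
    octaves = math.log2(f_component / f_masker)
    return masker_level_db + offset_db - slope_db_per_octave * octaves

def component_inaudible(f_component, component_level_db,
                        f_masker, masker_level_db):
    """The proposed criterion: a distortion component that falls below
    the masked threshold should be inaudible."""
    return component_level_db < masked_threshold_db(
        f_component, f_masker, masker_level_db)

# 3rd harmonic at 40 dB SPL of a 1 kHz masker played at 90 dB SPL:
print(component_inaudible(3000, 40, 1000, 90))  # → True
```

The interesting work is all hidden in `masked_threshold_db`: replace the placeholder curve with genuine masking data and the comparison itself is the easy part.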
I don't have any answers here, but I wondered what others think of my reasoning. And of course, does anyone know whether this has already been discussed in the literature?