
How audible is distortion?

tomelex

Addicted to Fun and Learning
Forum Donor
Joined
Feb 29, 2016
Messages
990
Likes
572
Location
So called Midwest, USA
Thanks Tom for carrying the torch. I'll add only one point about the "loudness war" issue. That isn't about normalizing music so the loudest peak hits at 0 dBFS (or close to it). Rather, it's about using compression and other processes to make the average level louder. So soft parts are raised in volume making everything the same, which makes listening tedious and tiring.

Yep, of course you are right, poor choice of word on my part. Thanks for correcting.
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
A useful source of information I wasn't aware of until a couple of days ago was a Gearslutz thread about evaluating DA/AD loops using DiffMaker, https://www.gearslutz.com/board/gea...ing-ad-da-loops-means-audio-diffmaker-50.html. The tool itself is not really useful because of its limitations, but the uploaded conversion captures are. I've looked at a couple, and clear glitching and distortion is present in some; conversely, the "good ones" are very impressive in their level of accuracy, judged purely by visual inspection.

So, "pro" gear is certainly not transparent by default - there are levels of competence, and the better units should be audibly closer to invisible. I aim to try to make sense of what's there - see if there's a pattern, and what may be learned in terms of sound signatures correlating to actual flaws in the working of the circuits.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
A useful source of information I wasn't aware of until a couple of days ago was a Gearslutz thread about evaluating DA/AD loops using DiffMaker, https://www.gearslutz.com/board/gea...ing-ad-da-loops-means-audio-diffmaker-50.html. The latter is not really useful because of its limitations, but all the uploads of data of conversion captures are. I've looked at a couple, and clear glitching and distortion is present in some; conversely, the "good ones" are very impressive, in their level of true accuracy - purely via visual inspection.

So, "pro" gear is certainly not transparent, by default - there are levels of competence, and the better ones should be audibly more invisible. I aim to try and make sense of what's there - see if there's a pattern , and what may be learned in terms of sound signatures correlating to actual flaws in the working of the circuits.
Could you post any examples of glitches and distortion? I just saw a big table of numerical results.

How does this test cope with small benign(?) phase shifts and attenuation at the bottom end that may be caused by AC coupling? Or things going on at the outer reaches of the top end filtering? (minimum phase versus linear phase?).
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,836
Likes
16,501
Location
Monument, CO
Was that directed at me? What glitches are you asking about? What test?

Note that linear phase implies constant group delay and is usually preferred to preserve pulse integrity; FIR filters are usually linear phase. Minimum phase can be designed for minimum group delay (natch), but since different frequency components then have different delays, pulses are more "smeared". However, minimum-phase filters don't usually have pre-ringing (there are examples illustrating the sound; I frankly have a hard time telling unless I am really focused on it), and they generally use IIR structures that can be smaller and easier to implement.
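The linear-phase/constant-group-delay connection can be checked numerically. A minimal sketch in plain Python (the 5-tap filter below is an arbitrary illustration, not any particular DAC's filter): a symmetric FIR has the same group delay, (N-1)/2 samples, at every frequency.

```python
import cmath

def freq_response(taps, w):
    """H(e^jw) = sum_k taps[k] * e^(-j*w*k)."""
    return sum(t * cmath.exp(-1j * w * k) for k, t in enumerate(taps))

def group_delay(taps, w, dw=1e-6):
    """Numerical group delay: -d(phase)/d(omega), in samples."""
    p1 = cmath.phase(freq_response(taps, w - dw))
    p2 = cmath.phase(freq_response(taps, w + dw))
    return -(p2 - p1) / (2 * dw)

# A symmetric (hence linear-phase) 5-tap lowpass: taps mirror the center.
fir = [0.1, 0.25, 0.3, 0.25, 0.1]

for w in (0.2, 0.5, 1.0):
    # Each frequency reports the same delay: (5 - 1) / 2 = 2.0000 samples.
    print(f"w={w}: group delay = {group_delay(fir, w):.4f} samples")
```

An asymmetric (minimum-phase-like) tap set run through the same function would show the frequency-dependent delay that "smears" pulses.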
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
Could you post any examples of glitches and distortion? I just saw a big table of numerical results.

How does this test cope with small benign(?) phase shifts and attenuation at the bottom end that may be caused by AC coupling? Or things going on at the outer reaches of the top end filtering? (minimum phase versus linear phase?).
Will do. I'm also "adjusting" to the new version of Audacity, which had a couple of peculiarities that interfered with the way I do things - so give me a day or so.

The numerical results are very deceptive - a single figure of merit tells one almost nothing - and a combo with poor numbers may actually do an excellent job.

DiffMaker may or may not be thrown by discrepancies - which is why I like to work on the results manually; I can compensate for gain, phase, etc.

The interesting stuff is what's happening at the top end - some converters are 'glitching': they get sections almost perfectly right on either side of an area where the detail has been completely screwed up. And the better ones achieve close to a photocopy match - they don't skip a beat.
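For anyone who wants to try this kind of manual comparison, here is a minimal sketch of the gain-compensate-and-subtract step (the signals are synthetic and the glitch is hypothetical; real captures would also need sample-accurate time alignment, which DiffMaker attempts automatically):

```python
import math

def null_residual_db(reference, capture):
    """Best-fit gain by least squares, subtract, report residual in dB
    relative to the reference RMS level."""
    g = (sum(r * c for r, c in zip(reference, capture))
         / sum(r * r for r in reference))
    residual = [c - g * r for r, c in zip(reference, capture)]
    ref_rms = math.sqrt(sum(r * r for r in reference) / len(reference))
    res_rms = math.sqrt(sum(e * e for e in residual) / len(residual))
    return 20 * math.log10(res_rms / ref_rms)

# Hypothetical loopback: the same 997 Hz sine with a 0.5 dB gain error
# plus a single-sample glitch part way through.
ref = [math.sin(2 * math.pi * 997 * n / 48000) for n in range(4800)]
cap = [1.059 * x for x in ref]
cap[1000] += 0.01  # the glitch

print(f"residual after gain compensation: {null_residual_db(ref, cap):.1f} dB")
```

The gain error nulls out almost completely, so the residual that remains is dominated by the glitch - which is exactly why a single figure of merit can hide where the damage actually is.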
 

Dirk Wright

Member
Joined
Apr 28, 2018
Messages
23
Likes
13
Whether or not harmonic distortion is audible depends on several things. First, the perception of distortion is highly dependent on the frequency of the fundamental. Humans are virtually deaf to harmonic distortion in the bass frequencies, for example, but highly sensitive to it as frequency goes up. Second, higher harmonics are more audible than lower ones because of masking by the fundamental: the lower harmonics have to be much stronger than the higher ones before they are perceived. Third, another aspect of masking is temporal. There is a pre- and post-masking period where harmonics are not audible, and this varies with frequency and intensity as I recall.

So, you cannot state with certainty whether or not a particular distortion is audible without considering these other factors. For example, an amplifier that produces substantial 2nd, 3rd and 4th harmonics, in descending intensity, and no higher harmonics, will sound like it has no distortion if those harmonics are below a certain threshold (the exact dB value escapes me now but I think the 2nd generally only needs to be down 60-70dB in order to be inaudible - but I could be way off about that).
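The "-60 to -70 dB" ballpark above is easier to compare with published THD specs when expressed as a percentage; the conversion is simply 10^(dB/20):

```python
def db_to_ratio(db):
    """Convert a level in dB relative to the fundamental to an amplitude ratio."""
    return 10 ** (db / 20)

for db in (-40, -60, -70):
    # -40 dB -> 1.000%, -60 dB -> 0.100%, -70 dB -> 0.032%
    print(f"{db} dB below the fundamental = {100 * db_to_ratio(db):.3f}% distortion")
```

So a 2nd harmonic "only" 60 dB down corresponds to 0.1% distortion - a level many modern amplifiers comfortably beat.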
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,836
Likes
16,501
Location
Monument, CO
Whether or not harmonic distortion is audible is based on several things. First, the perception of distortion is highly dependent on the frequency of the fundamental. Humans are virtually deaf to harmonic distortion in the bass frequencies, for example, but highly sensitive to it as frequency goes up. Second, higher harmonics are more audible than lower ones due to masking by the fundamental frequency. The lower harmonics have to be much stronger before they are perceived than higher ones. Third, another aspect of masking is temporal. There is a pre- and post- period where harmonics are not audible, and this varies with frequency and intensity as I recall.

So, you cannot state with certainty whether or not a particular distortion is audible without considering these other factors. For example, an amplifier that produces substantial 2nd, 3rd and 4th harmonics, in descending intensity, and no higher harmonics, will sound like it has no distortion if those harmonics are below a certain threshold (the exact dB value escapes me now but I think the 2nd generally only needs to be down 60-70dB in order to be inaudible - but I could be way off about that).

Old thread...

I'd mostly agree but with a couple of caveats:
  1. The problem with harmonic distortion in a subwoofer is that the harmonics are often easier to hear than the fundamental. A sub with 50% THD dominated by the 2HD may sound much louder/richer/fuller to a listener hearing a 50 Hz tone because what they are really hearing is the 100 Hz (and up) harmonics instead of the 50 Hz fundamental.
  2. IMD should also be considered. IMD2 puts distortion spurs at very LF and twice the fundamental, so those are not (usually) masked. IMD3 puts distortion spurs near the fundamentals that are not harmonically related, so they sound dissonant. I am not sure how masking impacts those; IME close dissonant tones are not masked as readily, but I'm basing that on foggy memories and very limited more recent tests. As a musician, listening for difference tones is a way to stay in tune, so I may be more sensitive to them.
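The spur placement described in point 2 can be computed directly for a two-tone test. The 19 kHz + 20 kHz pair below is the standard CCIF twin-tone, used here only as an illustration; for closely spaced tones the IMD2 difference product lands at very low frequency and the IMD3 products land right next to the originals, where masking offers little protection:

```python
def imd_products(f1, f2):
    """Second- and third-order intermodulation products (Hz) for two tones."""
    return {
        "IMD2": sorted({abs(f2 - f1), f1 + f2}),        # difference and sum
        "IMD3": sorted({abs(2 * f1 - f2), 2 * f2 - f1}), # close-in products
    }

# CCIF twin-tone: products at 1 kHz and 39 kHz (2nd order),
# 18 kHz and 21 kHz (3rd order).
print(imd_products(19000, 20000))
```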
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,204
Likes
16,986
Location
Riverview FL
As a musician, listening for difference tones is a way to stay in tune, so I may be more sensitive to them.

Difference tones or beating?
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,836
Likes
16,501
Location
Monument, CO
I use the terms interchangeably in this context. Some call them "beat tones" just to further the confusion... When you're in tune there is a definite "buzz" you can hear and "feel".
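For anyone unfamiliar with the effect: summing tones at f and f + delta gives a tone at their mean frequency whose amplitude envelope rises and falls delta times per second, which is the "buzz" that slows and disappears as two instruments pull into tune. A trivial sketch (the A440-vs-2-Hz-flat example is illustrative, not from any particular instrument):

```python
def beat_rate(f1, f2):
    """Beats per second between two steady tones.
    From sin(2*pi*f1*t) + sin(2*pi*f2*t)
      = 2 * cos(pi*(f1 - f2)*t) * sin(pi*(f1 + f2)*t),
    the envelope repeats |f1 - f2| times per second."""
    return abs(f1 - f2)

# A440 against a string tuned 2 Hz flat: prints "2.0 beats per second"
print(f"{beat_rate(440.0, 438.0):.1f} beats per second")
```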
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,267
Likes
4,759
Location
My kitchen or my listening room.
Therefore you highlight a major problem with 'scientific' listening tests. Especially the "preference" based ones.

And was the dizi used during the development of lossy codecs? And a dizi ensemble? And a dizi-guitar-drums combo? And a dizi-ocarina duet? etc. With Western listeners? And Asian? etc. etc. If listening tests really are key to the development of lossy codecs, then this is the minefield the developers are entering. If, on the other hand, these codecs are based on something more fundamental than listen-tweak-listen-tweak-, etc., then listening tests are just confirmation that they more-or-less work.

There is a great deal of knowledge about auditory masking, the performance of the ear, etc., that guides codec development. This is to some extent source-independent; of course, different sources will push different parts of the coding envelope.

And auditory masking theory is the basic science, but the ***ONLY*** test that makes sense is a formal listening test. Mechanical/programmed metrics really don't mean much, INCLUDING those that claim to be good at measuring audio quality.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Mechanical/programmed metrics really don't mean much, INCLUDING those that claim to be good at measuring audio quality.
But the listening test itself is a mechanical/programmed metric; the selections of extracts of music to be played, etc. are pre-programmed.

The aim of this scheme is not much different from self-driving car development. You can test until you're blue in the face but you'll never cover more than the tiniest fraction of possible 'patterns' in the information space. If you test in the USA, can you be sure that it will work sensibly in India, say?

The only way to begin to understand whether the system is sensible or not is to define the functionality at a higher, theoretical level. Test-tweak-test... doesn't cut it.

Edit: in a link that was posted in the Tesla thread, they were talking about the need for more "innate structure" in AI development. Audio development by listening test has just the same problem: it purports to be a way of harnessing the power of the human brain to develop 'organic' audio systems, but a totally organic tweak-listen-tweak... scheme would collapse in a heap.

It's 100 years later, and we are still finding that a listening-test-free approach is the best way to make an audio system: e.g. the D&D 8C speakers are not 'voiced'; they meet a simple, objective specification.
 
Last edited:

Dismayed

Senior Member
Joined
Jan 2, 2018
Messages
387
Likes
404
Location
Boston, MA
If an instrument plays a tone, it will typically include a series of overtones (distortion products of a sine); some may even be louder than the fundamental (> 100% "distortion").

The job of the playback system becomes one of accurately (or pleasingly for the subjectivists) reproducing those distortions without adding any of its own.

I'm interested in hearing a reproduction of sound from a concert hall. The sound of an instrument in an anechoic chamber doesn't excite me, but a good reproduction system should be able to produce both.
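The "> 100% distortion" aside follows directly from the usual THD definition: the RMS sum of the overtones divided by the fundamental. A quick sketch with a hypothetical spectrum whose 2nd partial is louder than the fundamental (the amplitudes are invented for illustration, not measured from any instrument):

```python
import math

def thd_percent(fundamental_amp, overtone_amps):
    """THD relative to the fundamental: RMS sum of overtones / fundamental."""
    return 100 * math.sqrt(sum(a * a for a in overtone_amps)) / fundamental_amp

# 2nd partial at 1.3x the fundamental, then 0.6x and 0.25x: prints "145%"
print(f"{thd_percent(1.0, [1.3, 0.6, 0.25]):.0f}%")
```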
 

Wombat

Master Contributor
Joined
Nov 5, 2017
Messages
6,722
Likes
6,459
Location
Australia
Detection of distortion using listening tests is one of the done-to-death testing subjects in audio. What is being added to the subject, here, in audible detection of such?

Has evolution in the last 50 years made otic detection significantly more sensitive, or is it an autic thing?

autic: http://www.oxfordreference.com/view/10.1093/oi/authority.20110803095435780
 
Last edited:

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,836
Likes
16,501
Location
Monument, CO
otic: of or relating to the ear; auricular.
autic state: A metamotivational state in which a person is focused on his or her own needs. See also reversal theory. Compare alloic state.

And here I thought autic had something to do with Australia.

As an engineer, though born and raised in the U.S.A., English is a second language...
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,267
Likes
4,759
Location
My kitchen or my listening room.
But the listening test itself is a mechanical/programmed metric; the selections of extracts of music to be played, etc. are pre-programmed.

The aim of this scheme is not much different from self-driving car development. You can test until you're blue in the face but you'll never cover more than the tiniest fraction of possible 'patterns' in the information space. If you test in the USA, can you be sure that it will work sensibly in India, say?

First, no, a listening test is not a mechanical test, which is to say a test involving only instrumentation and measurement. So let's not confuse the issue: listening tests ARE THE ONLY DEFINITIVE TESTS FOR THINGS LIKE CODECS.

And, yes, the human ear works the same anywhere that human beings can understand human speech. The two sets of properties are inextricably related.

It's 100 years later, and we are still finding that a listening-test free approach is the best way to make an audio system e.g. D&D 8C speakers are not 'voiced'; they meet a simple, objective specification.

For codecs, that's pure bollox. With speakers you don't have an 'absolute reference'. For a codec, you have an absolute reference, and you can do a distance test, a signal detection test, or any of a variety of blind test methods USING LISTENERS to determine the audibility of a coding system.

And that remains the only valid way to test any codec.

For speakers you are already completely beyond the "transparent" level and can never, EVER get there, so then you can simply test for preference for speakers in a given acoustic.

When you MAKE a speaker, yes, you absolutely must measure a host of things, but no matter what you measure, you'll never, ever get to reproduce the soundfield in the concert hall. At best with stereo, you can reproduce 2/8ths of the actual soundfield at your two ears, even if you forget that heads move around.

And, of course, what point in the concert hall? First row, middle, last row. These three locations have enormously different direct/reverberant ratios, and the speaker required to provide the same experience, coupled with the room, must have different radiation patterns.

And, of course, headphones are another question altogether.

Please do not continue to spread myth about testing of codecs. For codecs you're wrong, full stop.
 

Sal1950

Grand Contributor
The Chicago Crusher
Forum Donor
Joined
Mar 1, 2016
Messages
14,073
Likes
16,609
Location
Central Fl
At best with stereo, you can reproduce 2/8ths of the actual soundfield at your two ears, even if you forget that heads move around.
JJ, I'm a bit at a loss there, could you explain the 2/8s statement?
TIA
 

j_j

Major Contributor
Audio Luminary
Technical Expert
Joined
Oct 10, 2017
Messages
2,267
Likes
4,759
Location
My kitchen or my listening room.
JJ, I'm a bit at a loss there, could you explain the 2/8s statement?
TIA

The soundfield at any point in a room has 4 variables. If you have to capture 2 points in a room, that means 8 variables.

2 channels means you can encapsulate those 8 variables into 2, but you must lose 6/8 of the information, in some fashion or another, along the way.

Fortunately, you can't hear a lot of the detail, but when you move your head, it matters.
 

Sal1950

Grand Contributor
The Chicago Crusher
Forum Donor
Joined
Mar 1, 2016
Messages
14,073
Likes
16,609
Location
Central Fl
The soundfield at any point in a room has 4 variables. If you have to capture 2 points in a room, that means 8 variables.
Sorry but I'm still lost, what are the variables you speak of?
 