
Theoretical sources of amplifier distortion

OP
T

T3RIAD

Member
Joined
Sep 11, 2019
Messages
38
Likes
23
Yes, the total distortion from a complex input such as music will always be higher than that for sine tones, but what I was referring to with those difference-signal results I linked to is that they do not show a monotonic correlation between the median difference-signal level (the Df value, as Serge calls it) when playing sine tones and the difference signal when playing music. For example, from those results the FiiO M11 has a slightly lower Df value (less signal degradation) for a 1 kHz sine wave than the Questyle QP1R, yet the former has a significantly higher Df value (more signal degradation) when playing real music or the Program Simulation Noise. This suggests sine tone tests are not adequate for measuring the true signal degradation and distortion of a device when actually playing music, and that music (real or simulated) would be the required test signal for a true measure of this.

As for audibility, Dr. Sean Olive and Steve Temme, for example, have shown in this AES paper that a non-coherent distortion metric using real music correlates with sound quality much better than standard multitone distortion metrics, which, along with IMD, are poor in this regard, and also better than THD (results are at 31:24 in the video):


More details on the non-coherent distortion metric can be found in this paper.

Okay, I skimmed through it and I have to say, my initial thought (if the measurements are accurate) is that most of the reported "Df" for complex signals is coming from error in the nulling algorithm that compares the two signals, which would naturally get larger as the signal gets more complex and harder to phase-align. A tiny error in phase can cause two audibly identical signals to catastrophically fail a null test.
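To put a rough number on that, here's a tiny numpy illustration (my own, not taken from Serge's code or any nulling tool): the same 1 kHz tone delayed by just 5 microseconds, subtracted with no alignment at all, already limits the null to about -30 dB.

Code:
import numpy as np

fs = 48000
t = np.arange(fs) / fs                      # one second
f0 = 1000.0

a = np.sin(2 * np.pi * f0 * t)              # "input"
b = np.sin(2 * np.pi * f0 * (t - 5e-6))     # identical tone, delayed by 5 microseconds

def residual_db(ref, test):
    # Naive null: subtract without any alignment, report the RMS ratio in dB
    d = test - ref
    return 20 * np.log10(np.sqrt(np.mean(d**2)) / np.sqrt(np.mean(ref**2)))

print(round(residual_db(a, b), 1))          # about -30 dB from an inaudible 5 us shift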

As another poster noted above, Amir typically does a 32-tone test that never shows distortion products of significantly larger magnitude than those found in the single-tone tests. But Serge's square wave tests (which only contain about 12 tones) are showing a 50 dB difference from their single-tone tests.

Furthermore, it seems this person is claiming that all of the tested DAPs are subjectively distinguishable, yet we have no evidence of this from blind tests.

I have to admit I'm skeptical up front, but I haven't looked very deep into it.
 
Last edited:

scott wurcer

Major Contributor
Audio Luminary
Technical Expert
Joined
Apr 24, 2019
Messages
1,501
Likes
2,822
Yes, the total distortion from a complex input such as music will always be higher than that for sine tones, but what I was referring to with those difference-signal results I linked to is that they do not show a monotonic correlation between the median difference-signal level (the Df value, as Serge calls it) when playing sine tones and the difference signal when playing music. For example, from those results the FiiO M11 has a slightly lower Df value (less signal degradation) for a 1 kHz sine wave than the Questyle QP1R, yet the former has a significantly higher Df value (more signal degradation) when playing real music or the Program Simulation Noise. This suggests sine tone tests are not adequate for measuring the true signal degradation and distortion of a device when actually playing music, and that music (real or simulated) would be the required test signal for a true measure of this.

As for audibility, Dr. Sean Olive and Steve Temme, for example, have shown in this AES paper that a non-coherent distortion metric using real music correlates with sound quality much better than standard multitone distortion metrics, which, along with IMD, are poor in this regard, and also better than THD (results are at 31:24 in the video):


More details on the non-coherent distortion metric can be found in this paper.

Speakers, headphones, phono systems, etc. have orders of magnitude more distortion than SOTA electronics and can be level and/or thermal history dependent. The OP asked about amplifiers.
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,663
Likes
38,733
Location
Gold Coast, Queensland, Australia
Well, the 32-tone test is more complex but rarely seems to reveal anything you weren't expecting from the simpler tests.

What I like about the multitone is that it combines a bunch of otherwise individual tests into one:

Frequency response deviations are obvious without running a sweep.
Distortion vs. frequency.
Power supply harmonics.
Intermodulation issues.
You could even crank it right up to half power to do a bandwidth test.
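For anyone who wants to experiment, a minimal numpy sketch of building such a multitone stimulus is below (my own illustration; the tone placement and levels of the actual 32-tone test signal will differ).

Code:
import numpy as np

fs = 48000
n = 1 << 16                              # FFT-friendly length (~1.37 s at 48 kHz)

# 32 log-spaced tones, each snapped to an exact FFT bin so spectral leakage
# doesn't smear the spectrum and bury low-level distortion products
bins = np.unique(np.round(np.logspace(np.log10(20), np.log10(20000), 32) * n / fs))
freqs = bins * fs / n

rng = np.random.default_rng(0)
phases = rng.uniform(0, 2 * np.pi, len(freqs))   # random phases keep the crest factor sane

t = np.arange(n) / fs
x = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
x /= np.max(np.abs(x))                   # normalise to digital full scale

# Play x through the DUT, capture the output, take an FFT: the 32 stimulus bins
# give the frequency response, and everything between them is distortion,
# intermodulation, power-supply hum or noise.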
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
Okay, I skimmed through it and I have to say, my initial thought (if the measurements are accurate) is that most of the reported "Df" for complex signals is coming from error in the nulling algorithm that compares the two signals, which would naturally get larger as the signal gets more complex and harder to phase-align. A tiny error in phase can cause two audibly identical signals to catastrophically fail a null test.

As another poster noted above, Amir typically does a 32-tone test that never shows distortion products of significantly larger magnitude than those found in the single-tone tests. But Serge's square wave tests (which only contain about 12 tones) are showing a 50 dB difference from their single-tone tests.

Furthermore, it seems this person is claiming that all of the tested DAPs are subjectively distinguishable, yet we have no evidence of this from blind tests.

I have to admit I'm skeptical up front, but I haven't looked very deep into it.

Serge uses an algorithm that iterates over all possible phase (and amplitude) shifts to find the global minimum Df value between the input and output signals. Even if accumulated phase errors are the cause of rising Df with music compared to sine tones, this doesn't explain why the ranking of the DUTs by music Df differs from their ranking by sine tone Df. Something about the interaction between the music signal (or Program Simulation Noise) and the DUTs causes each device to react differently relative to the others, in a way that doesn't agree with how they react to sine tone inputs. So sine tone tests cannot predict even the relative ranking of the DUTs' audio degradation when playing music, which makes them look like an inadequate metric for judging real-world performance. I'm curious whether this could be a sign of some yet-to-be-quantified distortion mechanism that only becomes noticeable with complex input signals.

Serge defines Df as the ratio of the RMS level of the difference signal to that of the input signal (see his AES paper here), so you can't directly compare the dB values in his square-wave/sine figures with Amir's multitone/sine-wave distortion values.
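For concreteness, a brute-force version of that search can be sketched in a few lines of numpy. This is my own simplification, not Serge's published algorithm; a real implementation would also need sub-sample alignment and proper edge handling.

Code:
import numpy as np

def df_db(ref, out, max_lag=64, gains=np.linspace(0.9, 1.1, 201)):
    # Try integer-sample lags and a grid of gains, keep the alignment that gives
    # the smallest residual, then report 20*log10(RMS(residual) / RMS(reference))
    best_rms = np.inf
    for lag in range(-max_lag, max_lag + 1):
        shifted = np.roll(out, lag)        # crude shift; ignores wrap-around at the edges
        for g in gains:
            resid = ref - g * shifted
            best_rms = min(best_rms, np.sqrt(np.mean(resid**2)))
    return 20 * np.log10(best_rms / np.sqrt(np.mean(ref**2)))

A transparent device lands far below 0 dB on this scale; two unrelated signals sit around 0 dB or above.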

I don't think he's claiming all those devices are necessarily subjectively distinguishable, just that they have different 'artifact signatures' due to their differing difference signal profiles, which he attempts to correlate with listening tests here.
 
Last edited:

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
Speakers, headphones, phono systems, etc. have orders of magnitude more distortion than SOTA electronics and can be level and/or thermal history dependent. The OP asked about amplifiers.

True, but in theory the difference-signal method applies to any electrical DUT in the audio reproduction chain. As for the headphone distortion audibility research by Dr. Olive and Temme, I think it generally suggests that distortion metrics using real music signals are better predictors of subjective sound quality than test tones (sine, multitone and IMD). But yes, more research would be needed to confirm that this extends to distortion of electrical as well as electroacoustic origin, the latter, as you say, being almost always much higher than the former.
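As a rough illustration of what a 'non-coherent distortion' figure means, here is a simplified, unweighted sketch using scipy; it is my own interpretation of the basic idea, and the published metric is more sophisticated than this.

Code:
import numpy as np
from scipy.signal import coherence, welch

def noncoherent_power_db(stimulus, response, fs, nperseg=8192):
    # Magnitude-squared coherence = fraction of output power at each frequency
    # that is linearly related to the input; the remainder is distortion + noise
    _, cxy = coherence(stimulus, response, fs=fs, nperseg=nperseg)
    _, pyy = welch(response, fs=fs, nperseg=nperseg)
    noncoh = np.sum((1.0 - cxy) * pyy)       # non-coherent part of the output power
    return 10 * np.log10(noncoh / np.sum(pyy))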
 
Last edited:
OP
T

T3RIAD

Member
Joined
Sep 11, 2019
Messages
38
Likes
23
Serge uses an algorithm that iterates over all possible phase (and amplitude) shifts to find the global minimum Df value between the input and output signals. Even if accumulated phase errors are the cause of rising Df with music compared to sine tones, this doesn't explain why the ranking of the DUTs by music Df differs from their ranking by sine tone Df. Something about the interaction between the music signal (or Program Simulation Noise) and the DUTs causes each device to react differently relative to the others, in a way that doesn't agree with how they react to sine tone inputs. So sine tone tests cannot predict even the relative ranking of the DUTs' audio degradation when playing music, which makes them look like an inadequate metric for judging real-world performance. I'm curious whether this could be a sign of some yet-to-be-quantified distortion mechanism that only becomes noticeable with complex input signals.

I'm not sure if the algorithm is measuring audio degradation, or simply entropy.

As an example: generate two random samples of white noise. The difference between them will be super high according to this measurement, but they will sound identical.
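A quick numpy illustration of that point (mine, not from any paper): two independent noise bursts are statistically identical to the ear, yet their raw difference is louder than either signal.

Code:
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal(48000)   # one second of white noise at 48 kHz
b = rng.standard_normal(48000)   # an independent realisation; sounds the same

df = 20 * np.log10(np.std(a - b) / np.std(a))
print(round(df, 1))              # about +3 dB: the "difference" is louder than the signal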

Whatever this technique is measuring, it's unclear to me how it relates to amplifier quality.


I'm curious, if you actually listen to the difference signal for the music tests, does it sound like music, or is it just noise?
 
Last edited:

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
I'm not sure if the algorithm is measuring audio degradation, or simply entropy.

As an example: generate two random samples of white noise. The difference between them will be super high according to this measurement, but they will sound identical.

Whatever this technique is measuring, it's unclear to me how it relates to amplifier quality.


I'm curious, if you actually listen to the difference signal for the music tests, does it sound like music, or is it just noise?

To be honest, I find it equally if not more unclear how sine tone distortion tests relate to the audible sound quality of a device when playing music (which is what we actually use them for, not listening to sine tones), and I've not seen any real proven correlation there either.

If I turn up the playback volume, yes I can recognise the music in the difference signal, albeit faint/degraded, at least for the couple of devices I've tested so far. So the difference signal is correlated to the input signal, as would be expected.
 
Last edited:

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,680
Likes
37,388
I'm not sure if the algorithm is measuring audio degradation, or simply entropy.

As an example: generate two random samples of white noise. The difference between them will be super high according to this measurement, but they will sound identical.

Whatever this technique is measuring, it's unclear to me how it relates to amplifier quality.


I'm curious, if you actually listen to the difference signal for the music tests, does it sound like music, or is it just noise?

https://deltaw.org/

Download Paul's super good software. Paul goes by username pkane here at ASR. Big long thread from when he started releasing it to us.
https://www.audiosciencereview.com/...test-deltawave-null-comparison-software.6633/

It will do nulling and much more, including built-in metrics for Df. And yes, you can listen to the nulls, even with gain applied. It might take a little time to learn the basic ins and outs of DeltaWave, but it's very useful stuff.
 
OP
T

T3RIAD

Member
Joined
Sep 11, 2019
Messages
38
Likes
23
To be honest, I find it equally if not more unclear how sine tone distortion tests relate to the audible sound quality of a device when playing music (which is what we actually use them for, not listening to sine tones), and I've not seen any real proven correlation there either.

So you claim to have found empirical evidence that there is a difference between how the amplifiers handle music and how they handle discrete tones. You have found a violation of the superposition principle (a nonlinearity, in other words), different from the one we already know about and test for (intermodulation).

My original question, and the reason I started the thread, was to see if anyone knows of a theoretical basis for these additional distortion terms to appear. Do we see any evidence in the mathematics of something that test tones might miss? And could we design new test signals that would better control, modulate, and quantify this additional distortive effect?
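For what it's worth, the superposition check itself is easy to write down. The toy cubic 'amplifier' below is purely illustrative (my own stand-in, not a model of any tested device), and it only reproduces the intermodulation mechanism we already know about; the open question is whether real devices show anything beyond it with music.

Code:
import numpy as np

fs = 48000
t = np.arange(fs) / fs
a = 0.4 * np.sin(2 * np.pi * 1000 * t)
b = 0.4 * np.sin(2 * np.pi * 1100 * t)

def amp(x):
    # Toy memoryless nonlinearity standing in for an amplifier: tiny cubic term
    return x + 0.01 * x**3

# For a perfectly linear device this residual would be exactly zero
resid = amp(a + b) - (amp(a) + amp(b))
print(20 * np.log10(np.std(resid) / np.std(a + b)))

# The residual consists of cross-terms (0.03 * a * b * (a + b)), i.e. intermodulation
# products at 2f1-f2 = 900 Hz, 2f2-f1 = 1200 Hz, 2f1+f2, 2f2+f1, plus extra energy
# at f1 and f2 themselves - none of which a single-tone sweep excites.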
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,663
Likes
38,733
Location
Gold Coast, Queensland, Australia
To be honest, I find it equally if not more unclear how sine tone distortion tests relate to the audible sound quality of a device when playing music (which is what we actually use them for, not listening to sine tones), and I've not seen any real proven correlation there either.

Speak for yourself. :) I listen to pure sine tones all the time when working on gear, from amplifiers, preamplifiers and CD players right through to speakers. You can hear problems and potential issues faster with pure tones than with any music you can think of.
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
15,978
Likes
36,172
Location
The Neitherlands
As an example: generate two random samples of white noise. The difference between them will be super high according to this measurement, but they will sound identical.

That is true, but it is not the case with the Df method, as the same noise signal is used. The difference between before and after, or between the same noise amplified through a 'reference' and through the DUT, determines the Df value.

For the rest I totally agree and have had this discussion already a few times.

I'm curious, if you actually listen to the difference signal for the music tests, does it sound like music, or is it just noise?

When nulling AND at the same time trying to hang an 'audibility' label on the generated number, it is important to LISTEN to the null and analyze what the null consists of. This is what is missing so far in the Df work. It will require a LOT more research before one can say 'this generated number has a very high correlation with reality'.

Besides, tons of research has already been done on the relation between types of distortion and audibility, with both music and test tones.
Masking, frequency, phase shifts and how steep they are all determine audibility, as do recording quality, type of music, listener training and whatnot.

Certain thresholds have already been established; then there are strict limits and lenient limits. It's a rat's nest.

Furthermore, especially with power amps (and less so with headphone amps), the actual load can change the amounts of certain types of distortion and the levels at which they start to become audible.
A resistive dummy-load test will in general produce better 'numbers' than certain real loads.
So when an amp has low numbers (be it from measurements or nulling) and folks attach 'sound quality' claims to those numbers, they need to realize these numbers can be substantially (even audibly) different with certain specific speakers.

For preamps this doesn't apply, as the loads are purely resistive in almost all cases.
With DACs, the filter characteristics can skew Df numbers yet may not be audible.
When testing speaker amps, some results could differ depending on the design.
 
OP
T

T3RIAD

Member
Joined
Sep 11, 2019
Messages
38
Likes
23
That is true, but it is not the case with the Df method, as the same noise signal is used. The difference between before and after, or between the same noise amplified through a 'reference' and through the DUT, determines the Df value.

Oh, I know the actual measurements posted use the same noise. I was just pointing out that two signals with a high Df can sound the same. Audibility would depend on the content of the difference, not just its RMS level.


For another example, I just used the posted nulling utility to subtract a FLAC file from (1) a 320 kbps AAC-compressed version and (2) a 320 kbps Ogg-compressed version of the same recording. I am able to hear the difference with the AAC version (I have passed a blind ABX test), while I cannot tell the difference with the Ogg version.

Both files showed the same Df of about -30 dB. That makes sense; since the bitrate is the same, both algorithms have to discard a similar amount of data. But why can I pass an ABX test with the AAC file and not with the Ogg file, when the Df is similar?

The answer became apparent when I listened to the difference. With the AAC version it sounded like a really quiet version of the song - I could make out lyrics, instruments and pitches. But with the Ogg version, the difference was just noise. Both were equally loud, but the Ogg compression algorithm did a better job of finding less noticeable data to throw out. As a result, the AAC compression was audible while the Ogg compression was not.
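For anyone wanting to reproduce that kind of null by hand, a rough sketch is below. It is my own simplification: it assumes the soundfile and scipy packages, uses placeholder file names, and only corrects a constant decoder delay, whereas a dedicated tool also handles level matching, drift and sub-sample alignment.

Code:
import numpy as np
import soundfile as sf
from scipy.signal import correlate

ref, fs = sf.read("reference.flac")        # placeholder file names
enc, _ = sf.read("lossy_decoded.wav")

# Mono-ise and trim to a common length to keep the sketch simple
ref = ref.mean(axis=1) if ref.ndim > 1 else ref
enc = enc.mean(axis=1) if enc.ndim > 1 else enc
n = min(len(ref), len(enc))
ref, enc = ref[:n], enc[:n]

# Codec pipelines add a fixed delay; estimate it from the cross-correlation peak
lag = np.argmax(correlate(enc, ref, mode="full")) - (n - 1)
enc = np.roll(enc, -lag)                   # crude shift; wrap-around edges ignored

null = ref - enc
print("Df ~", round(20 * np.log10(np.std(null) / np.std(ref)), 1), "dB")
sf.write("null.wav", null / np.max(np.abs(null)) * 0.5, fs)   # normalised for listening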

EDIT - The files I made are attached. These are for the first track - Flesh and Bone - of the DigitalFeed ABX test for Tidal (FLAC vs. AAC) and Spotify HQ (FLAC vs. Ogg).

Both have the same Df to within a few dB. I can easily pass the Tidal ABX test 20 out of 20 times, while I can't do it on the Spotify version. If you listen to the null files, you'll understand why.
 

Attachments

  • abx_digitalfeed_tidal_spotify_nulls.zip
    9.3 MB · Views: 80
Last edited:

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,663
Likes
38,733
Location
Gold Coast, Queensland, Australia
When nulling AND at the same time trying to hang an 'audibility' label on the generated number, it is important to LISTEN to the null and analyze what the null consists of.

The output terminal of my analog distortion analyzer provides a signal consisting of the harmonics plus residual noise and other artefacts, for any selectable fundamental between 1 Hz and 20 kHz.

That signal can be amplified (if necessary), recorded and normalized. Listening to the amplified residual, with the fundamental notched down by ~80 dB, is interesting for sure.
 
Last edited:

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
15,978
Likes
36,172
Location
The Neitherlands
When I was nulling analog signals (speaker amps with actual loads) I used to record the null. Getting a really deep null was very hard to do (at, say, 1 kHz) with analog potmeters; I had to spread it over a coarse and a fine control. Indeed, quite a lot of gain was needed. Of course this too added its own noise and distortion. :confused:

Oh, I know the actual measurements posted use the same noise. I was just pointing out that two signals with a high Df can sound the same. Audibility would depend on the content of the difference, not just its RMS level.

Of course, and I fully agree. I was merely pointing out that for nulling that is not something one needs to consider.

I fully agree with the rest, and that listening to the null should be part of the 'number generation'. Perhaps it should even weigh more than the electrically measured differences.

Nulling is a different method though. It is difficult to separate linear and non-linear distortion, but with proper digital signal analysis this should be quite doable. The ADC also needs to be corrected for, which can be done with a reference measurement.
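One common way to attempt that separation (a sketch under my own assumptions, not a description of any particular tool): estimate the DUT's linear transfer function from averaged cross-spectra, apply it to the reference, and only then subtract, so that frequency-response and phase differences don't dominate the residual.

Code:
import numpy as np
from scipy.signal import csd, welch, fftconvolve

def nonlinear_residual(ref, out, fs, nperseg=4096):
    # H1 estimate of the linear transfer function: H(f) = Pxy(f) / Pxx(f)
    _, pxy = csd(ref, out, fs=fs, nperseg=nperseg)
    _, pxx = welch(ref, fs=fs, nperseg=nperseg)
    h = np.fft.irfft(pxy / pxx)                       # crude equivalent FIR, no windowing
    predicted = fftconvolve(ref, h, mode="full")[:len(out)]   # reference 'played through' H
    return out - predicted                            # left over: noise + non-linear distortion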

There is potential in nulling. I was already nulling 30 years ago and know it can shed light. So can 'classic' measurements. However, it takes a lot of experience and knowledge to get good correlation, and one needs to look at the whole measurement suite, including with challenging loads.

Linear distortion (amplitude, phase) is also a form of distortion, but not necessarily a sound-quality-degrading one (depending on severity).
Channel separation and phase differences between L and R can also cause problems.

The four you mentioned in your OP can be expanded with these linear ones.
 
Last edited:

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,868
Likes
16,620
Location
Monument, CO
Additional sources of noise and distortion? Off the top of my head:

Crosstalk.
EMI/RFI.
Clock coupling (LO, digital sampling clocks, network clocks, etc.).
All sorts of noise (which I personally do not lump in with "distortion" but treat separately): thermal, shot, flicker, radiant, etc.
Thermal nonlinearities (e.g. active and passive devices' response varying dynamically and statically with temperature, a big problem in some circuits like certain ADCs and DACs; resistor and metal-trace temperature coefficients; etc.).
Slew-induced distortion.
Phase distortion.
Mechanically induced distortion of all sorts: cone flexure in speakers, microphonics from vibration sensitivity in active and passive devices.
Voltage modulation of component values (caps especially, resistors usually less so).
And a bunch more I have not thought of in a minute of posting.

There are myriad distortion and noise causes, so the trick is often to identify and minimize the important ones for the application. That can be the subject of numerous college courses and a lifetime of experience; I am not going to try to sum it up in an Internet forum post.
 