
When a power rating is given at 1% or 0.7% THD, what is the non-clipping maximum?

OP
M

mike7877

Addicted to Fun and Learning
Joined
Aug 5, 2021
Messages
698
Likes
140
Generally the difference in power from x% distortion to clipping depends upon how manufacturers rate the amp, what headroom they build into their spec, and how much feedback is present. Then you have to consider if the clipping is short-term or long-term, thermal issues, and so forth. And finally you must decide what "clipping" means to you -- is it when the waveform is completely flat-topped, some high level of distortion, or what? I do not think there can be a simple metric because designs differ greatly, as do specs. One manufacturer may rate distortion low and thus limit their power spec, whilst another may spec higher distortion (closer to clipping) to provide a higher power number. Rated full power does not always correlate to clipping or to headroom.

Clipping is usually estimated by looking at the knee of the distortion curve as power is increased. An amp with low or "no" feedback may have a very "soft" knee, making it hard to draw a line before hard clipping (flat-topped waveforms), and a rather broadly sloped distortion line as it begins to clip. An amp with high feedback will have low distortion until nearly at clipping, resulting in a very sharp knee and almost vertical line. The difference could be several dB as others have said so there is not a simple (linear) ratio for what is fundamentally a very nonlinear operation. Some manufacturers market 6 dB headroom, others almost none, again greatly affecting the power rating "before clipping". Using a single number could greatly over- or underestimate the actual performance.
Ah, I wasn't even thinking that... I was thinking they would increase power until clipping began and when distortion finally rose to 0.7 or 1% they gave that rating.

Yes, some amps use higher rail voltages than the transformer is capable of supporting continuously into 8 or 4 ohms, which, in combination with bulk capacitors, enables louder transients. I thought that even in those cases, when the current capability is maxed there is voltage droop, and this is used as the RMS rating when clipping begins.

edit: I know it's not just lack of power supply current capability that can induce clipping in those situations, but IME, in all properly designed amplifiers it usually is (as it should be)
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,915
Likes
16,748
Location
Monument, CO
Ah, I wasn't even thinking that... I was thinking they would increase power until clipping began and when distortion finally rose to 0.7 or 1% they gave that rating.
I have seen amps rated from <0.01% to 10% (latter mostly car amps) for the same rated output power.

Yes, some amps use higher rail voltages than the transformer is capable of supporting continuously into 8 or 4 ohms, which, in combination with bulk capacitors, enables louder transients. I thought that even in those cases, when the current capability is maxed there is voltage droop, and this is used as the RMS rating when clipping begins.
Using stepped (class G) or tracking (class H) power supplies need not change the power spec; they allow the amp to deliver lower power more efficiently with less wasted power.

Ultimately the output is limited by the voltage supply rails and current capacity of the power supply. Voltage clipping is what I have seen the vast majority of the time; current limiting reduces the output only slightly, and then thermal limiting usually cuts in.

edit: I know it's not just lack of power supply current capability that can induce clipping in those situations, but IME, in all properly designed amplifiers it usually is (as it should be)
IME clipping hits the voltage rails, maybe a few percent drop due to current, but sustained high current causes heat that leads to thermal shutdown (or amplifier self-destruction). That said, voltage droop due to current is all over the map, since it depends upon how tightly the supply is regulated. Could be tens of volts for a tube supply, a few volts for a big SS amp's supply, and less for a tightly regulated supply.
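The rail-voltage limit described above can be put in numbers: for an undistorted sine, maximum power is set by the available peak voltage after droop. A minimal sketch, assuming a simple symmetric supply; the rail and droop figures are made-up examples, not measurements of any particular amp:

```python
import math

def max_sine_power(v_rail, droop_v, load_ohms):
    """Peak output voltage is the rail minus droop; for an undistorted
    sine the RMS voltage is peak / sqrt(2), and power is Vrms^2 / R."""
    v_peak = v_rail - droop_v
    v_rms = v_peak / math.sqrt(2)
    return v_rms ** 2 / load_ohms

# e.g. +/-40 V rails with 2 V of droop into 8 ohms:
print(round(max_sine_power(40, 2, 8), 2))  # 90.25 W
```

Note how a few volts of droop (a tightly regulated supply) barely moves the number, while tens of volts (a soft tube supply) would cut it substantially, which is the "all over the map" point above.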
 

sergeauckland

Major Contributor
Forum Donor
Joined
Mar 16, 2016
Messages
3,461
Likes
9,165
Location
Suffolk UK
The way I rate amplifiers is to feed the amplifier 1 kHz into a dummy load of the rated impedance, usually 8 or 4 ohms, sometimes as low as 2 ohms. I then increase the input voltage until the output, as seen on a 'scope, shows clipping, then go back and forth a little with the input level to find the point at which the amplifier just starts clipping. That's the output level I rate the amp at. At that level distortion is low, well under 0.1%, even 0.01%. Once the amp starts clipping it's outside its rated output, so I really don't care what happens to distortion, as the amp should never be used at that level.

Decent, conservatively rated amps will usually produce something like 5-10% above their specification under the above circumstances. Sadly, all too many of today's cheap amps are rated at stupid levels of 10% THD or whatever, as they're sold on the basis of the most watts for the $£€, not on their real rating.

Although I accept that the old IHF rating was unrealistic, at least it kept manufacturers honest, which now doesn't happen with the new generation of cheap amps.

S.
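The scope-and-level procedure described above can be mimicked numerically: hard-clip a sine at simulated "rails" and watch THD versus drive level. A rough sketch assuming ideal hard clipping (real amps have softer knees, as discussed earlier in the thread):

```python
import numpy as np

def thd_of_clipped_sine(drive, rails=1.0, n=65536):
    """Hard-clip one cycle of a sine of amplitude `drive` at +/-rails,
    then compute THD from the FFT: harmonic energy over the fundamental."""
    t = np.arange(n) / n
    x = np.clip(drive * np.sin(2 * np.pi * t), -rails, rails)
    spec = np.abs(np.fft.rfft(x)) / n
    fundamental = spec[1]                       # bin 1 holds the test tone
    harmonics = np.sqrt(np.sum(spec[2:200] ** 2))
    return harmonics / fundamental

# Sweep the drive level past the rails and watch the knee appear:
for drive in (0.99, 1.00, 1.02, 1.05, 1.10):
    print(f"{drive:.2f}x rails -> {100 * thd_of_clipped_sine(drive):.3f}% THD")
```

Below the rails THD is essentially zero; just past them it rises very steeply, which is why the "just starts clipping" point is easy to find on a high-feedback amp.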
 

Hayabusa

Addicted to Fun and Learning
Joined
Oct 12, 2019
Messages
839
Likes
585
Location
Abu Dhabi
So we all know that 1% THD is -40dB. (If you didn't, you do now!)

In many of Amir's reviews, the power vs. THD+N chart shows distortion falling as power increases, and usually somewhere around 10-20% of full power that straight line heading for the bottom starts bending sideways (I consider full power the point at which clipping occurs). While the line is falling at a constant rate as power increases, harmonic distortion is entirely masked by the grass that is the noise floor. When it stops falling at that constant rate, I believe harmonic distortion is beginning to poke through the noise (i.e. it's no longer entirely masked). When the THD+N value starts rising quickly toward the top of the chart, that is when clipping is happening. Distortion rises fast, but not extremely fast; the difference in power output between 1% and 10% distortion is actually pretty large. So say the amplifier is good, and right before it begins clipping at 122 watts into 8 ohms its THD+N is -100 dB, or 0.001%. Since manufacturers often give the power rating at 0.7% or 1% THD, I'd like to know roughly by what amount power output increases from -100 dB to -40 dB. Basically I want to be able to take a rating like:

140W @ 1% THD+n

And multiply it by 0.94 to get where the amp [most likely] begins clipping.

For this to be generally applicable and on the conservative side, what I'm after is the power difference going from -80 dB to -40 dB THD+N, when the increase is due to clipping.
(Not worded the best, but you know what I mean! (I hope lol))
Let's make the following approximation:

The fraction of the sine wave's top that is cut off by clipping equals the amount of distortion.
In that way, 1% distortion means a signal of amplitude 101 peak-to-peak has been clipped to 100.
The RMS voltage will also change by this 1% (also an approximation).
This means the power increases by a factor of 1.01^2 ≈ 1.02, so +2% power.
For 0.7% you would get 1.007^2 ≈ 1.014, so +1.4% more power.
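The dB conversion and the amplitude approximation above can be restated numerically (this is the same rough model, not a measurement):

```python
import math

def thd_percent_to_db(pct):
    """THD as a fraction of the fundamental, expressed in dB (1% -> -40 dB)."""
    return 20 * math.log10(pct / 100)

def extra_power_fraction(thd_fraction):
    """Rough model from above: if x THD corresponds to the top x of the
    sine being flattened, the unclipped amplitude is (1 + x) relative to
    the rails, so power rises by roughly (1 + x)^2 - 1."""
    return (1 + thd_fraction) ** 2 - 1

print(thd_percent_to_db(1))                         # -40.0
print(round(100 * extra_power_fraction(0.01), 1))   # 2.0  (+2% power at 1% THD)
print(round(100 * extra_power_fraction(0.007), 1))  # 1.4  (+1.4% at 0.7% THD)
```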
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,796
Likes
37,707
This is just about watts, I knew the difference would be small - just wanted to quantify it
You can never have a rule to quantify this. We can say it will usually be small, and I don't know of any design that would result in more than a 3 dB level difference (usually less), which is not very large. So it's kind of a useless pursuit given all the variables. I don't see the usefulness of 10% THD specs at all myself. 1% is good; maybe 3%, as this is where it might be audible with music. In the modern world I see no reason to rate anything beyond its 1% rating other than intentional clouding of the real capabilities.
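The "at most ~3 dB" framing can be sanity-checked: the level difference between two power figures is 10·log10(P2/P1). A quick sketch (the 84 W / 100 W pair is just an example from this thread's discussion):

```python
import math

def level_difference_db(p1_watts, p2_watts):
    """Level difference between two output power figures, in dB."""
    return 10 * math.log10(p2_watts / p1_watts)

# e.g. 84 W at 1% THD vs 100 W at 10% THD is under 1 dB:
print(round(level_difference_db(84, 100), 2))   # 0.76
# doubling power is about 3 dB:
print(round(level_difference_db(100, 200), 2))  # 3.01
```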
 

Verig

Active Member
Joined
May 4, 2023
Messages
117
Likes
67
To illustrate better what I'm looking for: when clipping is causing 1% distortion and the volume is then increased until distortion reaches 10%, the power increase is almost exactly 19%. So if you get an amplifier that specs its power output as 100W @ 10% THD and you want a more realistic value, you can multiply the 100 watts by (1/1.19), giving 84 watts at 1% distortion.
Usually any amps marketed at 10% THD are 100W @ 10% @ 4 ohms, so the first thing I see is 100W → 50W @ 8 ohms. Then take away a big chunk to reach a good distortion value; 50% off again is good, as I'm not interested in 1% distortion either, and now we have a nice 25W of usable power. But in all seriousness, I take a healthy guess that the product is complete junk anyway and move on to products marketed with non-bloated numbers.
You can ballpark these well enough without very fancy math.
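That 10%-to-1% derating can be sketched with the same amplitude approximation used earlier in the thread (a rough ballpark model, as the post says; real amps vary):

```python
def derate(power_at_high_thd, thd_high=0.10, thd_low=0.01):
    """Scale a power rating taken at one clipping-THD level to another,
    assuming power scales as ((1 + thd) / (1 + thd_ref))^2."""
    return power_at_high_thd * ((1 + thd_low) / (1 + thd_high)) ** 2

print(round(derate(100.0), 1))  # ~84.3 W at 1% THD from a 100 W @ 10% rating
```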
 

Sokel

Master Contributor
Joined
Sep 8, 2021
Messages
6,161
Likes
6,263
Usually any amps marketed at 10% THD are 100W @ 10% @ 4 ohms, so the first thing I see is 100W → 50W @ 8 ohms. Then take away a big chunk to reach a good distortion value; 50% off again is good, as I'm not interested in 1% distortion either, and now we have a nice 25W of usable power. But in all seriousness, I take a healthy guess that the product is complete junk anyway and move on to products marketed with non-bloated numbers.
You can ballpark these well enough without very fancy math.
That's what one of my amps for the lows states about 1% and 10%.
Generally they're accurate, as we saw in Amir's test for 4 and 8 ohms (considering the conditions, of course; the one tested by Amir had a Class A buffer in front of it)

[attached image: 1200as2.PNG]
I think the 10% spec is ridiculous though; in many tested amps we saw the protections kicking in well before that.
 

Verig

Active Member
Joined
May 4, 2023
Messages
117
Likes
67
Ok ok, I can live with the 1%. The 1200AS2 has pretty decent values a bit past that point for both channels driven. Also, they have 0.003% there. :)
Still a bit messy. Not so much of a problem in a module datasheet, but in the marketing of a finished consumer product it really gets wild.
 

Verig

Active Member
Joined
May 4, 2023
Messages
117
Likes
67
  • Continuous power output: 100W (4-8 Ohms)
  • Signal-to-noise ratio: >90 dB
  • Clipping (8/4 Ohms): 130/230 W (@ 1 kHz, 0.1 % THD)
  • Dynamic power (8/4 Ohms): 160/300 W
Here's NAD M10 marketing. I quite like this conservative approach (it's a nCore module inside).

Basically what I think about this is that I bought a really nice 100W@8R amp with some proper driving ability to near 4R and for peaks. They even state the clipping point which is quite extra.
 

Sokel

Master Contributor
Joined
Sep 8, 2021
Messages
6,161
Likes
6,263
Ok ok, I can live with the 1%. The 1200AS2 has pretty decent values a bit past that point for both channels driven. Also, they have 0.003% there. :)
Still a bit messy. Not so much of a problem in a module datasheet, but in the marketing of a finished consumer product it really gets wild.
If you want to get into the realm of magic, look at the results of swapping the 1200AS2 with the 300A2.
It only drives the woofers of my semi-actives, and for the test here (which I have repeated hundreds of times) it only needs a few watts, as the level is at about 80 dB.

[attached image: test1.jpg]

1200as2 low

[attached image: test2.jpg]

300a2 low.

(ignore everything below 30 Hz; they have an F3 of 31 Hz)

Fun part is that the 300A2 has way nicer THD+N specs (I use a couple for the mids/highs).
 
Last edited:

Verig

Active Member
Joined
May 4, 2023
Messages
117
Likes
67
Ok, that's interesting. A bit of magick in your hobby is always welcome.
 
OP
M

mike7877

Addicted to Fun and Learning
Joined
Aug 5, 2021
Messages
698
Likes
140
You can never have a rule to quantify this. We can say it will be small usually. And I don't know if any design will result in more than a 3 db level difference (usually less) which is not very large. So kind of a useless pursuit given all the variables. I don't see the usefulness of 10% THD specs at all myself. 1% is good. Maybe 3% as this is where it might be audible with music. In the modern world I don't see a reason to rate anything more than its 1 % rating other than intentional clouding of the real capabilities.

Consider a 600 Hz tone accompanied by 1800 Hz at 3% of its level. Someone who doesn't know the 600 Hz tone wasn't supposed to be accompanied by the 1800 Hz tone might think nothing is wrong, but if that person had the 3% 1800 Hz switched in and out, they'd definitely notice. Search Google for "online tone generator" and pick the one of the first two with an orange background. Drag the slider to approximately 600 Hz and press play. Now drag the slider to 1800 Hz. Hear how much louder it sounds than 600 Hz? That 3% 1800 Hz component will be perceived at more like 12-15% of the level of the 600 Hz tone, because the ear is more sensitive there.

Besides the outright perception of an extra tone on a signal, there's another way harmonic distortion becomes audible: a qualitative loss of sound quality. The change to the sound depends entirely on the signal itself and on which harmonics are generated, and at what level, by the imperfect amplification. A simple example: the texture of a violin is altered. How? A harmonic was generated (by the amp, DAC, or speaker) on a note from a louder, lower-frequency instrument. When that harmonic lands in the part of the spectrum occupied by the violin's own secondary tones (the ones that make it sound like a violin), the sound of the violin changes. Because lower notes carry relatively high energy compared to higher notes, the harmonics they generate can end up at around the same level as the higher-pitched instruments' natural harmonics, so there's interference. It's not immediately obvious, and some people don't care, but it's the absence of things like this that allows the illusion of real instruments playing in your room.
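The switched-in/switched-out comparison described above can also be generated offline. A minimal sketch using only the standard library; the file names, duration, and levels are illustrative choices, not anything prescribed in the thread:

```python
import math
import struct
import wave

def write_tone(path, secs=2.0, rate=48000, add_harmonic=True):
    """Write a 600 Hz sine as 16-bit mono PCM, optionally with its 3rd
    harmonic (1800 Hz) at 3% of the fundamental's amplitude."""
    with wave.open(path, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(rate)
        frames = bytearray()
        for n in range(int(secs * rate)):
            t = n / rate
            s = math.sin(2 * math.pi * 600 * t)
            if add_harmonic:
                s += 0.03 * math.sin(2 * math.pi * 1800 * t)
            # scale so the summed signal never exceeds 16-bit range
            frames += struct.pack("<h", int(30000 * s / 1.03))
        w.writeframes(bytes(frames))

write_tone("with_harmonic.wav", add_harmonic=True)
write_tone("clean.wav", add_harmonic=False)
```

A/B the two files at matched volume to hear whether the 3% third harmonic is detectable on a bare tone.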
 
OP
M

mike7877

Addicted to Fun and Learning
Joined
Aug 5, 2021
Messages
698
Likes
140
Usually any 10% marketed amps are 100W 10% @4ohm so first thing I see is 100W-->50W@8ohm. Then take away a big chunk to reach good distortion value. 50% off again is good as I'm not interested in 1% distortion either, now we have a nice 25W usable power. But in all seriousness I take a healthy guess that the product is complete junk anyway and move on to products that are marketed with non-bloated numbers.
You can ballpark these well enough without very fancy math.

Yes, I agree: definitely stay away from anything that only gives you a 10% distortion rating lol.

I think any amplifier that halves its power when impedance is doubled is designed properly. If the amp you're referencing above is also rated at 10% distortion at 8 ohms, it was designed to provide its full power down to 4 ohms. If it was rated at 0.7% or 0.03% or some other common THD+N value arbitrarily stamped on things by companies like Pioneer, Denon, Arcam, and Sony, and it did 100W into 4 ohms / 50W into 8 ohms, I would commend the designer: they built the amp to be stable with the most offensive 8 ohm speakers. Since the design is 100% voltage limited, there's at least some current reserve beyond the 100W / 4 ohm spec, so as long as the rest of the amp is as resilient as the power supply, it would probably also do a pretty good job with most 4 ohm loads.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,796
Likes
37,707
Consider a 600 Hz tone accompanied by 1800 Hz at 3% of its level. Someone who doesn't know the 600 Hz tone wasn't supposed to be accompanied by the 1800 Hz tone might think nothing is wrong, but if that person had the 3% 1800 Hz switched in and out, they'd definitely notice. Search Google for "online tone generator" and pick the one of the first two with an orange background. Drag the slider to approximately 600 Hz and press play. Now drag the slider to 1800 Hz. Hear how much louder it sounds than 600 Hz? That 3% 1800 Hz component will be perceived at more like 12-15% of the level of the 600 Hz tone, because the ear is more sensitive there.

Besides the outright perception of an extra tone on a signal, there's another way harmonic distortion becomes audible: a qualitative loss of sound quality. The change to the sound depends entirely on the signal itself and on which harmonics are generated, and at what level, by the imperfect amplification. A simple example: the texture of a violin is altered. How? A harmonic was generated (by the amp, DAC, or speaker) on a note from a louder, lower-frequency instrument. When that harmonic lands in the part of the spectrum occupied by the violin's own secondary tones (the ones that make it sound like a violin), the sound of the violin changes. Because lower notes carry relatively high energy compared to higher notes, the harmonics they generate can end up at around the same level as the higher-pitched instruments' natural harmonics, so there's interference. It's not immediately obvious, and some people don't care, but it's the absence of things like this that allows the illusion of real instruments playing in your room.
That is why I mentioned music. 3% with music is around the audible area; 1% with music likely is not audible. With tones, 3% is easy and 1% is detectable. The oft-listed 0.1% (-60 dB) figure is for pure tones around that 1 kHz mark, putting the 2nd and 3rd harmonics around 2 and 3 kHz. That amount is not noticeable with music. I agree with using the strictest standard so you'll have a margin of error to make sure music isn't affected, so I'd want 0.1% or better, even though I know you need far more with music to hear a difference. In this case the difference in power between 1% and 0.1% is not very much, excepting some relatively high-distortion designs. I think you are over-reaching with the "qualitative loss of sound quality" if you are saying we need much less than 1% for that. With music you don't, and at 0.1% it's not a worry.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,796
Likes
37,707
Yes, I agree: definitely stay away from anything that only gives you a 10% distortion rating lol.

I think any amplifier that halves its power when impedance is doubled is designed properly. If the amp you're referencing above is also rated at 10% distortion at 8 ohms, it was designed to provide its full power down to 4 ohms. If it was rated at 0.7% or 0.03% or some other common THD+N value arbitrarily stamped on things by companies like Pioneer, Denon, Arcam, and Sony, and it did 100W into 4 ohms / 50W into 8 ohms, I would commend the designer: they built the amp to be stable with the most offensive 8 ohm speakers. Since the design is 100% voltage limited, there's at least some current reserve beyond the 100W / 4 ohm spec, so as long as the rest of the amp is as resilient as the power supply, it would probably also do a pretty good job with most 4 ohm loads.
Yes, it is good if an amp can double power up and down with impedance. However, a 100 wpc @ 8 ohms amp that is only 150 wpc @ 4 ohms is still fine if you only need 150 wpc. Transformer-coupled amps, whether tube or SS, with multiple taps have the same power at each impedance. As long as that is enough power for the speaker connected, it's not a negative.

I've owned some Classe 25 amps: 250 wpc @ 8 ohms. It would double down to 4 ohms and double again into 2 ohms, or come quite close to it. It didn't double into 1 ohm, but would play into that, which is why it was the go-to recommendation if you owned Apogee Scintilla ribbons with their more or less 1 ohm impedance. With less vicious speaker loads it didn't have any inherent advantage over a good amp that was powerful enough.
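The "doubling down" behavior is just Ohm's law for a voltage-limited output: with a fixed output voltage, P = V²/R doubles each time the load halves. A quick idealized sketch (real amps eventually run out of current, as the Classe 25's behavior into 1 ohm shows):

```python
def voltage_limited_power(p_8ohm, load_ohms):
    """For a purely voltage-limited amp the output voltage is fixed,
    so P = V^2 / R scales inversely with load impedance."""
    return p_8ohm * 8.0 / load_ohms

# A 250 wpc @ 8 ohm amp, if it stayed purely voltage limited:
for z in (8, 4, 2, 1):
    print(z, "ohms:", voltage_limited_power(250.0, z), "W")
```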
 
Last edited:
OP
M

mike7877

Addicted to Fun and Learning
Joined
Aug 5, 2021
Messages
698
Likes
140
Yes, it is good if an amp can double power up and down with impedance. However, a 100 wpc @ 8 ohms amp that is only 150 wpc @ 4 ohms is still fine if you only need 150 wpc. Transformer-coupled amps, whether tube or SS, with multiple taps have the same power at each impedance. As long as that is enough power for the speaker connected, it's not a negative.

I've owned some Classe 25 amps: 250 wpc @ 8 ohms. It would double down to 4 ohms and double again into 2 ohms, or come quite close to it. It didn't double into 1 ohm, but would play into that, which is why it was the go-to recommendation if you owned Apogee Scintilla ribbons with their more or less 1 ohm impedance. With less vicious speaker loads it didn't have any inherent advantage over a good amp that was powerful enough.

Very true. Speaking of amps that double and double, I've got one too! It's currently being repaired (I really broke it badly lol :( ); it's called the Kinergetics KBA-280.
The 280 means 140W + 140W into 8 ohms.
It does 280 + 280 into 4,
540 + 540 into 2,
and
1000 into 1...

It does have a massive toroidal transformer that could probably do 800 + 800 into 1 ohm, but I'd have to remove the 10A fuse and put in something bigger...
It's not the first thing I plan to do with it when I get it back (obviously!!!!!)

It's got something like 5 kW of output transistors! They all work at around 5% of their rating, which is where they work best, because the amp is Class A!


And I broke it with a soldering iron. One of those fast-heating ones... The amp was still plugged in and on when I went to solder a wire with my dang T12 iron... :facepalm:
From now on I'm floating ALL of my tools lol
 