solderdude
Grand Contributor
In short, there are 2 factors at play:
1: voltage division
2: actual change in damping current.
1: When a speaker's impedance varies considerably with frequency, voltage division changes the frequency response. The larger the impedance swings and the higher the output resistance, the larger the effect (change in frequency response).
Depending on the speaker impedance and the 'allowed' frequency response variations, the required damping factor may thus be higher or lower.
Take into account that the roundtrip resistance of the cable must be added to the output resistance from a voltage division standpoint.
A longer and/or thinner cable thus also has an effect. For this reason, if one wants the smallest FR changes and the least dependence on the cable used, the output resistance of the amp should be as low as possible, leaving margin for the cable to mess things up a bit while staying within the maximum allowed frequency deviations.
The points here thus are:
What maximum frequency deviation is allowed? 0.1dB with just about all speakers, 0.5dB with just about all speakers, etc. The more deviation one allows, the lower the DF can be.
Cable resistance (roundtrip): the higher the cable's resistance, the higher the DF must be to stay within the same FR deviations mentioned above. Note that neither a DF of 10,000 nor one of 100 can compensate for a cable that is too resistive; there is basically a maximum allowed cable resistance.
Speaker impedance: the lower the speaker's impedance, the higher the DF must be to keep FR deviations below the set target. And the larger the impedance variations are relative to the lowest impedance, the higher the DF must be to remain below the desired FR variations.
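The interaction of these three factors can be sketched with a short script. The speaker values (a dip to 4 Ohm, a peak to 20 Ohm) and the 0.05 Ohm output and cable resistances are hypothetical illustration numbers, not from any specific product:

```python
from math import log10

def fr_deviation_db(z_min, z_max, r_out, r_cable):
    """Worst-case FR deviation (dB) from voltage division between the
    source resistance (amp output R + roundtrip cable R) and a speaker
    whose impedance swings between z_min and z_max."""
    r_source = r_out + r_cable
    level_at_max = z_max / (z_max + r_source)  # least attenuation
    level_at_min = z_min / (z_min + r_source)  # most attenuation
    return 20 * log10(level_at_max / level_at_min)

# Hypothetical speaker dipping to 4 Ohm and peaking at 20 Ohm,
# driven through 0.05 Ohm output R plus 0.05 Ohm of cable:
print(round(fr_deviation_db(4.0, 20.0, 0.05, 0.05), 3))  # ~0.171 dB
```

Plugging in a longer cable or a lower minimum impedance immediately pushes the deviation up, which is exactly why all three factors have to be budgeted together.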
2: Actual damping current change. The actual damping current won't change much when the DF is above, say, a factor of 10. The damping current is determined by the generated back-EMF and the total resistance of the path: the DC resistance of the driver to be damped + the DC resistance and impedance of the XO filter in front of the driver (thus frequency dependent) + roundtrip cable + connectors + output resistance.
It is easy to see that the driver's DC resistance plus the impedance of the XO filter are by far the greatest contributors to the total path resistance. Varying the cable or output R somewhat won't change the damping current much. Say the DC resistance of the speaker + filter + cable is 8 Ohm: with a DF of 1000 the output R = 0.008 Ohm; with a DF of 10 the output R = 0.8 Ohm.
So going from a DF of 1000 to a DF of 10, the actual damping current only drops to about 0.91x (roughly 0.8dB lower), so very little change in damping current.
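The arithmetic above can be checked in a few lines, using the same assumed 8 Ohm path (driver DC resistance + filter + cable) from the example:

```python
from math import log10

R_PATH = 8.0  # driver DC resistance + XO filter + cable (Ohm), per the example

def damping_current_rel(df):
    """Damping current relative to an ideal zero-output-resistance amp.
    I is proportional to 1 / (R_PATH + r_out), with the DF defined
    against the 8 Ohm load: r_out = 8 / DF."""
    r_out = 8.0 / df
    return R_PATH / (R_PATH + r_out)

ratio = damping_current_rel(10) / damping_current_rel(1000)
print(round(ratio, 2))              # ~0.91x
print(round(20 * log10(ratio), 2))  # ~-0.82 dB
```

So even a 100x drop in damping factor changes the current that actually damps the driver by under 1 dB, which is the point being made.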
This means the voltage division issue is far more important than the damping current.
Keeping the frequency variations below just 0.1dB in as good as all situations thus requires a high damping factor and short cable runs with low-resistance wires.
That's why Benchmark comes up with the numbers they use.
Now for headphones (aside from some MA IEMs) more or less the same applies, but the lowest required DF is far more relaxed (it can be considerably lower in number, thus higher in output resistance) because A: membranes aren't as heavy and the DC resistance of headphone drivers is much higher than that of speakers (with some exceptions), and B: impedances do not vary nearly as much relative to the DC resistance of the driver. Again with the exception of some MA IEMs.
Ensuring a low FR change under all circumstances requires a low output resistance. This is why NwAvGuy set his rules so strictly.
However, in reality, when one owns just one or two headphones whose impedance does not vary (or varies only slightly) and which aren't unusually low in impedance, the '1/8th rule' can be broken without any ill effects on the frequency response.
To play it safe there is merit to the rule. Whether it should be 1/50th or 1/4th depends on what FR variations one allows; 0.1dB or 0.5dB makes a substantial difference in the rule's numbers.
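The headphone case can be sketched the same way. The numbers below are hypothetical: a 32 Ohm headphone whose impedance peaks at 60 Ohm (as some dynamic drivers do around resonance), driven at exactly the 1/8th-rule output resistance:

```python
from math import log10

def fr_deviation_db(z_min, z_max, r_out):
    """Worst-case FR deviation (dB) from the output-R voltage divider."""
    level = lambda z: z / (z + r_out)
    return 20 * log10(level(z_max) / level(z_min))

r_out = 32 / 8  # 4 Ohm: right at the 1/8th-rule limit for a 32 Ohm headphone

# Hypothetical headphone with an impedance peak at 60 Ohm:
print(round(fr_deviation_db(32, 60, r_out), 2))  # ~0.46 dB

# A flat-impedance headphone shows no FR change at all at the same r_out:
print(round(fr_deviation_db(32, 32, r_out), 2))  # 0.0
```

This is the point of the last paragraphs: with a flat-impedance load the rule hardly matters, while with a varying load the 1/8th rule lands near the 0.5dB criterion and a stricter dB target demands a correspondingly lower output resistance.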