It is an understandable misunderstanding, but a misunderstanding nevertheless.
There is a whole ASR thread discussing this exact question. My best attempt in that thread to explain why the output (SPL) from a driver does not track the velocity or displacement of the diaphragm, no matter how intuitive that seems, is linked here.
www.audiosciencereview.com
This is the "fast bass" audiophile argument. It is a myth. Is that "definitive" enough?
The linked-to diagrams attributed to Purifi are complete fakes. There is absolutely no way any physical system can respond like that to a square wave. <snip>
cheers
I'd like to make another attempt at explaining why I think SPL is key to the question of whether higher sensitivity corresponds with higher dynamics.
First, an attempt at defining/separating the terms transient response and dynamics. Admittedly my own differentiation, but I think it might help.
…
…
Transient response equals frequency response: the wider the frequency range and the flatter the response, the better the transient response.
(and the 'faster the bass' lol).
Time and phase alignment are the icing on the cake for additional transient response improvement.
This is all definitional, rooted in the equivalence between a transfer function and its impulse response.
Nothing about this definition of transient response requires any degree of specification with regard to level (SPL or electrical).
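That equivalence is easy to demonstrate numerically: compute a filter's impulse response and take its FFT, and you get back the transfer function. A minimal Python sketch (the one-pole low-pass standing in for a driver is purely illustrative):

```python
import numpy as np
from scipy import signal

# Illustrative stand-in for a driver: a first-order Butterworth low-pass.
b, a = signal.butter(1, 0.1)

# Impulse response: drive the filter with a unit impulse.
n = 512
impulse = np.zeros(n)
impulse[0] = 1.0
h = signal.lfilter(b, a, impulse)

# Frequency response two ways: directly from the transfer function,
# and as the FFT of the impulse response.
w, H_tf = signal.freqz(b, a, worN=n, whole=True)
H_fft = np.fft.fft(h)

# They agree to truncation/rounding error: one fully determines the other.
print(np.max(np.abs(H_tf - H_fft)))
```

The maximum difference is at numerical noise level, because the impulse response here has decayed to nothing well within the 512-sample window.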
Dynamics means, to me, how well a speaker produces that excellent transient response/frequency response.
…
…
It is dependent on every driver section in the speaker maintaining its bandwidth contribution to transient response/frequency response, at the SPL level being driven.
Which means no thermal compression, no amplifier clipping, no excursion limitations...at the grossest level of breakdown. And at a finer level, no excessive rise in distortion or other non-linear responses.
…
…
And this is of course very much SPL level dependent,
as every speaker's transient response/frequency response degrades once it rises above a certain SPL threshold.
Above that threshold, one or more driver sections will fail to maintain output linearity throughout their passbands (as @Fredygump was questioning).
…
…
Below that SPL threshold, I'd say the speaker has excellent dynamics.
Above it, dynamics begin to suffer, increasingly so as SPL is raised.
…
…
Without bringing SPL into the discussion of sensitivity vs dynamics, the discussion is undefined and pointless imo.
(Or efficiency, as sensitivity and efficiency are directly related and the distinction is primarily pedantic at this level).
Without harping further on why I think high sensitivity designs are much more likely to maintain dynamics with increased SPL (seems like common sense to me)
…
"I'm unclear what the difference is between sensitivity and efficiency in this context."

Sensitivity is how much SPL is produced at 1 m with 1 W input. Reducing self-heating in the voice coil means less power wasted heating the voice-coil wires, which yields higher output for a given input and thus higher efficiency (greater output power for a given input power). It could be argued that more sensitive speakers are more efficient, but it does not always work out that way.
"It could be argued that more sensitive speakers are more efficient but it does not always work out that way."

I am having a hard time understanding how a more sensitive speaker would also not be more efficient. Can you give an example to help me understand?
"You can get a nice flat FR and have poor TR and poor IR."

I find myself having a hard time grasping this. Can you point to some reading or examples?
"I am having a hard time understanding how a more sensitive speaker would also not be more efficient. Can you give an example to help me understand?"

Not offhand, and I am generally staying away from ASR lately, so not going to dig. We had to calculate an example in one of the college acoustics classes I took a millennium or so ago. Basically you can make a speaker that has high sensitivity and yet wastes a lot of power, I think related to the impedance curve and crossover/driver choices that lead to a good sensitivity number but poor overall efficiency (or vice versa).

Part of it is due to how sensitivity is often defined, as a function of efficiency, which makes for an easy calculation but is not necessarily true to the physics. Sensitivity is also a single-frequency measurement whilst efficiency is generally calculated for the overall (broadband) transfer function. By limiting the definitions you can make the argument they are related as I mentioned, and that is how most audiophiles treat it.

It often goes the other way, with midrange and tweeter drivers attenuated to provide more extended bass relative to the upper drivers' levels, reducing sensitivity even though woofer efficiency could be higher. In that case the upper drivers are not necessarily less efficient, but power is dissipated in the attenuating resistor, reducing overall efficiency.

Note speakers are electromechanical devices with a number of things affecting sensitivity and efficiency, such as magnet strength, spider and surround resistance, voice-coil impedance, pressure coupling (e.g. cone vs. horn radiators, ports, passive radiators, etc.), and so forth, all of which complicate the calculations. In the end, sensitivity is how much SPL a watt produces at 1 kHz 1 m away, whilst efficiency depends on how much power is dissipated by anything that does not directly produce sound. Small single-driver speakers tend to be fairly sensitive but inefficient.
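The attenuating-resistor point is easy to put numbers on. A sketch with hypothetical values (a crude series resistor rather than a proper L-pad, just to show where the power goes):

```python
import math

V = 2.83          # volts, the standard sensitivity drive level
R_driver = 8.0    # ohms, nominal driver impedance (hypothetical)
R_series = 8.0    # ohms, series padding resistor (hypothetical)

I = V / (R_driver + R_series)
P_driver = I**2 * R_driver      # power that reaches the driver
P_resistor = I**2 * R_series    # power turned to heat in the pad

# Voltage divider: the driver sees half the voltage, i.e. about -6 dB.
atten_db = 20 * math.log10(R_driver / (R_driver + R_series))

print(P_driver, P_resistor, atten_db)
```

With equal resistances, half the input power becomes heat in the resistor, so overall efficiency drops even though the driver itself is unchanged.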
Just buy some Quad 57s, problem solved. People, we've had electrostatic speakers and over-engineered TLS from people like Bill Perkins for years now. Building a great speaker is just honest engineering. Most of what we have now is just marketing bs from greedy people selling overpriced crap.
"Sensitivity is how much SPL is produced at 1 m with 1 W input."

dB SPL @ 1 watt @ 1 m is the "traditional" statement of sensitivity, but it needs elaboration, and it has been replaced.
"dB SPL @ 1 watt @ 1 m is the "traditional" statement of sensitivity, but it needs elaboration, and it has been replaced."

Thanks for the clarification and update, Floyd. Glad to see you're still carrying the torch!
Today the standard method of measuring loudspeaker sensitivity is to use an input of 2.83 volts and measure the sound pressure level in an anechoic space in the far field of the source. 2.83 volts generates 1 watt into 8 ohms, a number that was selected because 8 ohms is the nominal load for defining the power output of amplifiers.

For consumer and monitor loudspeaker systems the far field is typically 2 m or more, depending on the size of the loudspeaker, and the measured SPL is adjusted by calculation to what it would have been at 1 m. So the measured SPL at 2 m is lifted by 6 dB (a factor of 2 in distance in this example) to give us the sensitivity at the standard distance of 1 m. Only single transducers and small loudspeaker systems yield valid measurements at 1 m.

Sensitivity should be specified as the average SPL over a frequency range, such as 300 Hz to 3 kHz (there are other possible ranges), to avoid manufacturers picking a peak in the frequency response to get a favorable specification. Ignore any specifications that do not state how sensitivity was measured, and absolutely ignore in-room measurements.
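The two calculations in that description, worked out as a quick sketch (the 84 dB reading is a hypothetical measured value, not from the post):

```python
import math

# 2.83 V into a nominal 8-ohm load dissipates almost exactly 1 W.
P = 2.83**2 / 8                       # watts

# Far-field SPL falls 6 dB per doubling of distance (inverse-square law),
# so a 2 m measurement is referred to 1 m by adding 20*log10(2/1) dB.
correction = 20 * math.log10(2 / 1)   # ~6.02 dB

spl_measured_2m = 84.0                # hypothetical anechoic reading at 2 m
sensitivity_1m = spl_measured_2m + correction

print(P, correction, sensitivity_1m)
```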
But this statement alone is not enough information. We need to know the impedance characteristics.
Loudspeaker systems and transducers do not have constant impedance, even though manufacturers specify a "nominal" impedance. A lot of "8 ohm" loudspeaker systems have impedances that more correctly would be described as 4 ohms, and they may have minimum impedances even lower at some frequencies, and maximum impedances much higher. That means that the power into the loudspeaker varies with frequency, so there is no meaningful power sensitivity except at specific frequencies, which is a useless situation, but the charade goes on.

Loudspeakers with real impedances lower than the specified number extract more power from the amplifier, which is a problem for customers. Many inexpensive power amplifiers cannot drive lower impedances, leading to clipping and protective circuit activation - distortions we want to avoid. To be safe, choose power amplifiers that are designed to safely drive 4 ohms, and even better, 2 ohms. The best ones will double power into halved impedances; many do not, but remain stable nonetheless. In multichannel amplifiers, be careful to note how many channels are driven for the power specification, and over what frequency range the specification holds.
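Since a solid-state amplifier approximates a constant-voltage source, the power it must deliver scales inversely with load impedance. A sketch at the standard 2.83 V drive level:

```python
V = 2.83  # volts, held (roughly) constant by the amplifier
for z_ohms in (8.0, 4.0, 2.0):
    power = V**2 / z_ohms   # watts drawn from the amplifier
    print(f"{z_ohms:.0f} ohm load draws {power:.2f} W")
```

Each halving of impedance doubles the power demanded, which is exactly why low impedance dips stress lesser amplifiers.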
The current standard is a voltage sensitivity, which for solid state amplifiers (constant voltage sources) is appropriate. It works so long as the impedance minimum does not result in current limiting of the output stages, so a full impedance specification needs to include a minimum impedance. Some incompetent system designs have had impedances that drop to under 1 ohm. Ignorant reviewers thought that such loudspeakers were able to "reveal" differences between power amplifiers, when in fact they were the problem. The result has been generations of massive monoblock "arc welder" power amps that can drive very low impedances and remain stable - expensive solutions for problems that should not exist.
Efficiency is a measure of acoustical power out compared to the electrical power in, and the percentages are low. Measuring the acoustical power out is complicated so it is almost an academic concept. In addition, a percentage is not very helpful in practical applications. The word "efficiency" is widely misused in common parlance.
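For a sense of scale, a commonly quoted textbook relation links half-space reference efficiency to 1 W/1 m sensitivity: S ≈ 112 dB + 10·log10(η₀). Taking that relation at face value (a sketch, not a substitute for a real broadband acoustical power measurement):

```python
def efficiency_from_sensitivity(spl_1w_1m_db):
    """Reference efficiency implied by the commonly quoted relation
    S = 112 dB + 10*log10(efficiency), with S in dB SPL @ 1 W / 1 m."""
    return 10 ** ((spl_1w_1m_db - 112.0) / 10.0)

# Even a fairly sensitive 90 dB/W/m speaker converts well under 1%
# of its electrical input to sound.
print(efficiency_from_sensitivity(90.0))
```

This is why the percentages are described as low: nearly all of the input power ends up as heat.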
"You can get a nice flat FR and have poor TR and poor IR."

It may be understood by recognizing that transducers, within their operating frequency ranges, are minimum-phase devices. That means that the impulse response can be calculated from the amplitude vs. frequency response: the frequency response alone. When the frequency response is flat and smooth there are no resonances and no time-domain misbehaviour.
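The minimum-phase point can be demonstrated numerically: for a minimum-phase system, the phase (and hence the impulse response) is implied by the magnitude alone, recoverable via the real cepstrum. A sketch using a simple one-pole system as the stand-in transducer:

```python
import numpy as np

n = 4096
# A simple minimum-phase system, H(z) = 1 / (1 - 0.5 z^-1),
# sampled around the unit circle.
w = np.arange(n)
H = 1.0 / (1.0 - 0.5 * np.exp(-2j * np.pi * w / n))

# Reconstruct the full complex response from |H| alone:
# fold the (even) real cepstrum onto causal quefrencies, then exponentiate.
cep = np.fft.ifft(np.log(np.abs(H)))
fold = np.zeros(n, dtype=complex)
fold[0] = cep[0]
fold[1:n // 2] = 2 * cep[1:n // 2]
fold[n // 2] = cep[n // 2]
H_min = np.exp(np.fft.fft(fold))

# The magnitude alone recovered the phase: reconstruction matches H.
print(np.max(np.abs(H - H_min)))
```

For a non-minimum-phase system (e.g. one with excess delay) this reconstruction would not match, which is exactly what makes the minimum-phase property of transducers useful.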
"Measuring the acoustical power out is complicated so it is almost an academic concept."

Could it not be approximated from the electrical impedance along with the same set of measurements needed to compute the spinorama?
"Just buy some Quad 57s, problem solved. People, we've had electrostatic speakers and over-engineered TLS from people like Bill Perkins for years now. Building a great speaker is just honest engineering. Most of what we have now is just marketing bs from greedy people selling overpriced crap."

I debated about commenting, but this statement is simply so far from reality that I feel I must.
"Could it not be approximated from the electrical impedance along with the same set of measurements needed to compute the spinorama?"

Probably true, but I don't see a thermodynamic efficiency percentage as an answer to an existing problem, nor of any direct use in the design of audio systems that I can think of. It is an interesting statistic in comparing cones, domes and horns, but that has been known for decades. Any suggestions?