
Cheapest Full Range 20 Hz - 20 kHz Speakers?

Shazb0t

Addicted to Fun and Learning
Joined
May 1, 2018
Messages
643
Likes
1,230
Location
NJ
Some of it is too superficial or used overly small samples or untrained listeners or simplistic methodology.
Then I encourage you to organize and perform better-controlled research and publish it for peer review. Until you are able to do this and show disagreement, I would suggest that you stop doubling down on ignorance.
 
Last edited:

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,281
Location
Oxford, England
Then I encourage you to organize and perform better-controlled research and publish it for peer review. Until you are able to do this and show disagreement, I would suggest that you stop doubling down on ignorance.

I suggest that you ignore my posts then. Unless you're intent on putting me right, in which case you can continue with your current modus operandi. It's your choice.
Blessed be the fruit.
 

Shazb0t

Addicted to Fun and Learning
Joined
May 1, 2018
Messages
643
Likes
1,230
Location
NJ
I suggest that you ignore my posts then. Unless you're intent on putting me right, in which case you can continue with your current modus operandi. It's your choice.
Blessed be the fruit.
If you insist on continuing to sow disinformation you should expect to be challenged. I suspect that challenge will not only come from me.

I also want to be clear that you haven't posted any sort of evidence-based argument for anything that you've been challenged on in this thread. It wasn't rhetorical! I'm still waiting for you to explain the "other" audio magic (outside of what we look at here: Spinorama, FR, directivity, HD, CSD, sensitivity, sometimes IMD and/or compression) and the importance of these attributes, with research-based characterization of their audibility in loudspeakers that goes against the existing research.
 
Last edited:

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,399
Then I encourage you to organize and perform better-controlled research and publish it for peer review. Until you are able to do this and show disagreement, I would suggest that you stop doubling down on ignorance.

IMHO, there are plenty of reasons to not place all our eggs in the Olive Preference Rating basket.

For example:
  1. The vast majority of speakers used in Olive's studies were conventional box monopoles, whose polar radiation tended (as a result of their inherent design similarities) to converge.
  2. The model was developed on the basis of data acquired in precisely one listening room, which hardly resembles a typical home environment.
  3. When the original model, based on the first, smaller sample of speakers, was applied to the results from listening tests using the second, larger sample of speakers, there was little correlation. The model was then updated to incorporate the results from the larger sample, but it's not clear to me why we should expect the second (current) version of the model to now apply particularly well to even larger samples of speakers (or indeed the whole population - especially given point #1).
  4. The SM and NBD scores, which are central to the model, mathematically contradict each other, which raises questions about the validity of the model itself IMO.
  5. The 0.86 correlation cited so frequently is a correlation between the model's predictions and the very sample from which those predictions were derived. This is clearly a (highly unlikely) best-case-scenario as regards the model's applicability to a wider sample of speakers (not to mention rooms).
Now I absolutely agree that this model is the most comprehensive thing we have right now. But you seem to be going a step further and suggesting that the model can't be criticised with anything other than a new, equally comprehensive body of research that would completely supplant it (if I've understood you correctly - apologies if not).

I would take a more cautious approach towards Olive's model. It gives us a good indication of how listener preference correlates to objective data, and is likely especially applicable to box monopoles set up optimally in well-treated rooms. But it clearly has a number of limitations and question marks hanging over it that are worth discussing - even if we can't categorically say there is some other model out there that is superior.
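For concreteness, the second version of the model referred to in point 3 is a linear formula over four metrics derived from the spin: Preference = 12.69 − 2.49·NBD_ON − 2.99·NBD_PIR + 2.32·SM_PIR − 4.31·LFX. Here is a rough Python sketch; the function names are mine, and the band handling is simplified relative to the paper's exact definitions, so treat it as illustrative rather than a faithful reimplementation:

```python
import numpy as np

# Olive (2004), second model:
#   Pref = 12.69 - 2.49*NBD_ON - 2.99*NBD_PIR + 2.32*SM_PIR - 4.31*LFX

def nbd(freq, spl_db, f_lo=100.0, f_hi=12000.0):
    """Narrow-band deviation: mean absolute deviation from each
    1/2-octave band's average level, averaged over bands (dB)."""
    n_bands = np.log2(f_hi / f_lo)
    edges = f_lo * 2.0 ** np.arange(0.0, n_bands + 0.5, 0.5)
    devs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spl_db[(freq >= lo) & (freq < hi)]
        if band.size:
            devs.append(np.mean(np.abs(band - band.mean())))
    return float(np.mean(devs))

def sm(freq, spl_db, f_lo=100.0, f_hi=16000.0):
    """Smoothness: r^2 of a straight-line fit of level vs log-frequency."""
    m = (freq >= f_lo) & (freq <= f_hi)
    r = np.corrcoef(np.log10(freq[m]), spl_db[m])[0, 1]
    return float(r ** 2)

def preference(freq, on_axis_db, pir_db, lfx_hz):
    """Predicted preference score; lfx_hz is the -6 dB low-frequency
    extension of the sound power curve."""
    return float(12.69
                 - 2.49 * nbd(freq, on_axis_db)
                 - 2.99 * nbd(freq, pir_db)
                 + 2.32 * sm(freq, pir_db)
                 - 4.31 * np.log10(lfx_hz))
```

Even this toy version makes point 4 visible: NBD rewards a curve for hugging its local band averages while SM rewards it for hugging one global regression line, and a tilted-but-smooth PIR scores well on one metric and poorly on the other.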

It may also be worth repeating Olive's own words (from the paper itself) in which he discusses "some of" the model's limitations:

10 LIMITATIONS OF MODEL

The conclusions in this study may only be safely generalized to the conditions in which the tests were performed. Some of the possible limitations are listed below.

1. Up to this point, the model has been tested in one listening room.

2. The model doesn’t include variables that account for nonlinear distortion (and to a lesser extent, perceived spatial attributes).

3. The model is limited to the specific types of loudspeakers in our sample of 70.

4. The model’s accuracy is limited by the accuracy of the subjective measurements.
 
Last edited:

Shazb0t

Addicted to Fun and Learning
Joined
May 1, 2018
Messages
643
Likes
1,230
Location
NJ
IMHO, there are plenty of reasons to not place all our eggs in the Olive Preference Rating basket.

For example:
  1. The vast majority of speakers used in Olive's studies were conventional box monopoles, whose polar radiation tended (as a result of their inherent design similarities) to converge.
  2. The model was developed on the basis of data acquired in precisely one listening room, which hardly resembles a typical home environment.
  3. When the original model, based on the first, smaller sample of speakers, was applied to the results from listening tests using the second, larger sample of speakers, there was little correlation. The model was then updated to incorporate the results from the larger sample, but it's not clear to me why we should expect the second (current) version of the model to now apply particularly well to even larger samples of speakers (or indeed the whole population - especially given point #1).
  4. The SM and NBD scores, which are central to the model, mathematically contradict each other, which raises questions about the validity of the model itself IMO.
  5. The 0.86 correlation cited so frequently is a correlation between the model's predictions and the very sample from which those predictions were derived. This is clearly a (highly unlikely) best-case-scenario as regards the model's applicability to a wider sample of speakers (not to mention rooms).
Now I absolutely agree that this model is the most comprehensive thing we have right now. But you seem to be going a step further and suggesting that the model can't be criticised with anything other than a new, equally comprehensive body of research that would completely supplant it (if I've understood you correctly - apologies if not).

I would take a more cautious approach towards Olive's model. It gives us a good indication of how listener preference correlates to objective data, and is likely especially applicable to box monopoles. But it clearly has a number of question marks hanging over it that are worth discussing - even if we can't categorically say there is some other model out there that is superior.

It may also be worth repeating Olive's own words (from the paper itself) in which he discusses "some of" the model's limitations:
You're suggesting my response was in defense of the Olive Preference Rating. It wasn't, that is a strawman. We are discussing the relevance of the spinorama and other measurements used to evaluate speaker performance and their audibility at ASR.

Fact, I never mentioned the Olive Preference Rating in this thread. I actually agree with most of what you said. I don't agree with @tuga stating as fact that the research done by Toole to characterize speaker performance and measurement importance along with medical studies regarding audibility of audio attributes like distortion are wrong and not accounting for "other" unknowns that are only able to be demonstrated in uncontrolled listening.
 
Last edited:

vavan

Senior Member
Joined
Apr 29, 2019
Messages
341
Likes
212
Location
Kazan, Russia
Toole's research, often too small samples, untrained listeners.
If people want to look at "the book" (which by the way has been revised a few times
If you mean Toole's book, here's an excerpt from the intro to the 3rd edition: "What was called the second edition was a labeling error associated with a change in publishers. The book was unchanged. This edition is substantially new. I have tried to adopt a more linear approach to explaining how the art, technology and science combine to create listening experiences and how we perceive them"
 

detlev24

Senior Member
Joined
Dec 3, 2019
Messages
305
Likes
291
[...] Are you sure that those measurements came from Neumann? [...]
The first measurements came from Neumann's development manager, Markus Wolff, as stated in the article. Only later, with image no. 11, does the magazine refer to "their own" measurements.

It is, however, interesting that those seem to be exactly the same as found on Neumann's website, as well as in "Sound & Recording" - just different excerpts of the graphs seem to have been published at different resolutions. (Why?) // The reason might be a commonly delegated external laboratory; the FIDELITY website describes an excursion to the measurement facility in Aachen (a German city) in their 'B&W 800 D3' review. Both magazines also state the exact same measurement hardware, and the same limitations of their measurement chamber (mainly due to floor reflections).

The important take-away is that the measurements are inaccurate below ~100 Hz, due to the measurement chamber's nature (it is not fully anechoic).

A 'Klippel NFS' would provide much more reliable data even <100 Hz, as their proprietary algorithms have proven to accurately remove reflections for analysis purposes in the whole measurement spectrum. This is also why Klippel states a higher accuracy than common measurements in an anechoic chamber could provide: even fully anechoic chambers often suffer from interferences of the outside world [<50 Hz].


[EDIT]

For a description of the magazines' measurement environment, see here [make sure to click on the ENGLISH (GOOGLE TRANSLATE) field on top of the article].
 
Last edited:

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,399
You're suggesting my response was in defense of the Olive Preference Rating. It wasn't, that is a strawman. We are discussing the relevance of the spinorama and other measurements used to evaluate speaker performance and their audibility at ASR.

Fact, I never mentioned the Olive Preference Rating in this thread. I actually agree with most of what you said. I don't agree with @tuga stating as fact that the research done by Toole to characterize speaker performance and measurement importance along with medical studies regarding audibility of audio attributes like distortion are wrong and not accounting for "other" unknowns that are only able to be demonstrated by uncontrolled listening.

Ok, fair enough :)

Perhaps one reason you and @tuga are butting heads - or one reason I apparently can't understand what it is that you're arguing over here - is that the spinorama per se doesn't tell us anything about listener preference. It's just an objective data point (or rather, a representation of a large number of data points) that is meaningless absent interpretation.

I also think I'm interpreting @tuga's posts pretty differently to you. It seems to me that he's mostly saying that nonlinear distortion and SPL capacity (neither of which are represented in the spinorama) are likely correlated with listener preference.

We know that distortion is audible in the degree to which it commonly manifests in loudspeakers. And we know that listeners tend to prefer louder over quieter (all else equal). So it would seem to me to follow that listeners would tend to prefer a loudspeaker capable of producing less distortion at higher SPLs, all else equal. Does that seem controversial to you? Or is it just not what Tuga is saying in your opinion?

OTOH, I agree with you 100% that group delay performance is highly unlikely to correlate with listener preference under normal circumstances.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,281
Location
Oxford, England
The first measurements came from Neumann's development manager, Markus Wolff, as stated in the article. Only later, with image no. 11, does the magazine refer to "their own" measurements.

It is, however, interesting that those seem to be exactly the same as found on Neumann's website, as well as in "Sound & Recording" - just different excerpts of the graphs seem to have been published at different resolutions. (Why?) // The reason might be a commonly delegated external laboratory; the FIDELITY website describes an excursion to the measurement facility in Aachen (a German city) in their 'B&W 800 D3' review. Both magazines also state the exact same measurement hardware, and the same limitations of their measurement chamber (mainly due to floor reflections).

The important take-away is that the measurements are inaccurate below ~100 Hz, due to the measurement chamber's nature (it is not fully anechoic).

A 'Klippel NFS' would provide much more reliable data even <100 Hz, as their proprietary algorithms have proven to accurately remove reflections for analysis purposes in the whole measurement spectrum. This is also why Klippel states a higher accuracy than common measurements in an anechoic chamber could provide: even fully anechoic chambers often suffer from interferences of the outside world [<50 Hz].

Thanks for the extra info. My not being able to speak German is a massive shortcoming.
 

Shazb0t

Addicted to Fun and Learning
Joined
May 1, 2018
Messages
643
Likes
1,230
Location
NJ
Ok, fair enough :)
Thank you.
Perhaps one reason you and @tuga are butting heads - or one reason I apparently can't understand what it is that you're arguing over here - is that the spinorama per se doesn't tell us anything about listener preference. It's just an objective data point (or rather, a representation of a large number of data points) that is meaningless absent interpretation.
Let me try to make it as clear as possible then. The spinorama was specified by Toole as a means to objectively characterize loudspeaker performance. My understanding is that this is the entire point of it. The reasoning and research showing why the particular measurements that comprise the spinorama were chosen are described by Toole in his book and in other research done by the NRC and Harman. This is the basis for the audio science that is being performed here at ASR and what is used by many speaker companies to design objectively "good" loudspeakers. @tuga doesn't appear to accept the validity of this research as evidenced by statements made in this thread:
Hardly any classical music pros use JBL speakers. The M2 was designed for loudness and, being a Harman, to produce a pretty Spinorama.
But a Spinorama is manifestly insufficient to characterise performance.
I would rather have a lesser Spin with lower distortion than a pretty one with other audible issues. The B&W is a rather extreme example.
The difference between a near-perfect and a good Spinorama may be less audible than those other things that you seem to be disdaining
When pressed on these statements with the fact that FR and directivity have been shown in controlled studies to be much more important than group delay, reasonable levels of distortion, and other audio magic (which, by the way, are also generally accounted for in the measurements taken at ASR), there was just additional hand-waving.
Also, we have not seen any of the large Genelec or Neumanns measured here at ASR with which the toy-sized speakers cannot compete in areas other than the Spinorama.
And then there's the listening assessment (I am not referring to preference evaluation) which cannot be ignored.
More statements about "other" measurements that we are not aware of, which he believes are more important to speaker performance than what is done here, which is based on audio research. The big red flag here is his statement that listening assessments, which I am assuming to be uncontrolled (as far as I'm aware there isn't any controlled listening research that shows new audio properties which we aren't accounting for here), are the ultimate arbiter. IMO this statement is counter to everything that this site represents.
And in my view there at times appears to be some bias and some corner cutting and some box ticking and some controversial interpretation of the data in Toole's research, often too small samples, untrained listeners.
If people want to look at "the book" (which by the way has been revised a few times and is full of words like "appears" and "indicates" or "implies" which express some degree of uncertainty) as the be-all and end-all in loudspeaker science that is not my problem, but I don't have to agree.
It doesn't get any clearer than this one. I can't think of an alternative way to interpret this statement other than that it is questioning the validity of the research done by Toole and the audio research used as the basis for what Amir does on ASR. My point is that he doesn't have to agree with the research, but if he wants to constantly preach to the rest of us then he had better do some controlled audio research of his own and publish it for peer review.
I also think I'm interpreting @tuga's posts pretty differently to you. It seems to me that he's mostly saying that nonlinear distortion and SPL capacity (neither of which are represented in the spinorama) are likely correlated with listener preference.
Hopefully I've cleared up any confusion regarding my interpretation. I agree that distortion, sensitivity, CSD should be looked at in addition to the spinorama when categorizing and comparing loudspeakers. I believe that is why we do it here. However, I don't agree that they are more important to listener preference or that there are "other" attributes we are not accounting for that influence listener preference more than these as evidenced by the research.
We know that distortion is audible in the degree to which it commonly manifests in loudspeakers. And we know that listeners tend to prefer louder over quieter (all else equal). So it would seem to me to follow that listeners would tend to prefer a loudspeaker capable of producing less distortion at higher SPLs, all else equal. Does that seem controversial to you? Or is it just not what Tuga is saying in your opinion?

OTOH, I agree with you 100% that group delay performance is highly unlikely to correlate with listener preference under normal circumstances.
We both agree that distortion and group delay can be audible. The amount which is audible to the human ear has been studied. Generally if you were testing listener preference with loudspeakers they should be level matched, so higher SPL capability is something one should consider for their personal use case. Luckily, distortion at high SPL is measured here along with sensitivity and is given for each speaker. The data that you get here that you don't get elsewhere is the spinorama. IMO and from the research that I have seen, this is the most important piece of data to objectively evaluate a loudspeaker. I believe we both agree on that. Does @tuga?
 
Last edited:

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,335
Likes
6,700
I already have those PDFs but thanks for the links anyway.
Are you sure that those measurements came from Neumann?
I don't speak German but the magazine website shows other speakers being measured as well as the resulting measurements:

https://www.fidelity-online.de/test/lautsprecher/lautsprecher-messungen/

Anyway, we are digressing. I merely wanted to point out that there are other measurements to be made (HD, IMD, step response, CSD, in-room) which will help characterise the speaker's performance.

Lots of good reviews on that site. Thanks for the link.

Not surprising to see poor performance on most metrics for the 800 D3 (FR, directivity, phase behavior), but still a bit sad to see for such a beautiful loudspeaker (imo). I want a speaker with Genelec/Revel/Neumann performance, but B&W aesthetics.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,281
Location
Oxford, England
We both agree that distortion and group delay can be audible. The amount which is audible to the human ear has been studied. Generally if you were testing listener preference with loudspeakers they should be level matched, so higher SPL capability is something one should consider for their personal use case. Luckily, distortion at high SPL is measured here along with sensitivity and is given for each speaker. The data that you get here that you don't get elsewhere is the spinorama. IMO and from the research that I have seen, this is the most important piece of data to objectively evaluate a loudspeaker. I believe we both agree on that. Does @tuga?

You appear to be satisfied with current knowledge in regard to audibility thresholds. That is a choice you make.
We may know a lot about electronics and acoustics, perhaps even the physiology of the ear, but not as much in regard to psychoacoustics.

We don't seem to know much about transducer performance at very low levels (apart from a slight mention by Putzeys and by Dali).

I would venture that close-mindedness is not good science but then again I am not a scientist... (There's one sitting next to me right now who couldn't care less about audio)
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,399
Let me try to make it as clear as possible then. The spinorama was specified by Toole as a means to objectively characterize loudspeaker performance. My understanding is that this is the entire point of it. The reasoning and research showing why the particular measurements that comprise the spinorama were chosen are described by Toole in his book and in other research done by the NRC and Harman. This is the basis for the audio science that is being performed here at ASR and what is used by many speaker companies to design objectively "good" loudspeakers.

Ok yes, I do essentially agree with that, notwithstanding a couple of caveats that I'll mention below..
We both agree that distortion and group delay can be audible. The amount which is audible to the human ear has been studied. Generally if you were testing listener preference with loudspeakers they should be level matched, so higher SPL capability is something one should consider for their personal use case. Luckily, distortion at high SPL is measured here along with sensitivity and is given for each speaker. The data that you get here that you don't get elsewhere is the spinorama.

I agree (although I also think that distortion at low-moderate SPL can be a real issue - which is also, fortunately, measured at ASR). I think Toole agrees on this point, too. For example (Sound Reproduction Ch. 5, p. 128):
Non-linear distortion is... not normally a factor in full-sized domestic and professional monitor loudspeakers. But, as described in that section, it does happen. It happens frequently, and indeed is expected, in small portable loudspeakers that are challenged in bandwidth, amplifier power and transducer capabilities. In all cases, subjective evaluation is the only certain metric.

This is why I'm not sure why you object so strongly to statements like these from @tuga (other than because of the strident language it's worded in ;)):
Also, we have not seen any of the large Genelec or Neumanns measured here at ASR with which the toy-sized speakers cannot compete in areas other than the Spinorama.

So, there's caveat one. Distortion and SPL capability. Likely to be a problem (where present) at high SPLs, but also in some cases of audible concern at low SPLs. We all seem to agree on this one (and yeh, ASR has this more or less covered, notwithstanding that it doesn't produce IMD measurements, which would tend to separate two-way from three-way designs better than mere HD sweeps).
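The HD-vs-IMD point is easy to demonstrate numerically: a single-tone sweep only ever excites harmonic products, while two simultaneous tones through the same nonlinearity also produce sum and difference products that the sweep never shows. A toy sketch (the cubic term is just a stand-in for real driver misbehaviour, not a model of any particular speaker):

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                  # exactly 1 s -> 1 Hz FFT bins
f1, f2 = 440.0, 3000.0                  # two simultaneous test tones
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Memoryless cubic nonlinearity standing in for driver misbehaviour.
y = x + 0.05 * x ** 3

spec = np.abs(np.fft.rfft(y)) / len(y)  # amplitude/2 at each exact bin

def level(f_hz):
    """Spectrum magnitude at f_hz (valid because bins are 1 Hz wide)."""
    return spec[int(round(f_hz))]

hd3 = level(3 * f1)                     # harmonic product: a sweep sees this
imd = level(f2 - 2 * f1), level(f2 + 2 * f1)  # intermod products: it doesn't
```

With both tones playing, products appear at f2 ± 2·f1 (2120 Hz and 3880 Hz) at higher levels than the third harmonic itself; feed the same nonlinearity one tone at a time and those bins stay empty, which is exactly why an HD sweep can flatter a driver that has to reproduce both bands at once.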

Caveat two to my mind is that the spinorama doesn't distinguish between horizontal and vertical polar radiation. For example, a coaxial speaker with very symmetrical 360° radiation can give a very similar spin to a non-coincident multi-way speaker with very different vertical and horizontal radiation patterns. Or, for example, a speaker with wide directivity in one plane and narrow directivity in the other may give a similar spin to a speaker with medium directivity in both planes. Or finally, a speaker with issues in its off-axis response in one plane may compensate for these in the spin if it has inverse issues in the other plane (if you've ever played around designing speakers, you can work out how to "game" the spinorama in this way, in fact).

I think we'd be oversimplifying things to imagine that all the examples I've just mentioned wouldn't sound significantly different from each other, or that none would tend to be preferred over others.
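The plane-averaging point can be shown with toy numbers: any spin curve that pools horizontal and vertical responses will partially hide a defect in one plane behind a compensating defect in the other. A simplified sketch, where an equal-weight power average stands in for the actual CTA-2034 angle groupings (the 3 dB figures are invented for illustration):

```python
import numpy as np

freq_hz = [1000.0, 2000.0, 4000.0]          # toy grid, levels in dB

# Speaker A: well behaved in both planes.
a_horiz = np.array([0.0, 0.0, 0.0])
a_vert = np.array([0.0, 0.0, 0.0])

# Speaker B: a +3 dB horizontal flare sits opposite a -3 dB vertical
# crossover dip at 2 kHz -- a 6 dB discrepancy between planes.
b_horiz = np.array([0.0, 3.0, 0.0])
b_vert = np.array([0.0, -3.0, 0.0])

def plane_average_db(horiz_db, vert_db):
    """Equal-weight power average of the two planes; a stand-in for
    the spin curves that pool horizontal and vertical responses."""
    p = (10 ** (horiz_db / 10) + 10 ** (vert_db / 10)) / 2
    return 10 * np.log10(p)

er_a = plane_average_db(a_horiz, a_vert)
er_b = plane_average_db(b_horiz, b_vert)
# The averaged curves differ by under 1 dB at 2 kHz, although the
# two speakers radiate very differently there.
```

A 6 dB disagreement between planes collapses to less than 1 dB in the averaged curve, which is the sense in which a spin can be "gamed".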

What the spin is good for is as a tool for quickly identifying problems in speakers, for making sure that a speaker checks most of the important boxes. If there's something amiss in the spin, then it's likely an audible problem. For this reason too, it makes sense that Olive used spin data as the basis for his model. Overall, spins capture a lot of useful data in a simple format, and these data generally correlate reasonably well to listener preference.

But - and I guess this is my key point - there may be problems that are not necessarily apparent in the spin.
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,399
We may know a lot about electronics and acoustics, perhaps even the physiology of the ear, but not as much in regard to psychoacoustics.

We don't seem to know much about transducer performance at very low levels (apart from a slight mention by Putzeys and by Dali).

What don't we know in your opinion?
 

carewser

Senior Member
Joined
Sep 30, 2020
Messages
310
Likes
284
Location
Victoria, BC
The Audioengines are in the kitchen. The main listening room is about 15'x19', with the seating position about 14' from the speakers. I have JBL 708P's in there, and they hold up nicely. They are also 500W per speaker (2x250W amps).

How many sound systems do you have that you have one in your kitchen?
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,281
Location
Oxford, England
What don't we know in your opinion?

Do you mean about speakers?
I see people not only accepting the Spinorama as sufficiently informative in regard to speaker performance (the simplistic tonal-balance-trumps-everything-else approach) but also dismissing the importance of measuring the driver behaviour and performance in the time domain.
If you look at this tech sheet by Klippel you will see mention of several causes of loudspeaker-driver non-linearities. Dali and Purifi mention hysteresis.
Why would/should these suddenly become unimportant when measuring a loudspeaker?
We are no longer limited by vinyl and mics; ADCs, digital audio, DACs and amps have a S/N ratio better than 100 dB and very low distortion.
If you picked a few loudspeakers of identical topology and performance potential, put them in a treated room (the BBC did this by the way, with sound engineers in their anechoic chamber) and EQ'd the response to be identical on-axis would trained listeners not be able to differentiate them from one another?
Tonal balance is the most obvious and primary characteristic. It may be enough to get this right for most people but we're supposed to be a demanding bunch.

You can grab two mid-woofers or tweeters that can be coaxed to produce a reasonably similar frequency response on- and off-axis in a loudspeaker, yet one produces audible distortion and the other does not. Can you tell this from looking at a Spin?
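This can be illustrated with a toy large-signal model: give a driver an excursion-dependent force factor Bl(x), one of the Klippel nonlinearities mentioned above, and its small-signal frequency response stays essentially unchanged while odd-order harmonics grow rapidly with level. A sketch, with the droop coefficient chosen arbitrarily:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t)          # 100 Hz test tone

def driver_output(x, beta=0.1):
    """Toy motor model: force factor droops with excursion,
    Bl(x) = Bl0 * (1 - beta * x**2).  Linear at small excursion,
    odd-harmonic distortion at large excursion."""
    return (1.0 - beta * x ** 2) * x

def h3_ratio(amplitude):
    """Third-harmonic level relative to the fundamental."""
    spec = np.abs(np.fft.rfft(driver_output(amplitude * tone)))
    return spec[300] / spec[100]             # 1 Hz bins: 300 Hz / 100 Hz

quiet = h3_ratio(0.1)   # small excursion: H3 around 0.025%
loud = h3_ratio(1.0)    # large excursion: H3 around 2.7%
```

Because the distortion scales with excursion squared, a low-level FR sweep of this driver would look indistinguishable from a linear one: the problem only shows up in dedicated distortion measurements at realistic drive levels.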
 

Shazb0t

Addicted to Fun and Learning
Joined
May 1, 2018
Messages
643
Likes
1,230
Location
NJ
You appear to be satisfied with current knowledge in regard to audibility thresholds. That is a choice you make.
We may know a lot about electronics and acoustics, perhaps even the physiology of the ear, but not as much in regard to psychoacoustics.

We don't seem to know much about transducer performance at very low levels (apart from a slight mention by Putzeys and by Dali).

I would venture that close-mindedness is not good science but then again I am not a scientist... (There's one sitting next to me right now who couldn't care less about audio)
Just so we're clear on what you're saying here. The implication is that I'm close-minded for asking that you provide evidence for your claims that run counter to the available research on the subject. That this is the "choice" I make, as opposed to your choice, where anything and everything uttered is plausible and therefore valuable for discussion. Nothing is ever provable, so everything is up for grabs. Did I get that right?

Secondly, you're implying that there are differences between speakers that may affect something that happens in the brain which we are incapable of measuring. Some sort of brain magic attribute, if you will, and that this speaker attribute which hasn't been measured or shown to exist in controlled listening experiments is perhaps more strongly indicative of listener preference than any of the measurements that we consider on ASR. Is that also correct?

I've already touched much on distortion and that it is valuable and covered in the measurements here. It is disingenuous to suggest Toole didn't consider it when specifying the spinorama. The point of using spinoramas as a comparison is when the other potentially audible issues, like distortion, are below an audible level. Nobody here is suggesting that two speakers with identical spins, except where one has massively audible distortion, will sound the same. The only one suggesting that as an argument is you, in typical strawman fashion. I've made it very clear that we should consider other factors that have been proven to have audibility thresholds. I'm suggesting that those are already being accounted for here in addition to the spinorama. You have suggested there are additional "other" factors. Again I ask, what are they? The brain magic stuff?
 
Last edited:

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,399
You can grab two mid-woofers or tweeters that can be coaxed to produce a reasonably similar frequency response on- and off-axis in a loudspeaker, yet one produces audible distortion and the other does not. Can you tell this from looking at a spin?

I agree about the limitations of the spin (see above). But I do think (other than the lack of IMD measurements) that ASR takes measurements beyond the spin that allow us to characterise the loudspeaker's performance reasonably well. Between the spin, the polar response measurements, and the distortion measurements, we get a very good (albeit not quite perfect) picture of the speaker's performance.

I see people not only accepting the Spinorama as sufficiently informative in regard to speaker performance (the simplistic tonal-balance-trumps-everything-else approach) but also dismissing the importance of measuring the driver behaviour and performance in the time domain.

What measurements would you suggest should be carried out on driver behaviour (as opposed to the behaviour of the whole speaker)?

And (keeping in mind the large body of research that shows that moderate, gradual changes in group delay that are typical of passive loudspeaker crossovers are inaudible) what specific aspects of time domain performance correlate to subjective preference in your opinion, and what is the scientific basis for your belief in that correlation?

If you picked a few loudspeakers of identical topology and performance potential, put them in a treated room (the BBC did this by the way, with sound engineers in their anechoic chamber) and EQ'd the response to be identical on-axis would trained listeners not be able to differentiate them from one another?
Tonal balance is the most obvious and primary characteristic. It may be enough to get this right for most people but we're supposed to be a demanding bunch.

I agree with this 100%. Some speakers would arguably produce low enough distortion that, for most music at moderate listening levels, differences would be negligible or indistinguishable, but the majority would certainly not.

You can grab two mid-woofers or tweeters that can be coaxed to produce a reasonably similar frequency response on- and off-axis in a loudspeaker, yet one produces audible distortion and the other does not. Can you tell this from looking at a spin?

Sure, but doesn't everyone agree here that nonlinear distortion is an important metric? Doesn't ASR measure that (at least HD, anyway) and publish the measurements alongside the spin? (I agree that IMD measurements would be good to have, too, ideally.)

EDIT: I forgot before - compression measurements would also be nice!
 
Last edited:

BDWoody

Chief Cat Herder
Moderator
Forum Donor
Joined
Jan 9, 2019
Messages
6,948
Likes
22,625
Location
Mid-Atlantic, USA. (Maryland)
How many sound systems do you have that you have one in your kitchen?

LS50's in the bedroom.
AudioEngines in kitchen, second pair in garage.
JBL708P's and 705P's in main listening room
Paradigm Studio 40's in office with the Devialet..
Misc Paradigm and Triad Speakers in downstairs theater.
JBL705P's on desk.
JBL306's in my son's room now as his keyboard speakers, but normally those are in the guest room.
2 pairs of outdoor speakers.

I think that's it.
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,523
Likes
3,745
Location
Princeton, Texas
I see people... dismissing the importance of measuring the driver behavior and performance in the time domain.

Agreed.

My understanding is that one area of importance is the phase relationship between fundamentals and overtones. When the overtones arrive at the same instant as the fundamentals, there is a brief spike in SPL. When the overtones are smeared out in time, those spikes disappear.

The reason those spikes matter has to do with signal-to-noise ratio, which in turn has to do with clarity and intelligibility.
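The spike effect is straightforward to reproduce numerically: build a harmonic complex with all overtone peaks aligned, then randomise the phases. The magnitude spectrum, which is all a steady-state FR measurement shows, is identical, but the crest factor (peak-to-RMS ratio) drops sharply. A toy sketch with a 20-harmonic, equal-amplitude complex (the numbers are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
harmonics = np.arange(1, 21)            # fundamental (100 Hz) + 19 overtones

def waveform(phases):
    """Equal-amplitude harmonic complex with the given phases."""
    return sum(np.sin(2 * np.pi * 100 * k * t + p)
               for k, p in zip(harmonics, phases))

def crest(y):
    """Peak-to-RMS ratio."""
    return np.max(np.abs(y)) / np.sqrt(np.mean(y ** 2))

aligned = waveform(np.full(20, np.pi / 2))         # overtone peaks coincide
smeared = waveform(rng.uniform(0, 2 * np.pi, 20))  # phases randomised
```

With the peaks aligned the crest factor is 20/√10 ≈ 6.3; randomising the phases typically brings it down to around 3, with no change at all to the magnitude spectrum.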

Here is David Griesinger giving a lecture which includes a demonstration of this phenomenon. Included in the clip is background information wherein David explains the auditory significance of "proximity", which is related to (and perhaps synonymous with) clarity, and which correlates well with preference and engages our attention. The clip should be cued up to start at 1:24, and the relevant section ends at about 6:16:


Note that the "Lokki" Griesinger quotes is Finnish acoustics researcher Tapio Lokki, rather than the step-brother and nemesis of Thor.
 
Last edited: