
Evidence-based Speaker Designs

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,399
It's still the case that no one can prove that a person who knows they are taking part in a listening test doesn't have changed perception or reduced critical faculties.

This is technically true, but no experiment can prove something like this.

How would you set up an experiment to try to establish that people who take part in listening tests do have changed perception (in some relevant way) or reduced critical faculties?
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,065
Location
Zg, Cro
How would you set up an experiment to try to establish that people who take part in listening tests do have changed perception (in some relevant way) or reduced critical faculties?

He wouldn't, because it cannot be done. But hey, it's Cosmik, we all expect him to turn everything into "philosophy". :D
 

Sancus

Major Contributor
Forum Donor
Joined
Nov 30, 2018
Messages
2,923
Likes
7,616
Location
Canada
Yeah I mean, are we sure that Toole speakers DON'T dominate the market? AFAIK the various Harman brands are some of the biggest speaker manufacturers out there. Another big one is Bose, and I know they've done a lot of their own research, but I haven't seen much information about how their speakers stack up against Harman's research.

It's actually hard to think of big manufacturers that intentionally deviate from flat anechoic response? Maybe Klipsch?

There's certainly a wide variety of low-volume, high-price boutique hi-fi manufacturers that ignore research or have their own "flavor" of research, but I don't think that segment of the market can even be considered significant at this point.
 

Bjorn

Major Contributor
Audio Company
Forum Donor
Joined
Feb 22, 2017
Messages
1,286
Likes
2,562
Location
Norway
You still haven't defined what you mean by "collapsing polar". The sloping down of power response?

Do you have literature on the audibility of time alignment to support your claims?

Agreed on vertical lobing, but given the relatively significant linear distortion (especially in the treble) I've seen measured from CBTs (which you seem to be implying), the question is one of whether the directivity performance outweighs the linear distortion within the beam. There is also the pragmatic issue of nearfield listeners not having the space to set up a ground-plane array.
I assumed a collapsing polar was well understood here. It basically refers to a directivity that shifts early in frequency: the speaker loses its directivity rather quickly and is only constant over a limited frequency range.

That's exactly what you get with most horns combined with a front-firing woofer on the market, as the horn/waveguide is far too small to maintain constant directivity over a wide frequency range. This leads to strong coloration from the reflected energy in the room, and the frequency response becomes a roller coaster where the speaker's directivity collapses and gets broader. You need a seriously large horn to avoid this (something that isn't easily sold in the market), or you combine it with e.g. a cardioid/dipole to match the directivity to some degree, but that will have other weaknesses.
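The "too small to control directivity" point can be illustrated with an idealized model. A minimal sketch, assuming a rigid circular piston in an infinite baffle (not a horn, but the same aperture-size-versus-wavelength physics applies): the radiation pattern only develops a null, i.e. starts to beam, once ka exceeds about 3.83, so a small source stays wide until high frequencies while a large woofer beams much lower, and the polar response flares at the crossover. The driver radii below are illustrative, not taken from any specific speaker.

```python
import math

C = 343.0  # speed of sound in air, m/s

def first_null_angle_deg(radius_m: float, freq_hz: float):
    """First-null angle of a rigid circular piston in an infinite baffle.

    Returns None when ka <= 3.8317 (no null: the source still radiates
    broadly at that frequency).
    """
    ka = 2 * math.pi * freq_hz * radius_m / C
    if ka <= 3.8317:
        return None
    return math.degrees(math.asin(3.8317 / ka))

# A ~160 mm-diameter woofer (a ≈ 0.08 m) vs. a small 50 mm-diameter
# waveguide/tweeter (a ≈ 0.025 m):
for f in (1000, 2000, 4000, 8000, 16000):
    print(f, first_null_angle_deg(0.08, f), first_null_angle_deg(0.025, f))
```

The woofer starts beaming by roughly 4 kHz while the small source is still essentially wide past 8 kHz, which is the directivity mismatch through the crossover region that a "collapsing polar" describes.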

The importance of signal alignment with a crossover near the sensitive area is very easily heard with decent speakers in a decent acoustic room when you AB test. It's so audible that I personally see no reason for a blind test to back it up. There should be several papers on this, but I don't have time to look for them now. I believe "svart-hvitt" posted several papers/articles earlier in another thread on this forum not too long ago.
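For what it's worth, the on-axis effect of misalignment at the crossover is easy to model. A minimal sketch, assuming an ideal 4th-order Linkwitz-Riley crossover and a pure time offset on the tweeter branch (both my own illustrative choices, not anything Bjorn specified):

```python
import cmath
import math

def lr4_pair(f: float, f0: float):
    """Ideal Linkwitz-Riley 4th-order low/high-pass at crossover f0,
    built as squared 2nd-order Butterworth sections."""
    s = 1j * (f / f0)  # normalized complex frequency
    denom = s * s + math.sqrt(2) * s + 1
    return (1 / denom) ** 2, (s * s / denom) ** 2

def summed_db(f: float, f0: float, tweeter_delay_s: float) -> float:
    """On-axis level (dB) of the summed branches with the tweeter delayed."""
    lp, hp = lr4_pair(f, f0)
    hp *= cmath.exp(-1j * 2 * math.pi * f * tweeter_delay_s)
    return 20 * math.log10(abs(lp + hp))

f0 = 2000.0
print(summed_db(f0, f0, 0.0))     # aligned branches: flat (0 dB) at crossover
print(summed_db(f0, f0, 0.0002))  # 0.2 ms tweeter offset: ~ -10 dB notch
```

Even a fraction of a millisecond of inter-driver offset carves a notch around the crossover frequency, which is consistent with the claim that misalignment near a sensitive band is easy to hear in an AB comparison.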

I wasn't referring to CBT or any specific speaker design. It was a general statement.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
OK. Bye.
 

Bjorn

Major Contributor
Audio Company
Forum Donor
Joined
Feb 22, 2017
Messages
1,286
Likes
2,562
Location
Norway
In your view, what weaknesses specifically?
Higher distortion, doesn't match the sensitivity of the horn, doesn't necessarily have the same beamwidth either, and with a smaller horn the crossover causes phase anomalies and superposition in a sensitive area, which is very audible IMO.
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,399
Higher distortion, doesn't match the sensitivity of the horn, doesn't necessarily have the same beamwidth either, and with a smaller horn the crossover causes phase anomalies and superposition in a sensitive area, which is very audible IMO.

Higher distortion than what?

What's the problem in your view with a sensitivity mismatch?

The beamwidth can be matched quite easily (and is in every competent design).

Could you explain your last point please? What horn profiles (or do you mean all horn profiles)? "Smaller" relative to what? Causes phase anomalies how?
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,383
Likes
24,749
Location
Alfred, NY
Just quickly going back to this, what is your information on market dominance and its source? What speakers are dominant?

One also has to start with defining specifically what market you're talking about. The audio market is highly segmented with different requirements and preferences for different segments. Within the "traditional separates, high quality stereo" niche, the design guidelines from Toole's research may well be dominant.
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
I think Toole himself has commented on the challenges of dealing with the Harman marketing dept :) . Final products won't be based purely on the best technical design. They are also designed to sell and must appeal to the public's desires and conceptions (and misconceptions) of what's good. A prime example is the earlier discussion of wide baffles. Even if they are the right thing to do (no comment from me about that), I think most of us would agree the thin box will probably win on aesthetic grounds.

I'm not sure I agree with the comments about idolators on ASR. It's just that there appear to be few, maybe no, contradictory scientific views on what Toole has presented. What is presented is entirely un-contentious IMO. Are we surprised that speakers designed to those principles are found to be preferred? If so, I don't know why, as those principles are entirely logical and concur with wider research.

Instead of pointing to weaknesses in Toole’s research, let me suggest some general problems and weaknesses in audio research and in the audio research consensus. This is from the perspective of another discipline, where I have 20 years of experience in what we may call applied research, including leading a team of researchers for a decade (please note, I write «researchers», not «scientists» in white lab coats).

SAMPLE SIZE: In audio research, sample sizes are often (almost always?) small. I would be frustrated if the same were true in my field. However, many small samples can give you a weight of the evidence «feel», as a sort of a metastudy. Because studies across research teams don’t follow the same methodology and input can be different as well, the researcher is not in total control, however.
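To put the sample-size point in concrete terms, a standard power calculation shows how quickly the required n grows as the effect being hunted gets smaller. A sketch using the usual normal-approximation formula for a paired, two-sided test (the alpha and power values are the conventional defaults, not figures from any audio study):

```python
from math import ceil
from statistics import NormalDist

def paired_n(effect_d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size for a paired (one-sample) two-sided test
    to detect a standardized mean difference `effect_d` (Cohen's d),
    via n ≈ ((z_{1-α/2} + z_{power}) / d)²."""
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(power)) / effect_d) ** 2
    return ceil(n)

for d in (0.8, 0.5, 0.2):  # large / medium / small effects
    print(d, paired_n(d))   # → 13, 32, 197 listeners respectively
```

With a couple of dozen listeners, only fairly large effects are reliably detectable; subtle preference differences would need samples far beyond what most published listening tests use, which is exactly why small studies mainly contribute as weight-of-evidence.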

MEASUREMENT WITHOUT THEORY: Sometimes, audio researchers measure without an underlying theory of how the world works. An audio scientist is not a neurologist, an expert in the mind and the human information processor that is the brain. An audio researcher may have a strong opinion on how humans react to sound input, even if he doesn't understand how humans process information. Audio science may need to cross borders into other disciplines in order to reach new frontiers. How curious are audio researchers really when it comes to neurology, or even psychology?

STILL LACK OF CONSENSUS: Sometimes I am surprised at how easily people wave off research in audio. Some time ago I was in a forum discussion with the author of Audiolense (a discussion on the “correct” slope of the in-room frequency response curve) when he wrote: “I ploughed through Toole’s book years ago but found little of use”. So it seems that even though audio is a quantitative-leaning science, development work takes place without countering, or building on, established research. Is this the kind of behaviour you would expect in a mature science?

To come back to the thread title: What constitutes “evidence” if sample sizes are small, if there is (some!) measurement without theory and if consensus is lacking in some important areas of audio science?
 

DDF

Addicted to Fun and Learning
Joined
Dec 31, 2018
Messages
617
Likes
1,355
Audiophiles need to get out of their heads that anything but a few specialist nerd recordings are made for the purpose of accuracy. This is the circle of confusion. Most recordings are an artistic interpretation. The sound of the recording is an implicit part of the art, and artists do not seek accurate. They produce an artistic product, musically and aurally.

Sort of. The circle of confusion concept has little to do with artist intent. The concept is whether the playback chain reproduces the audio event in the same manner as experienced by the creators, regardless of intent.

I recall having a similar debate with jj on rec.audio in 1993 (!): I've long been of the opinion that since the end user has no access to the creator's environment, from an aesthetic perspective all that matters is whether they think playback is closer to the desired intent, and so their mistaken perspectives are an absolutely valid part of the process. But as engineers aiming to give users a facsimile of artist intent, our role should be to try to have playback replicate the artist's intent as objectively and accurately as possible, and of course the only way to do that is through standardization.

It's no different than any other data transmission protocol. If Ethernet lacked standardization, what came out of the pipe would be garbage, objectively. Of course the line is greyer in audio because we are all unique decoders if untrained. I think commercially there is room for designs that cater to untrained perspectives, and that's valid for what it is.
 

noobie1

Active Member
Joined
Feb 15, 2017
Messages
230
Likes
155
Location
Bay Area
IMHO you have it backwards here. The outcome can't render the methodology specious.

I take specious to mean something along the lines of "superficially plausible, but in fact wrong". But what you seem to be saying is that, because the outcome is superficially implausible (i.e. too perfect), the methodology must be wrong.

The methodology might be wrong, but looking simply at the outcome won't help us determine whether or not this is so. To make this determination, we need to look at the methodology per se. You've raised the stereo/mono point as a potential methodological issue. That's a way of looking at the problem that makes sense, in my view (even though I don't share this particular concern).



Essentially, you're asserting here that how a speaker sounds is the most important factor in its market success. What evidence do you have for this?



Again, I feel that this is a criticism that is so general in scope as to be nebulous. Asking what specifically is amiss in the research is a more fruitful approach than making a vague attempt to undermine the credibility of the entire field, surely?

OK. I'm having difficulty finding the raw DBT data. I found two graphs from a Sean Olive blog post from 2009:

http://seanolive.blogspot.com/2009/04/dishonesty-of-sighted-audio-product.html?m=1

Couple of quick thoughts:

1) Test subjects are not randomized. You have 40 Harman employees. Having owned Harman speakers, I know their attributes and would be able to pick them out in a DBT. Obviously Harman employees would have brand loyalty and they may already be aware of Toole design principles and how those speakers would behave. Also 40 test subjects would be considered small in a pharma test.

2) Four speakers is not nearly enough to accurately represent all the different kinds of design principles that are available on the market. I know this test is intended to show the merits of blind testing.

3) The error bars are huge. At least 3 of the 4 speakers could be the winner based on the bars.

If someone could point me to more raw data, I would greatly appreciate it.
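On the error-bar point, a quick way to sanity-check whether two mean ratings are distinguishable is to compute their confidence intervals. A minimal sketch with hypothetical numbers (the means, standard deviation, and n below are invented for illustration, not taken from Olive's data):

```python
import math

def ci95(mean: float, sd: float, n: int):
    """Normal-approximation 95% confidence interval for a mean rating."""
    half_width = 1.96 * sd / math.sqrt(n)
    return mean - half_width, mean + half_width

# Hypothetical preference ratings (0-10 scale) for two speakers, n = 40 each:
a_lo, a_hi = ci95(6.0, 1.5, 40)
b_lo, b_hi = ci95(5.6, 1.5, 40)
print((a_lo, a_hi), (b_lo, b_hi))
print("overlap:", a_lo < b_hi)  # intervals overlap: the ranking is uncertain
```

One caveat: overlapping 95% CIs on two means do not by themselves prove the difference is non-significant (the proper test is on the paired difference directly), but heavily overlapping bars do mean the plotted ranking should be read with caution.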
 

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,317
Location
Albany Western Australia
Sort of. The circle of confusion concept has little to do with artist intent. The concept is whether the playback chain reproduces the audio event in the same manner as experienced by the creators, regardless of intent.

I recall having a similar debate with jj on rec.audio in 1993 (!): I've long been of the opinion that since the end user has no access to the creator's environment, from an aesthetic perspective all that matters is whether they think playback is closer to the desired intent, and so their mistaken perspectives are an absolutely valid part of the process. But as engineers aiming to give users a facsimile of artist intent, our role should be to try to have playback replicate the artist's intent as objectively and accurately as possible, and of course the only way to do that is through standardization.

It's no different than any other data transmission protocol. If Ethernet lacked standardization, what came out of the pipe would be garbage, objectively. Of course the line is greyer in audio because we are all unique decoders if untrained. I think commercially there is room for designs that cater to untrained perspectives, and that's valid for what it is.

Yes, I mentioned the lack of standardisation in another post, not only of the home replay system but equally of the monitoring system in the studio. Without standards you are buggered. Video/film production monitors are calibrated to standards, and this standard extends all the way to the home reproduction device. Audio, on the other hand, is all over the place.

Sure, some people may want whatever coloured sound, and that's fine. Some inaccurate speakers will be enjoyed; that's also fine. Hi-fi it's not, though.
 

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,317
Location
Albany Western Australia
OK. I'm having difficulty finding the raw DBT data. I found two graphs from a Sean Olive blog post from 2009:

http://seanolive.blogspot.com/2009/04/dishonesty-of-sighted-audio-product.html?m=1

Couple of quick thoughts:

1) Test subjects are not randomized. You have 40 Harman employees. Having owned Harman speakers, I know their attributes and would be able to pick them out in a DBT. Obviously Harman employees would have brand loyalty and they may already be aware of Toole design principles and how those speakers would behave. Also 40 test subjects would be considered small in a pharma test.

2) Four speakers is not nearly enough to accurately represent all the different kinds of design principles that are available on the market. I know this test is intended to show the merits of blind testing.

3) The error bars are huge. At least 3 of the 4 speakers could be the winner based on the bars.

If someone could point me to more raw data, I would greatly appreciate it.

The test you cite was not about the points you raise; it was about the effect of sighted bias. So I don't see the relevance.
 

DDF

Addicted to Fun and Learning
Joined
Dec 31, 2018
Messages
617
Likes
1,355
Sure, some people may want whatever coloured sound, and that's fine. Some inaccurate speakers will be enjoyed; that's also fine. Hi-fi it's not, though.

What I said was not this. My point is that some (perhaps more than an insignificant minority) of untrained listeners may think the less accurate playback is in fact more accurate. It's a much deeper concept than who "enjoys" what.

I think the concept of my post wasn't grasped.
 

pierre

Addicted to Fun and Learning
Forum Donor
Joined
Jul 1, 2017
Messages
962
Likes
3,046
Location
Switzerland
What I said was not this. My point is that some (perhaps more than an insignificant minority) of untrained listeners may think the less accurate playback is in fact more accurate. It's a much deeper concept than who "enjoys" what.

I think the concept of my post wasn't grasped.

Is it the same idea as: users of Walkman got used to a certain tonal balance and they like speakers that match what they are used to?
 

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,317
Location
Albany Western Australia
What I said was not this. My point is that some (perhaps more than an insignificant minority) of untrained listeners may think the less accurate playback is in fact more accurate. It's a much deeper concept than who "enjoys" what.

I think the concept of my post wasn't grasped.

Obviously not, but now you have articulated it more clearly.

In the data from Toole and Olive, trained listeners are superior at picking out faults with speakers; however, untrained listeners still come to broadly the same conclusions, with the same preferences.
 