
Why don't all speaker manufacturers design for flat on-axis and smooth off-axis?

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
Anecdotal evidence versus larger scale trials which AFAIK only the Harman group has done in a systematic way. Would you take a medicine that was shown to be effective in a 5-10 patient trial or one that was shown to be effective in a systematically done trial of over a hundred patients? Bech may be distinguished in his field but in fields like medicine, expert opinion is considered the weakest form of evidence and certainly takes a backseat to properly done large scale blinded randomized studies.

Show us actual data from studies of the caliber of the Harman group's that come up with different conclusions; then that's worth discussing. Otherwise it's a lot of Brownian motion.

The audio society blind test is an anecdote, of course. My point was really this: Based on audio research, which of Salón and M2 would you expect to win in a large series of competently performed blind tests? Would frequency response guide you to your answer?
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
Take the number of variables @Xulonn mentioned. The fact that the Salons were preferred in a relatively casual test doesn't have to mean a lot. It could well mean that both speakers are technically excellent but the dispersion makes the difference in taste. That would mean that in another test the M2s could be preferred. I'd say, take those results with a pinch of salt.

You said I should «take those results with a pinch of salt». That is my whole point: Many people put weight on the research effort of one research group even if a summary of a specific research field (say, directivity) shows that the picture is a bit broader and more complex (cf. the table from Evans et al.) than one initially thought.

Let’s focus on established «truths» in audio science instead. I am curious, which of the Salón and the M2 would you expect to win in a large series of blind tests based on established audio science? Which factors would guide you to your answer? How much weight would you put on frequency response?
 
Last edited:

Xulonn

Major Contributor
Forum Donor
Joined
Jun 27, 2018
Messages
1,828
Likes
6,311
Location
Boquete, Chiriqui, Panama
This is proper research, grounded in the fundamental question of what listeners prefer in controlled testing. The data is offered for the world to benefit from, competitors and all.

Thanks for the clarification, Amir. Those summaries and abstract excerpts are now indelibly seared into my memory, because embarrassment due to careless commenting has always been the best way for me to learn things - things that I should have already been at least aware of.

I would like my serving of crow sauteed in a garlic & butter sauce, if you don't mind. :cool:


Even as a retired person, supposedly with lots of time on my hands, I have been overwhelmed by the number of references to journal papers here, and did not do my homework - something I often accuse others of not doing. Shame on me!

It is interesting that the speakers I own - Paradigm Atoms - and the speakers I desire - Revel 206 - are both heavily influenced by loudspeaker research that started at Canada's National Research Council more than 30 years ago, and continues to this day at Harman's labs.
 

Xulonn

Major Contributor
Forum Donor
Joined
Jun 27, 2018
Messages
1,828
Likes
6,311
Location
Boquete, Chiriqui, Panama
which of the Salón and the M2 would you expect to win
To me, audio auditions - blind or not - are not about childish "winners" and "losers" games, but rather about determining the ability to detect differences and/or establishing preferences.
 
Last edited:

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
To me, audio auditions - blind or not - are not about childish "winners" and "losers" games, but rather about determining the ability to detect differences and/or establishing preferences.

Let me rephrase: Which would you think people on average would prefer, Salón or M2? Would frequency response guide you to your answer?
 

Xulonn

Major Contributor
Forum Donor
Joined
Jun 27, 2018
Messages
1,828
Likes
6,311
Location
Boquete, Chiriqui, Panama
Following up on the topic of Canadian audio research, the manufacturer of my current pair of loudspeakers says this, which doesn't sound like advertising hype to me:

How do you know when something sounds good? What does accuracy sound like?

To find out, Paradigm engineers worked closely with researchers at The National Research Council (NRC) in Ottawa, Canada, as they sought answers to these great questions of audiology, and more.

As a result of hundreds of scientific tests, over decades of research, analysts and investigators were able to show measurable characteristics of good sound, and what makes a speaker sound more accurate to our ears. They found a direct correlation between good sound and accurate measurements in three principal areas:


Flat Midrange
The highest-scoring speakers did not emphasize any one midrange frequency; unwanted resonances were well controlled.


Smooth Total Energy Response
The best scoring speakers dispersed sound uniformly — on-axis and off-axis frequency response curves were smooth and similar.


Low Distortion
The speakers with the highest scores also exhibited very low levels of distortion.

And, surprise, these three principal findings are among the most important and consistent characteristics of Paradigm loudspeakers. Using these kinds of fundamental scientific findings as a jumping-off point for our own extensive program of research and development, we have been able to virtually eliminate speaker anomalies and performance-robbing design flaws.
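As a rough illustration of what "flat midrange" and "smooth total energy response" can mean as numbers, here is a minimal Python sketch. The response curves are made up and the two metrics are generic (they are not Paradigm's or the NRC's actual criteria): flatness is scored as the dB spread around the band average, and dispersion uniformity as how closely the 30° off-axis curve tracks the on-axis shape.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.logspace(np.log10(300), np.log10(3000), 50)           # midrange band, Hz
on_axis = 85 + 0.8 * np.sin(np.log(freqs))                        # made-up on-axis SPL, dB
off_axis_30 = on_axis - 1.5 + 0.3 * rng.normal(size=freqs.size)   # made-up 30-degree curve, dB

# "Flat midrange": small deviation of the on-axis curve from its band average.
flatness_db = np.std(on_axis - on_axis.mean())

# "Smooth total energy response": on- and off-axis curves share the same shape,
# so the difference between them varies little across the band.
shape_diff = (on_axis - off_axis_30) - (on_axis - off_axis_30).mean()
similarity_db = np.std(shape_diff)

print(f"midrange deviation:           +/- {flatness_db:.2f} dB")
print(f"on/off-axis shape difference: +/- {similarity_db:.2f} dB")
```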
 

MSNWatch

Active Member
Joined
Dec 17, 2018
Messages
142
Likes
171
The audio society blind test is an anecdote, of course. My point was really this: Based on audio research, which of Salón and M2 would you expect to win in a large series of competently performed blind tests? Would frequency response guide you to your answer?

Do the blind test with 100+ people, not with a handful.
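A minimal simulation (standard library only, assuming a hypothetical 65% true preference rate) of why panel size matters: with a handful of listeners the sample "winner" disagrees with the actual population preference surprisingly often, while with 100+ listeners it almost never does.

```python
import random

def majority_agrees(true_rate: float, n: int, trials: int = 20_000) -> float:
    """Fraction of simulated panels whose majority vote matches the true preference."""
    agree = 0
    for _ in range(trials):
        votes = sum(random.random() < true_rate for _ in range(n))
        if votes > n / 2:
            agree += 1
    return agree / trials

if __name__ == "__main__":
    random.seed(1)
    for n in (7, 31, 101):  # odd panel sizes so there is always a majority
        print(f"n={n:3d}: the panel majority matches the true preference "
              f"{majority_agrees(0.65, n):.0%} of the time")
```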
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
Let me rephrase: Which would you think people on average would prefer, Salón or M2? Would frequency response guide you to your answer?
So your real question is which is going to be most preferred. The assumption is that Harman has gotten the finest results of any speaker they've made with the M2, or that their predictive model predicts that would be the case. One less formal test resulted in a narrow win for the Salon.
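For context on "their predictive model": this presumably refers to the multiple-regression preference rating Sean Olive published (AES, 2004). The sketch below uses the commonly cited coefficients as best I recall them, and made-up input values; treat both as assumptions for illustration, not as actual scores for the Salon or the M2.

```python
def olive_preference_rating(nbd_on: float, nbd_pir: float,
                            lfx: float, sm_pir: float) -> float:
    """Predicted listener preference from four spinorama-derived metrics:
    narrow-band deviation of the on-axis and predicted in-room responses,
    low-frequency extension (log10 of the -6 dB point in Hz), and the
    smoothness of the predicted in-room response."""
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Two hypothetical, well-behaved speakers with similar curves score similarly,
# which is exactly why a model like this may not separate them by much.
print(round(olive_preference_rating(0.30, 0.25, 1.40, 0.90), 2))
print(round(olive_preference_rating(0.32, 0.27, 1.50, 0.88), 2))
```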
 

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
632
So your real question is which is going to be most preferred. The assumption is that Harman has gotten the finest results of any speaker they've made with the M2, or that their predictive model predicts that would be the case. One less formal test resulted in a narrow win for the Salon.
The Salon, which, incidentally, Dr. Toole uses in his system.
 

daftcombo

Major Contributor
Forum Donor
Joined
Feb 5, 2019
Messages
3,687
Likes
4,068
If we take any speaker, EQ it with DSP to get a flat on-axis response, and find that its 30° off-axis response ends up the same as the Salón's, would both sound the same?
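One way to reason about this: an EQ filter acts on the signal before it reaches the drivers, so it shifts the on-axis and every off-axis curve by the same number of dB at each frequency and leaves the directivity untouched. A minimal sketch with made-up response curves (not measurements of the Salón or any real speaker):

```python
import numpy as np

freqs = np.logspace(np.log10(100), np.log10(10_000), 40)
on_axis = 85 + 2.0 * np.sin(np.log(freqs))            # made-up on-axis response, dB
off_axis_30 = on_axis - 3 - 0.002 * np.sqrt(freqs)    # made-up 30-degree curve, dB

eq_gain = 85 - on_axis          # EQ that flattens the on-axis curve to 85 dB
on_eq = on_axis + eq_gain       # now perfectly flat
off_eq = off_axis_30 + eq_gain  # the same EQ as heard 30 degrees off-axis

# The on/off-axis difference (the speaker's directivity) is exactly the same
# before and after EQ, so two speakers EQ'd to identical on-axis curves can
# still differ at other angles, and hence in-room.
assert np.allclose(on_axis - off_axis_30, on_eq - off_eq)
print("EQ changed the curves, but not the on/off-axis difference")
```

So even if the DSP'd speaker matched the Salón both on-axis and at 30°, any remaining audible differences would have to come from the other angles, distortion, compression and so on.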
 

Xulonn

Major Contributor
Forum Donor
Joined
Jun 27, 2018
Messages
1,828
Likes
6,311
Location
Boquete, Chiriqui, Panama
Let me rephrase: Which would you think people on average would prefer, Salón or M2? Would frequency response guide you to your answer?
Actually, although knowing the answer would be mildly interesting, I am not motivated to spend my time examining data and then trying to second-guess psychoacoustic science professionals.

I know that my interest in audio is related to my interest in human psychology and the seemingly infinite set of variations that determine our preferences - and influence our buying decisions. In spite of my reasonably well-rounded knowledge and understanding of music and audio, my own set of choices over the decades follows no obvious pattern, and often seems to be devoid of logical consistency. This would make it difficult to determine the primary influences that shape my constantly changing preferences, and what ultimately triggers my decision to make a purchase.

Rather than an interest in the likely preference for one of the two speaker models, I am more fascinated by your obsession with the hypothetical question related to these two particular loudspeakers, and your interaction with those who choose to respond to your comments. Perhaps you could query your favorite audio scientist or perhaps even Dr. Toole - oh, but wait - you don't appear to trust him or believe the results of his research...
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
Actually, although knowing the answer would be mildly interesting, I am not motivated to spend my time examining data and then trying to second-guess psychoacoustic science professionals.

I know that my interest in audio is related to my interest in human psychology and the seemingly infinite set of variations that determine our preferences - and influence our buying decisions. In spite of my reasonably well-rounded knowledge and understanding of music and audio, my own set of choices over the decades follows no obvious pattern, and often seems to be devoid of logical consistency. This would make it difficult to determine the primary influences that shape my constantly changing preferences, and what ultimately triggers my decision to make a purchase.

Rather than an interest in the likely preference for one of the two speaker models, I am more fascinated by your obsession with the hypothetical question related to these two particular loudspeakers, and your interaction with those who choose to respond to your comments. Perhaps you could query your favorite audio scientist or perhaps even Dr. Toole - oh, but wait - you don't appear to trust him or believe the results of his research...

You wrote:

«I am more fascinated by your obsession with the hypothetical question related to these two particular loudspeakers...»

The reason why I chose these particular speakers was to make a practical case for applying audio science to gear that people know reasonably well. The case method is well known in situations where skill acquisition is a goal.

I am surprised you don’t think it’s intriguing to discuss the speakers’ attributes in order to put together a weight-of-the-evidence case for a majority preferring the one over the other. What’s the point of audio science if not to apply it to practical cases?

I also think the Salón vs M2 case is interesting because their frequency responses are similar, but people who took part in the audio society blind test describe their sound as quite different. Personally, I have only heard the M2.

It would be great if ASR promoted a language - based on science - to discuss why people tend to prefer one speaker over another even when the speakers appear similar frequency-response-wise.
 

Xulonn

Major Contributor
Forum Donor
Joined
Jun 27, 2018
Messages
1,828
Likes
6,311
Location
Boquete, Chiriqui, Panama
I also think the Salón vs M2 case is interesting because their frequency responses are similar, but people who took part in the audio society blind test describe their sound as quite different.

Because I have had a problem with short-term memory all of my life, I have difficulty remembering the many details of your often long and complicated comments. (And of course, my own often long and complicated comments can cause the same problem for some people :rolleyes:.) I do best when I have two comments - or articles - in separate desktop windows so I can jump back and forth to compare details.

It just doesn't make sense to me that the variances in test results could be related to the frequency response similarities based on published specifications. Could there have been a problem in the signal path/chain for one of the tests, resulting in a condition where the frequency responses were not actually accurate and to spec? (Were the frequency responses of the systems used for blind testing actually verified in the test environment before the tests began?)

Most people who are at least reasonably experienced and knowledgeable about audio and psychoacoustics know that loudspeaker-room interactions are the biggest variable when using well-engineered, accurate electronics and loudspeakers. Therefore I would immediately focus on the differences between the rooms and other details of the test environment and equipment for the two independent sets of blind tests.

Perhaps even the instructions and expectations - the "psychological" preparations and "foreplay" preceding the "act" of the actual testing could be a factor.

I do find these possibilities intriguing, and since you are apparently familiar with both testing events, can you provide a concise response about where we might learn about those factors for both test environments?
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
Because I have had a problem with short-term memory all of my life, I have difficulty remembering the many details of your often long and complicated comments. (And of course, my own often long and complicated comments can cause the same problem for some people :rolleyes:.) I do best when I have two comments - or articles - in separate desktop windows so I can jump back and forth to compare details.

It just doesn't make sense to me that the variances in test results could be related to the frequency response similarities based on published specifications. Could there have been a problem in the signal path/chain for one of the tests, resulting in a condition where the frequency responses were not actually accurate and to spec? (Were the frequency responses of the systems used for blind testing actually verified in the test environment before the tests began?)

Most people who are at least reasonably experienced and knowledgeable about audio and psychoacoustics know that loudspeaker-room interactions are the biggest variable when using well-engineered, accurate electronics and loudspeakers. Therefore I would immediately focus on the differences between the rooms and other details of the test environment and equipment for the two independent sets of blind tests.

Perhaps even the instructions and expectations - the "psychological" preparations and "foreplay" preceding the "act" of the actual testing could be a factor.

I do find these possibilities intriguing, and since you are apparently familiar with both testing events, can you provide a concise response about where we might learn about those factors for both test environments?

One thing that strikes me regarding the Salón vs the M2 is the fact that the M2 is a 2-way while the Salón is a 4-way. A 2-way is often a deficient compromise; why not in the case of the M2? Is there any research on 2-ways vs 3- or 4-ways?

Another apparent difference is width: 28 cm vs 51 cm (directivity related).

As for weight, the M2 is lighter (60 vs 80 kg, if I remember correctly).

The volume of the M2 is, however, almost double that of the Salón!

Can these attributes be put in a scientific context to predict preferences? Or are the attributes so complex that only a thorough set of blind tests would reveal which speaker is the more successful design?
 

MSNWatch

Active Member
Joined
Dec 17, 2018
Messages
142
Likes
171
One thing that strikes me regarding the Salón vs the M2 is the fact that the M2 is a 2-way while the Salón is a 4-way. A 2-way is often a deficient compromise; why not in the case of the M2? Is there any research on 2-ways vs 3- or 4-ways?

Another apparent difference is width: 28 cm vs 51 cm (directivity related).

As for weight, the M2 is lighter (60 vs 80 kg, if I remember correctly).

The volume of the M2 is, however, almost double that of the Salón!

Can these attributes be put in a scientific context to predict preferences? Or are the attributes so complex that only a thorough set of blind tests would reveal which speaker is the more successful design?
You CANNOT make any firm conclusions from this comparison because of the small sample size (n=6)!
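To put a number on that, a standard-library sketch of the exact binomial test: even a 5-to-1 split among six listeners is quite likely under a true 50/50 population preference, so it proves very little. (The n=6 is from the comparison under discussion; the 100-listener line is only for contrast.)

```python
from math import comb

def two_sided_p(k: int, n: int) -> float:
    """Two-sided exact binomial p-value for k votes out of n under a 50/50 null."""
    tail = min(k, n - k)
    return 2 * sum(comb(n, i) for i in range(tail + 1)) / 2 ** n

print(two_sided_p(5, 6))     # ~0.22: a 5-1 "win" is compatible with no real preference
print(two_sided_p(70, 100))  # ~1e-4: a clear majority of 100 listeners is not
```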
 
Last edited:

Kvalsvoll

Addicted to Fun and Learning
Audio Company
Joined
Apr 25, 2019
Messages
878
Likes
1,643
Location
Norway
FREQUENCY RESPONSE: AN ENTICING MARKETING STORY

! Warning! - - - long post!!! - - -

@Kvalsvoll, it’s not as if I am unfamiliar with the history or philosophy (for example epistemology) of science. You wrote:

«In science there is one correct answer, and then all the other answers are simply wrong. And the decision on what is correct and what is wrong is based in scientifical technical evidence, and the decision process in itself is based on logic. So there is no middle way».

What you describe here is actually a binary system, with only two outcomes, one or zero, absolutely correct or absolutely wrong. Limited numbers of states can be described by pure mathematics. If all properties can be fully predicted by pure logic alone, we don’t need experiments to confirm our beliefs or wisdom. I believe @Cosmik is in the camp of pure mathematics, a camp which I would call “philosophical”, not “scientific”.

Reproduction of sound is HIGHLY complex as it is a result of factors such as:

(1) The sound field coming from a loudspeaker can be formulated by complex functions, and
(2) Our perception of sound follows a series of complex cognitive processes.

In other words, the prediction of reproduced sound – which means combining (1) and (2) above – is so complex one could liken it to opening a can of worms. However, smart people dealing with complexity have had a recipe for ages: Vox populi. In 1907, Francis Galton, who according to Wikipedia was “an English Victorian era statistician, polymath, sociologist, psychologist, anthropologist, eugenicist, tropical explorer, geographer, inventor, meteorologist, proto-geneticist, and psychometrician”, had to revisit his estimation of the wisdom of the public. At the annual show of the West of England Fat Stock and Poultry Exhibition, people had a chance to guess the weight of an ox. The most correct estimates won prizes. Surprised by the outcome of the public’s guesses, Galton wrote:

“It appears, then, in this particular instance, that the vox populi is correct to within 1 percent of the real value (…) The result is, I think, more creditable to the trustworthiness of a democratic judgment than might have been expected”.
Source: http://galton.org/essays/1900-1911/galton-1907-vox-populi.pdf
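A minimal simulation of the effect Galton describes, with made-up numbers rather than his actual data, and assuming the individual errors are unbiased: single guesses scatter widely, but their median lands very close to the true value.

```python
import random
import statistics

random.seed(7)
true_weight = 1200                                               # hypothetical ox weight, lb
guesses = [random.gauss(true_weight, 120) for _ in range(800)]   # 800 noisy guesses

print(f"typical individual error:  ~{statistics.pstdev(guesses):.0f} lb")
print(f"error of the crowd median: {abs(statistics.median(guesses) - true_weight):.1f} lb")
```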

The polling method was later refined by none other than Project RAND to find fast solutions to complex problems by utilizing the power of group judgment, gathering a group of experts (see: https://en.m.wikipedia.org/wiki/Delphi_method). Needless to say, RAND was populated by the best science people, with the aim of winning the (cold) war. In other words, they didn’t search for a solution as if the outcome were a binary, philosophical one, but for a forecasting solution that was adaptive, robust and able to deal with complexity.

So it is in this tradition, the democratic vox populi line of thinking, that audio researchers use polls too instead of mathematics alone to find solutions to the sound reproduction problem, given the fact that predicting both (1) and (2) above is complex when (1) is isolated from (2) and highly complex when (1) and (2) are merged together into one process or experience.

Vox populi and the Delphi method cannot be formulated using mathematics. The outcome may be a binary one (“yes” or “no”), but the process leading to “correct” or “incorrect” is not one that can be described by an algorithm. Even the makers of “The Matrix” had to change the formula to yield a better outcome, and the formula needed tweaking all the time!

Because we have entered a scientific process that can be described as vox populi, the wisdom of crowds, we realize that the critique against the outcomes – often called “the science” – can be summed up by the usual arguments against the polling method. I have for about two decades now worked on vox populi-related issues in very big data sets. That gives me some familiarity with the discussion of scientific method, even if my field is not audio related.

My user name is svart-hvitt, which means black-white in Norwegian. I chose this user name because of my fascination with science that takes the form of correct-incorrect, like a binary one-zero process, while being aware that the binary one-zero outcome is a drifting one. I also took the name because I am intrigued by the fact that our utilization of digital ones and zeros is able to represent reality in a convincing manner, while at the same time knowing that “this is not a pipe, it’s a picture of a pipe” (see my avatar).

Science is about getting it about right when we try to describe the world, as Isaac Asimov brilliantly explained in the essay “The Relativity of Wrong”. He wrote:

“In short, my English Lit friend, living in a mental world of absolute rights and wrongs, may be imagining that because all theories are wrong, the earth may be thought spherical now, but cubical next century, and a hollow icosahedron the next, and a doughnut shape the one after.

What actually happens is that once scientists get hold of a good concept they gradually refine and extend it with greater and greater subtlety as their instruments of measurement improve. Theories are not so much wrong as incomplete”.
Source: https://chem.tufts.edu/AnswersInScience/RelativityofWrong.htm

I urge every ASR member to read the full essay, as it goes to the core of the scientific process, dealing with the naïve protests of a young student.

Asimov’s use of the word “incomplete” is critical here. When one points out that ASR’s go-to source of audio science may be incomplete, while also being potentially subject to biases (like position-related sound fields, i.e. factor (1) above, funding, etc.), it’s as if all hell breaks loose.

Previously, @oivavoi wrote:

“But take the issue of the dipole speaker, for example. In Harman studies, no good. In one of Søren Bech's studies, however, the dipole received the highest rating given a particular placement, and the worst rating, given a different placement”.
Source: https://www.audiosciencereview.com/...s-and-smooth-off-axis.8090/page-4#post-199430

Instead of unleashing an interesting discussion of Harman’s vox populi process, @oivavoi was forced to take a defensive position, as if one needs to apologize for bringing up critique of Harman’s methods and research results. The paper that @oivavoi quoted didn’t come into the discussion, even if it contains much of interest to this debate.
[Attached: table excerpt from Evans et al. (2009)]
Take a look at the excerpt from the article (Evans et al., 2009) above. The table shows at least one obvious weakness of the Harman research; the vox populi process was carried out from one loudspeaker position only. The researchers wrote:

“It is evident that the dependence of listeners’ fidelity ratings on position (and room) is also important. Whilst the dipole is rated as worst in Position 2 (less than 1m from the back wall, central), it is rated as best when moved to Position 1 (over 1m from back and side wall). This suggests that the perceived influence of directivity is dependent on both position and room type (…) It is also clear that investigations have been limited based upon the nature of the testing arrangement. The use of ‘clinical’ test environments is arguably not representative of true listening scenarios, and whilst the measured reverberation time may be similar to a domestic listening space, the nature of reflections in many standardised listening rooms are not. Also, the majority of tests are conducted with the listener remaining in a fixed ’sweet-spot’ position. If directivity is to be investigated fully, then listening at more than one position should be included, in order to exploit the characteristic traits of each radiation type — one type of directivity may promote well-balanced timbral and spatial listening across a room, however this would be overlooked with traditional testing methods”.
Source: https://www.researchgate.net/public..._sound_quality_-_a_review_of_existing_studies

The table also shows that research is as much about getting an overview of different voices as it is about seeking a favorite voice. That’s why research articles formally contain a section dedicated to a literature overview. Meta-research articles follow the same path, where the entire article is devoted to existing research in order to make a sum-of-the-evidence judgment.

The concept of neutral reproduction of sound has been with us for ages. People have searched for colour-free sound for over 100 years, and Harman have shown that the vox populi on average prefers neutral to coloured. However, to reduce factors (1) and (2) above to “frequency response, frequency response and frequency response” is incomplete. It may work as an enticing marketing story if your goal is to push boxes at the highest rate possible, but from a scientific point of view there is more to factors (1) and (2) above than frequency response.

On the matter of directivity, taken from the Evans et al. (2009) article that @oivavoi quoted previously, the authors concluded as follows:

“An extensive review of studies and opinions regarding the listener response to loudspeaker directivity has been presented, and it is evident that no detailed conclusions with regard to this matter have been established. Whilst some of the literature indicates small preferential trends, most provides little insight and this can be attributed to the limited nature of the tests carried out. The authors propose to conduct a range of more specific tests which will consider the key research questions that have resulted from other studies, namely the isolation of the directivity feature alone, its influence on different auditory attributes and the nature of listening tests conducted”.

Please note that one of the authors of that article, Søren Bech, was one of the co-authors of the 2017 JAES article I brought up earlier, where the authors wrote:

“Loudspeaker specifications have traditionally described the physical properties and characteristics of loudspeakers: frequency response, dimensions and volume of the cabinet, diameter of drivers, impedance, total harmonic distortion, sensitivity, etc. Few of these directly describe the sound reproduction and none directly describe perception of the reproduction, i.e., takes into account that the human auditory system is highly non-linear in terms of spectral-, temporal-, and sound level processing (see, e.g., [3]). This disconnect between specifications and perception have made it challenging for acousticians and engineers (and consumers) to predict how a loudspeaker will sound on the basis of these specifications”.
Source: http://www.aes.org/e-lib/browse.cfm?elib=18729

Bech is professor of audio perception at Aalborg University, director of research for Bang & Olufsen and a member of government expert groups (https://www.linkedin.com/in/s%F8ren-bech-8882aa4/). So Bech is hardly a person who would fall victim to audiophoolery, which I have been accused of when I quoted from his research.

Science is more complex than a binary process of mathematical properties, a solution waiting for a philosophical answer. “Frequency response, frequency response and frequency response” is an enticing solution to the sound reproduction problem; as enticing as it is incomplete.

This explains, too, why I have chosen flat and smooth speakers with excellent vertical and horizontal directivity properties, but still have questions that go beyond the incomplete frequency response answer.

You may have noticed the con artist who posts for the gullible on YouTube - Ethan Winer called him out once, and there are several threads on this forum about those videos, with claims and elaborations that are wrong or simply fraudulent. When you say "there must be more to it..." without any further specification of what that more is, people are confused and start to put you in the same company as that guy.

And there is more - a lot more. On this forum, the Harman direction gets much support, and some may read a bit too much into what those experiments actually show. And the idea that this was only a starting point does not get a lot of support.

To design and execute experiments like the Harman listening tests is not easy and may require substantial resources. A good experiment is one that manages to isolate as few parameters as possible, to give the best and most reliable answer to one specific question. Since experiencing sound is composed of many different properties, it is then required to do many experiments to get a complete answer.

One of the problems is that a test requires a lot of people in order to get a statistically significant answer. This can be costly, takes a lot of time, and everything needs to be well planned. Some sorts of experiments are much easier - all electronics can be tested by making sound sample files available, so it is quite simple to achieve a large sample size. For loudspeakers and acoustics, most of what we want to study cannot be solved by distributing sound samples; the test subjects need to be on location and listen there.

To find what specifications are necessary to fully describe the sound of a loudspeaker is a nice goal. If we reduce the requirement down to "most important for sound quality", it gets a lot more manageable. And experiments have been designed and executed that show a correlation between frequency response and perceived sound quality. But this is not where sound ends; this was the beginning.
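As a rough back-of-the-envelope sketch of why "a lot of people" is needed (a textbook normal-approximation sample-size formula, not anything taken from the Harman papers): detecting a mild preference against a 50/50 null takes panels in the hundreds, while only strong preferences show up with a few dozen listeners.

```python
from math import sqrt

def listeners_needed(p: float, alpha_z: float = 1.96, power_z: float = 0.8416) -> int:
    """Approximate panel size needed to detect a true preference rate p
    (vs a 50/50 null) at ~5% two-sided significance with ~80% power."""
    numerator = alpha_z * 0.5 + power_z * sqrt(p * (1 - p))
    return int((numerator / (p - 0.5)) ** 2) + 1

print(listeners_needed(0.60))  # a mild 60/40 preference: roughly 200 listeners
print(listeners_needed(0.75))  # a strong 75/25 preference: around 30 listeners
```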
 