
NORMS AND STANDARDS FOR DISCOURSE ON ASR

svart-hvitt

Major Contributor
- - - warning ! ! ! - - - very long post, please take your time :) - - -

Have the standards for discourse on ASR led us to a place where irrelevance is celebrated, while difficult questions are not pursued? Do the norms and standards on ASR lead to sins of omission?

I will present a model of sins of omission, in the spirit of Ellison (2002) and taken from Akerlof (2019). As you will see, the Akerlof model makes immediate sense for the person interested in audio science too. From Akerlof (2019):

“An academic researcher selects from a set of possible research topics. These topics can be characterized along two dimensions: (1) Hardness (i.e., the ease or difficulty of producing precise work on the topic) and (2) Importance.

The researcher values both Hardness and Importance; but the weight he places on Hardness leads him to trade off Hardness and Importance in a non-optimal way. In this sense, he is biased.

Figure 1 depicts the solution to the researcher’s problem. While the researcher chooses a topic lying along the “frontier,” the frontier topic he chooses differs from the social optimum. His chosen topic (Topic A) is both Harder and less Important than the social optimum (Topic B).
If we aggregate across all researchers, we obtain a prediction about the “cloud” of topics the profession will address. Observe that there will be a set of Important but Soft topics which will not be pursued; in this sense, bias towards the Hard in the profession generates “sins of omission.””
[Figure 1 from Akerlof (2019): topics on the Hardness-Importance frontier, showing the researcher's biased choice (Topic A) versus the social optimum (Topic B).]
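
To make the trade-off concrete, here is a toy numerical sketch of the model. It is my own illustration, not Akerlof's formalism: I assume topics lie on a linear frontier (Importance = 1 - Hardness), and that both the biased researcher and the social planner maximize a simple Cobb-Douglas utility, differing only in the weight placed on Hardness (the weights 0.7 and 0.4 are invented):

```python
# Toy sketch of the Hardness/Importance trade-off (my illustration, not
# Akerlof's formalism). Topics lie on a linear frontier: Importance = 1 - Hardness.
# Each agent picks the topic maximizing H**w * I**(1 - w); only the weight w differs.

def chosen_topic(w, n_topics=101):
    """Return the (hardness, importance) pair that maximizes H**w * I**(1 - w)."""
    frontier = [(i / (n_topics - 1), 1 - i / (n_topics - 1)) for i in range(n_topics)]
    return max(frontier, key=lambda t: t[0] ** w * t[1] ** (1 - w))

# Invented weights: the biased researcher overweights Hardness relative to society.
print("Topic A (biased researcher, w=0.7):", chosen_topic(0.7))  # harder, less important
print("Topic B (social optimum,    w=0.4):", chosen_topic(0.4))
```

Aggregate across many researchers with such a bias and the Important-but-Soft corner of the frontier is never chosen; that is the "sin of omission" in the quote above.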

Ellison (2002), which inspired Akerlof’s model, documented an important shift in academic research: articles have become longer and take longer to publish than in previous decades. This may not be clear to younger researchers, but it is obvious to older professors and to anyone who looks at the form of older journal articles. The table below, from Ellison, gives an overview.

[Table from Ellison (2002): trends in article length and in the duration of the publishing process.]

Both Ellison and Akerlof are economists. However, Ellison noted:

“The phenomenon I am describing is not unique to economics. Similar trends can be seen in many other academic disciplines”.

Both Ellison and Akerlof argue that the evolving standards reflect gradually changing social norms.

“The slowdown of the process and the increased length of papers may reflect an increase in [Hardness]”, Ellison wrote (I have consistently used Akerlof’s concept of “Hardness” instead of Ellison’s more general “r-quality”).

“Furthermore, the emphasis on Hardness is likely at the expense of Importance”, Akerlof said.

Akerlof, recipient of the “Nobel prize” in economics in 2001, listed several consequences of the Hardness bias:

Consequence 1. Bias against New Ideas. “So far, we have classified topics according to their “Importance” and their “Hardness.” Another relevant dimension is whether topics are New or Old—or, in Kuhn’s (2012) terminology, whether they entail “normal” or “revolutionary” science. Not all New topics are Important; but, clearly, the most Important topics are New. Hardness bias inhibits acceptance of New topics in at least two different ways.
First, Old topics/paradigms have a variety of tools that aid precision: such as established terminologies, conceptual frameworks, and empirical methodologies. With bias toward the Hard, academics working within such accepted paradigms have an advantage, since they can borrow at will from such toolkits to state their ideas precisely. In contrast, those who are presenting a New idea are disadvantaged, since they must develop their own tools. As expressed by Frey (2003, p. 212): "a new idea is less well-formulated than ... well-established ideas and therefore rejected for lack of rigor." In this way, demand for precision (for Hardness) impedes the introduction of New ideas.
Second, Hardness bias reduces the ability to challenge existing paradigms. According to usual procedure in economics, as in science more generally, Old ideas are only rejected when they are shown to be inferior in tests against New ideas”.

Consequence 2. Over-specialization. “Bias towards the Hard also encourages over-specialization. Generalists need to meet the standards of precision for multiple fields, while specialists need only meet the standards of one. Hence, it is easier to be Hard as a specialist than as a generalist”.

Consequence 3. Evaluations based on publications in "Top Five". “As we have seen, Hardness bias results in specialization. That, in turn, results in increased use of journal metrics for evaluations. In economics, this has especially taken the form of evaluations based on number of "Top Five" publications”.

So what does this mean for ASR?

I believe many on ASR suffer from “science envy” in the sense that they pick up bad habits (as well as some good habits) from their idols in science. In other words, I think the problems in the scientific community that Ellison (2002) and Akerlof (2019) discuss are reflected among ASR members too, possibly to a greater extent than in “real” science communities. One observation: @oivavoi is one of the most highly trained professionals on ASR in scientific discourse. Still, I get the impression that his writing sometimes creates friction where his intention was simply to discuss an idea.

I have previously likened the focus on ASR to the so-called streetlight effect (https://en.m.wikipedia.org/wiki/Streetlight_effect):

“A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, "this is where the light is".”

My point was that DACs are so transparent that one cannot argue this part of the audio chain scores high on Importance. Amplifiers, too, have become so transparent that it should be obvious where the focus should lie: on speakers.

DACs especially, but modern amplifiers too, fit nicely into the Ellison-Akerlof models: something unimportant can be dealt with using the Hard tools. As we move towards something of real importance, however, like speakers, it is questionable whether we have a readily available toolkit to separate good speakers from better ones.

Not long ago, I quoted a JAES article from 2017:

“Loudspeaker specifications have traditionally described the physical properties and characteristics of loudspeakers: frequency response, dimensions and volume of the cabinet, diameter of drivers, impedance, total harmonic distortion, sensitivity, etc. Few of these directly describe the sound reproduction and none directly describe perception of the reproduction, i.e., takes into account that the human auditory system is highly non-linear in terms of spectral-, temporal-, and sound level processing (see, e.g., [3]). This disconnect between specifications and perception have made it challenging for acousticians and engineers (and consumers) to predict how a loudspeaker will sound on the basis of these specifications”.
Source: http://www.aes.org/tmpFiles/elib/20190728/18729.pdf

In an earlier article, from 2016, one of the authors, Pedersen, wrote something that seems to have inspired the later JAES article:

“ In other words, if you want to know how a loudspeaker sounds, it is more sensible to use a perceptual assessment of a loudspeaker’s sound based on a listening test rather than taking outset in the technical data”.
Source: https://assets.madebydelta.com/assets/docs/senselab/publications/TEKnotat_TN11_UK_v5.pdf

One of the same authors, Professor Bech, made an interesting observation almost a decade earlier, in 2009:

“It is evident that the dependence of listeners’ fidelity ratings on position (and room) is also important. Whilst the dipole is rated as worst in Position 2 (less than 1m from the back wall, central), it is rated as best when moved to Position 1 (over 1m from back and side wall). This suggests that the perceived influence of directivity is dependent on both position and room type”.
Source: https://www.researchgate.net/public..._sound_quality_-_a_review_of_existing_studies

Note that the often quoted Harman research found dipoles to be no good, which raises the question: how robust is Hard?

Why are the above quotes important? Because the Danish researchers point to a gap: we have many measurements of speakers, yet these Hard data do not help us predict how the loudspeakers will sound. I illustrated this point by the case method (see Flyvbjerg for a discussion of the case method: https://arxiv.org/pdf/1304.1186.pdf), where I asked ASR members to try to figure out which is the better speaker, the Revel Salon or the JBL M2. Both speakers are made by Harman, which is regarded by ASR members as best in class when it comes to audio science, measurements and specifications. Besides, @Floyd Toole has written a 568-page book on the reproduction of audio during his time at Harman. Despite all the measurements, and with Toole’s book in hand, nobody wanted to apply this science to predict how the M2 would sound compared to the Salon, or to say which speaker is the better one based on Hard data. How many more pages of Toole’s research do people need before they can predict the sound of two good speakers for which we have lots of Hard data?

Let me quote Ellison (2002) again to make my point clearer:

“The other basic observation I make about the static model is that a continuum of social norms are possible. If the community agrees that quality is very important, then authors will spend most of their time developing main ideas. If [Hardness] is very important, then authors spend very little time on ideas and focus on revisions. Nothing in the model prevents either extreme or something in the middle from being part of an equilibrium. Differences in social norms provide another potential explanation for differences across fields or over time”.

The quotes from the Danish articles illustrate my point: many on ASR prefer to continue using Hard methods that cannot predict the sound and quality of speakers, and so cannot separate the good speakers from the best, while new ideas for evaluating sound reproduction are met with hostility due to lack of Hardness. So Hardness wins over ideas. The same hostility to new ideas was on display when @oivavoi brought up the concept of “slow listening” from an AES paper (https://www.audiosciencereview.com/...f-benchmark-ahb2-amp.7628/page-45#post-190550). The article was waved off by ASR members due to lack of Hardness.

@Kvalsvoll criticised me for using the concept of “middle way” in a setting where science is celebrated. But isn’t the Ellison quote just above a reminder that there is a trade-off between Importance and Hardness, that Ideas and Data are two factors that need balancing, i.e. finding a middle way?

Audio is a science which combines multiple research fields: physics, psychoacoustics, psychology, neurology and more. Isn’t such a multidisciplinary field a place where ideas are even more important than in a narrower field, like, say, mathematics or “pure” physics? Do people on ASR welcome input from other fields, or is such input better omitted?

Toole’s 568-page book is also a reminder, isn’t it, that legacy research brings you only so far, but no further, once speaker designs converge on the “old” consensus factors like “flat” and “smooth”? Are we in need of new ideas to design, measure and describe speakers that go beyond “good”? Will we ever be able to describe, say, the Salon and the M2 in ways that are meaningful to people and let us decide which speaker is the best for a majority of users and use cases?

And one more thing, to disturb a little more. The celebrated research on ASR leans heavily towards vox populi, i.e. polls to find correlations between preferences and speaker attributes. What I find as fascinating as it is lacking in logic is that people often celebrate certain vox populi processes (say, market prices or speaker preference polls) while attacking the outcome of other vox populi processes (say, democratic elections). Personally, I like the idea of vox populi as much as I am aware of its shortcomings.
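
For readers who have not seen how such correlational work proceeds mechanically, here is a hypothetical sketch. Every number, speaker and attribute below is invented; this is neither Harman's data nor their actual model, only the shape of the exercise: collect blind preference ratings per speaker, then correlate them with a measured attribute.

```python
# Hypothetical sketch of the "vox populi" approach: correlate mean blind-test
# preference ratings with one measured attribute. All numbers are invented;
# this is neither Harman's data nor their actual model.
from statistics import mean

ratings = {"A": 7.8, "B": 6.1, "C": 4.9, "D": 8.3}       # mean panel preference, 0-10
deviation_db = {"A": 1.2, "B": 2.4, "C": 3.8, "D": 0.9}  # on-axis response deviation, dB

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

speakers = sorted(ratings)
r = pearson([deviation_db[s] for s in speakers], [ratings[s] for s in speakers])
print(f"r = {r:.2f}")  # strongly negative here: flatter response, higher rating
```

The strength of the approach is also its weakness: the correlation can only be as good as the attributes someone thought to measure in the first place.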

Lastly, a point on behaviour in science, made by both Ellison (2002) and Akerlof (2019): what decides which topics are important and which are not in research and science? Akerlof wrote:

“This tendency for disagreement on Importance is exacerbated by tendencies to inflate the Importance of one’s own work and deflate the Importance of others’”.

A similar observation by Ellison (2002):

“Section VII adds the assumption that academics are biased and think that their work is slightly better than it really is”.

One final note. Why is this post so long? By now you have learned that academic articles have become longer, but this post is not long because of my “science envy”. The post is long because it takes more space and effort to make a divergent point, to present ideas that collide with consensus and status quo in a social setting.

- - - - - - -
REFERENCES:
Ellison (2002): https://pdfs.semanticscholar.org/8429/7d83186f86c963c61556e1e2d954b8fbed37.pdf
Akerlof (2019): https://assets.aeaweb.org/asset-server/files/9185.pdf
 
svart-hvitt (OP)
Major Contributor
Truth in advertising. You could get more juice out of this fruit if it was boiled down by 90%. There are some nuggets. I used to survive by being concise.

Your remark on form, not content, is unsurprising. I wrote why the post is long:

“One final note. Why is this post so long? By now you have learned that academic articles have become longer, but this post is not long because of my “science envy”. The post is long because it takes more space and effort to make a divergent point, to present ideas that collide with consensus and status quo in a social setting”.
 

Ron Texas

Master Contributor
Forum Donor
(quoting svart-hvitt’s reply above)

I read that part twice and happen to disagree. “If I had more time, I would have written a shorter letter.” Probably attributable to Blaise Pascal.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
I believe many on ASR suffer from “science envy” in the sense that they pick up bad habits (as well as some good habits) from their idols in science.
Bad habits from our idols? What are those bad habits in our idols?

Toole’s 568-page book is also a reminder, isn’t it, that legacy research brings you only so far, but no further, once speaker designs converge on the “old” consensus factors like “flat” and “smooth”?
Legacy research? Talk about a put down debating tactic.

No, we don't throw out established science because someone writes a new paper. For something to become valid, it would need to have the comprehensive work, analysis and backing across 270 references to be of similar value, let alone better.

The only envy we are dealing with here is that your idols are not worshipped. That emotional need continues to increase the noise in these discussions. Present your papers, but don't position them as this and that to hide your emotions. We see through that, and it takes away any value you want to purport on their behalf.
 

Cosmik

Major Contributor
So Hardness wins over ideas.
@svart-hvitt, what do you think of Eleanor Roosevelt's observation?
Great Minds Discuss Ideas; Average Minds Discuss Events; Small Minds Discuss People
Ignoring her ranking of the three types, I think she is right about the different types of minds, and although the three types seem to be speaking the same language, they don't engage at all. I would count scientific experiments as 'events', only indirectly connected to ideas. The hypothesis and the interpretation of the results are supposed to be the bits that link experiments with ideas, but clearly they can be barely present, or nonsensical, and yet the experiment is still considered to be 100% science because of the 'event' part.

People will discuss scientific events until the cows come home - without ever producing any ideas.
 

gene_stl

Addicted to Fun and Learning
Forum Donor
Your post is certainly thought provoking. I am not certain that I get your point though.

As it happens, I just watched an hour-and-a-half YouTube lecture by Floyd Toole, which made me want to at least check a copy of his book out of a library. He evidences sufficient logos and ethos that I suppose he qualifies as an "idol", though since I have been out of the hobby for a long while, I only discovered him this year.

As he pointed out, establishing correlation between measurements and preferences is very difficult, time-consuming and expensive. I wonder if it's "Important" in the audio context.

I notice that many non-scientifically oriented audiophiles will fixate on a particular idea or principle which then influences their whole experience in the hobby. Such as "tubes sound better" or "I hate crossovers, I want a full range driver" or "I can hear the difference between class A and class D" (not to mention cables, etc.).

I also notice that people who are trying to sell things frequently co-opt scientific authority by making statements about the audibility of certain things. For example, I was watching a lecture that John Curl was giving (to audiophiles, not the AES) wherein he was talking about the evils of ninth-harmonic distortion and how it contributes to listening fatigue: you don't really notice it, but it just makes you turn your system off. Of course, the lecture was sponsored by Parasound, and they were promoting their amplifiers. The designer of Chord's D/A converters has given similar "master classes" wherein he talked about distortions that were very far down but yet somehow audible. In both cases it is very difficult for the average scientifically literate audio nut to challenge or accept such statements. These statements sound very similar to statements that ARE generally accepted, such as "the first sound arrival at your ears is the most important/influential".

One thing that John Curl said which I liked: "You have to put in your 10,000 hours."

Interestingly, Floyd Toole showed a chart indicating that audio salesmen were second only to explicitly trained listeners in their ability to detect audio problems.
 
svart-hvitt (OP)
Major Contributor
(quoting amirm’s reply above)

By bad habits I mean sins of omission.

By legacy research I mean «Old», as per Akerlof (2019): «Another relevant dimension is whether topics are New or Old - or, in Kuhn’s (2012) terminology, whether they entail ‘normal’ or ‘revolutionary’ science».

Toole’s and Olive’s research often dates back to their NRC days, decades ago.

You wrote: «For something to become valid, it would need to have the comprehensive work, analysis and backing across 270 references to be of similar value, let alone better». No wonder academic articles get longer and less important, right? You seem to have missed the main point of Ellison (2002) and Akerlof (2019).

You wrote: «Present your papers». I have examples in my opening post.

You accuse me of emotions, yet I have been labelled a Finnish nationalist by moderator @Thomas savage (https://www.audiosciencereview.com/...ou-consistant-in-your-views.5986/#post-134392) even though I am not Finnish; you have called me a racist (https://www.audiosciencereview.com/...e-improbability-of-2l.6667/page-3#post-150214) and a flat-earther (https://www.audiosciencereview.com/...-and-smooth-off-axis.8090/page-16#post-201076).

These are quite emotional master suppression techniques (https://en.m.wikipedia.org/wiki/Master_suppression_techniques).
 
svart-hvitt (OP)
Major Contributor
(quoting gene_stl’s post above in full)

Let me add that my post is not a critique of @Floyd Toole. He has helped establish what is a sound consensus in certain areas today. Yet, is audio science at a dead end with the Harman research? Has everything been discovered, or are sins of omission present in audio science too?

But I also noted that people with Toole’s 568-page book on audio reproduction in hand were not willing to apply this research to discuss the pros and cons of two good speakers for which we have lots of measurements (Salon and M2). If 568 pages are not enough, when will we be able to enter into a practical debate on speakers where we have data? Has Importance suffered somewhat in the writing of 568 pages?
 

SIY

Grand Contributor
Technical Expert
@gene_stl I would strongly recommend against taking much that Curl says these days seriously. The world left him behind decades ago and now he's just a brand and promotion. A very bitter brand and promotion at that, and more focused on being right than knowing what's right.
 
svart-hvitt (OP)
Major Contributor
(quoting Cosmik’s post above)

It’s a funny quote and food for thought. Without a kernel of truth, I believe the quote would have died out.

Show me the money. Show me the evidence.

Strong statements that often make sense. I will now quote Freeman Dyson:

«• It’s a very dangerous game. If you work with a computer model for years and years and years, always improving the model, in the end you end up believing it. So it’s difficult to remain objective.
• Observations always should tell you in the end who is right and who is wrong. Those people don’t look at the observations. They are in a world of their own.
• Generally speaking there are two kinds of information: Observations and there are theories. So generally speaking you can believe the observations and you don’t need to believe the theories».

I will start from the bottom.

Theories can be dangerous, like communism in the Soviet Union. The communist idea, to contribute according to your ability and receive according to your needs, is what most (?) families practise. But the idea breaks down as you increase the social circle. So ideas and theories are nice, but faced with conflicting evidence one needs to stop and evaluate.

Observations count, but some people are rich by luck (randomness) and some are poor due to misfortune.

If I were to launch a main critique against Toole & Co, it would be this: Harman have become so used to their vox populi approach that a kind of blindness may occur. Vox populi can be criticized for being measurement without theory. Heavy math is absent in most (?) Harman research, and I don’t know whether they collaborate with neurologists and other disciplines. A somewhat one-sided bet on vox populi is a risky one. Having said that, I believe vox populi is great in complex matters like audio preferences, but there are inherent risks too.

Can we say that the overriding idea of «Harman research» is vox populi? In that sense, Harman is idea-based too, and the idea is vox populi.
 

gene_stl

Addicted to Fun and Learning
Forum Donor
I think an analogy can be made between Harman research and medical research, a massive amount of which is paid for by drug and device companies. There is a great deal of bad medical research, published by "hired guns" who sometimes have great reputations. But there is also lots of good work done by people of integrity, even if they work for a company that may want to sell something.

I did not think you were criticizing Dr. Toole.

An interesting side note is that your comment made me look up what a Revel Salon is, because I didn't know. It is interesting for me to compare it to my own speakers, which are also four-way and have multiple drivers. I have never heard Salons, though I would expect them to sound pretty good.
I have also not heard M2s, but I have heard the similar 4367. Although both are "two-way", both drivers have multiple voice coils which seem to cover different bands, so they are almost four-way in some sense.

I don't think I like the term vox populi, especially in the current political environment. It makes it sound like mob rule, when that is exactly the opposite of what Dr. Toole was trying to accomplish, and seemingly did accomplish. People's preferences will follow the bell-shaped curve, but there are also lots of outliers.
 
pkane

Master Contributor
Forum Donor
(quoting the opening post in full)

I would welcome a much shorter summary of your position. Is there a specific thing the discourse on ASR is guilty of omitting? What research, study or topic areas are being ignored? What would you recommend should change?
 
svart-hvitt (OP)
Major Contributor
(quoting gene_stl’s post above in full)

I have heard M2, which is great! But you seem to know more about its inner workings than I do.

@Floyd Toole, whom I have never met, has a golden reputation and is probably a very nice man of high integrity. Still, my own work is sometimes influenced by «forces» that are not always apparent or easy to spot, so I guess everybody is also a product of their social setting.

The urge to conform is a strong one in man, and that probably explains why it felt like a huge effort to write the opening post.
 

Floyd Toole

Senior Member
Audio Luminary
Technical Expert
Industry Insider
Forum Donor
- - - warning ! ! ! - - - very long post, please take your time :) - - -

Have the standards for discourse on ASR led us to a place where irrelevance is celebrated, while difficult questions are not pursued? Do the norms and standards on ASR lead to sins of emission?

I will present a model of sins of emission in spirit of Ellison (2002) and named after Akerlof (2019). As you will see, the Akerlof model makes immediate sense for the audio science interested person too. From Akerlof (2019):

“An academic researcher selects from a set of possible research topics. These topics can be characterized along two dimensions: (1) Hardness (i.e., the ease or difficulty of producing precise work on the topic) and (2) Importance.

The researcher values both Hardness and Importance; but the weight he places on Hardness leads him to trade off Hardness and Importance in a non-optimal way. In this sense, he is biased.

Figure 1 depicts the solution to the researcher’s problem. While the researcher chooses a topic lying along the “frontier,” the frontier topic he chooses differs from the social optimum. His chosen topic (Topic A) is both Harder and less Important than the social optimum (Topic B).
If we aggregate across all researchers, we obtain a prediction about the “cloud” of topics the profession will address. Observe that there will be a set of Important but Soft topics which will not be pursued; in this sense, bias towards the Hard in the profession generates “sins of omission.””
View attachment 30226
Ellison (2002), which inspired Akerlof’s model, documented an important shift in academic research. Academic articles are longer and take longer to publish than in previous decades. This may not be clear to younger researchers, but is obvious to older professors and if one looks at the form of older journal articles. The table below, from Ellison, gives an overview.

View attachment 30225
Both Ellison and Akerlof are economists. However, Ellison noted:

“The phenomenon I am describing is not unique to economics. Similar trends can be seen in many other academic disciplines”.

Both Ellison and Akerlof argue that the evolving standards gradually reflect gradually changing social norms.

“The slowdown of the process and the increased length of papers may reflect an increase in [Hardness]”, Ellison wrote (I have consistently used Akerlof’s concept of “Hardness” instead of Ellison’s more general “r-quality”).

“Furthermore, the emphasis on Hardness is likely at the expense of Importance”, Akerlof said.

Akerlof, recipient of the “Nobel prize” in economics in 2001, listed several consequences of the Hardness bias:

Consequence 1. Bias against New Ideas. So far, we have classified topics according to their “Importance” and their “Hardness.” Another relevant dimension is whether topics are New or Old—or, in Kuhn’s (2012) terminology, whether they entail “normal" or “revolutionary” science. Not all New topics are Important; but, clearly, the most Important topics are New. Hardness bias inhibits acceptance of New topics in at least two different ways.
First, Old topics/paradigms have a variety of tools that aid precision: such as established terminologies, conceptual frameworks, and empirical methodologies. With bias toward the Hard, academics working within such accepted paradigms have an advantage, since they can borrow at will from such toolkits to state their ideas precisely. In contrast, those who are presenting a New idea, are disadvantaged, since they must develop their own tools. As expressed by Frey (2003, p. 212)): "a new idea is less well-formulated than ... well-established ideas and therefore rejected for lack of rigor." In this way, demand for precision (for Hardness) impedes the introduction of New ideas.
Second, Hardness bias reduces the ability to challenge existing paradigms. According to usual procedure in economics, as in science more generally, Old ideas are only rejected when they are shown to be inferior in tests against New ideas”.

Consequence 2. Over-specialization. Bias towards the Hard also encourages over-specialization. Generalists need to meet the standards of precision for multiple fields, while specialists need only meet the standards of one. Hence, it is easier to be Hard as a specialist than as a generalist”.

Consequence 3. Evaluations based on publications in "Top Five". As we have seen, Hardness bias results in specialization. That, in turn, results in increased use of journal metrics for evaluations. In economics, this has especially taken the form of evaluations based on number of "Top Five" publications”.

So what does this mean for ASR?

I believe many on ASR suffer from “science envy” in the sense that they pick up bad habits (as well as some good habits) from their idols in science. In other words, I think the problems in the scientific community that Ellison (2002) and Akerlof (2019) discuss are reflected by ASR members too, possibly to a greater extent than in “real” science communities. One observation: @oivavoi is one of the most highly trained professionals on ASR in scientific discourse. Still, I get the impression that his writing sometimes makes friction where his intention was just to discuss an idea.

I have previously likened the focus on ASR with the so called streetlight effect (https://en.m.wikipedia.org/wiki/Streetlight_effect):

“A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, "this is where the light is".”

My point was the fact that DACs are so transparent, one cannot argue that this part of the audio chain scores high on Importance. Amplifiers, too, have become so transparent it should be obvious where focus should be, i.e. on speakers.

Especially DACs, but modern amplifiers too, fit nicely into the Ellison-Akerlof models, where we observe that something unimportant can be dealt with using the Hard tools. As we move towards something of real importance, however, like speakers, it’s questionable if we have a readily available toolkit to separate good speakers from better speakers.

Not long ago, I quoted a JAES article from 2017:

“Loudspeaker specifications have traditionally described the physical properties and characteristics of loudspeakers: frequency response, dimensions and volume of the cabinet, diameter of drivers, impedance, total harmonic distortion, sensitivity, etc. Few of these directly describe the sound reproduction and none directly describe perception of the reproduction, i.e., takes into account that the human auditory system is highly non-linear in terms of spectral-, temporal-, and sound level processing (see, e.g., [3]). This disconnect between specifications and perception have made it challenging for acousticians and engineers (and consumers) to predict how a loudspeaker will sound on the basis of these specifications”.
Source: http://www.aes.org/tmpFiles/elib/20190728/18729.pdf

In a previous article from 2016, one of the authors, Pedersen, wrote something which seems to have inspired the later JAES article:

“ In other words, if you want to know how a loudspeaker sounds, it is more sensible to use a perceptual assessment of a loudspeaker’s sound based on a listening test rather than taking outset in the technical data”.
Source: https://assets.madebydelta.com/assets/docs/senselab/publications/TEKnotat_TN11_UK_v5.pdf

One of the same authours, professor Bech, made an interesting observation almost a decade (2009) earlier:

“It is evident that the dependence of listeners’ fidelity ratings on position (and room) is also important. Whilst the dipole is rated as worst in Position 2 (less than 1m from the back wall, central), it is rated as best when moved to Position 1 (over 1m from back and side wall). This suggests that the perceived influence of directivity is dependent on both position and room type”.
Source: https://www.researchgate.net/public..._sound_quality_-_a_review_of_existing_studies

Note that the often quoted Harman research found dipole to be no good. Which begs the question, how robust is Hard?

Why are the above quotes important? The above quotes from the Danish researchers are important because we have many measurements of speakers that don’t help us to predict how these loudspeakers will sound based on these Hard data. I illustrated this point by the case method (see Flyvbjerg for a discussion of the case method: https://arxiv.org/pdf/1304.1186.pdf), where I asked ASR members to try and figure out which is the best speaker, the Revel Salon or the JBL M2. Both speakers are made by Harman, which are regarded by ASR members as best in class when it comes to audio science, measurements and specifications. Besides, @Floyd Toole has written a book of 568 pages on the reproduction of audio during his time at Harman. Despite all the measurements and with Toole’s book in hand, nobody wanted to apply this science to predict how the M2 would sound compared to the Salon and which speaker is the best one based on Hard data. How many more pages of Toole research do people need to predict the sound of two good speakers for which we have lots of Hard data?

Let me quote Ellison (2002) again to make my point clearer:

“The other basic observation I make about the static model is that a continuum of social norms are possible. If the community agrees that quality is very important, then authors will spend most of their time developing main ideas. If [Hardness] is very important, then authors spend very little time on ideas and focus on revisions. Nothing in the model prevents either extreme or something in the middle from being part of an equilibrium. Differences in social norms provide another potential explanation for differences across fields or over time”.

The quotes from the Danish articles serve to illustrate my point that many on ASR prefer to continue using Hard methods that cannot predict the sound and quality of speakers to separate the good from the best speakers, while new ideas for evaluation of sound reproduction are met with hostility due to lack of Hardness. So Hardness wins over ideas. The same hostility to new ideas was registered when @oivavoi picked up the concept of “slow listening” in an AES paper (https://www.audiosciencereview.com/...f-benchmark-ahb2-amp.7628/page-45#post-190550). The article was waved off by ASR members due to lack of Hardness.

@Kvalsvoll criticised me for using the concept of “middle way” in a setting where science is celebrated. But isn’t the Ellison quote just above a reminder that there is a trade-off between Importance and Hardness, that Ideas and Data are two factors that need balancing, i.e. finding a middle way?

Audio is a science which combines multiple research fields; physics, psychoacoustics, psychology, neurology and more. Isn’t such a multi-disciplinar field a place where ideas are even more important than in a narrower field, like say mathematics or “pure” physics? Do people on ASR welcome input from other fields, or is such input better omitted?

Toole’s 568 pages long book is also a reminder, isn’t it, that legacy research brings you only so far, but not further as speakers start to reach a consensus on the “old” consensus factors like “flat” and “smooth”? Are we in need of new ideas to design, measure and describe speakers that go beyond “good”? Will we ever be able to describe say Salon and M2 in ways that make meaning to people and let us decide which speaker is the best for a majority of users and use cases?

And one more thing to disturb a little more. The celebrated research on ASR is heavily leaning towards vox populi, i.e. polls to find correlation between preferences and speaker attributes. What I have found as fascinating as lacking in logic, is the fact that people often celebrate certain vox populi processes (say market prices or speaker quality) while they at the same time attack the outcome of other vox populi processes (say democratic elections). Personally, I like the idea of vox populi as much as I am aware of its shortcomings.

Lastly, a point on behaviour in science, made by both Ellison (2002) and Akerlof (2019): who decides what is important in research, and what is not? Akerlof wrote:

“This tendency for disagreement on Importance is exacerbated by tendencies to inflate the Importance of one’s own work and deflate the Importance of others’”.

A similar observation by Ellison (2002):

“Section VII adds the assumption that academics are biased and think that their work is slightly better than it really is”.

One final note. Why is this post so long? By now you have learned that academic articles have become longer, but this post is not long because of my “science envy”. It is long because it takes more space and effort to make a divergent point, to present ideas that collide with the consensus and the status quo in a social setting.

- - - - - - -
REFERENCES:
Ellison (2002): https://pdfs.semanticscholar.org/8429/7d83186f86c963c61556e1e2d954b8fbed37.pdf
Akerlof (2019): https://assets.aeaweb.org/asset-server/files/9185.pdf

Floyd Toole

OOPS, your bias is showing again. You said: "Note that the often quoted Harman research found dipole to be no good. Which begs the question, how robust is Hard?"

You need to read more carefully. There are two dipole speakers recorded in my book as having both subjective and objective evaluations. One failed because it was a dense collection of resonances, easily observable (in the measurements) and audible (in the listening tests). It failed not because it was a dipole, but because it was a poor loudspeaker. A second dipole was criticized because of its extreme directivity at high frequencies (a 3-inch tweeter can do that), but in the total context of loudspeakers it was, overall, a respectable performer. Neither deficiency had anything to do with the basic dipole radiation pattern; in fact, the second unit, a Quad, is not a pure dipole but, by their own description, a modified one, with felt pads damping the rear radiation.

What you said is simply not true.

" . . . how robust is Hard?" Very robust. But "Hard" is hard, or difficult, because most interested parties lack the technical facilities and/or knowledge to collect anechoic data and process it for presentation to two eyes and a brain. Unless the subjective input is double-blind, preferably involving positional substitution, and is balanced by competent and comprehensive "hard" data, rational conclusions are not possible. This has been the normal case, in the past, and even now.
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,449
Likes
4,817
Are we supposed to reflect deeply on the epistemology of hypothetical, but possible, sins of omission alleged to have occurred here?
It's only about music reproduction and the enjoyment thereof, ffs...
 

gene_stl

Addicted to Fun and Learning
Forum Donor
Joined
Jun 14, 2019
Messages
867
Likes
1,200
Location
St. Louis, Missouri, U.S.A.
And continuing the analogy to medical research: I remember being lectured by an oncologist at Washington University in the 1980s about the difference between an anecdotal study and a randomized study, because the company I worked for had no such study to support their "data", which had been bought and paid for.

Dr. Toole points out the difference between single blind, double blind and randomized. This is the only way this kind of work can be done. Medical researchers have the advantage that the eventual success or failure of a technique shows up in survival studies, for which we have no correlate in audio.
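For what it is worth, the randomization part of such a protocol is simple to express in code; here is a minimal sketch (speaker labels and trial count are invented). It handles only the presentation order; blinding itself is physical setup, not code:

```python
# Minimal sketch of a randomized presentation schedule for a blind test.
# "Double blind" additionally requires that the operator cannot tell which
# speaker is playing -- that part is down to the physical setup.
import random

speakers = ["A", "B", "C"]
repeats = 4  # each speaker is presented the same number of times

schedule = speakers * repeats
random.shuffle(schedule)  # balanced but unpredictable order

for trial, s in enumerate(schedule, start=1):
    print(f"trial {trial:>2}: play speaker {s}")  # log hidden from listeners
```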
 
OP
svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
OOPS, your bias is showing again. You said: "Note that the often quoted Harman research found dipole to be no good. Which begs the question, how robust is Hard?"

You need to read more carefully. There are two dipole speakers recorded in my book as having both subjective and objective evaluations. One failed because it was a dense collection of resonances, easily observable (in the measurements) and audible (in the listening tests). It failed not because it was a dipole, but because it was a poor loudspeaker. A second dipole was criticized because of its extreme directivity at high frequencies (a 3-inch tweeter can do that), but in the total context of loudspeakers it was, overall, a respectable performer. Neither deficiency had anything to do with the basic dipole radiation pattern; in fact, the second unit, a Quad, is not a pure dipole but, by their own description, a modified one, with felt pads damping the rear radiation.

What you said is simply not true.

" . . . how robust is Hard?" Very robust. But "Hard" is hard, or difficult, because most interested parties lack the technical facilities and/or knowledge to collect anechoic data and process it for presentation to two eyes and a brain. Unless the subjective input is double-blind, preferably involving positional substitution, and is balanced by competent and comprehensive "hard" data, rational conclusions are not possible. This has been the normal case, in the past, and even now.

Sorry for my lack of precision. My main point, however, was this quote by Evans et al. (2009):

“It is evident that the dependence of listeners’ fidelity ratings on position (and room) is also important. Whilst the dipole is rated as worst in Position 2 (less than 1m from the back wall, central), it is rated as best when moved to Position 1 (over 1m from back and side wall). This suggests that the perceived influence of directivity is dependent on both position and room type”.
Source: https://www.researchgate.net/public..._sound_quality_-_a_review_of_existing_studies

The same article says Toole (2008) used only one speaker position («conventional»). Therein lies a methodological vulnerability, doesn’t it?
 

JJB70

Major Contributor
Forum Donor
Joined
Aug 17, 2018
Messages
2,905
Likes
6,156
Location
Singapore
I work in a completely separate but nevertheless technical field. I think some interesting points have been raised in this discussion on the nature of research. I've written seven technical papers on combustion thermodynamics and emissions, and would offer two very generic (and probably so general as to be rather empty) observations:

-Don't assume that the fact that a paper has been published means that any ideas, theories, analysis and conclusions should be accepted unquestioningly. I follow the transactions of the two engineering institutes of which I am a member and attend conferences and over the years I've seen some real howlers; and

-In most cases to really understand a technical paper requires a certain level of foundation/core knowledge and a basic understanding of the subject. There are all sorts of nuances and interdependencies in engineering and it is very easy for the interested amateur (and I do not use amateur in a pejorative way, I'm an amateur audio enthusiast) to end up barking up some seriously wrong trees.

In my experience these sorts of arguments usually end up polarised and polarising.
 

Floyd Toole

Senior Member
Audio Luminary
Technical Expert
Industry Insider
Forum Donor
Joined
Mar 12, 2018
Messages
367
Likes
3,907
Location
Ottawa, Canada
Sorry for my lack of precision. My main point, however, was this quote by Evans et al. (2009):

“It is evident that the dependence of listeners’ fidelity ratings on position (and room) is also important. Whilst the dipole is rated as worst in Position 2 (less than 1m from the back wall, central), it is rated as best when moved to Position 1 (over 1m from back and side wall). This suggests that the perceived influence of directivity is dependent on both position and room type”.
Source: https://www.researchgate.net/public..._sound_quality_-_a_review_of_existing_studies

The same article says Toole (2008) used only one speaker position («conventional»). Therein lies a methodological vulnerability, doesn’t it?

The statement: "This suggests that the perceived influence of directivity is dependent on both position and room type” is not proved. We now know that bass performance accounts for about 30% of the factor weighting in subjective evaluations. Changing locations dramatically changes this aspect of what is heard. This is why we do positional substitution evaluations.
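To illustrate the logic of that paragraph, here is a toy calculation. It is my own sketch, not the published preference model: only the roughly 30% bass weight comes from the post above, while the other factors and all the numbers are invented. It shows how a position-driven change in bass alone can move an overall rating, which is what positional substitution is designed to neutralize:

```python
# Toy weighted preference score -- invented factors and numbers. The 0.30
# bass weight echoes the "about 30%" remark above; nothing else is sourced.
weights = {"bass": 0.30, "smoothness": 0.40, "directivity": 0.30}

def preference(scores):
    """Weighted sum of 0-10 sub-scores."""
    return sum(weights[k] * scores[k] for k in weights)

# Same speaker, two room positions: only the bass sub-score changes,
# because position mostly changes what is heard at low frequencies.
position_1 = {"bass": 8.0, "smoothness": 7.0, "directivity": 7.0}
position_2 = {"bass": 4.0, "smoothness": 7.0, "directivity": 7.0}

print(preference(position_1))  # 7.3
print(preference(position_2))  # 6.1 -- position alone moved the rating

# Positional substitution compares each speaker in the same location,
# so position does not masquerade as speaker quality.
```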

I would add that in my earlier response, I failed to note that one of the two tests was done at the Canadian taxpayers expense (at the NRCC) and the second at Harman's expense. There is no commercial bias, as keeps on being implied.
 