
Horn Speakers - Is it me or.......

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,579
Likes
3,901
Location
Princeton, Texas
The point I am making is that people are allowed to judge speakers in whatever way they want. You had made a distinction that people were not allowed to judge spatial qualities. I showed that they could if they value it. [emphasis Duke's]


I have no idea where you got that from. I do not think I ever said anything like that; at least I never intended to. Can you tell me where you saw it so I can clarify?

No, it would NOT be a tie. It would be “too hard to tell — thanks to stereo”. Like I said the first time.

It’s important to know how to interpret statistics. Get it wrong and you go down all sorts of rabbit holes.


I do not claim to be a statistician, but how is "too hard to tell - thanks to stereo" NOT a change in ranking from "Rega is the clear winner in mono"?

What I am saying is exactly what I said: you don’t have evidence that changing to stereo changes the spatial quality rankings.


Then what exactly is the change in order of the ranking squares that I've circled in Figure 7.14 (post #334) evidence of?

This shouldn’t be so hard.


Agreed! I am looking at the spatial quality rankings in Figure 7.14 and taking them at face value. How are you arriving at your conclusion?

And while we're at it, perhaps you could present your evidence that spatial quality rankings do not change from mono to stereo?
 

Newman

Major Contributor
Joined
Jan 6, 2017
Messages
3,530
Likes
4,371
...I do not claim to be a statistician, but how is "too hard to tell - thanks to stereo" NOT a change in ranking from "Rega is the clear winner in mono"?
Because stereo, with its poor ability to show differences, is squishing the results all together so much that the respondents can no longer reliably tell the difference in spatial quality between Rega and KEF. The test protocol forces them to pick one or the other, i.e. guess if necessary, and Figure 7.14 is the sort of result that is possible when listeners are forced to always make a choice.

A statistician, purely with his statistician's hat on, coming in cold and handed nothing but the two sets of data for Rega and KEF, would probably say that the ranking changed. BUT a researcher with some breadth of experience, and with knowledge of the design of this particular experiment and of all the other results, would say something more like: "Hang on, I'm seeing a pattern here across numerous tests, where stereo has less resolution of spatial quality, to the point where respondents are probably guessing a fair bit of the time. So the correct interpretation of the Rega vs KEF spatial score in stereo is that the test isn't good enough to determine the relative spatial attributes of the speakers. That result is a toss-away in terms of learning about the spatial attributes of those two speakers, but not a toss-away in terms of learning about the limitations of using stereo for comparing speakers."
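To make the guessing point concrete, here is a quick toy simulation. The numbers are entirely hypothetical and this is not Harman's protocol, just an illustration of what forced choice does when listeners are close to guessing:

```python
import numpy as np

rng = np.random.default_rng(0)

def winner_rate(p_prefer_a, n_listeners, n_panels=10_000):
    """Simulate panels of listeners who must always pick speaker A or B.

    p_prefer_a: probability that any single vote goes to speaker A
                (0.5 = pure guessing, 1.0 = the difference is obvious).
    Returns the fraction of panels in which A gets the majority of votes.
    """
    votes_for_a = rng.binomial(n_listeners, p_prefer_a, size=n_panels)
    return np.mean(votes_for_a > n_listeners / 2)

# Hypothetical discriminability: easy to hear in "mono", nearly masked in "stereo"
for label, p in [("mono", 0.80), ("stereo", 0.55)]:
    print(f"{label:6s}: the truly preferred speaker tops the panel "
          f"in {winner_rate(p, n_listeners=20):.0%} of runs")
```

When the votes are near chance, the majority flips from panel to panel, so an apparent reversal of the ranking is not, by itself, evidence that anything about the speakers changed.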

It's like using your reading glasses to compare the sharpness of two photographs, and deciding A is sharper than B, then taking off your reading glasses, and deciding they both look blurry but if forced to pick one you might say B is slightly sharper. The real lesson isn't about the change in ranking -- that's actually irrelevant -- the real lesson is not to compare sharpness of photos without your reading glasses! :cool:

Then what exactly is the change in order of the ranking squares that I've circled in Figure 7.14 (post #334) evidence of?

To answer that I will have to repeat the same words for a FOURTH time. So, my answer is, "see above".
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,728
Likes
38,936
Location
Gold Coast, Queensland, Australia
The debate will keep rearing its head until the proponents admit that neither single speaker nor paired speaker stereo testing covers enough of the important characteristics of domestic loudspeakers.

The use-case is clear. Nobody sits in front of a single speaker like they did at the turn of last century listening to a console radio. Nobody seriously listens to mono recordings on one single speaker.

The entire premise of stereo is spatial representation, accuracy, stability in image placement and the promise of a 3D space. I've been quite critical of these real facets of loudspeakers which are missed in ASR reviews, but I also understand there is no current mechanism to objectively test for these characteristics.
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,579
Likes
3,901
Location
Princeton, Texas
Because stereo, with its poor ability to show differences, is squishing the results all together so much that the respondents can no longer reliably tell the difference in spatial quality between Rega and KEF. The test protocol forces them to pick one or the other, i.e. guess if necessary, and Figure 7.14 is the sort of result that is possible when listeners are forced to always make a choice.


First of all Newman, THANK YOU VERY MUCH for posting a thorough explanation of your position.

I don't think the protocol in the Rega/KEF/QUAD test forced listeners to guess. Please examine Figure 7.13, the questionnaire used for the tests whose results are shown in Figure 7.14 (see post #334 for both.)

I do understand that spatial quality preference results “bunch up” in stereo relative to mono, but for people who listen exclusively in stereo (or multichannel), arguably it would be the heightened differentiation in mono spatial quality preference which is of limited real-world relevance.

Single-speaker mono evaluation has no reliable way of identifying a loudspeaker whose spatial quality in stereo is exceptional. (In contrast, single-speaker mono evaluation does have a reliable way of identifying a loudspeaker whose sound quality in stereo is exceptional.) This claim of course presumes that loudspeakers with exceptional spatial quality in stereo may exist.

One thing not clear to me is why the spatial quality ratings bunch up in stereo. You suggested “a pattern across numerous tests, where stereo has less resolution of spatial quality to the point where respondents are probably guessing a fair bit of the time.” That may be the case. Here is another possible explanation:

Loudspeaker characteristics which excel at creating credible stereo images tend to result in a less-than-credible sense of spaciousness, and vice-versa. So if the Rega excelled in one area and the KEF in the other, their overall spatial quality scoring in stereo might trend towards being very similar despite their perceived spatial qualities actually having dissimilar characteristics. If this is indeed the case, then two spatial quality categories are arguably called for.

It's like using your reading glasses to compare the sharpness of two photographs, and deciding A is sharper than B, then taking off your reading glasses, and deciding they both look blurry but if forced to pick one you might say B is slightly sharper.


Pursuant to your reading glasses analogy: Listening in stereo is like using reading glasses (assuming you use them), as that's the way it's done for most enjoyable results. Listening in mono is like not using your reading glasses to look at something you would normally use them for. (Whether your results are more differentiated one way than the other is of academic interest only if you always use them anyway.)

So in my opinion even though the spatial quality rankings bunch up in stereo, stereo is still the most real-world-relevant way to evaluate spatial quality, perhaps separating out stereo imaging from sense of space to improve the resolution of the results. I'm not suggesting Harman or anyone else expend the time and energy and money to do this; I'm only arguing that despite all of its other highly desirable attributes, mono listening is inadequate for evaluating stereo spatial quality.

The entire premise of stereo is spatial representation, accuracy, stability in image placement and the promise of a 3D space. I've been quite critical of these real facets of loudspeakers which are missed in ASR reviews, but I also understand there is no current mechanism to objectively test for these characteristics. [emphasis Duke's]


Very well said!
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
Because stereo, with its poor ability to show differences, is squishing the results all together so much that the respondents can no longer reliably tell the difference in spatial quality between Rega and KEF.

Which differences are you referring to and if you mean spatial differences can you provide evidence of this?

Or are you basing your claim on Toole’s interpretation of the data?
 

Newman

Major Contributor
Joined
Jan 6, 2017
Messages
3,530
Likes
4,371
I don't even know how to respond to that, tuga. It's as if a cohort of people simply refuse to learn the lessons that Toole's research teaches them, because it doesn't fit with their wishes. "Oh, they didn't do the test perfectly, therefore it is so flawed that I can and will dismiss it." "Oh, they didn't test this aspect and that aspect, which proves they missed critically important stuff that I need to know." "Oh, I know better the conclusion that he needed to draw instead of the conclusion he did draw, despite him being 1000x more experienced and capable than I, it's obvious that he jumps to conclusions too easily. His interpretation of the results and data is not something that anyone would want to accept, so you had better provide separate evidence if you are in agreement with Toole."
:facepalm:
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
I don't even know how to respond to that, tuga. It's as if a cohort of people simply refuse to learn the lessons that Toole's research teaches them, because it doesn't fit with their wishes. "Oh, they didn't do the test perfectly, therefore it is so flawed that I can and will dismiss it." "Oh, they didn't test this aspect and that aspect, which proves they missed critically important stuff that I need to know." "Oh, I know better the conclusion that he needed to draw instead of the conclusion he did draw, despite him being 1000x more experienced and capable than I, it's obvious that he jumps to conclusions too easily. His interpretation of the results and data is not something that anyone would want to accept, so you had better provide separate evidence if you are in agreement with Toole."
:facepalm:

The data is there to be analysed and interpreted. I disagree with his interpretation (in relation to the aforementioned study) and find that it compromised some of the ensuing research and methodology.
Some decisions (eg shuffler design) may have been made for practicality and simplicity instead of accuracy and effectiveness. I can’t help but feel that some corners may have been cut and at times oversimplification may have compromised the design and the methodology. I prefer to read his research and opinions at face value with an inquisitive approach, not as gospel...
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,728
Likes
38,936
Location
Gold Coast, Queensland, Australia
Some decisions (eg shuffler design) may have been made for practicality and simplicity instead of accuracy and effectiveness.
I can’t help but feel that some corners may have been cut and at times oversimplification may have compromised the design. I prefer to read his research and opinions at face value with an inquisitive approach, not as gospel...

This is so true. Take what you want from the research and form your own opinions and conclusions. It is not gospel and never will be.

For the life of me, I cannot comprehend the over-complication and ridiculousness of the speaker shuffler. It could have been done in stereo, in relative silence, with stage consistency and speed, using a conveyor with speakers affixed behind a nice black curtain, far more effectively than with the device they used.

Sure this company sponsored research was decades ago, but to trot it out in 2021 as the only 'research' worth anything is tiresome.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
It is a blessing (pun intended) that so many designers/engineers have joined ASR, or one might confuse it with a church...
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,579
Likes
3,901
Location
Princeton, Texas
The data is there to be analysed and interpreted. I disagree with his interpretation (in relation to the aforementioned study) and find that it compromised some of the ensuing research and methodology.


"I have no use for blind and double blind listening tests the way Harman implements them. Sound systems and their environments are very complicated. No speaker is even close to sounding "real" so personal opinion is always a major consideration. Most blind tests are based on a series of assumptions that enable the test to be easy or practical to implement. Unfortunately, these assumptions often invalidate or color the results because they cover up or accentuate aspects of the loudspeaker design." - Greg Timbers, designer of many of JBL's best loudspeakers [emphasis Duke's]

In his peer-reviewed paper entitled "A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements", Sean Olive acknowledges limitations of the methodology used:

"The conclusions of this study may only be safely generalized to the conditions in which the tests were performed. Some of the possible limitations are listed below.

"1. Up to this point, the model has been tested in one listening room.

"2. The model doesn't include variables that account for nonlinear distortion (and to a lesser extent, perceived spatial attributes).

"3. The model is limited to the specific types of loudspeakers in our sample of 70.

"4. The model's accuracy is limited by the accuracy of the subjective measurements." [emphasis Duke's]

Apparently @restorer-john and you and I are not alone in seeing some limitations with some of the methodology and/or conclusions. Perhaps we're not the idiots @Newman portrays us to be.
 

Inner Space

Major Contributor
Forum Donor
Joined
May 18, 2020
Messages
1,285
Likes
2,939
"Most blind tests are based on a series of assumptions that enable the test to be easy or practical to implement. Unfortunately, these assumptions often invalidate or color the results because they cover up or accentuate aspects of the loudspeaker design." - Greg Timbers, designer of many of JBL's best loudspeakers [emphasis Duke's]

Agree strongly. I don't have the credentials to offer a serious critique or identify flaws, but I am uneasy ... 99% of the time, Toole's test participants were all required to listen at different off-axis angles ... and the consistently preferred loudspeakers were those that perform well off-axis. Well, duh. The result is predetermined by the methodology.

Such testing means nothing to me, who listens on-axis in a nonreflective space.

Further, for all his virtues, Toole is significantly intemperate about what he personally and privately perceives as the inadequacy of conventional stereo. He seems to want something different than it's designed to deliver. He prefers to smear a huge spurious halo all around the intended image, via reflections, in search of something he calls "envelopment". Again, that's of no use to me, who prefers to hear what I created in the mix, nothing less, and certainly nothing more.

No question that much of his research is interesting and useful, but a measure of skepticism is always a good idea, surely?
 

Newman

Major Contributor
Joined
Jan 6, 2017
Messages
3,530
Likes
4,371
Sure, but a measure of scepticism based on what? 99% (probably 1% more than that but I’ll cut a bit of slack) of the scepticism you see on audio forums is based on the research findings being inconsistent with the reader’s sighted listening experiences and the sighted reviewing reports from self-appointed guru reviewers.

Such scepticism means nothing, and will never tell anyone anything about sound waves coming out of audio gear. It is not a valid basis of objection. Pretending that it is valid or even scientific is an attempt to delude the audience — and perhaps even more importantly, oneself. (I saw a recent article by climate change deniers who claimed that they were being more scientific than people who accepted the scientific consensus, because they were being more sceptical, and scepticism is a hallmark of the scientific approach — !!!??? The sheer gall!)

So, one needs to be careful about scepticism, too. It can be for all the wrong reasons. I can easily see through the self-serving scepticism that is such an ongoing feature of audio discussion forums. It is literally scepticism based on the inconvenience of the truth that the research has indicated.

I am sceptical about audio research. There isn’t enough unbiased money in it, and not enough money overall. From about the late 80s to early 2010s the private money that Dr Harman threw into academic non-profit audio research, like a benefactor, was wildly exceptional. With more unbiased money would come larger experiments with more expense put into experimental design, and more of them, plus, importantly, more validation experiments to validate unexpected conclusions from prior research.

But if that money were there, we would still complain, because the stuff that ‘we’ want to see re-proven and re-validated with ‘more and better’ research, over and over and over again, is not what researchers are interested in: to the research community those things are actually done deals, but we audiophiles just can’t bring ourselves to accept the inconvenience they pose by contradicting our sighted home listening experiences. Stuff like DSD vs PCM, high-res vs standard-res, valve vs transistor, cable vs cable, which loudspeaker driver material, NFB vs non-NFB — in a nutshell, seeking confirmation of every sighted listening experience, and only believing the research if it confirms sighted impressions. If research inconveniently contradicts sighted experiences, then it is ‘flawed’ and deserves ‘healthy scepticism’. You think researchers, no matter how moneyed and independent, would want to step on that treadmill? Not a chance.

So yes, I am sceptical, but I am also rational about it, and that means two things: accepting the working conclusions from the best available research as the basis for proceeding; and, NOT entertaining doubts based on sighted listening impressions. If someone expresses ‘healthy scepticism’ of the best available research I have seen, I hope I will get an answer when I ask them for the equal or better contradictory research that they are drawing their scepticism from. That would be most welcome, and a pleasant change.

cheers
 

MakeMineVinyl

Major Contributor
Joined
Jun 5, 2020
Messages
3,558
Likes
5,875
Location
Santa Fe, NM
The debate will keep rearing its head until the proponents admit that neither single speaker nor paired speaker stereo testing covers enough of the important characteristics of domestic loudspeakers.

The use-case is clear. Nobody sits in front of a single speaker like they did at the turn of last century listening to a console radio. Nobody seriously listens to mono recordings on one single speaker.

The entire premise of stereo is spatial representation, accuracy, stability in image placement and the promise of a 3D space. I've been quite critical of these real facets of loudspeakers which are missed in ASR reviews, but I also understand there is no current mechanism to objectively test for these characteristics.
I completely agree, but I do find that older mono recordings, like Blue Note jazz titles, sound better to me when played through just either the left or right speaker. With large horns like those used in studios of the era, the music just sounds 'right' with a single speaker playing mono recordings, just like the musicians heard when tapes were played back out in the studio at the original sessions.
 

jhaider

Major Contributor
Forum Donor
Joined
Jun 5, 2016
Messages
2,874
Likes
4,676
Such testing means nothing to me, who listens on-axis in a nonreflective space.

Why would you do that to yourself?

A “non-reflective” room makes me think solitary-confinement-in-a-mental-health-facility.
 

Sal1950

Grand Contributor
The Chicago Crusher
Forum Donor
Joined
Mar 1, 2016
Messages
14,206
Likes
16,949
Location
Central Fl
A “non-reflective” room makes me think solitary-confinement-in-a-mental-health-facility.
Speaking from experience Jay ? :p
 

ctrl

Major Contributor
Forum Donor
Joined
Jan 24, 2020
Messages
1,633
Likes
6,241
Location
.de, DE, DEU
I completely agree, but I do find that older mono recordings, like Blue Note jazz titles, sound better to me when played through just either the left or right speaker. With large horns like those used in studios of the era, the music just sounds 'right' with a single speaker playing mono recordings,

I'm not surprised. There are clearly measurable differences between monophonic and stereophonic listening. A mono recording optimized for playback on a single speaker can't sound right on a stereophonic system. That is not to say such a recording can't sound good when listened to in stereo, but it won't sound as intended.


These differences are huge and should therefore be clearly audible. At the risk of sounding like a senile grandpa, here's my experience again: anyone who develops loudspeakers learns very quickly what it means to optimally tune a speaker monophonically, only to experience how that same speaker sounds one or two classes worse when listening in stereo.
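One of those measurable differences is easy to demonstrate even in a toy model. The sketch below sums the same mono signal arriving from two speakers at slightly different distances to one ear; the geometry is hypothetical, and it ignores the head, HRTFs and the room entirely, so it only illustrates the mechanism:

```python
import numpy as np

c = 343.0                              # speed of sound, m/s
f = np.linspace(100, 10_000, 2000)     # frequency axis, Hz

# Hypothetical geometry: one ear sits 2.50 m from the left speaker and
# 2.53 m from the right speaker (a few centimetres off the centre line).
d_left, d_right = 2.50, 2.53
dt = (d_right - d_left) / c            # extra travel time of the right-hand copy

# A single speaker gives one arrival: flat (0 dB) across this whole range.
# Two speakers playing the same mono signal give two arrivals that sum with
# a frequency-dependent phase offset, i.e. a comb filter at the ear.
dual_mag = np.abs(1.0 + np.exp(-2j * np.pi * f * dt)) / 2.0

notch = f[np.argmin(dual_mag)]
print(f"Deepest notch in this range near {notch:.0f} Hz, "
      f"{20 * np.log10(dual_mag.min()):.1f} dB down")
```

Real in-room measurements look different because the head and reflections smooth and shift things around, but the point stands: the same recording played from one speaker and from two speakers does not arrive at the ear as the same signal.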

More details and the source can be found here.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
No one is questioning the validity and effectiveness of on-axis single-speaker listening assessment for some aspects of performance, such as tonal balance and resonances; the questions concern only its use for assessing spatiality/stereo and preference.

I also question the use of off-axis assessment in the shuffler, the absence of toe-in in the stereo shuffler, and the downplaying of every parameter other than frequency response / directivity.

And if some of these assertions are indeed false, then some of the ensuing research is compromised.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
It’s easy to fall prey to the beautiful simplicity and utmost comforting ease of taking the book as the definitive, be-all-and-end-all account of everything there is to know about speakers...
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
These differences are huge and should therefore be clearly audible. At the risk of sounding like a senile grandpa, here's my experience again: anyone who develops loudspeakers learns very quickly what it means to optimally tune a speaker monophonically, only to experience how that same speaker sounds one or two classes worse when listening in stereo.

Are you advocating that anyone buying new speakers should listen to just one in mono?
 