
Danny Richie's latest...

From Sean Olive (bolding mine):

"In summary, the sighted and blind loudspeaker listening tests in this study produced significantly different sound quality ratings. The psychological biases in the sighted tests were sufficiently strong that listeners were largely unresponsive to real changes in sound quality caused by acoustical interactions between the loudspeaker, its position in the room, and the program material. In other words, if you want to obtain an accurate and reliable measure of how the audio product truly sounds, the listening test must be done blind. It’s time the audio industry grow up and acknowledge this fact, if it wants to retain the trust and respect of consumers. It may already be too late according to Stereophile magazine founder, Gordon Holt, who lamented in a recent interview:


“Audio as a hobby is dying, largely by its own hand. As far as the real world is concerned, high-end audio lost its credibility during the 1980s, when it flatly refused to submit to the kind of basic honesty controls (double-blind testing, for example) that had legitimized every other serious scientific endeavor since Pascal. [This refusal] is a source of endless derisive amusement among rational people and of perpetual embarrassment for me..”"

http://seanolive.blogspot.com/2009/04/#:~:text=In summary, the,embarrassment for me..”
I think it largely depends on how significant the differences between the loudspeakers are. I have a copy of Toole’s book and looked into the experiment in question. The two speakers, which were almost identical except for having different crossovers tuned for separate markets, were consistently chosen over the other two systems in both the blind and sighted tests by a considerable margin. I won’t directly quote from the book to avoid any copyright issues.

However, when the test moved to different room positions, the results shifted, and one of the other systems became the preferred choice. As we know, room acoustics significantly influence the sound, so it’s not surprising that certain speakers perform better in some positions than others. Given that preference is strongly influenced by bass response, it’s easy to see how some speakers could trigger room modes and perform worse, while others do better. One of the other systems was a satellite and subwoofer setup BTW.

In any case, the results of the initial test align with my belief that a sighted test, conducted with speakers in the same room and position, can still be reliable as long as the differences between the speakers are substantial. The user mentioned having had both the KEF R3 and a pair from GR Research. While I may be making some assumptions here, I tend to believe that KEF produces speakers with greater neutrality, lower distortion, and less resonance than GR Research. Given these differences, we’re likely dealing with two very different sounding speaker pairs.
I have no problem with his preference for the GR Research speakers. An issue arises when one uses a personal and subjective experience to convince others they're wrong.
 
In any case, the results of the initial test align with my belief that a sighted test, conducted with speakers in the same room and position, can still be reliable as long as the differences between the speakers are substantial.
I agree, providing the test is conducted over a long period of time (half a day or more) and uses a wide variety of programme. A quick A-B with a couple of songs can lead to a bad decision.

What is the GR Research speaker in question? Not that awful single driver thing? Some of his other designs do look quite promising.
 
I agree, providing the test is conducted over a long period of time (half a day or more) and uses a wide variety of programme. A quick A-B with a couple of songs can lead to a bad decision.

What is the GR Research speaker in question? Not that awful single driver thing? Some of his other designs do look quite promising.
I’m not sure. I looked through the thread but couldn’t determine which one he has or had. I don't think he wrote it.
 
He did also find that people could prefer a different speaker when listening blind to what speaker they preferred when listening sighted.

Which is really quite scary when you think about it.

Trouble is the practical implementation is almost non-existent. We can't easily do blind testing when choosing a speaker, and we'll listen sighted in normal use, with the bias intact.

So back to using the measurements to choose. :)
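For what it's worth, "using the measurements to choose" usually means something like Olive's 2004 preference-rating model, which maps spinorama metrics to a predicted listener preference. A rough sketch, with the coefficients as they are commonly cited and purely hypothetical input values:

```python
# Sketch of Olive's (2004) preference-rating model (coefficients as commonly
# cited; the example inputs below are hypothetical, for illustration only).
import math

def predicted_preference(nbd_on: float, nbd_pir: float, sm_pir: float, lfx_hz: float) -> float:
    """Predicted preference rating on a roughly 0-10 scale.

    nbd_on  -- narrow-band deviation of the on-axis response (dB)
    nbd_pir -- narrow-band deviation of the predicted in-room response (dB)
    sm_pir  -- smoothness (r^2 of a regression line) of the PIR, 0..1
    lfx_hz  -- low-frequency extension (-6 dB point) in Hz
    """
    lfx = math.log10(lfx_hz)  # the model uses log10 of the extension frequency
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir + 2.32 * sm_pir - 4.31 * lfx

# Hypothetical example: a fairly smooth speaker with bass extension to 40 Hz
score = predicted_preference(nbd_on=0.3, nbd_pir=0.4, sm_pir=0.9, lfx_hz=40.0)
print(f"{score:.1f}")  # prints 5.9
```

Smoother responses and deeper bass extension both raise the score, which is consistent with the point above that preference is strongly driven by bass response.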

...and back to the original thread topic, as Danny really dumped on the new SVS Evolution bookshelf recently. He was so critical that it made me wonder why the owner sent them to GR if they found them so objectionable. So I searched for some measurements (SVS only posts specs) and found only his. Here is the FR he posted using his usual zoomed-in scale...

[Attached image: Danny's posted FR, zoomed-in scale]


Admittedly, this does look ugly, but if you look at it on a more typical 50 dB scaling, the speaker appears to meet its +/- 3 dB spec (as usual, Danny does not measure below 200 Hz)...

[Attached image: the same FR replotted on a 50 dB scale]


After some analysis (he even tries replacing the woofer), Danny appears to declare the speaker unfixable. But, unsurprisingly, he does manage to create an upgraded crossover! To his credit, it does flatten the response nicely, but it comes with a ~$400 upcharge over the original cost of the speaker pair ($1200).
 

...and back to the original thread topic, as Danny really dumped on the new SVS Evolution bookshelf recently. He was so critical that it made me wonder why the owner sent them to GR if they found them so objectionable. So I searched for some measurements (SVS only posts specs) and found only his. Here is the FR he posted using his usual zoomed-in scale...

View attachment 398591

Admittedly, this does look ugly, but if you look at it on a more typical scaling, the speaker appears to meet its +/- 3 dB spec (although, as usual, Danny does not measure below 200 Hz)...

View attachment 398598

After some analysis (he even tries replacing the woofer), Danny appears to declare the speaker unfixable. But, unsurprisingly, he does manage to create an upgraded crossover! To his credit, it does flatten the response nicely, but it comes with a ~$400 upcharge over the original cost of the speaker pair ($1200).
The "pre-upgrade" graph has a Y-axis scale from 104 to 78 dB, but after the "upgrade," it shifts to 120 to 63 dB. It's still within a +/- 3 dB range, right? :facepalm:
 
Right - I don't recall seeing it either, so I just thought I may have missed it.

He has NX-Oticas. You can find it here...


Here are Danny's posted measurements...

[Attached image: Danny's posted NX-Otica measurements]

Not all that great for a $3000 kit (but it is his usual scaling). More confusingly, the NX-Tremes have the exact same posted measurements. The only third-party measurements of a comparable GR open baffle are from Amir's testing of a center channel. It did not fare well. :(

P.S. The response under 1.2 kHz appears to be resonating badly (also indicated on the posted CSD). Power response is very uneven above that; you're not going to EQ around it. I'm sure it gets lavished with typical Danny superlatives. Much better alternatives can be found from gainphile, and they demonstrate what a well-designed OB is capable of achieving...

[Attached image: gainphile open-baffle measurement]
 
The "pre-upgrade" graph has a Y-axis scale from 104 to 78 dB, but after the "upgrade," it shifts to 120 to 63 dB. It's still within a +/- 3 dB range, right? :facepalm:

Yes, but both graphs are the original speaker response as measured by Danny. He uses a 25 dB scale, whereas the industry standard is 50 dB.

My earlier post shows this roughly; I will make it clearer.
 
Yes, but both graphs are the original speaker response as measured by Danny. He uses a 25 dB scale, whereas the industry standard is 50 dB.

My earlier post shows this roughly; I will make it clearer.
Oh, I understood it as the top one being the original and the bottom one showing the corrected crossover.
 
Oh, I understood it as the top one being the original and the bottom one showing the corrected crossover.

I understand, but I was only trying to show how Danny makes the response look worse by manipulating the scale. To be fair, he does post measurements of his own speakers to the same scale.

P.S. Amir shows the same scaling game at the start of post #225
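The effect of the axis span is easy to quantify. A quick sketch with a purely synthetic response (illustrative values only, not Danny's actual data): a speaker with exactly +/- 3 dB of ripple just meets a typical +/- 3 dB spec, yet the same ripple fills twice as much of the plot on a 25 dB axis as on a 50 dB one.

```python
import numpy as np

# Synthetic response (purely illustrative) with exactly +/- 3 dB of ripple,
# i.e. a speaker that just meets a typical +/- 3 dB spec.
freqs = np.logspace(np.log10(200), np.log10(20_000), 500)   # 200 Hz .. 20 kHz
response_db = 85 + 3 * np.sin(2 * np.pi * np.log10(freqs))  # swings 82..88 dB

peak_to_peak = response_db.max() - response_db.min()        # ~6 dB
print(f"Deviation: +/- {peak_to_peak / 2:.1f} dB")          # +/- 3.0 dB

# The same 6 dB of ripple looks very different depending on the Y-axis span:
for span in (25, 50):   # 25 dB (Danny's plots) vs 50 dB (the more common span)
    print(f"{span} dB axis: ripple fills {peak_to_peak / span:.0%} of the plot height")
# 25 dB axis: ripple fills 24% of the plot height
# 50 dB axis: ripple fills 12% of the plot height
```

Same data, same spec compliance; only the visual impression changes.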
 
He did also find that people could prefer a different speaker when listening blind to what speaker they preferred when listening sighted.

Which is really quite scary when you think about it.

Trouble is the practical implementation is almost non-existent. We can't easily do blind testing when choosing a speaker, and we'll listen sighted in normal use, with the bias intact.

So back to using the measurements to choose. :)

The problem there, as I have brought up before, is the conundrum implied by all of the above.

The measurement criteria were derived under blind listening conditions, but you will be listening under sighted conditions.

If sighted listening is by nature so unreliable, how are the measurements going to help predict your experience listening sighted to your loudspeakers?

Either the measurements will help predict what you will perceive under sighted listening or they will not.

If not, what use are they?

But if so, it suggests that sighted listening can be usefully accurate (that under sighted conditions in your home, you will accurately perceive the characteristics that made those speakers sound good under blind conditions).

But we’ve been down this road before…:)
 
He did also find that people could prefer a different speaker when listening blind to what speaker they preferred when listening sighted.

Which is really quite scary when you think about it.

Trouble is the practical implementation is almost non-existent. We can't easily do blind testing when choosing a speaker, and we'll listen sighted in normal use, with the bias intact.

So back to using the measurements to choose. :)

I'm thinking about a test the manufacturers could possibly do. Or maybe a third party could do it as independent research.

Make a speaker that performs extremely well objectively, then do a test on a target group where you hide the speaker behind some acoustically transparent fabric and present different fake appearances via augmented reality or holographic projection.

Could be fun to see what kind of aesthetics will add or subtract to/from the experience. It would probably also differ depending on the type of music being played.
 
Make a speaker that performs extremely well objectively, then do a test on a target group where you hide the speaker behind some acoustically transparent fabric and present different fake appearances via augmented reality or holographic projection.

I’m interested in how the blind tests translate to sighted listening impressions.

So take a very well-designed speaker that scores very high under blind listening, and then set the speaker up for a group and check against the group's sighted impressions.

Actually, that brings up a question; if I’ve seen this, I can’t remember:

Does anybody know how Revel speakers scored in SIGHTED listening, versus blind listening?

It would be interesting if there is some relevant amount of consistency in the sighted/blind assessments.
 
I’m interested in how the blind tests translate to sighted listening impressions.

So take a very well-designed speaker that scores very high under blind listening, and then set the speaker up for a group and check against the group's sighted impressions.

Actually, that brings up a question; if I’ve seen this, I can’t remember:

Does anybody know how Revel speakers scored in SIGHTED listening, versus blind listening?

It would be interesting if there is some relevant amount of consistency in the sighted/blind assessments.

The test mentioned in Toole's book, referenced in this post, concludes that in both sighted and blind evaluations, the two nearly identical Harman speakers outperformed the other speakers in the comparison.
This indicates that one can reliably test speakers with noticeable differences when placed in the same position in the same room under sighted conditions. Unfortunately, the specific speakers used in the test were not disclosed, so we can't determine the sound characteristics of the four sets of speakers.
 
I'm thinking about a test the manufacturers could possibly do. Or maybe a third party could do it as independent research.

Make a speaker that performs extremely well objectively, then do a test on a target group where you hide the speaker behind some acoustically transparent fabric and present different fake appearances via augmented reality or holographic projection.

Could be fun to see what kind of aesthetics will add or subtract to/from the experience. It would probably also differ depending on the type of music being played.
Wharfedale did a consumer test. Same speakers in different colours. Sighted listening. The colours directly related to the subjects' feedback on the sound balance, red being warm sounding etc.
 
The test mentioned in Toole's book, referenced in this post, concludes that in both sighted and blind evaluations, the two nearly identical Harman speakers outperformed the other speakers in the comparison.
This indicates that one can reliably test speakers with noticeable differences when placed in the same position in the same room under sighted conditions. Unfortunately, the specific speakers used in the test were not disclosed, so we can't determine the sound characteristics of the four sets of speakers.

Right. I’ve seen that. Thanks. There were seemingly some indications that sighted listening was tracking the general trend of blind listening with the Revel speakers, but as you say, there wasn’t really enough data to be conclusive. And not nearly enough, I guess, to answer the question I was asking.


Wharfedale did a consumer test. Same speakers in different colours. Sighted listening. The colours directly related to the subjects' feedback on the sound balance, red being warm sounding etc.

Interesting. Not surprising. Do you happen to have a link?
 
Right. I’ve seen that. Thanks. There were seemingly some indications that sighted listening was tracking the general trend of blind listening with the Revel speakers, but as you say, there wasn’t really enough data to be conclusive. And not nearly enough, I guess, to answer the question I was asking.
This is a feature of some of the tests mentioned in the book. While they offer interesting and valuable insights, in certain cases their relevance stems from their being the best available source of information at the time.
 
Right. I’ve seen that. Thanks. There were seemingly some indications that sighted listening was tracking the general trend of blind listening with the Revel speakers, but as you say, there wasn’t really enough data to be conclusive. And not nearly enough, I guess, to answer the question I was asking.




Interesting. Not surprising. Do you happen to have a link?
No, it was in the early 1980s.
 