
Revel Salon2 vs Genelec 8351B - Blind Test Preparations

OP

echopraxia

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
1,109
Likes
2,697
Location
California
I think a constructive thing would be to pool our resources and build a few turntables for blind speaker testing. We could then send them around for people to try, cutting out a lot of the arguments about speaker placement. I am happy to fund such an effort; we just need the mechanical engineers/designers/hobbyists to figure this out for us.
I would also donate towards this to help cover material costs, etc.

Also, it would be ideal if we could standardize on a single testing-methodology document that will work for most people (once equipped with such a turntable). Even if this just consists of compiling the methodology used by Harman into a simple "crash course" and step-by-step guide anyone can use, I think the benefits of standardizing would be significant.
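To make the "crash course" idea a bit more concrete, here is a minimal sketch (Python, with made-up names and trial counts) of one small piece such a document could standardize: generating a logged, order-balanced, randomized presentation schedule so every site runs the trials the same way.

Code:
import random

def make_trial_schedule(speaker_a, speaker_b, trials_per_order=8, seed=None):
    """Return a randomized, order-balanced list of blind A/B trials.

    seed: write this into the test log so anyone replicating the session
    can regenerate the exact same presentation order.
    """
    rng = random.Random(seed)
    # Balance the two orders so neither speaker is always heard first.
    schedule = [(speaker_a, speaker_b)] * trials_per_order \
             + [(speaker_b, speaker_a)] * trials_per_order
    rng.shuffle(schedule)
    return schedule

# Example: 16 trials for the two speakers in this thread, reproducible from the logged seed.
for i, (first, second) in enumerate(make_trial_schedule("Salon2", "8351B", seed=20200824), 1):
    print(f"Trial {i:2d}: play {first}, then {second}")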
 

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
21,543
Likes
21,832
Location
Canada
I'm wondering if an electric potter's wheel would work.
What is needed is a floor-level, silent, slow and gentle assembly, driven by a stepper motor, with either limit-switch sensors or a processor to handle positioning.
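Very roughly, the control side might look something like this (a sketch only; the step counts, gearing, and timing below are invented, and the pulse/switch callbacks stand in for whatever driver board is actually used):

Code:
import time

STEPS_PER_REV = 200 * 16   # assumed 1.8-degree motor with 16x microstepping
GEAR_RATIO = 10            # assumed platter-to-motor gearing
STEP_INTERVAL_S = 0.001    # slow pulse rate keeps the move quiet and gentle

def rotate_platter(degrees, send_step_pulse, limit_switch_hit=lambda: False):
    """Rotate the platter by `degrees`, stopping early if the limit switch trips."""
    steps = int(abs(degrees) / 360.0 * STEPS_PER_REV * GEAR_RATIO)
    for _ in range(steps):
        if limit_switch_hit():
            break                    # hard stop at the mechanical limit
        send_step_pulse()            # one pulse to the stepper driver
        time.sleep(STEP_INTERVAL_S)  # pacing in place of a real acceleration ramp

# Dry run with simulated hardware, to check the logic without a motor attached.
pulses = []
rotate_platter(90, send_step_pulse=lambda: pulses.append(1))
print(f"{len(pulses)} pulses queued for a 90-degree platter move")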
 

Chromatischism

Major Contributor
Forum Donor
Joined
Jun 5, 2020
Messages
4,800
Likes
3,744
What is needed is a floor-level, silent, slow and gentle assembly, driven by a stepper motor, with either limit-switch sensors or a processor to handle positioning.
It might be good to start a new thread on this instead of taking this thread off in that direction. Then, link to it here.
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,865
Location
Princeton, Texas
But the answers to those questions may already be here. The issue is that you haven't looked widely enough at the information that is already available, which leads to questions based on false premises.

In some cases that is true. I did not look widely enough to find the information that Amir posted, and it was extremely helpful to me.

So go and find or do some research that provides further information to establish the impact of those limitations instead of this pointless endless speculation! ;)

I'm not on a crusade to take down anybody's research. My point was that even the researchers acknowledge limitations.

I have conducted numerous informal controlled single-blind tests in search of specific answers. This thread is not the place for me to describe them.

It seems to me that you object to "speculation", which is "the forming of a theory or conjecture for which there is not firm evidence". Even scientists form conjectures for which there is not firm evidence; it's part of the scientific method.

And I think it's ridiculous that I'm sitting here trying to justify having questions and searching for answers on THIS of all forums. I'm throwing in the towel [edit: on this conversation]. No mas.
 
Last edited:

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,597
Likes
239,668
Location
Seattle Area
And I think it's ridiculous that I'm sitting here trying to justify having questions and searching for answers on THIS of all forums. I'm throwing in the towel. No mas.
Hang in there Duke.

Alan, Duke is "one of the good guys" here. Please don't tangle with him this way. Some amount of probing into the research is good.
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,865
Location
Princeton, Texas
Hang in there Duke.

Alan, Duke is "one of the good guys" here. Please don't tangle with him this way. Some amount of probing into the research is good.

Thanks.

Just to clarify, I'm not throwing in the towel on this forum! Just on that particular conversation.
 

TimVG

Major Contributor
Forum Donor
Joined
Sep 16, 2019
Messages
1,193
Likes
2,644
in particular about the absence of early sidewall reflections in the Harman/Olive mono test configuration

When I've compared (in mono) small domestic loudspeakers in a large auditorium, one surprising finding was that the wider-dispersing model still came across as more spacious, despite being 4 to 5 meters away from the nearest wall.
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
Why do you think blind tests, e.g. such as those performed by Harman, are "nebulous subjective whatever" good for "the Journal of Irreproducible Results"? It seems you are confused about something. Either way, this is not the place for that discussion, unless you have something specifically constructive to add.

That remark is not entirely honest. Well, I suppose I might as well say that it isn't the least bit honest. I did not say that blind tests, e.g., such as those performed by Harman, are nebulous subjective whatever, good for the Journal of Irreproducible Results. I absolutely, positively did not say that. And it is all but certain that you did not really believe that I said the thing that you are implying that I said, which is why I am motivated to write this response.

What you did amounts to a surreptitious way of asserting that the comparison test being undertaken here is not in any way different from other blind comparison tests undertaken by Harman. Clearly this is an assertion that you wanted to make, but for some reason you didn't want to say it in a straightforward way. By doing it the way you did it, you further insinuated that it is entirely obvious to everyone that there are no notable differences between the one-on-one comparison being undertaken here and the kinds of one-on-one comparisons that Harman is likely to do. Even if your assertion happens to be correct, i.e., even if there are no notable differences between the one-on-one comparison undertaken here and the kinds of one-on-one comparisons that Harman is likely to do, I doubt very much that this is as obvious as you have implied. I think that from the standpoint of rationale and purpose, that there is a reasonable chance that there are notable differences between the one-on-one comparison test being undertaken here and the kinds of one-on-one comparison tests that Harman is likely to do.

A fundamental tenet of all science is that in order for a discovery or finding to be scientifically meaningful it must be inherently reproducible. In ordinary scenarios such as this, quantifiability is a prerequisite for reproducibility. (There are other scenarios where the fact of some event having simply occurred is meaningful, but this does not apply to our circumstance.) In our circumstance, a discovery or finding that is not inherently capable of being expressed quantitatively is altogether meaningless as far as scientific methodology is concerned.

Suppose for example that the preponderance of blinded listeners agree that the sound of one speaker indicates an excessive, unnatural emphasis in treble. It is not essential to express this quantitatively, because tonality is inherently measurable as we all know. Subjective listening tests are prone to misunderstanding with respect to what is scientifically meaningful because nothing is expected to be expressed quantitatively, not even an observed anomaly in tonality even though it certainly could be measured (and should be measured toward corroboration of the subjective finding).

The fact that nothing at all is measured and expressed quantitatively does not mean that every construct that might seem substantive is fair game. I will hazard a guess that most people do not understand this, but it is a very important point. A finding is scientifically meaningful if and only if it is inherently capable of being measured and expressed quantitatively, because otherwise the finding is not inherently capable of being reproduced and verified by other testers, and any finding that is not inherently capable of being reproduced and verified by other testers is altogether meaningless as far as scientific methodology is concerned.

Since Audio Science Review is now the de facto audio science forum, we should all make a strong and honest effort to stick to the established rules of scientific methodology.
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,865
Location
Princeton, Texas
When I've compared (in mono) small domestic loudspeakers in a large auditorium, one surprising finding was that the wider-dispersing model still came across as more spacious, despite being 4 to 5 meters away from the nearest wall.

YES!! That makes sense to me.

My understanding is that a nice long time delay between the direct sound and the strong onset of reflections does a good job of presenting the spatial cues on the recording. ALL reflections (assuming they're spectrally correct) convey the spatial cues on the recording, BUT the early ones (in a typical home listening room) ALSO strongly convey the playback room's spatial cues. Having a ballpark 30+ milliseconds of path-length-induced time delay for the sidewall reflections in your auditorium prevents these reflections from superimposing "playback room cues" atop the spatial cues on the recording. However, there will also be a fair amount of attenuation of those reflections as they propagate along that long air path. The wider-dispersion speaker is doing a better job of putting enough energy into those long-path sidewall reflections, so that they make a more beneficial contribution.
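To put rough numbers on that (purely illustrative geometry, not measurements from anyone's room): the extra delay is just the reflected path length minus the direct path length, divided by the speed of sound.

Code:
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly room temperature

def sidewall_reflection_delay(listening_distance_m, wall_distance_m):
    """Extra arrival time (ms) of the first sidewall reflection vs. the direct sound.

    Assumes speaker and listener are both `wall_distance_m` from the sidewall
    and `listening_distance_m` apart; uses the image-source construction.
    """
    direct = listening_distance_m
    reflected = math.hypot(listening_distance_m, 2 * wall_distance_m)
    return (reflected - direct) / SPEED_OF_SOUND * 1000.0

# Hypothetical numbers: ~3 m listening distance, sidewall 5 m away (auditorium)
# versus 1 m away (typical home listening room).
print(f"auditorium: {sidewall_reflection_delay(3, 5):.1f} ms")
print(f"home room:  {sidewall_reflection_delay(3, 1):.1f} ms")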
 
Last edited:
OP

echopraxia

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
1,109
Likes
2,697
Location
California
That remark is not entirely honest. Well, I suppose I might as well say that it isn't the least bit honest. I did not say that blind tests, e.g., such as those performed by Harman, are nebulous subjective whatever, good for the Journal of Irreproducible Results. I absolutely, positively did not say that. And it is all but certain that you did not really believe that I said the thing that you are implying that I said, which is why I am motivated to write this response.

What you did amounts to a surreptitious way of asserting that the comparison test being undertaken here is not in any way different from other blind comparison tests undertaken by Harman. Clearly this is an assertion that you wanted to make, but for some reason you didn't want to say it in a straightforward way. By doing it the way you did it, you further insinuated that it is entirely obvious to everyone that there are no notable differences between the one-on-one comparison being undertaken here and the kinds of one-on-one comparisons that Harman is likely to do. Even if your assertion happens to be correct, i.e., even if there are no notable differences between the one-on-one comparison undertaken here and the kinds of one-on-one comparisons that Harman is likely to do, I doubt very much that this is as obvious as you have implied. I think that from the standpoint of rationale and purpose, that there is a reasonable chance that there are notable differences between the one-on-one comparison test being undertaken here and the kinds of one-on-one comparison tests that Harman is likely to do.

A fundamental tenet of all science is that in order for a discovery or finding to be scientifically meaningful it must be inherently reproducible. In ordinary scenarios such as this, quantifiability is a prerequisite for reproducibility. (There are other scenarios where the fact of some event having simply occurred is meaningful, but this does not apply to our circumstance.) In our circumstance, a discovery or finding that is not inherently capable of being expressed quantitatively is altogether meaningless as far as scientific methodology is concerned.

Suppose for example that the preponderance of blinded listeners agree that the sound of one speaker indicates an excessive, unnatural emphasis in treble. It is not essential to express this quantitatively, because tonality is inherently measurable as we all know. Subjective listening tests are prone to misunderstanding with respect to what is scientifically meaningful because nothing is expected to be expressed quantitatively, not even an observed anomaly in tonality even though it certainly could be measured (and should be measured toward corroboration of the subjective finding).

The fact that nothing at all is measured and expressed quantitatively does not mean that every construct that might seem substantive is fair game. I will hazard a guess that most people do not understand this, but it is a very important point. A finding is scientifically meaningful if and only if it is inherently capable of being measured and expressed quantitatively, because otherwise the finding is not inherently capable of being reproduced and verified by other testers, and any finding that is not inherently capable of being reproduced and verified by other testers is altogether meaningless as far as scientific methodology is concerned.

Since Audio Science Review is now the de facto audio science forum, we should all make a strong and honest effort to stick to the established rules of scientific methodology.
I just read your entire wall of text here, and found it to contain nothing of positive value to this thread. Please take your gripes with amateur blind testing elsewhere.

As I have described above, this attitude of hostility toward good-faith amateur blind tests is, IMO, toxic and detrimental to the goals of ASR. We should strive to encourage more data-gathering contributions from members, not shut them down (because they don't match or exceed the quality bar of a multi-million-dollar research lab).

Now, if you have suggestions as to how to improve the reproducibility and/or quality of such blind tests in a way that is realistically feasible, you are more than welcome to contribute helpful constructive ideas here: that’s what this thread is about.

On that topic, reproducibility is quite easy (just document everything meticulously). The harder part is maximizing the quality and statistical significance of the results on a tight budget, which is what we are working on here.
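As one concrete example of that budget problem (a sketch with made-up vote counts, not a claim about how this particular test will be scored): in a forced-choice preference test you can at least ask how surprising a given split of votes would be if the listeners were simply guessing, which makes the value of extra trials very visible.

Code:
from math import comb

def one_sided_binomial_p(successes, trials, p_chance=0.5):
    """P(at least `successes` out of `trials`) under pure guessing."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical outcomes: 12 of 16 trials favoring one speaker is fairly
# persuasive; 7 of 10 is not, which is why trial count matters so much.
print(f"12/16: p = {one_sided_binomial_p(12, 16):.3f}")
print(f" 7/10: p = {one_sided_binomial_p(7, 10):.3f}")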
 
Last edited:

Juhazi

Major Contributor
Joined
Sep 15, 2018
Messages
1,723
Likes
2,908
Location
Finland
I find it funny that people in this thread seem to think that the Genelec is a narrow-directivity design. Don't you remember, e.g., the GedLee Summa or the Synergy horn? The JBL M2 is somewhere in between. Well-done dipoles/panels like the Linkwitz LX521 have a third, distinct radiation pattern, and omnis (MBL Radialstrahler) a fourth.

More directivity measurements here https://www.princeton.edu/3D3A/Directivity.html

Those looking for a second opinion vs. Toole and extra info should google, e.g., Earl Geddes and David Griesinger. Geddes focuses on stereo imaging and advocates high directivity. Griesinger is the leading researcher and lecturer on spatial audio, including concert halls and headphones! Here in Finland, Tapio Lokki is the guru now.
 
Last edited:

vavan

Senior Member
Joined
Apr 29, 2019
Messages
341
Likes
212
Location
Kazan, Russia

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,319
Location
Albany Western Australia
In some cases that is true. I did not look widely enough to find the information that Amir posted, and it was extremely helpful to me.
I'm not on a crusade to take down anybody's research. My point was that even the researchers acknowledge limitations.
I have conducted numerous informal controlled single-blind tests in search of specific answers. This thread is not the place for me to describe them.

It seems to me that you object to "speculation", which is "the forming of a theory or conjecture for which there is not firm evidence". Even scientists form conjectures for which there is not firm evidence; it's part of the scientific method.

And I think it's ridiculous that I'm sitting here trying to justify having questions and searching for answers on THIS of all forums. I'm throwing in the towel. No mas.

Not at all. The conversation so far has been "I don't trust the message in the Toole research". This is fine. You speculate that the spatial impact of stereo listening will fundamentally change people's preferences over mono. This contradicts Toole's message. This is also fine.

But where do you go from there? You have nothing to take that speculation further. Speculation without basis sort of becomes just noise.

So the first thing you need to do is establish the specific reasons and basis for contradicting Toole's assertions that mono is more revealing of problems and that stereo doesn't change the fundamental speaker preferences. What are the flaws in the research that make its conclusions incorrect? Once you have done that, there might be something to talk about.
 

TimVG

Major Contributor
Forum Donor
Joined
Sep 16, 2019
Messages
1,193
Likes
2,644
YES!! That makes sense to me.

My understanding is that a nice long time delay between the direct sound and the strong onset of reflections does a good job of presenting the spatial cues on the recording. ALL reflections (assuming they're spectrally correct) convey the spatial cues on the recording, BUT the early ones (in a typical home listening room) ALSO strongly convey the playback room's spatial cues. Having a ballpark 30+ milliseconds of path-length-induced time delay for the sidewall reflections in your auditorium prevents these reflections from superimposing "playback room cues" atop the spatial cues on the recording. However, there will also be a fair amount of attenuation of those reflections as they propagate along that long air path. The wider-dispersion speaker is doing a better job of putting enough energy into those long-path sidewall reflections, so that they make a more beneficial contribution.

This makes sense. I'm no expert by any means. The speakers in question were the Genelec G2 and the Neumann KH80. The Neumann, while very controlled, is quite directional - timbrally they are both excellent mini-monitors. I wonder whether @amirm will notice something similar when he tests the KH310, since they share a similar waveguide. I remember he found the KH80 to sound "small" - which could be different wording for the same thing I noticed. I must say, however, that in stereo listening - where things tend to be panned out - this does mean it's slightly easier to listen 'into the recording' - at least that's my experience; sorry for the subjective jargon.

I've been following this discussion somewhat from the observer's point of view, and although it's quite interesting, there are still some things left unmentioned so far: When is a loudspeaker narrow/medium/wide dispersion? What is the required beamwidth? Down to what frequency does the beamwidth need to remain constant? Does the horizontal off-axis behave as constant or as gradually increasing directivity, and how does this all translate to how we perceive the reflected sound in a room?
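I don't think those thresholds are standardized anywhere, but if we want to compare speakers on the same footing, one workable convention is to quote a -6 dB beamwidth per frequency from the horizontal off-axis curves. A rough sketch of that calculation (assuming SPL-vs-angle data is already exported from measurements; the numbers below are invented):

Code:
def minus6db_beamwidth(angles_deg, spl_db):
    """Estimate the -6 dB beamwidth (degrees) from a horizontal polar slice.

    angles_deg: measurement angles from 0 (on-axis) upward, ascending.
    spl_db: SPL at each angle, same length, at a single frequency.
    Assumes a symmetric speaker, so the full beamwidth is twice the
    half-angle at which the response first drops 6 dB below on-axis.
    """
    on_axis = spl_db[0]
    for angle, level in zip(angles_deg, spl_db):
        if level <= on_axis - 6.0:
            return 2 * angle
    return 360.0  # never drops 6 dB within the measured range (very wide)

# Hypothetical 1 kHz polar data in 15-degree steps (illustration only).
angles = [0, 15, 30, 45, 60, 75, 90]
spl    = [86, 85.5, 84, 81, 79, 76, 72]
print(f"-6 dB beamwidth ~ {minus6db_beamwidth(angles, spl):.0f} degrees")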
 

ctrl

Major Contributor
Forum Donor
Joined
Jan 24, 2020
Messages
1,632
Likes
6,232
Location
.de, DE, DEU
Simple fact is that they have tested the same speakers in mono and stereo configurations on multiple occasions. To quote Toole, "without exception" the same speakers were preferred in both scenarios. This is the case regardless of what you are saying the speaker/room position configurations were.

I'm not really interested in arguing with forum members about this. It's a bit tedious TBH. Your opinion simply doesn't carry weight against that of Toole et al.

Toole himself has written the following:
Therefore, the implication is that spatial factors were strongly influential, if not the deciding factors, in both tests...

Obviously, there is much yet to be investigated, including the tantalizing notion that wide-dispersion loudspeakers with what would appear to be compromised sound quality (AA or E) are given a higher sound quality and spatial quality ratings than a narrow-dispersion loudspeaker with potentially superior sound quality (BB).
The provocative suggestion is that the two domains are interrelated and that the spatial component is greatly influential. Listeners appeared to prefer the sound from wide-dispersion loudspeakers with somewhat colored off-axis behavior to the sound from a narrow-dispersion loudspeaker with less colored off-axis behavior.
Source: Sound Reproduction

@Duke has explained several times that Toole's own data is in one case not consistent (this is not just an opinion; it's in the diagram): the moment "spatial quality" is rated higher than "sound quality", the order between AA and E is reversed when comparing monophonic to stereophonic listening.
No more was claimed, and certainly no one questioned the general position of Toole's paper.

Of course, the caveat to this interpretation is that the sample was very small (9 or 10 listeners) and that the standard deviation of the ratings is high compared to the differences between the mean values for loudspeakers AA and E.
Toole did not indicate this in the diagram, so here is the standard deviation for the first evaluations:
[Attached image: standard deviation of the ratings]
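As a generic illustration of why that matters (the numbers below are invented, not taken from Toole's data): the uncertainty of a mean rating only shrinks with the square root of the number of listeners, so with roughly 10 listeners, small differences between mean values are easily inside the noise.

Code:
import math

def ci95_halfwidth(stddev, n):
    """Approximate 95% confidence half-width of a mean rating (normal approximation)."""
    return 1.96 * stddev / math.sqrt(n)

# Invented example: ratings on a 10-point fidelity scale with a 1.5-point spread across listeners.
for n in (10, 40):
    print(f"n = {n:2d} listeners: mean rating known to about +/- {ci95_halfwidth(1.5, n):.2f} points")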


There is a difference between critically questioning science (this is how science works) and rejecting it without arguments or simply asserting one's own opinion.


But you didn't address the much more important point at all, namely that the monophonic test setup (if used) should be carried out in a way that is meaningful for typical listening rooms - as described in the Toole paper.

This means that the loudspeaker should not be placed free-standing in the room (which is often done, e.g. by Harman/Olive); otherwise the spatial information and possible radiation problems of the loudspeaker are lost (or severely degraded).

During my picture search for the previous post, I had overlooked an interesting statement by Sean Olive which supports this very argument. He is quoted as follows:
The experiment takes place in a custom built room with a hidden mechanical speaker shuffler. Recently renovated with a slightly tighter width, Sean says the smaller room size helps increase a greater sense of 1st reflection thus making the test more sensitive to the off-axis performance of the loudspeaker.
Source: https://audio-head.com/the-harman-tour-part-1-loudspeaker-audio/
[Attached image: the Harman listening room]

To represent a typical listening room, the distance to the side walls is still too large, even in the new listening room, if the listening tests are performed as shown in the picture - IMHO.


There was a much more recent test in 2008:

Olive, Sean E., Hess, Sean, and Devantier, Allan, "Comparison of Loudspeaker-Room Equalization Preferences for Multichannel, Stereo, and Mono Reproductions: Are Listeners More Discriminating in Mono?", presented at the 124th Convention of the Audio Engineering Society, preprint 7492 (May 2008).

[Attached chart: preference ratings for the No EQ and EQ conditions in mono, stereo, and multichannel reproduction]

Listeners were far more picky about the flaws in the "No EQ" mode when in mono (blue) versus stereo (red) and multi-channel (square). Indeed, in the multichannel test listeners barely cared about these large tonality differences.

Here too, I am missing some test details. What was the exact test setup?

Some speculations:

Could this have been the test environment?
[Attached image: multichannel listening room layout]


In a surround setup, it is likely that the center speaker was used for the monophonic test rather than, for example, the left stereo speaker. This would considerably reduce the meaningfulness of the comparison, since the center speaker has no nearby lateral boundary surfaces.

When listening without EQ, the standard deviation is lowest in the monophonic test, indicating that the listeners were most in agreement there - which argues for speaker tests performed in mono - unless, as already mentioned, the center speaker was used (instead of a stereo speaker).

It seems to me that the room modes were handled by all the EQ methods. That would explain why the ratings are high with EQ. Without EQ, the surround setup gets high ratings because the room modes are also well suppressed there.

Why does the loudspeaker auditioned in mono get the highest ratings with EQ?
My assumption would be that the mono speaker was the center speaker and that the tested speaker has an uneven radiation pattern. Due to the weaker lateral reflections, the center speaker could "sound most pleasant" with EQ.


When I've compared (in mono) small domestic loudspeakers in a large auditorium, one surprising finding was that the wider-dispersing model still came across as more spacious, despite being 4 to 5 meters away from the nearest wall.

I would agree with that "intuitively". However, it could be that a loudspeaker with a particularly uneven radiation pattern sounds most interesting because it makes vocals sound particularly close, for example.

In a "normal" listening environment with relatively close lateral boundaries, however, this could then prove to be a negative characteristic.
 
Last edited:

patate91

Active Member
Joined
Apr 14, 2019
Messages
253
Likes
137
I find it funny that people in this thread seem to think that the Genelec is a narrow-directivity design. Don't you remember, e.g., the GedLee Summa or the Synergy horn? The JBL M2 is somewhere in between. Well-done dipoles/panels like the Linkwitz LX521 have a third, distinct radiation pattern, and omnis (MBL Radialstrahler) a fourth.

More directivity measurements here https://www.princeton.edu/3D3A/Directivity.html

Those looking for a second opinion vs. Toole and extra info should google, e.g., Earl Geddes and David Griesinger. Geddes focuses on stereo imaging and advocates high directivity. Griesinger is the leading researcher and lecturer on spatial audio, including concert halls and headphones! Here in Finland, Tapio Lokki is the guru now.

The Tapio Lokki study about small room acoustics is very interesting.
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,865
Location
Princeton, Texas
This is fine... This is also fine.

So how about this: Rather than arguing about arguing, let's agree to disagree.

@Duke has explained several times that Toole's own data is in one case not consistent (this is not just an opinion; it's in the diagram): the moment "spatial quality" is rated higher than "sound quality", the order between AA and E is reversed when comparing monophonic to stereophonic listening.

No more was claimed, and certainly no one questioned the general position of Toole's paper.

Thank you for recognizing this. It is much easier to argue against Duke if one can portray the argument as "Duke versus Toole".

I don't have the energy & enthusiasm to go through March Audio's assertions and try to clarify what I did or did not say.

And the only thing more boring to me than arguing about arguing is watching other people argue about arguing, and March and I have subjected y'all to too much of that already.
 
Last edited: