
Blind Listening Test Results (N=4) : Revel F206 vs Ascend Sierra RAAL Towers

Tonygeno

Active Member
Forum Donor
Joined
Sep 11, 2019
Messages
192
Likes
248
Location
Massachusetts
When comparing speakers I like to listen to acoustic music, recorded in a real space. For me that is usually classical music (orchestral, vocal, instrumental) recorded in a concert hall. Although I realize that aural memory isn't always the best, having listened to acoustic instruments in concert halls pretty much my whole life, I have a good idea what to expect in sound. None of the recordings used in this test (it appears) would allow me to determine which speaker is closest to the original sound.

Except maybe the Duel.
 
OP

echopraxia

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
1,109
Likes
2,697
Location
California
When comparing speakers I like to listen to acoustic music, recorded in a real space. For me that is usually classical music (orchestral, vocal, instrumental) recorded in a concert hall. Although I realize that aural memory isn't always the best, having listened to acoustic instruments in concert halls pretty much my whole life, I have a good idea what to expect in sound. None of the recordings used in this test (it appears) would allow me to determine which speaker is closest to the original sound.

Except maybe the Duel.
Sure, but this just brings up the old "circle of confusion". How do you know if you're testing the speaker, or the recording? Or the ability of the speaker to "fix" the recording? The reality is you just can't know. Assuming all these speakers aim to be fairly neutral, the best we can do is test a wide range of music and genres representative of the listener / target market of the speaker, and see what the outcome is.
 

Tonygeno

Active Member
Forum Donor
Joined
Sep 11, 2019
Messages
192
Likes
248
Location
Massachusetts
Sure, but this just brings up the old "circle of confusion". How do you know if you're testing the speaker, or the recording? Or the ability of the speaker to "fix" the recording? The reality is you just can't know. Assuming all these speakers aim to be fairly neutral, the best we can do is test a wide range of music and genres representative of the listener / target market of the speaker, and see what the outcome is.
True. But this test didn't seem to have lots of acoustic music recorded in a real space. Just my opinion.
 

Feanor

Senior Member
Forum Donor
Joined
Feb 22, 2019
Messages
382
Likes
497
Location
southwestern Ontario
Sorry, I just couldn't resist replying the way I did -- no hard feelings I assure you, it was just fun to pick on you picking on non-classical -- all in good fun over here :)

BTW I'd be curious to see some links to good orchestral recordings you'd recommend to use for comparisons. For some reason, it seems rather difficult to find good recordings in these genres.
Apologies are all mine.

I'm happy to list a few classical recordings that I often use for evaluating any type of equipment, since you ask (in no particular order):
  • Kent Nagano & Germany Symphony, Berlin - Gustav Mahler: Symphony No. 3 on Elatus 0927 49618 2
  • Michael Tilson Thomas & San Francisco Symphony: Gustav Mahler: Symphony No. 1 on San Francisco Symphony 821936-0002-2
  • Paul Hillier & Theatre of Voices - Arvo Pärt: De Profundis on Harmonia Mundi France HMU 807182
  • Paavo Järvi & Cincinnati Symphony Orchestra - Hector Berlioz: Symphonie Fantastique on Telarc 80578
  • Oliver Knussen & BBC Symphony Orchestra - Elliott Carter: Clarinet Concerto: Symphonia on Deutsche Grammophon 459 660-2
  • Timothy Seelig & Turtle Creek Chorale, Women's Chorus of Dallas - John Rutter: Requiem; Five Anthems on Reference Recordings RR-57CD
  • Eiji Oue & Minnesota Orchestra - Igor Stravinsky: The Song Of The Nightingale, The Firebird, Rite of Spring on Reference Recordings RR-70
  • Richard Hickox & BBC National Orch & Choir of Wales - Charles Villiers Stanford: Orchestral Songs on Chandos CHSA
  • Masaaki Suzuki & Bach Collegium Japan - Johann Sebastian Bach: Complete Cantatas on BIS CD-751, etc.
  • Tokyo String Quartet - Ludwig van Beethoven: The Early String Quartets on RCA Red Seal 09026-61284-2
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,146
Likes
8,717
Location
NYC
Probably something like "Wow, well would you look at that -- this is science in action, folks! Thank you Harman International for showing us how it's done -- yet another confirmation of the true infallibility of this wholly owned subsidiary of Samsung Electronics! Go out and buy Revel speakers ASAP, everyone!"

Just kidding (mostly) :) But yes, the Revel/Harman fan-ism here is strong. I can't blame anyone though, they make fantastic speakers based on solid science, and many of the people behind the brand seem worthy of our trust. But speaking from experience, Ascend makes some pretty fantastic stuff too (of a different flavor).

The Revel (of Harman, of Samsung) brand makes great speakers. I am undoubtedly a fan. So much so that I ended up placing an order for Revel Salon2's today instead of my originally planned F228Be's! :D

Very much looking forward to those Salon2 impressions!

I actually leaned towards the Ascends winning; as you know, my predilection is for wide directivity =] A tie makes sense! Seems like they won in terms of treble and spatial presentation overall, but the Revels got the lower frequencies right.

The Salon2s are super wide for a forward-firing speaker while having that evenness of the waveguide. I suspect you'll really enjoy them!
 

Kal Rubinson

Master Contributor
Industry Insider
Forum Donor
Joined
Mar 23, 2016
Messages
5,303
Likes
9,867
Location
NYC
Definitely agree. It would be nice if there were a standardized set of tracks we could all use. I suppose I could use the list of songs from here:
https://www.audiosciencereview.com/...sic-tracks-for-speaker-and-room-eq-testing.6/
Mebbe but, although I know many of those, none (mebbe one) are tracks I would choose myself for evaluations. It may just be too personal a choice in the end.
Sure, but this just brings up the old "circle of confusion".
I don't think that is relevant since the recording is the only thing we have to go on. That is why the familiarity with a particular one aids in hearing details.
 
OP

echopraxia

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
1,109
Likes
2,697
Location
California
Very much looking forward to those Salon2 impressions!

I actually leaned towards the Ascends winning; as you know, my predilection is for wide directivity =] A tie makes sense! Seems like they won in terms of treble and spatial presentation overall, but the Revels got the lower frequencies right.

The Salon2s are super wide for a forward-firing speaker while having that evenness of the waveguide. I suspect you'll really enjoy them!

It really does make sense, and aligns so well with my subjective impressions over time. We all (I think) know the importance of a robust bass response. I think there is ever growing evidence of the importance of wide horizontal dispersion, at least to most people's preference. I see this as evidence of just that, given that the Ascend won on so many tracks with comments explicitly stating that the preferred treble of the Ascends overpowered the otherwise preferred bass of the Revels! This was a consistent trend.

Considering how important we know bass to be, that's saying quite a lot! I can attest to this subjectively. I've tested these speakers side by side extensively (not blind though), and I can say that there does seem to be something magical about particularly wide horizontal dispersion, assuming that this is why the Ascends sound the way they do. If there's some other magic to ribbons, I suppose I'll find out when I compare to my upcoming Salon 2s :)
 
OP

echopraxia

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
1,109
Likes
2,697
Location
California
Here you are referring to this test where each participant could listen to a personal selection of tracks they are intimately familiar with:
> Excellent music choices!
If you say so. I don't recognize all but one.

Here you are responding to the idea of an alternate test procedure (in response to your critique!) where each participant would listen to a standardized set of tracks, instead of personalized tracks that you seemed to dislike (emphasis mine):
> [...] It would be nice if there were a standardized set of tracks we could all use [...]
Mebbe but, although I know many of those, none (mebbe one) are tracks I would choose myself for evaluations. It may just be too personal a choice in the end.

What then would you suggest, for controlled listening tests?

Seems we can't win. If we encourage people to test with tracks they're intimately familiar with, you complain that you've never heard any of them. But if we ask people to listen to a standardized set of tracks, you also complain that you wouldn't use any of them for tests.

Maybe what you're looking for is the ability to just inflict your own opinion on other reviews/tests. But that's not what this is about.

I hope/assume this isn't the case. So, perhaps you believe you have a better set of standardized music tracks than those derived from the existing scientific research? If so, then you should publish it and your superior scientific methods used to derive it. Or, perhaps you are disputing the ability to apply the scientific method here at all? It's hard to know what point you're actually trying to make here.

Please don't beat around the bush with these low-effort drive-by criticisms. Clarify.

With all of the above said, please forgive me if I am completely and totally misreading your posts.
 

CDMC

Major Contributor
Forum Donor
Joined
Sep 4, 2019
Messages
1,172
Likes
2,321
OK, so aside from the calibration effort itself, there is still a major catch: how do we achieve the DSP capabilities needed without adding more really expensive equipment to the stack, or without compromising a lot of convenience?

For example, in a few rooms I use the Sonos Amp for the convenience it offers. Not only does it make streaming music to speakers extremely easy, it also allows multiple rooms to play in sync when you want, AND it supports HDMI ARC + CEC so you can use it connected to a TV as well.

But there is no way (as far as I know) you can perform room correction through a Sonos Amp while retaining all these convenient features.

So maybe I should phrase it as a question: how else can I achieve all these features with an equal or greater level of convenience, with 200 wpc, in a small form factor, while also enabling room correction?

I do expect some compromise of convenience in optimizing/calibrating my primary system, but it sure would be nice if I didn’t have to compromise. And beyond that, my impression is that the vast majority of people will favor convenience and simplicity of setup most of the time.

If you are using Roon and the Sonos as an endpoint, you can DSP in Roon for each endpoint individually.
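For readers wondering what per-endpoint DSP of this kind actually does: room correction is, at its core, a chain of parametric EQ filters (plus, optionally, convolution). Below is a minimal sketch of one such filter, using the standard RBJ "audio EQ cookbook" peaking biquad; the 60 Hz / -6 dB room-mode example values are made up purely for illustration and are not a recommendation for any particular room.

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """RBJ audio-EQ-cookbook peaking filter coefficients -- the basic
    building block of parametric room-correction EQ."""
    a = 10 ** (gain_db / 40)            # amplitude factor (peak gain = a**2)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * a
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * a
    a0 = 1 + alpha / a
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / a
    # Normalize so the leading denominator coefficient is 1
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def biquad(samples, coeffs):
    """Apply the filter sample by sample (direct form I)."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Hypothetical example: cut a 60 Hz room mode by 6 dB at 48 kHz, Q = 4
coeffs = peaking_eq_coeffs(fs=48000, f0=60, gain_db=-6.0, q=4.0)
impulse_response = biquad([1.0] + [0.0] * 499, coeffs)
```

A real correction chain would stack several of these (one per measured room mode), which is exactly what Roon's parametric EQ UI exposes per endpoint.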
 
OP

echopraxia

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
1,109
Likes
2,697
Location
California
If you are using Roon and the Sonos as an endpoint, you can DSP in Roon for each endpoint individually.
Hence my qualification about "without adding more really expensive equipment" and "while retaining all these convenient features." Roon is both relatively expensive and relatively inconvenient given that it does not support Spotify natively. I know the lack of Spotify support isn't necessarily their fault, but to the consumer it's just the reality. The price issue (particularly the snakeoil-level pricing of their Nucleus, but also their software price) is definitely their fault, though.
 

GelbeMusik

Senior Member
Joined
Apr 19, 2020
Messages
445
Likes
290
What then would you suggest, for controlled listening tests?

Seems we can't win. If we encourage people to test with tracks they're intimately familiar with, you complain that you've never heard any of them. But if we ask people to listen to a standardized set of tracks, you also complain that you wouldn't use any of them for tests.

Maybe what you're looking for is the ability to just inflict your own opinion on other reviews/tests. But that's not what this is about.

Well, it is what it is. There is music which is so tremendously robust against any attack from bad speakers -- not even stereo, but mono BS shiit speakers! For me it is e.g. Alban Berg's Lyric Suite, Massacre's Killing Time, all the Derek Bailey stuff. You won't find a music lover who would trade good music for good speaker performance. I personally would and do listen through all the faults to get my individual piece of the cake from the musical performance. To hell with high fidelity. Fidelity is a degree of convenience, nothing more.

I have always wondered why untrained listeners are able to identify a "good" speaker while listening to a fully artificial music production like "Fast Car". Can a program be more bland, less interesting, more pseudo? It is an icon of an irrelevant void somewhere else. And still it is a top-notch discriminator for loudspeaker quality, Dr. Toole says.

The use of 'music' in surveys bears the risk of favoring speakers which adapt to a hifi-ish sound. That could not be in the interest of music as an art form.
 
OP

echopraxia

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
1,109
Likes
2,697
Location
California
Well, it is what it is. There is music which is so tremendously robust against any attack from bad speakers -- not even stereo, but mono BS shiit speakers! For me it is e.g. Alban Berg's Lyric Suite, Massacre's Killing Time, all the Derek Bailey stuff. You won't find a music lover who would trade good music for good speaker performance. I personally would and do listen through all the faults to get my individual piece of the cake from the musical performance. To hell with high fidelity. Fidelity is a degree of convenience, nothing more.

I have always wondered why untrained listeners are able to identify a "good" speaker while listening to a fully artificial music production like "Fast Car". Can a program be more bland, less interesting, more pseudo? It is an icon of an irrelevant void somewhere else. And still it is a top-notch discriminator for loudspeaker quality, Dr. Toole says.

The use of 'music' in surveys bears the risk of favoring speakers which adapt to a hifi-ish sound. That could not be in the interest of music as an art form.

So, is the idea that you believe you have a better list of test tracks? Then please -- provide it! Bonus points if you can justify it scientifically as Amir did when citing a scientifically-derived list of tracks here!

I would love to see your proposed test tracks lists, and the justification behind them (e.g. follow the example of @Feanor above) -- I mean this genuinely, because I will be taking these suggestions seriously up to the point when I eventually do a blind test comparing the Revel Salon2 vs Genelec 8351B and perhaps others.

Otherwise, this is all talk and no substance.

To be clear though, you all are more than welcome to your opinion regarding the ideal choice of speaker test tracks! I very much look forward to you posting your controlled blind tests results based upon them! ;)
 

phoenixdogfan

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
3,335
Likes
5,236
Location
Nashville
A comparison with the Genelec’s will definitely happen eventually. I actually sold the Revel F206 locally this weekend (which is what forced me to actually do this blind test, since it was my last chance to do so).

To replace the F206, I am planning to order a Revel F228be today, which I think will be a very interesting comparison against the Genelec 8351B :)
Would be a real service to the ASR community if you let Amir put an F228be on his Klippel.
 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,336
Likes
6,705
Probably something like "Wow, well would you look at that -- this is science in action, folks! Thank you Harman International for showing us how it's done -- yet another confirmation of the true infallibility of this wholly owned subsidiary of Samsung Electronics! Go out and buy Revel speakers ASAP, everyone!"

Just kidding (mostly) :) But yes, the Revel/Harman fan-ism here is strong. I can't blame anyone though, they make fantastic speakers based on solid science, and many of the people behind the brand seem worthy of our trust. But speaking from experience, Ascend makes some pretty fantastic stuff too (of a different flavor).

The Revel (of Harman, of Samsung) brand makes great speakers. I am undoubtedly a fan. So much so that I ended up placing an order for Revel Salon2's today instead of my originally planned F228Be's! :D

Would LOVE to hear your impressions of the 8351b vs Salon 2.
 

GelbeMusik

Senior Member
Joined
Apr 19, 2020
Messages
445
Likes
290
Bonus points if you can justify it scientifically as Amir did when citing a scientifically-derived list of tracks here!

Between the lines I read that I've been misunderstood. I do not dismiss any specific list. The pivotal point of my argument is the observation that subjects can reliably identify a technically OK speaker by listening to music which (a) has no reference in real life, and (b) whose intentions are not known.

How could people decide whether, say, a subjective lack of bass is due to a flaw of the loudspeaker or due to some aesthetic decision of the studio personnel? Curtis Mayfield, "Move On Up": the bass is unusually subdued on purpose; people who don't feel the toxic groove would say "nah, scrappy speaker". And the falsetto voice: "huh, that's a shrill speaker".

With Led Zeppelin's "Stairway To Heaven", people should appreciate a speaker that would, through overdrive, mimic the declining quality of a vinyl record towards the end, because I suggest that is intended with this piece.

So, back to the list you quoted. I understand and agree, because it has actually been proven that some program material is revealing: it allows the common man to identify more or less (very) good speakers. But why?! How come people mirror the aesthetic intentions of the musician / studio with their expectations when listening to a speaker? Even, or even more so, with quite synthetic material like "Fast Car"?!

That is where I feel the science begins. Answer that question -- a request that goes to me as much as anyone.

The next question is whether an aesthetic verdict based on such "standards" would apply to non-standard music. Yes, with an exclamation mark! Because the preferred speakers are all objectively well designed.

My conclusion: the first question above can be left open, though it should spawn an investigation at some point. Dismiss listening tests. Evaluate speakers based on objective data alone. Listening tests won't show anything new. One might say that the subjective listening tests with the given program list were only done to identify the objective parameters by which a speaker shall be measured. You don't have to redo this all the time; at most, it would confirm that the parameters were chosen correctly. You did.
 

Feanor

Senior Member
Forum Donor
Joined
Feb 22, 2019
Messages
382
Likes
497
Location
southwestern Ontario
...
The next question is whether an aesthetic verdict based on such "standards" would apply to non-standard music. Yes, with an exclamation mark! Because the preferred speakers are all objectively well designed.

My conclusion: the first question above can be left open, though it should spawn an investigation at some point. Dismiss listening tests. Evaluate speakers based on objective data alone. Listening tests won't show anything new. One might say that the subjective listening tests with the given program list were only done to identify the objective parameters by which a speaker shall be measured. You don't have to redo this all the time; at most, it would confirm that the parameters were chosen correctly. You did.
The highest quest of audiophiles as well as engineers ought to be to find the definitive set of measurements that will infallibly separate the great from the mediocre from the inferior, in equipment of whatever type.

This site and its principal, Amir, are contributing greatly to that effort. To be sure, it isn't always welcomed by equipment makers, or for that matter, by subjectivists with their own perverse inclinations.

The objectivists must beware, however -- as good scientists -- of the conviction that they already have all the answers. I'm perfectly happy to agree with Floyd Toole that the critical speaker characteristics are flat frequency response and uniform dispersion, yet one wonders whether they explain, e.g., differences in resolution and in what we audiophiles describe as transparency and "air" -- attributes most listeners are aware of, but for which there are no direct, objective measures. Probably they are subsumed by distortion measurements, but maybe not entirely.

Likewise, I recently questioned whether there are measurements for what I called the "dynamic contrast" of amplifiers. I was assured by a number of people here that this quality is adequately described by the standard SINAD, power/load measures, etc. -- either that, or the amplifier differences were entirely due to unmatched volume settings ... or my imagination. Maybe so.
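On the "unmatched volume settings" point: controlled amplifier comparisons conventionally match levels to within about ±0.1 dB, because even small level differences are reliably perceived as quality differences. A minimal sketch of that check follows; the 1 kHz tone, 48 kHz sample rate, and tolerance are illustrative assumptions, not a prescribed procedure.

```python
import math

def rms(samples):
    """Root-mean-square level of a capture."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def level_offset_db(a, b):
    """Level difference between two captures of the same program, in dB."""
    return 20 * math.log10(rms(a) / rms(b))

def matched(a, b, tolerance_db=0.1):
    """True if two devices are level-matched within the usual
    +/-0.1 dB criterion for controlled listening comparisons."""
    return abs(level_offset_db(a, b)) <= tolerance_db

# Two captures of the same 1 kHz test tone; amp B is running 1 dB hotter
tone_a = [math.sin(2 * math.pi * 1000 * n / 48000) for n in range(4800)]
tone_b = [s * 10 ** (1.0 / 20) for s in tone_a]
```

In practice one would measure the offset with a voltmeter or audio interface at the speaker terminals and trim the louder device's gain until `matched` holds before any listening begins.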
 

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,715
Likes
6,003
Location
US East
Likewise, I recently questioned whether there are measurements for what I called the "dynamic contrast" of amplifiers. I was assured by a number of people here that this quality is adequately described by the standard SINAD, power/load measures, etc. -- either that, or the amplifier differences were entirely due to unmatched volume settings ... or my imagination. Maybe so.
Then the onus is on you to define precisely what "dynamic contrast" is. Find two or more amplifiers that have different "dynamic contrasts", then have a panel of listeners, in a level-matched double-blind test, confirm that they all agree on which one has the better "dynamic contrast" and which has the poorer. If we can't find an explanation using results from existing measurement methods, then we can talk about new measurements.
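To make "confirm in a level-matched double-blind test" concrete: with a forced-choice protocol (e.g. ABX), the panel's result is usually judged against chance guessing with a simple binomial test. A sketch, with entirely made-up trial counts:

```python
from math import comb

def binomial_p_value(correct, trials, chance=0.5):
    """One-sided probability of scoring at least this well by pure
    guessing -- the usual yardstick for a forced-choice blind test."""
    return sum(comb(trials, k) * chance ** k * (1 - chance) ** (trials - k)
               for k in range(correct, trials + 1))

# Hypothetical panel result: 14 correct identifications out of 16 trials
p = binomial_p_value(14, 16)   # well below the usual 0.05 threshold
```

If the listeners really can hear a "dynamic contrast" difference, the p-value comes out small; if they're guessing (e.g. 8 or 9 of 16), it doesn't, and the claimed difference fails the test.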
 

GelbeMusik

Senior Member
Joined
Apr 19, 2020
Messages
445
Likes
290
The highest quest of audiophiles as well as engineers ought to be to find …

Exactly! That is the consequence I didn't dare to mention -- the "Yes" with the exclamation mark!

The question was whether standard program material like "Fast Car" would reveal a speaker's qualification with respect to not-so-standard program content.

Dr. Toole says that the parameter set derived from the listening tests is sufficient. Again, it is flat and even amplitude response over frequency and direction, and maybe the concert-hall ideal of immersion too. He ignores, within some bounds, non-linear distortion; he says it is in any case a matter of available volume and bass extension. Only the latter is taken into account by Dr. Olive's rating number.
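For reference, "Dr. Olive's rating number" is a published regression (Olive, AES 2004) fitted to blind-test preference ratings. A sketch of the model as it is usually reproduced; the example metric values below are invented, and the exact metric definitions are in the paper:

```python
import math

def olive_preference_rating(nbd_on, nbd_pir, lfx, sm_pir):
    """Olive's loudspeaker preference model (AES, 2004), as usually
    reproduced. NBD_ON / NBD_PIR: narrow-band deviations of the on-axis
    and predicted in-room curves (lower = flatter). LFX: log10 of the
    low-frequency extension in Hz (deeper bass -> smaller LFX).
    SM_PIR: smoothness of the predicted in-room response (higher = smoother)."""
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Hypothetical speaker: modest deviations, -6 dB bass point at 40 Hz
score = olive_preference_rating(nbd_on=0.3, nbd_pir=0.25,
                                lfx=math.log10(40), sm_pir=0.85)
```

Note how the coefficients encode the point GelbeMusik is making: bass extension (LFX) carries the largest weight of the four terms, while non-linear distortion does not appear at all.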

From my personal experience there is something else to be considered. A death metal addict would happily trade clarity, smoothness, etc. for more noise. I think that this guy, on the other hand, would not take this personal preference (!) into account when asked to judge a speaker at Harman for its "fidelity".

Sorry for so much off-topic. But I couldn't resist discussing the aesthetic preferences in the test tracks listed. Apart from a few, it is all too good music, I think ;-)
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
So, different soundtracks vary in terms of their correlation with ... something. What is the something? Potentially, the something is the objective sound quality of the speaker. But the objective sound quality of a speaker is knowable only by way of subjective listening tests using listeners who have been "trained" to hear the qualities that are, uh, believed to matter. When we identify a particular speaker as being of exceptional objective quality, what we truly mean is that listeners prefer that speaker with greater consistency than other speakers. It follows that the soundtracks deemed superior for speaker evaluation are those that produce more consistent results in listener preference testing.
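That "more consistent results" criterion can be made concrete: a revealing track is one where ratings separate the speakers far more than the listeners disagree among themselves. A toy sketch of that variance-ratio idea follows; the ratings are invented, and the actual track-selection research used proper ANOVA F-statistics rather than this bare ratio.

```python
from statistics import mean, variance

def discrimination(ratings_by_speaker):
    """How well a track separates speakers: variance of the per-speaker
    mean ratings divided by the average within-speaker (listener-to-
    listener) variance. Higher = more consistent agreement on which
    speaker is preferred."""
    means = [mean(r) for r in ratings_by_speaker.values()]
    within = mean(variance(r) for r in ratings_by_speaker.values())
    return variance(means) / within

# Hypothetical preference ratings from four listeners per speaker
revealing = {"A": [7, 8, 7, 8], "B": [4, 3, 4, 4]}   # everyone agrees A wins
muddled   = {"A": [7, 3, 6, 4], "B": [5, 6, 3, 7]}   # no consensus at all
```

Ranking candidate tracks by a statistic like this, across many speakers and listeners, is the kind of procedure that would surface "Fast Car"-like material as a top discriminator regardless of anyone's aesthetic opinion of it.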

It seems entirely legitimate to me, with one possible caveat, which I doubt is a real concern. If the listeners were trained to prefer a particular flavor of sound, rather than sound that is truly, objectively more accurate, then the soundtracks selected would be the ones that best enable listeners to detect that particular flavor. For example, if the listeners had been trained to prefer speakers with exaggerated treble in the on-axis or first-arrival response, then the soundtracks most strongly correlated with consistency in speaker preference would be those that best enable the trained listeners to identify speakers with exaggerated treble in that response. While this is a theoretical concern, I am inclined to doubt that this sort of thing is actually taking place.

To say it bluntly, Harman should not be using "trained" listeners in their speaker evaluations, and should not be endeavoring to train listeners. The same goes for anyone else doing this. While they perceive that there are good reasons for doing it, they are deceiving themselves. Whatever its practical advantages, it utterly destroys the scientific legitimacy of the process, and it concerns me that they do not seem to understand this. If they really were the true scientists they represent themselves to be, they would be fully cognizant of this problem and would have forbidden the use of listeners who had undergone any sort of training. What they should be using, and should have been using all along, is professional musicians, especially classically trained ones. There are many thousands of them everywhere, so there is no shortage that would have forced them to use trained listeners.

Consider the possibility that there is a subtle quality in speaker accuracy that the people who train the trained listeners have overlooked. Since the trained listeners would not have been trained to listen for it, the process by which soundtracks are selected would have the effect of eliminating soundtracks that are good for identifying that subtle quality.

This is my serious post on this topic. The next one will be my humorous post on this topic.
 