
[Cross-Post]: KEF R3 vs Ascend Sierra 2EX - Blind Listening Results

echopraxia

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
1,109
Likes
2,696
Location
California
Note: This is a cross-post from my initial results thread posted here. Given the emphasis of this forum, I figure the content is if anything even more applicable/appropriate here. I didn't see any rules disallowing cross-posting like this, but if there are, just let me know and I can remove this post.

I recently bought a pair of KEF R3s and have been impressed with them. Below are the results of an initial blind test comparing them to my existing Ascend Sierra 2-EX bookshelf speakers. In addition to the test itself being blind, the third party doing the subjective listening came into it with no bias and no opinion on these speakers (they did not recognize them and knew nothing about them or the brands).

My Bias:

My bias going into this test was that the KEF R3 would have a mid-bass advantage that would cause it to win on half the songs (even though I'd be controlling for deep bass differences via subwoofer), and that the Ascend Sierra 2-EX would have treble advantages that would cause it to win on half the songs. My bias should not be a factor in the results, since I was not a participant in the listening tests, but it's mentioned here for completeness and because it turns out my expectations were quite wrong (as you'll see below).

It turns out that, contrary to my expectations, there WAS a clear winner. It is revealed at the bottom, after all the data is presented with the speaker names withheld, so that reader bias doesn't creep in as well.

Test Procedure:

Both pairs of speakers were set up side by side on stands of equal height, placed at equal distances from the wall behind them. A single JL E112 subwoofer was integrated with both pairs, crossed over at 100 Hz (in this room, localization is not a problem). Both pairs were level matched with an SPL meter (C-weighted), which showed a ~2 dB sensitivity advantage for the KEF R3 (exactly as per spec).
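
As a side note, the dB arithmetic behind the level matching is simple enough to sketch in a few lines of Python. This is purely illustrative (the matching itself was done with the SPL meter, not computed):

  # Illustrative sketch only: what a ~2 dB sensitivity difference means for drive level.
  def db_to_voltage_ratio(db):
      return 10 ** (db / 20)

  sensitivity_advantage_db = 2.0  # measured advantage of the more sensitive pair
  attenuation = 1 / db_to_voltage_ratio(sensitivity_advantage_db)
  print(f"Drive the more sensitive pair at {attenuation:.3f}x voltage (about -2 dB)")
  # -> roughly 0.794x, i.e. about a 21% reduction in drive voltage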

Stereo pairs are being compared here (not mono speakers), so to factor out soundstage differences due to speaker placement width, the speakers were arranged with speaker A immediately to the left of speaker B on each side (i.e. "AB ------ AB" rather than "AB ------ BA"). The distance between the left and right channels is therefore equal for both brands. The only remaining audible difference when switching speakers is a small left/right angular offset between the two pairs, which was controlled for by occasionally shifting the pairs left/right in the room between songs (while the listener kept their eyes closed).

The listener is not a "trained listener", but someone who appreciates music and listens to a lot of live music. They are not a speaker audiophile and are not familiar with traditional audiophile descriptive terms.

When comparing a song, a segment of the song is played on one pair of speakers, then on the other. The listener is told they're listening to "speaker A" vs "speaker B", where "A" from the listener's perspective is always the first pair played; which physical pair plays first is randomized for each song (this prevents the listener from accumulating a preferential bias that pollutes the independence of each subsequent song test).
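
If anyone wants to replicate this, the per-song randomization is easy to script. Below is a rough Python sketch of the assignment logic described above -- purely illustrative (the song titles are placeholders, and this is not something that was run during the actual session):

  import random

  # Rough sketch (illustrative only) of the per-song blind assignment described above.
  # "Speaker A" is always whichever pair plays first for a given song; which physical
  # pair that is gets re-randomized for every song, so no label bias carries over.
  songs = ["Song 1", "Song 2", "Song 3"]       # placeholder titles
  pairs = ["KEF R3", "Ascend Sierra 2-EX"]

  for song in songs:
      first, second = random.sample(pairs, 2)  # random order for this song only
      # The listener is only ever told "speaker A" (played first) vs "speaker B" (second).
      print(f"{song}: {first} plays first ('A'), then {second} ('B')")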

For each song, the following questions were asked afterwards: (1) On which speaker did you prefer this song? (2) Can you explain why? These were followed by (3) neutral follow-up questions asking for clarification when necessary.

The list of songs was chosen half by me, and half by the listener participant. The following list was used:
  1. Tempus Illusio (Intro) - Gramatik
  2. The Duel - Anne-Sophie Mutter, The Recording Arts Orchestra of Los Angeles, John Williams
  3. Gravity - Sara Bareilles
  4. Prelude in E Minor - Gerry Mulligan Sextet
  5. Pusher Love Girl - Justin Timberlake
  6. Selenium Forest - Plini
  7. Madness - Muse
  8. Black Skinhead - Kanye West
  9. Super Rich Kids - Frank Ocean, Earl Sweatshirt

Results

I will reveal the names of the speakers in the section after this write-up of the per-song results; the results (and 'interview transcript') below use the terms 'speaker A' and 'speaker B' to withhold the actual names until the end. The double-quoted sections are replies by the listener to follow-up questions.

1. Winner: Speaker A. Why? "More full sound, better instrument clarity." What do you mean by 'full'? "The sound is more 3-dimensional and all around me, and I can pick out the individual tones and instruments more clearly. The bass doesn't sound as flat as the other." What do you mean flat? "Flat, like more compressed. Like when they put the music together, it's all mixed together rather than all the different notes clearly distinguishable."

2. Winner: Speaker B. Why? "The violins sound sharper, stronger, harsher. On the other speaker they were smoother but I don't think they're supposed to sound like that. I think the harsher treble is probably more realistic. I'm not sure but for that reason I choose B."

3. Winner: Speaker A. Why? "The vocals sound so much more real! Realistic, like you're there. For vocals I imagine as if the singer would be singing in front of me. And this speaker sound much more authentic."

4. Winner: Speaker A. Why? "I can hear all the instruments all at once on this speaker, while on the other they're more muffled together. Percussion seemed similar. But on the other one I could hear the piano better, but not the rest of the instruments."

5. Winner: Speaker A. Why? "The same as before, I can hear all instruments independently more clearly. Vocals are more pronounced, too. On B it sounds blended or muddy, in contrast."

6. Winner: No Preference. Notes: "Guitar sounds more harsh on high tones on speaker B, so maybe speaker A is more gentle, but other than that observation, they both sound equally good."

7. Winner: Speaker A (but very close, weak preference). Why? "The climactic part of this song sounds a bit more 'grand' and 'surround sound' when it transitions in that moment. Also, the vocals on the other speaker sound more compressed." What do you mean 'compressed'? "Like more blended into the background and hard to hear separately."

8. Winner: Speaker A. Why? "Vocals sound more harsh on the other speaker, and not in a good way. The vocals had too strong 'SS' sounds that were off-putting and unpleasant. Speaker A didn't have this problem. The bass sounded cleaner as well."

9. Winner: No preference. Notes: "Couldn't discern any clear difference between the speakers on this song."

Conclusion

Vote tally:
  • Speaker A preferred on 6 songs.
  • Speaker B preferred on 1 song.
  • No preference for 2 songs.
Ready to learn which corresponds to speaker A vs speaker B? Expand the spoiler to find out:

Speaker A is Ascend Sierra 2-EX ($1500) - WINNER!
Speaker B is KEF R3 ($2000).


Honestly, I was not expecting this. The Sierra 2-EX won, and not by a small margin either. And on most of these, the preference was not subtle or a tossup.

To summarize the unbiased blind listener's descriptions: on multiple occasions the Sierra 2-EX was described as having generally better 'fidelity', for lack of a better word on my part -- better instrument separation, clarity, less muddying of melodies, clearer vocals, cleaner bass. The KEF R3, by contrast, came across as brighter, with what I presume are some treble peaks, given the descriptions above of sibilance in voices and of some instruments sounding brighter on it.

What is intriguing to me is that the one and only song where the KEF R3 won was the one where the listener preferred the brighter, harsher rendition of the violin. To me, as someone who has heard violin in real life, the Ascend is very much preferable on this song. Violin has treble bite, but it does not sound harsh or bright; it's smooth and yet intricate. I have never heard a speaker anywhere that renders it as well as the RAAL tweeter in the Ascend does, but this is all my bias talking, and it is clearly the opposite of this listener's experience, at least: they preferred the KEF R3's rendition of the violin on this song.

For everything else, the Ascend Sierra 2-EX either won by a clear margin (six songs, where the decision was surprisingly quick and confident) or tied (two songs).

The results of this test point to the Ascend Sierra 2-EX as the better speaker, when we control for bass via a subwoofer. I know it's hard to fight the bias when looking at this speaker, given its apparently small size and lower price, and perhaps some people here may even try to critique the design and predict from it how it will sound. But ultimately what matters is what wins in the blind test, and in this case the results are pretty clear -- the Ascend Sierra 2-EX has the better sound quality.
 
echopraxia (OP)

More participants are needed to establish a trend.

I agree, but: How many? Is it useless to even attempt small scale blind tests?

P.S. I created a separate thread here as well, specifically for the discussion of blind test methodology.
 

Ron Texas

Master Contributor
Joined
Jun 10, 2018
Messages
6,077
Likes
8,913
I agree, but: How many? Is it useless to even attempt small scale blind tests?

P.S. I created a separate thread here as well, specifically for the discussion of blind test methodology.

That's why so few blind tests are done. Harman spent a lot of money over the years to do their research.
 
echopraxia (OP)

That's why so few blind tests are done. Harman spent a lot of money over the years to do their research.

Didn’t Harman’s expensive testing actually show that in general, humans have largely the same preference in speakers, with few outliers?

I understand that it was expensive to determine this, but once it is established, we can use this assumption to come to relatively generalizable results with relatively small participant sample sizes.

As for precisely “how small”, we’d have to look up the actual speaker preference distributions that Harman found and model their variance, so that for any N (number of participants) we could calculate the probability that such a blind test’s results hold generally across the broader population of humans (as sampled by Harman’s expensive research).
 

Ron Texas

Didn’t Harman’s expensive testing actually show that in general, humans have largely the same preference in speakers, with few outliers?

I understand that it was expensive to determine this, but once it is established, we can use this assumption to come to relatively generalizable results with relatively small participant sample sizes.

You are correct. The preferences were statistically significant and consistent. However, the majority group was around 66%. There are a lot of different ways to turn electricity into sound, but ultimately our ears are our ears. Perhaps there is something egalitarian about it.
 
echopraxia (OP)

You are correct. The preferences were statistically significant and consistent. However, the majority group was around 66%. There are a lot of different ways to turn electricity into sound, but ultimately our ears are our ears. Perhaps there is something egalitarian about it.

Ah, thank you -- a 66% majority group is significant, but probably not enough to derive any high-confidence trends from even N=10 participant tests :/
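
To put a rough number on that, here's a quick back-of-the-envelope sketch (not a proper model of Harman's data -- it simply treats each listener as an independent draw with a 66% chance of sharing the population-majority preference):

  from math import comb

  def p_panel_majority_agrees(n, p=0.66):
      """Probability that a strict majority of an n-listener panel shares the
      population-majority preference, assuming independent listeners (a simplification)."""
      k_needed = n // 2 + 1
      return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_needed, n + 1))

  for n in (1, 3, 5, 10):
      print(n, round(p_panel_majority_agrees(n), 2))
  # Roughly: N=1 -> 0.66, N=3 -> 0.73, N=5 -> 0.78, N=10 -> 0.77
  # (even panel sizes lose a little to exact ties)

So by this crude estimate, even a 10-listener panel would match the population-majority preference only about three times out of four.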

I still think small-sample blind listening tests are more valuable than most online reviews (which tend to be completely subjective and therefore still orders of magnitude less reliable than even an N=1 blind listening test). But I may have to give up hope of any test I have the resources to run being generally applicable in a rigorous sense.

The only way to scale to a sufficiently large sample size would be if as many people as possible got involved and did similar tests. In that case, the evidence would slowly stack up over time and a trend would become clear.
 

Hemi-Demon

Senior Member
Joined
Jan 17, 2019
Messages
435
Likes
514
Thanks for the write-up. I am bookshelf shopping and I might look into the Sierras now.
 

archerious

Active Member
Joined
Aug 10, 2020
Messages
255
Likes
148
Interesting thread
 