
KEF R3 Speaker Review

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,623
Location
London, United Kingdom
The more advanced eye candy you asked for, such as horizontal/vertical reflection points, is available in edechamps' Loudspeaker Explorer: https://colab.research.google.com/g...dspeaker_Explorer.ipynb#scrollTo=BTZVC63nDGZB

Sadly, you won't find the KEF R3 there, nor any of the reviewed speakers from the past 10 days or so. This is because I'm trying to sort out something with @amirm first. Hopefully Loudspeaker Explorer will be back in sync with the latest reviews in the next 24 to 48 hours, so bear with me. It breaks my heart that people in this thread resort to loading data manually in REW for comparison, which is precisely what Loudspeaker Explorer is designed to help with :(
 

ctrl

Major Contributor
Forum Donor
Joined
Jan 24, 2020
Messages
1,640
Likes
6,279
Location
.de, DE, DEU
It breaks my heart that people in this thread resort to loading data manually in REW for comparison, which is precisely what Loudspeaker Explorer is designed to help with
Is it possible to run the tool without google login?
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,623
Location
London, United Kingdom
Is it possible to run the tool without google login?

Yes. You can run the tool on your local machine if you follow the instructions to set up a developer environment. It's more complicated than just clicking a button on a web page, though.

I also have an item on my TODO list to make Loudspeaker Explorer work with Binder, an alternative to Colab, but I'm not sure if/when I can make that work. The main downside of Binder is that it's slower to load/run than Colab, but nothing egregious.
 

flipflop

Addicted to Fun and Learning
Joined
Feb 22, 2018
Messages
927
Likes
1,240
What's Directivity Index?
"Traditional DI is defined as the difference between the on-axis curve and the normalized sound-power curve. It is thus a measure of the degree of forward bias—directivity—in the sound radiated by the loudspeaker. It was decided to depart from this convention because it is often found that, because of symmetry in the layout of transducers on baffles, the on-axis frequency response contains acoustical interference artifacts, due to diffraction, that do not appear in any other measurement. It seems fundamentally wrong to burden the directivity index with irregularities that can have no consequential effects in real listening circumstances. Therefore, the DI has been redefined as the difference between the listening window curve and the sound power. In most loudspeakers the difference is small; in highly directional systems it can be significant. [...]. Obviously, a DI of 0 dB indicates omnidirectional radiation. The larger the DI, the more directional the loudspeaker in the direction of the reference axis. [...]. Because of the special importance of early reflections in what is measured and heard in rooms, a second DI is calculated, the Early Reflections DI, which is the difference between the listening window curve and the early-reflections curve. Because of the importance of early reflections in common sound reproduction venues, this is arguably the more important metric."
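The definition quoted above boils down to a pointwise subtraction of dB curves. As a minimal sketch (assuming the curves are already in dB on a shared frequency grid; the array names here are invented for illustration):

```python
import numpy as np

def directivity_index(listening_window_db, reference_db):
    """DI per the quoted (CTA-2034-style) definition: the listening
    window curve minus a reference curve, in dB. Using the sound power
    curve as the reference gives the DI; using the early reflections
    curve gives the Early Reflections DI."""
    return np.asarray(listening_window_db) - np.asarray(reference_db)

# Toy example on three frequency points. An omnidirectional radiator
# has listening window == sound power, so its DI is 0 dB; as the
# speaker becomes more forward-directed, sound power falls relative
# to the listening window and DI grows.
freqs = np.array([100.0, 1000.0, 10000.0])
lw = np.array([85.0, 85.0, 85.0])   # listening window, dB SPL
sp = np.array([85.0, 83.0, 79.0])   # sound power, dB SPL

di = directivity_index(lw, sp)
print(di)  # 0 dB at 100 Hz, rising toward the treble
```

The same function applied to the early reflections curve instead of `sp` yields the Early Reflections DI the quote argues is the more important metric.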
 

echopraxia

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
1,109
Likes
2,698
Location
California
One of the things that’s so wonderful about these measurements is, no matter what your opinion on the importance of low directivity vs other more traditional smoothness scores, the data being provided by Amir here is going to equip everyone (who wants) to make a powerfully informed choice of the best speaker for them!

There is continually growing evidence that wide horizontal dispersion, and possibly vertical dispersion that is limited relative to horizontal, is something I (and many others) have found preferable in speakers, perhaps often even more important than traditional spinorama preference metrics. It occurs to me that all the speakers I've loved enough to keep have non-symmetric dispersion: vertical limited relative to horizontal (Ascend, Revel, Neumann).
 

Spocko

Major Contributor
Forum Donor
Joined
Sep 27, 2019
Messages
1,621
Likes
3,001
Location
Southern California
One of the things that’s so wonderful about these measurements is, no matter what your opinion on the importance of low directivity vs other more traditional smoothness scores, the data being provided by Amir here is going to equip everyone (who wants) to make a powerfully informed choice of the best speaker for them!

There is continually growing evidence that wide horizontal dispersion and possibly even limited vertical dispersion (vs horizontal) is something I (and many others) have found preferable in speakers, perhaps often even more important than traditional spinorama preference metrics. It occurs to me that all of my speakers I’ve loved enough to keep have a non-symmetric dispersion: limited vertical vs horizontal dispersion (Ascend, Revel, Neumann).
So something like the Genelec "The Ones" design which is noted for equally great vertical dispersion is not subjectively favorable? It would be interesting if you came up with some sort of vertical dispersion cut-off that represents the "ideal" situation.
 

echopraxia

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
1,109
Likes
2,698
Location
California
So something like the Genelec "The Ones" design which is noted for equally great vertical dispersion is not subjectively favorable? It would be interesting if you came up with some sort of vertical dispersion cut-off that represents the "ideal" situation.

Like I said, I simply don’t know. But in my KEF R3 vs Ascend Sierra 2EX blind test results thread(s), three major categories of hypotheses were put forward towards explaining the KEF R3's loss (by a large margin) despite having superior (traditionally interpreted) spin measurement results:

Hypothesis 1. It's an outlier: That it was just random chance that both blind test participants overwhelmingly preferred the Ascend 2EX over the KEF R3. Or that by chance my room (or subwoofer crossover) interacted more favorably with the 2EX somehow, even though the KEF R3 may be a bit flatter (though they’re both quite neutral speakers and sound quite similar sonically).

Hypothesis 2. Since it’s unlikely we’re interpreting the measurements wrong, KEF therefore must be “massaging” their published R3 measurements somehow to look better than the performance of the production models they ship.

Hypothesis 3. The KEF R3 published measurements are not wrong, and therefore our model used to predict speaker preference from spin measurements is not quite right. Either there's an important dimension we're missing when predicting preference (e.g. perhaps something that shows up in polar plots or the full sound field, but gets averaged out in the spinorama), or we're incorrectly weighting the degree to which some factors (e.g. directivity index) contribute to overall speaker preference, or a bit of both.

Hypothesis #1 is pretty much always a valid concern (it comes up in any blind test), but it's not productive to dwell on it unless that means actively working to gather more blind test data.

This review from Amir is particularly fascinating because it completely rules out hypothesis #2! We now have independent confirmation that the KEF R3 does indeed have among the best measurements ever seen in a speaker.

This leaves us only with hypothesis #3 (and #1 relating to not having enough data) to explain why the KEF R3 didn’t dominate in the blind listening test (or in many other anecdotal sighted comparisons, including Amir’s).

I have absolutely nothing invested in one theory or the other, except towards us finding the truth — we’re all here because we want to find the closest possible thing to objective audio perfection.

Again, the data on the KEF R3 is really fascinating here because if our current interpretation of the measurements is correct, the R3 should be dominating blind shootouts against just about anything else — including $20,000 - $30,000 Revel speakers like the Salon 2’s, which don’t measure as well IIRC.

Fortunately, this is something we can test, in theory. So far, multiple anecdotal sighted tests and one blind test (mine) have not reinforced the measurement-predicted outcome that the KEF R3 should dominate pretty much everything else on the market. So we need an explanation for that, and/or more data from the blind testing side.
 
Last edited:

Spocko

Major Contributor
Forum Donor
Joined
Sep 27, 2019
Messages
1,621
Likes
3,001
Location
Southern California
Like I said, I simply don’t know. But in my KEF R3 vs Ascend Sierra 2EX blind test results thread(s), three major categories of hypotheses were put forward towards explaining the KEF R3's loss (by a large margin) despite having superior (traditionally interpreted) spin measurement results:

Hypothesis 1. It's an outlier: That it was just random chance that both blind test participants overwhelmingly preferred the Ascend 2EX over the KEF R3. Or that by chance my room (or subwoofer crossover) interacted more favorably with the 2EX somehow, even though the KEF R3 may be a bit flatter (though they’re both quite neutral speakers and sound quite similar sonically).

Hypothesis 2. Since it’s unlikely we’re interpreting the measurements wrong, KEF therefore must be “massaging” their published R3 measurements somehow to look better than the performance of the production models they ship.

Hypothesis 3. The KEF R3 published measurements are not wrong, and therefore our model used to predict speaker preference from spin measurements is not quite right. Either there's an important dimension we're missing when predicting preference (e.g. perhaps something that shows up in polar plots or the full sound field, but gets averaged out in the spinorama), or we're incorrectly weighting the degree to which some factors (e.g. directivity index) contribute to overall speaker preference, or a bit of both.

Hypothesis #1 is pretty much always a valid concern (which always comes up in any blind test), but similarly, it’s therefore not productive to dwell on this unless it means actively working to gather more blind test data.

This review from Amir is particularly fascinating because it completely rules out hypothesis #2! We now have independent confirmation that the KEF R3 does indeed have among the best measurements ever seen in a speaker.

This leaves us only with hypothesis #3 (and #1 relating to not having enough data) to explain why the KEF R3 didn’t dominate in the blind listening test (or in many other anecdotal sighted comparisons, including Amir’s).

I have absolutely nothing invested in one theory or the other, except towards us finding the truth — we’re all here because we want to find the closest possible thing to objective audio perfection.

Again, the data on the KEF R3 is really fascinating here because if our current interpretation of the measurements is correct, the R3 should be dominating blind shootouts against just about anything else — including $20,000 - $30,000 Revel speakers like the Salon 2’s, which don’t measure as well IIRC.

Fortunately, this is something we can test, in theory. So far, multiple anecdotal sighted tests and one blind test (mine) has not reinforced the measurement-predicted outcome that the KEF R3 should dominate pretty much everything else on the market. So we need an explanation for that, and/or more data from the blind testing side.
Maybe the R3 is too flat/neutral and would benefit if it more aligned with the Harman curve? Would be interesting to compare and overlay the frequency response of the Ascend 2EX against the R3.
 

amadeuswus

Active Member
Forum Donor
Joined
Jul 8, 2019
Messages
279
Likes
266
Location
Massachusetts
"As I [Amir] noted in the [KEF R3] review, broad deviations in the measurements, despite their low level, may have a much larger subjective difference." (The quote comes toward the end of his review.)

The R3 has just such a broad deviation (a rise) in a treble region where the ear is very sensitive--and where many close-miked recordings tend to be "hot" anyway. I have not heard it, but EQ and boundary reinforcement from the front wall might help a lot. Maybe a bit ironic for a speaker described (by another member) as having some of the best measurements ever seen.
 

echopraxia

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
1,109
Likes
2,698
Location
California
Maybe the R3 is too flat/neutral and would benefit if it more aligned with the Harman curve? Would be interesting to compare and overlay the frequency response of the Ascend 2EX against the R3.

The interesting thing is that if I had any critique for my Ascend speakers, it's that they're also "too flat/neutral". The sound signature seemed quite similar (though we won't know for sure unless/until we get measurements from Amir[1]) between the R3 and 2EX.

In fact, I actually prefer the frequency response of my Revel F206 over that of my Ascend Sierra RAAL Towers, despite the latter actually winning my overall favor when I include some mild equalization and/or the addition of my subwoofers. (I still haven't done a blind test of Revel F206 vs Ascend Sierra RAAL Towers, but I plan to.)

So I agree frequency response is very important, and I'm a fan of the Harman curve, but I don't think this explains my R3 vs 2EX comparison because I think both of them are pretty similar in terms of FR. It could explain the results of Amir's Revel M16 vs KEF R3 subjective comparison here, though.

[1] Don't worry, I am already in contact with Amir about measuring some of my speakers in the future (he has a long queue I'm sure) :) Though I leave the choice of which speaker(s) to borrow first completely up to Amir.
 
Last edited:

tecnogadget

Addicted to Fun and Learning
Joined
May 21, 2018
Messages
558
Likes
1,012
Location
Madrid, Spain
Is there a reason why the port is not placed at the center of the back side?
#aesthetic

Let's have fun, but let's also enlighten our members.

Quoted from the KEF R&D R Series 2018 Whitepaper

"Resonances inside the cabinet
The port(s) are positioned on the rear panel of the cabinet. This has two advantages:
• Resonances inside the cabinet are at relatively high frequencies and the sounds are not directed towards the listener.
• There is more freedom on the rear panel compared to the front in positioning the port so it is placed close to antinodes (nulls) of the resonances, so less energy is transmitted through the port.


To illustrate this last point, figures 6 & 7 show the difference in output when the port is placed at a node and an antinode of an internal resonance. For the first of these measurements, there is no wadding inside the cabinet, so the effect is made clearer. The resonance can clearly be seen in the total response.

When the ports are optimally placed and the wadding added, the resonances are greatly depressed and cannot be detected in the overall response."

[Image: port.png — port output with the port at a node vs an antinode of an internal resonance, from the KEF whitepaper]
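The whitepaper's point about placing the port near pressure nulls of the internal standing waves can be sketched in one dimension. This is a simplified illustration only (a rigid closed box, axial modes, no wadding), not KEF's actual model:

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s (room temperature)

def axial_modes(depth_m, n_modes=3):
    """Axial standing-wave frequencies of a closed box of the given
    internal depth: f_n = n * c / (2 * L)."""
    n = np.arange(1, n_modes + 1)
    return n * C / (2.0 * depth_m)

def pressure_nulls(depth_m, n):
    """Positions (from one wall) where mode n has zero pressure:
    cos(n*pi*x/L) = 0  ->  x = (k + 1/2) * L / n. A port opening
    placed here couples least to that resonance."""
    k = np.arange(n)
    return (k + 0.5) * depth_m / n

# Example: an internal depth of roughly 30 cm (an illustrative number)
L = 0.30
print(axial_modes(L))        # first axial resonance frequencies, Hz
print(pressure_nulls(L, 1))  # the first mode's null is at mid-depth
```

This shows why the rear panel gives more placement freedom: the lowest mode's pressure null sits halfway along the depth axis, away from the drivers on the front baffle.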


Is the Klippel system able to measure the individual drivers and the port?
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
Like I said, I simply don’t know. But in my KEF R3 vs Ascend Sierra 2EX blind test results thread(s), three major categories of hypotheses were put forward towards explaining the KEF R3's loss (by a large margin) despite having superior (traditionally interpreted) spin measurement results:

Hypothesis 1. It's an outlier: That it was just random chance that both blind test participants overwhelmingly preferred the Ascend 2EX over the KEF R3. Or that by chance my room (or subwoofer crossover) interacted more favorably with the 2EX somehow, even though the KEF R3 may be a bit flatter (though they’re both quite neutral speakers and sound quite similar sonically).

Hypothesis 2. Since it’s unlikely we’re interpreting the measurements wrong, KEF therefore must be “massaging” their published R3 measurements somehow to look better than the performance of the production models they ship.

Hypothesis 3. The KEF R3 published measurements are not wrong, and therefore our model used to predict speaker preference from spin measurements is not quite right. Either there's an important dimension we're missing when predicting preference (e.g. perhaps something that shows up in polar plots or the full sound field, but gets averaged out in the spinorama), or we're incorrectly weighting the degree to which some factors (e.g. directivity index) contribute to overall speaker preference, or a bit of both.

Hypothesis #1 is pretty much always a valid concern (which always comes up in any blind test), but similarly, it’s therefore not productive to dwell on this unless it means actively working to gather more blind test data.

This review from Amir is particularly fascinating because it completely rules out hypothesis #2! We now have independent confirmation that the KEF R3 does indeed have among the best measurements ever seen in a speaker.

This leaves us only with hypothesis #3 (and #1 relating to not having enough data) to explain why the KEF R3 didn’t dominate in the blind listening test (or in many other anecdotal sighted comparisons, including Amir’s).

I have absolutely nothing invested in one theory or the other, except towards us finding the truth — we’re all here because we want to find the closest possible thing to objective audio perfection.

Again, the data on the KEF R3 is really fascinating here because if our current interpretation of the measurements is correct, the R3 should be dominating blind shootouts against just about anything else — including $20,000 - $30,000 Revel speakers like the Salon 2’s, which don’t measure as well IIRC.

Fortunately, this is something we can test, in theory. So far, multiple anecdotal sighted tests and one blind test (mine) has not reinforced the measurement-predicted outcome that the KEF R3 should dominate pretty much everything else on the market. So we need an explanation for that, and/or more data from the blind testing side.

The answer is:

Hypothesis 4. Preference is just that. I like my black tea with sugar, others plain, others still with milk. Honey or perhaps a few drops of rum?
 

spacevector

Addicted to Fun and Learning
Forum Donor
Joined
Dec 3, 2019
Messages
554
Likes
1,008
Location
Bayrea
The answer is:

Hypothesis 4. Preference is just that. I like my black tea with sugar, others plain, others still with milk. Honey or perhaps a few drops of rum?
I'm a maple syrup guy myself.
 

ROOSKIE

Major Contributor
Joined
Feb 27, 2020
Messages
1,936
Likes
3,527
Location
Minneapolis
Like I said, I simply don’t know. But in my KEF R3 vs Ascend Sierra 2EX blind test results thread(s), three major categories of hypotheses were put forward towards explaining the KEF R3's loss (by a large margin) despite having superior (traditionally interpreted) spin measurement results:

Hypothesis 1. It's an outlier: That it was just random chance that both blind test participants overwhelmingly preferred the Ascend 2EX over the KEF R3. Or that by chance my room (or subwoofer crossover) interacted more favorably with the 2EX somehow, even though the KEF R3 may be a bit flatter (though they’re both quite neutral speakers and sound quite similar sonically).

Hypothesis 2. Since it’s unlikely we’re interpreting the measurements wrong, KEF therefore must be “massaging” their published R3 measurements somehow to look better than the performance of the production models they ship.

Hypothesis 3. The KEF R3 published measurements are not wrong, and therefore our model used to predict speaker preference from spin measurements is not quite right. Either there's an important dimension we're missing when predicting preference (e.g. perhaps something that shows up in polar plots or the full sound field, but gets averaged out in the spinorama), or we're incorrectly weighting the degree to which some factors (e.g. directivity index) contribute to overall speaker preference, or a bit of both.

Hypothesis #1 is pretty much always a valid concern (which always comes up in any blind test), but similarly, it’s therefore not productive to dwell on this unless it means actively working to gather more blind test data.

This review from Amir is particularly fascinating because it completely rules out hypothesis #2! We now have independent confirmation that the KEF R3 does indeed have among the best measurements ever seen in a speaker.

This leaves us only with hypothesis #3 (and #1 relating to not having enough data) to explain why the KEF R3 didn’t dominate in the blind listening test (or in many other anecdotal sighted comparisons, including Amir’s).

I have absolutely nothing invested in one theory or the other, except towards us finding the truth — we’re all here because we want to find the closest possible thing to objective audio perfection.

Again, the data on the KEF R3 is really fascinating here because if our current interpretation of the measurements is correct, the R3 should be dominating blind shootouts against just about anything else — including $20,000 - $30,000 Revel speakers like the Salon 2’s, which don’t measure as well IIRC.

Fortunately, this is something we can test, in theory. So far, multiple anecdotal sighted tests and one blind test (mine) has not reinforced the measurement-predicted outcome that the KEF R3 should dominate pretty much everything else on the market. So we need an explanation for that, and/or more data from the blind testing side.

You know, generally I do not really get to know a speaker until several hours have passed over multiple days. My hearing is not the same every day, and what I prefer sonically changes somewhat when I am excited, tired, energized, or meditative.
I also tend to like exciting or forward-sounding speakers for a few tracks and then, over time, grow tired of their character. Other changes like that take place as well.
I also really like to change the volume, listen at different levels, and especially listen to different types of music at different levels.
Additionally, and this is a big one, I question any test done only with the speakers "level matched", because that is not generally possible. I find I must be able to fine-tune the volume between speakers to get a perceived volume match, and even then there is no reason the speakers have to be playing at exactly the same level; what matters most is that each speaker is playing at the level that sounds best for that track, on that speaker, to me. I need to be able to adjust volume to match my senses. In any case, over time the speaker I prefer reveals itself to me, and sometimes it is not quite the one I would have bet on.

I appreciate any and all testing you have done, really I do, but a blind test with 2 people and 2 speakers is a tough one to promote as very telling (it is interesting, though). I would get much more out of a subjective review you wrote yourself based on several weeks of using both speakers. Time can be a real truth-teller, just like a blindfold.

Keep doing these tests and maybe a pattern will appear. I'd really like to see some of these types of tests with very well trained/experienced listeners.
Blind testing is really hard (and, unlike double-blind testing, it still leaves all sorts of room for bias to affect the outcome). You might drop the subwoofer, as it creates all sorts of complexity and inequality (despite being added to equalize the test, I suspect it made it less accurate). Additionally, you must know your tracks and choose them very wisely, perhaps at first picking only recordings of the highest standard. Otherwise it will be very difficult to tell the difference between a speaker revealing poorly recorded material and a speaker not sounding good of its own accord.
 

ROOSKIE

Major Contributor
Joined
Feb 27, 2020
Messages
1,936
Likes
3,527
Location
Minneapolis
The answer is:

Hypothesis 4. Preference is just that. I like my black tea with sugar, others plain, others still with milk. Honey or perhaps a few drops of rum?

I am a gongfu cha guy myself. Meaning I am all about the vessels. Little pots and little cups :)
 

aarons915

Addicted to Fun and Learning
Forum Donor
Joined
Oct 20, 2019
Messages
686
Likes
1,142
Location
Chicago, IL
Like I said, I simply don’t know. But in my KEF R3 vs Ascend Sierra 2EX blind test results thread(s), three major categories of hypotheses were put forward to possibly explain why the KEF R3 lost (by a large margin) despite having superior (traditionally interpreted) spin measurement results:

Hypothesis 3. The KEF R3 published measurements are not wrong, and therefore the way we interpret spin measurements may not be quite right. Or, maybe there’s something important not captured in the spins (which are summaries, after all), which might show up in polar plots or other data we have.

We still don't know if the R3 has superior measurements; once we get a 2EX spin from the same Klippel rig we can say for sure, but the Sierra 2 measured pretty well and the midwoofer in the EX should be better.

I actually agree with you that the R3 is somewhat harsh, and I was starting to think the official measurements were BS, but now I see they aren't. I think the spin does show the harshness where Amir was talking about the broad rise from about 3-8 kHz; it's either that or the similar bump in the early reflections. EQ that area just slightly and I bet they become amazing speakers. I also doubt the harshness would be an issue in a bigger room, but in my smaller living room reflections are pretty strong, since the sidewall is only 3 ft from the speaker.
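The slight EQ suggested above could be a single peaking filter. Here is a minimal sketch using the standard RBJ Audio EQ Cookbook coefficients; the centre frequency, gain, and Q are illustrative guesses for a broad 3-8 kHz cut, not a tuned correction for the R3:

```python
import numpy as np

def peaking_eq(fs, f0, gain_db, q):
    """RBJ-cookbook peaking biquad: returns normalized (b, a)."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

def gain_at(b, a, fs, f):
    """Magnitude response of the biquad in dB at frequency f."""
    z = np.exp(-1j * 2.0 * np.pi * f / fs)  # z^-1 on the unit circle
    h = (b[0] + b[1] * z + b[2] * z**2) / (a[0] + a[1] * z + a[2] * z**2)
    return 20.0 * np.log10(abs(h))

# Illustrative: a broad -2 dB cut centred at 5 kHz (Q = 1) at 48 kHz
b, a = peaking_eq(48000, 5000, -2.0, 1.0)
print(gain_at(b, a, 48000, 5000))  # -2 dB at the centre frequency
print(gain_at(b, a, 48000, 100))   # essentially 0 dB well below the band
```

A peaking biquad hits its `gain_db` exactly at `f0` and returns to unity at DC and toward Nyquist, which is why a low-Q, small-gain filter like this is a gentle way to tame a broad treble rise.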
 

tecnogadget

Addicted to Fun and Learning
Joined
May 21, 2018
Messages
558
Likes
1,012
Location
Madrid, Spain
I can't understand how this doubt has gotten so out of proportion.
On the one hand, we relentlessly pursue objective results through measurement and the scientific method.
But then more credit is given to a couple of subjective opinions.

In this particular case, I can understand that another speaker might be better, or that someone might like it more than the one analyzed here. But I find it hard to believe that the difference between the A, B and C speakers under discussion is night and day, to the point of concluding that the one analyzed is basically not worth it.

Let's be serious: the measurements hold up. Across different forums, users have loved both the previous generation and the current one. The specialized press loves it too (not that this matters much, since they are being paid). Still, there is a general consensus that it sounds great, and the measurements correlate with that.
 

Ilkless

Major Contributor
Forum Donor
Joined
Jan 26, 2019
Messages
1,782
Likes
3,520
Location
Singapore
I can't understand how this doubt has gotten so out of proportion.
On the one hand, we relentlessly pursue objective results through measurement and the scientific method.
But then more credit is given to a couple of subjective opinions.

In this particular case, I can understand that another speaker might be better, or that someone might like it more than the one analyzed here. But I find it hard to believe that the difference between the A, B and C speakers under discussion is night and day, to the point of concluding that the one analyzed is basically not worth it.

Let's be serious: the measurements hold up. Across different forums, users have loved both the previous generation and the current one. The specialized press loves it too (not that this matters much, since they are being paid). Still, there is a general consensus that it sounds great, and the measurements correlate with that.

It is still hard for me to square the KEF being called out when it is tuned similarly to the well-reviewed 705P, only better.
 

Xulonn

Major Contributor
Forum Donor
Joined
Jun 27, 2018
Messages
1,828
Likes
6,313
Location
Boquete, Chiriqui, Panama
I also tend to like exciting speakers or forward sounding speakers for a few tracks and then over time grow tired of their character. Other changes like that take place.
If, at age 78, I survive the coronavirus pandemic, I will buy that MiniDSP OpenDRC-DI and have four EQ curves available at the touch of a button. ASR has really convinced me that starting with at least 320 kbps audio files, a transparent electronics chain, and a good, flat-measuring pair of loudspeakers, then using DSP to fine-tune to my "sonic preference of the day", is the best path to audio happiness.
 