
KEF R3 Speaker Review

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
Read together with your comments on subjective listening and the PIR curve in the 705p review, this review comes across as a bit disingenuous.

Disingenuous is a bit of a strong word imho - it's all relative to the order of the speakers being tested. For example, the first amplifiers and AVRs were reviewed a bit harshly imho, because we were expecting DAC-like performance from amplifiers and standalone-amplifier performance from AVRs. As the sample size built up, expectations were reset a bit, and an AVR that would have been panned if it had been tested first now gets a pass on cost/performance/field average.

In my view, the subjective listening tests, as currently performed, are a pointless exercise and the information is of no use to others.
 

Jon AA

Senior Member
Forum Donor
Joined
Feb 5, 2020
Messages
466
Likes
907
Location
Seattle Area
In my view, the subjective listening tests, as currently performed, are a pointless exercise and the information is of no use to others.
You seem to hate everything about these reviews (and the science upon which they are based). Why do you read them?
 

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
21,614
Likes
21,897
Location
Canada
In my view, the subjective listening tests, as currently performed, are a pointless exercise and the information is of no use to others.
What do you propose as an alternative and an improvement?
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
You seem to hate everything about these reviews (and the science upon which they are based). Why do you read them?

I don't read the reviews, I just look at the graphs.

I don't hate anything.
But I disagree with some of the positions regarding directivity and the target curve. This data was indeed collected in a scientific manner, but it's merely a survey on listening preference, an opinion poll.
I am also bothered by the dismissive attitude towards performance in the time domain.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
What do you propose as an alternative and an improvement?

I think it is obvious that Amir needs help. He doesn't have time to do everything.

Also, the methodology needs to be bullet-proof, the assessor has to be a trained listener, and the listening assessment should be observation-driven, not a tasting session.
 

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
21,614
Likes
21,897
Location
Canada
I think it is obvious that Amir needs help. He doesn't have time to do everything.
It does seem that he is overflowing with gear for testing. Shipping and receiving consume time like a parasite, and the testing and reviews may suffer accordingly. A proper situation would not require somebody with @amirm's education, skills and capability to do the mundane tasks. To introduce another person requires a certain kind of relationship. Perhaps a proper partner or employee would be suitable?
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
It does seem that he is overflowing with gear for testing. Shipping and receiving consume time like a parasite, and the testing and reviews may suffer accordingly. A proper situation would not require somebody with @amirm's education, skills and capability to do the mundane tasks. To introduce another person requires a certain kind of relationship. Perhaps a proper partner or employee would be suitable?


Speakers should be evaluated in mono as well as stereo, properly positioned in the room, over a sufficiently long period.
Adequate music programme should be used to highlight specific shortcomings of speaker performance.
The listener should be familiar with the acoustics of the room and with the test tracks.
Benchmarks for reproduction should be defined.

As for measurements, I would like to see on-axis FR and CSD plots of the mid-woofer when possible, as well as port FR measurements.
An in-room response measurement with the speakers properly set up.
Possibly some simple cabinet-resonance measurements as well, à la Stereophile, or some other way deemed more representative of the cabinet contribution.

Frequency response measurement plots are insufficient to characterise speaker performance.


How long does it take to properly set up a pair of speakers in a room for stereo listening?

Would those submitting speakers for evaluation be willing to do without them for a month?
 

dwkdnvr

Senior Member
Joined
Nov 2, 2018
Messages
418
Likes
698
In other words, you want Amir to test speakers exactly the same way as everyone else. Why? 'Everyone' already does what you're asking for, and it goes against pretty much every piece of established research - auditory perception and auditory memory are extremely unreliable, and don't add anything to the 'science' part of this.

Frequency response measurement plots are insufficient to characterise speaker performance.

Feel free to produce/quote the research that establishes this. I really think you're failing to appreciate the significance of the Harman research - they looked at a lot of parameters including distortion etc, but the conclusion *based on testing and research* is that frequency response really is so dominant in terms of establishing preference that the rest basically doesn't matter. They didn't start with that assumption and try to back into a justification.

If you disagree, then you're free to perform and/or cite competing research. Until then, it's the best we have.
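
For anyone curious how far the "frequency response is dominant" result goes, the headline model from that research is usually summarised as a regression on four metrics derived purely from the frequency response curves. A minimal sketch in Python, assuming the commonly quoted coefficients from Olive's 2004 AES papers (the exact definitions of NBD, LFX and SM involve band limits and smoothing details that are only hinted at here):

```python
import math

# Sketch of the Olive (2004) loudspeaker preference regression, assuming the
# commonly quoted coefficients. All four inputs come from frequency-response
# data (the spinorama curves); nothing else enters the model.
def preference_rating(nbd_on, nbd_pir, lfx_hz, sm_pir):
    """Predicted preference score (roughly a 0-10 scale).

    nbd_on  -- narrow-band deviation of the on-axis response, in dB
    nbd_pir -- narrow-band deviation of the predicted in-room response, in dB
    lfx_hz  -- low-frequency extension (-6 dB point vs. mean level), in Hz
    sm_pir  -- smoothness (r^2 of a line fit to the PIR), between 0 and 1
    """
    lfx = math.log10(lfx_hz)  # the model uses log10 of the extension frequency
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Hypothetical example: small deviations, 40 Hz extension, smooth PIR
print(round(preference_rating(0.3, 0.25, 40.0, 0.95), 2))  # ~6.5
```

Whether those four FR-derived metrics are sufficient is exactly what is being argued in this thread; the sketch only shows what the published model actually uses.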
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,621
Location
London, United Kingdom
As for measurements, I would like to see on-axis FR and CSD plots of the mid-woofer when possible, as well as port FR measurements.
An in-room response measurement with the speakers properly set up.
Possibly some simple cabinet-resonance measurements as well, à la Stereophile, or some other way deemed more representative of the cabinet contribution.
Frequency response measurement plots are insufficient to characterise speaker performance.

I can concede that, when speakers already show similarly good frequency response characteristics, other properties become relatively important if one wants to know which speaker offers the best sound quality. Problem is, existing studies don't seem to provide good guidance as to what these "other properties" should be, how they rank against each other, and where the thresholds lie.

Why do you think CSD plots, port measurements, or cabinet-resonance measurements provide important, perceptually meaningful data on top of what is already shown on far-field frequency response graphs? And why focus on these as opposed to, say, non-linear distortion? How do you know how these metrics rank against each other and how they relate to perception of sound in real-world scenarios (i.e. real loudspeaker, real room, real listener, real content)? Do you have access to research we don't know about? If you don't know, then how do you propose we leverage this additional data in a meaningful way?

Would those submiting speakers for evaluation be willing to do without them for a month?

I don't think that would be the bottleneck for what you're proposing. @amirm has made it abundantly clear he prioritizes speed/quantity over quality, and for that reason has repeatedly refused requests to do additional work per speaker unless there's a strong case for it. I understand his perspective - why cut down on the frequency of speaker reviews to add data that doesn't seem that meaningful from a perceptual perspective? He's just applying the law of diminishing returns.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
I can concede that, when speakers already show similarly good frequency response characteristics, other properties become relatively important if one wants to know which speaker offers the best sound quality. Problem is, existing studies don't seem to provide good guidance as to what these "other properties" should be, how they rank against each other, and where the thresholds lie.

Why do you think CSD plots, port measurements, or cabinet-resonance measurements provide important, perceptually meaningful data on top of what is already shown on far-field frequency response graphs? And why focus on these as opposed to, say, non-linear distortion? How do you know how these metrics rank against each other and how they relate to perception of sound in real-world scenarios (i.e. real loudspeaker, real room, real listener, real content)? Do you have access to research we don't know about? If you don't know, then how do you propose we leverage this additional data in a meaningful way?

CSD gives information about performance in the time domain, as do wavelet and Wigner-Ville analyses. A resonance-induced blip in the FR may resonate for long enough to be audible.
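
For context on what a CSD adds: the waterfall is just the measured impulse response re-analysed with the start of the FFT window pushed progressively later in time, so a resonance that keeps ringing shows up as a slowly decaying ridge at its frequency. A minimal sketch in Python (the FFT length, slice spacing and the synthetic test signal are arbitrary assumptions, and no apodisation window is applied):

```python
import numpy as np

def csd_waterfall(impulse, fs, n_fft=4096, n_slices=40, step=None):
    """Cumulative spectral decay: repeat the FFT of the impulse response while
    zeroing out progressively more of its beginning, so energy that lingers
    (resonant ringing) decays slowly across the slices at its frequency."""
    if step is None:
        step = max(1, int(0.0002 * fs))        # ~0.2 ms between slices
    window = np.asarray(impulse[:n_fft], dtype=float)
    window = np.pad(window, (0, n_fft - len(window)))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    slices = []
    for k in range(n_slices):
        seg = window.copy()
        seg[: k * step] = 0.0                  # discard energy before slice k
        slices.append(20 * np.log10(np.abs(np.fft.rfft(seg)) + 1e-12))
    return freqs, np.array(slices)             # rows: time slices, cols: freqs

# Hypothetical example: an ideal impulse plus a 1 kHz resonance ringing on
fs = 48000
t = np.arange(4096) / fs
impulse = np.zeros(4096)
impulse[0] = 1.0
impulse += 0.1 * np.exp(-t / 0.005) * np.sin(2 * np.pi * 1000 * t)
freqs, decay = csd_waterfall(impulse, fs)      # the 1 kHz ridge decays slowly
```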

I agree that not requesting IMD plots was a serious omission.

It doesn't hurt to have the data. That is probably the first step towards correlating it with listening and determining audibility.

I don't think that would be the bottleneck for what you're proposing. @amirm has made it abundantly clear he prioritizes speed/quantity over quality, and for that reason has repeatedly refused requests to do additional work per speaker unless there's a strong case for it. I understand his perspective - why cut down on the frequency of speaker reviews to add data that doesn't seem that meaningful from a perceptual perspective? He's just applying the law of diminishing returns.

How do you know that it is not meaningful from a perceptual perspective? Existing studies seem to indicate that, but should we stop there? I for one find the methodology in some of those studies somewhat fragile. Should we take the results of studies as gospel, never to be contested?
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
In other words, you want Amir to test speakers exactly the same way as everyone else. Why? 'Everyone' already does what you're asking for, and it goes against pretty much every piece of established research - auditory perception and auditory memory are extremely unreliable, and don't add anything to the 'science' part of this.

You must be joking. It was Amir's choice to publish his listening tests.
People think that these reviews and the Preference Table are meant to have an impact on consumer choice, yet are not critical of the methodology used in the listening tests, nor of the dissonance of some of the comments overlaid on the plots?

And what do you mean by "test speakers exactly the same way as everyone else"?
A performance test (measurements) should be performed the same way as everyone else's, as long as it is adequately performed.
And if a listening assessment is to be published, then it has to be fit for purpose.

Feel free to produce/quote the research that establishes this. I really think you're failing to appreciate the significance of the Harman research - they looked at a lot of parameters including distortion etc, but the conclusion *based on testing and research* is that frequency response really is so dominant in terms of establishing preference that the rest basically doesn't matter. They didn't start with that assumption and try to back into a justification.

I have reservations about some of their conclusions. I also think that there is a strong bias towards a "frequency response tells it all" view (for practical reasons, perhaps?) and towards wide directivity (personal preference - Toole uses an upmixer).
 

Shazb0t

Addicted to Fun and Learning
Joined
May 1, 2018
Messages
643
Likes
1,232
Location
NJ
You must be joking. It was Amir's choice to publish his listening tests.
People think that these reviews and the Preference Table are meant to have an impact on consumer choice, yet are not critical of the methodology used in the listening tests, nor of the dissonance of some of the comments overlaid on the plots?

And what do you mean by "test speakers exactly the same way as everyone else"?
A performance test (measurements) should be performed the same way as everyone else's, as long as it is adequately performed.
And if a listening assessment is to be published, then it has to be fit for purpose.



I have reservations about some of their conclusions. I also think that there is a strong bias towards a "frequency response tells it all" view (for practical reasons, perhaps?) and towards wide directivity (personal preference - Toole uses an upmixer).
Maybe you should consider starting your own site and testing speakers yourself.
 

Ilkless

Major Contributor
Forum Donor
Joined
Jan 26, 2019
Messages
1,771
Likes
3,502
Location
Singapore
A resonance-induced blip in the FR may resonate for long enough to be audible.

This was enough to invalidate your opinion, because it has made it abundantly clear that you are relying on layperson intuition, anecdote and folk theory disguised as a valid counter-argument to empirical evidence. Section 9.2.1 of Sound Reproduction sums up the literature on the subject. Evidence indicates that above 200 Hz we hear the spectral bump, not the ringing; that the ringing is generated by the spectral bump makes citing it as separate evidence a non sequitur.
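
To make the bump/ringing relationship concrete: for a minimum-phase resonance the ring-down is fully determined by the same parameters that set the bump, so the two are not independent pieces of evidence. A minimal sketch in Python, assuming an isolated second-order resonance (the sample rate, centre frequency and Q are arbitrary):

```python
import numpy as np
from scipy import signal

fs, f0, Q = 48000, 1000, 20              # assumed sample rate, centre freq, Q
b, a = signal.iirpeak(f0, Q, fs=fs)      # second-order minimum-phase resonance

# Ring the resonance with an impulse and measure how long it takes to fall 60 dB
impulse = np.zeros(fs)
impulse[0] = 1.0
ringing = signal.lfilter(b, a, impulse)
envelope_db = 20 * np.log10(np.abs(signal.hilbert(ringing)) + 1e-12)
t60 = np.argmax(envelope_db < envelope_db.max() - 60) / fs

# For a second-order resonance the envelope time constant is ~Q/(pi*f0), so the
# ring-down time follows directly from the bandwidth (Q) of the peak in the FR.
predicted_t60 = np.log(1000) * Q / (np.pi * f0)
print(f"measured T60 ~ {t60 * 1000:.1f} ms, predicted ~ {predicted_t60 * 1000:.1f} ms")
```

Flatten the bump with minimum-phase EQ and the ringing disappears with it, which is why the two are treated as one phenomenon rather than as separate evidence.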
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
This was enough to invalidate your opinion, because it has made it abundantly clear that you are relying on layperson intuition, anecdote and folk theory disguised as a valid counter-argument to empirical evidence. Section 9.2.1 of Sound Reproduction sums up the literature on the subject. Evidence indicates that above 200 Hz we hear the spectral bump, not the ringing; that the ringing is generated by the spectral bump makes citing it as separate evidence a non sequitur.

I've just replied to this in another thread...

How do you know that it is not meaningful from a perceptual perspective? Existing studies seem to indicate that, but should we stop there? I for one find the methodology in some of those studies somewhat fragile. Should we take the results of those audibility studies as gospel, never to be contested? That is not science, but religion.
 

infinitesymphony

Major Contributor
Joined
Nov 21, 2018
Messages
1,072
Likes
1,809
Trying to follow this discussion. Are the resonances not what the CSD graph in the review is showing?

[Attached image: the CSD waterfall plot from the review]


There are so many measurement points around the speaker; why would individual components need to be measured separately to determine resonances?
 

Thomas savage

Grand Contributor
The Watchman
Forum Donor
Joined
Feb 24, 2016
Messages
10,260
Likes
16,306
Location
uk, taunton
I've just replied to this in another thread...

How do you know that it is not meaningful from a perceptual perspective? Existing studies seem to indicate that, but should we stop there? I for one find the methodology in some of those studies somewhat fragile. Should we take the results of those audibility studies as gospel, never to be contested? That is not science, but religion.
Can you keep these arguments to the thread they belong in?

It seems they are popping up everywhere and you seem to always be at the center of it.

Back to the OP.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
Trying to follow this discussion. Are the resonances not what the CSD graph in the review is showing?

[Attached image: the CSD waterfall plot from the review]


There are so many measurement points around the speaker; why would individual components need to be measured separately to determine resonances?

The purpose of these plots is to illustrate performance in specific parameters.
Sometimes splitting measurements of individual drivers helps to clarify some issues.
See here.
 

QMuse

Major Contributor
Joined
Feb 20, 2020
Messages
3,124
Likes
2,785
The purpose of these plots is to illustrate performance in specific parameters.
Sometimes splitting measurements of individual drivers helps to clarify some issues.
See here.

Repeating your CSD mantra over and over in various topics won't make it any more true. It is a simple fact that every driver will have several resonance modes. And yes, those modes can easily be measured, but what is relevant is their audibility. So far only the modes that affect the FR have been proven to be audible by listening tests, while the others are not, due to masking. This has been discussed many times, together with multiple quotations from various AES articles, so either provide some new article proving otherwise or please stop repeating the same thing over and over.
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
Repeating your CSD mantra over and over in various topics won't make it any more true. It is a simple fact that every driver will have several resonance modes. And yes, those modes can easily be measured, but what is relevant is their audibility. So far only the modes that affect the FR have been proven to be audible by listening tests, while the others are not, due to masking. This has been discussed many times, together with multiple quotations from various AES articles, so either provide some new article proving otherwise or please stop repeating the same thing over and over.

So far.
 