@distortion graph: so 2.83v is 86dB and 10v is 105dB?
2.83 -> 10 should be +11 dB so 86dB -> 97dB
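For anyone checking the arithmetic, the voltage-to-level conversion is 20·log10(V2/V1); a quick sketch:

```python
import math

def db_gain(v_from: float, v_to: float) -> float:
    """Level change in dB for a voltage step (20 * log10 of the ratio)."""
    return 20 * math.log10(v_to / v_from)

step = db_gain(2.83, 10.0)
print(round(step, 1))    # ~ +11.0 dB
print(round(86 + step))  # 86 dB at 2.83 V -> ~97 dB at 10 V
```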
Indeed I agree it measures very well!

Probably a typo. What is more interesting is that THD over the entire range is driven by the 2nd harmonic, which is far more benign than if it were the 3rd. That would point to an extremely neutral-sounding speaker.
Indeed I agree it measures very well!
.. or you can level-match the 100Hz-1kHz region and realise that the M16 is a bit higher in the 1kHz-10kHz region, after which it rolls off more steeply.
But the truth is that, barring the M16's hump at 1200-1800Hz and its sharper roll-off at the top end, those two speakers are incredibly similar.
View attachment 54092
Here is the thing I believe would be interesting to most of us: as both the M16 and R3 have such similar estimated in-room responses and exhibit quite low distortion, I wonder if they could be told apart if EQ'd to the same target and listened to blindly in a room.
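The "EQ'd to the same target" premise is simple to state in code: for each speaker, the per-band correction is just the target minus the measured response. A minimal sketch with hypothetical octave-band levels (all numbers here are made up for illustration, not Amir's data):

```python
# Hypothetical smoothed in-room responses (dB SPL) per octave band,
# plus a flat target. Corrections are simply target minus measured.
bands_hz = [125, 250, 500, 1000, 2000, 4000, 8000]
m16 = [84.0, 84.5, 85.0, 85.0, 86.0, 85.5, 83.0]
r3  = [85.0, 85.0, 85.0, 85.0, 85.0, 84.5, 83.5]
target = [85.0] * len(bands_hz)

eq_m16 = [t - m for t, m in zip(target, m16)]
eq_r3  = [t - r for t, r in zip(target, r3)]
# After applying these gains both speakers sit on the same target,
# which is the premise of the proposed blind comparison.
```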
That's certainly not wrong, but the thing is you can get that stuff anywhere. There are about a million people doing reviews of speakers in this manner. But there's only one guy with a Klippel. While there certainly can be some value in subjective reviews, I'm perfectly fine with the one guy with a Klippel doing nothing but providing us with the data we pretty much can't get anywhere else. Squeeze the most value out of that thing, don't waste too much time doing other things, I say.

Fully agree with you, although for subjective impressions to also be useful I would like more care in the listening tests: sensible placement and listening distance, placement according to the bass properties or EQ in the bass/modal region, stereo listening, and measurements at the listening position in the room.
That's certainly not wrong, but the thing is you can get that stuff anywhere. There are about a million people doing reviews of speakers in this manner. But there's only one guy with a Klippel. While there certainly can be some value in subjective reviews, I'm perfectly fine with the one guy with a Klippel doing nothing but providing us with the data we pretty much can't get anywhere else. Squeeze the most value out of that thing, don't waste too much time doing other things, I say.

I don't say anything different; I actually ignore the subjective listening and only look at the measurements. I am just writing what I would expect to see if I wanted a subjective review as an add-on. Personally I would even prefer if the reviews had only objective measurements and the "panther ratings" were based only on measurements, as this way the two kind of get muddled up for inexperienced visitors.
It all depends what you want to show. Take a look at this comparison. Obviously both speakers aimed for flat PIR in the 1kHz-10kHz range, so I aligned them that way (let's ignore the M16 hump in the 1200-1800Hz range).

Somehow we use different data, as I used the estimated listening response (Estimated In-Room Response.txt) from his zip files, which gives a very different result if I align them similarly:
From that comparison it turned out that the M16 puts out slightly less energy in the entire sub-1kHz region and also rolls off more steeply after 10kHz.
View attachment 54091
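Level-matching two exported PIR curves (such as the Estimated In-Room Response.txt files mentioned above) can be done by offsetting one so the mean levels agree over the chosen band. A minimal pure-Python sketch; the (frequency, SPL) points and the 1kHz-10kHz band are assumptions standing in for the real exported data:

```python
# Hypothetical (freq_Hz, SPL_dB) points standing in for two exported PIR curves.
pir_a = [(100, 85.0), (1000, 84.0), (5000, 83.5), (10000, 82.0)]
pir_b = [(100, 88.0), (1000, 87.2), (5000, 86.4), (10000, 84.5)]

def band_mean(curve, lo, hi):
    """Mean SPL over the [lo, hi] Hz band."""
    vals = [spl for f, spl in curve if lo <= f <= hi]
    return sum(vals) / len(vals)

# Offset B so its 1kHz-10kHz mean matches A's, then compare them elsewhere.
offset = band_mean(pir_a, 1000, 10000) - band_mean(pir_b, 1000, 10000)
pir_b_matched = [(f, spl + offset) for f, spl in pir_b]
```

Different choices of matching band (100Hz-1kHz vs 1kHz-10kHz) shift the curves against each other, which is exactly why the two comparisons in this thread look different.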
.. or you can level-match the 100Hz-1kHz region and realise that the M16 is a bit higher in the 1kHz-10kHz region, after which it rolls off more steeply.

That's what I actually did in my initial post #84 (with, again, very different results from yours), which started this discussion.
Yes, that's what I also just wrote in my following post.

View attachment 54096
View attachment 54097
Something is not quite right with these two overlays...
Shouldn't both show overlaid predicted in-room responses? But it seems the predictions in one don't match the other.
However, while the spinorama shows us a great deal, it does not show the lateral reflections as a separate curve. As our ears are in the lateral plane, that first sound to bounce off the sidewall is of great importance.

I agree. Personally I think lateral dispersion smoothness is much more important than vertical (for my uses anyway). I think speakers with really smooth vertical dispersion might "game the score" a bit relative to speakers with a better lateral response but weaker verticals (which may matter less to the listener).
I have no doubt that the Revel will sound better; you can see it in the measurements. You should have a look at the directivity index plots.

Great thanks, nice review and spin data. We are also interested in what you sense for the subjective part (at least I am), and in whether you can find explanations for why the M16 gets a golf panther while the R3 and 8341A don't get their expected football panther. I made a toggling 3-second animated comparison of the M16/R3/8341A below; maybe it shows too much, but I think it's a good look. In the spinorama I have included the PIR as the upper orange curve, and also a rough curve of Toole's trained-listener preference from the first-edition book, which the M16 comes closest to. Thanks if you take a look and see whether any directivity or spinorama data makes sense of the listening test. In the lower animation they are filtered with 8th-order stopbands to a telephone-band-like bandwidth and also smoothed flat as a pancake on axis, which combined with the filtering works as a good normalising effect.
Click inside the visual for clear resolution of the directivity patterns:
View attachment 54038
EQ'd flat on axis and filtered with 8th-order stopbands @80Hz / @7kHz; click inside the visual for clear resolution of the directivity patterns.
View attachment 54039
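The band-limiting described above can be sketched analytically: an 8th-order Butterworth high-pass at 80Hz cascaded with an 8th-order low-pass at 7kHz gives -3dB at each corner and very steep skirts outside the band. The exact filter topology used in the animation is an assumption; this just illustrates the magnitude behaviour:

```python
import math

def butterworth_mag_db(f, fc, order, highpass=False):
    """Magnitude (dB) of an ideal Butterworth low- or high-pass at frequency f."""
    ratio = (fc / f) if highpass else (f / fc)
    return -10 * math.log10(1 + ratio ** (2 * order))

def bandpass_mag_db(f):
    # Cascaded 8th-order high-pass @80 Hz and low-pass @7 kHz, an
    # assumption about the band-limiting described above.
    return butterworth_mag_db(f, 80, 8, highpass=True) + butterworth_mag_db(f, 7000, 8)

for f in (40, 80, 1000, 7000, 14000):
    print(f, round(bandpass_mag_db(f), 1))
```

One octave outside either corner the response is already down by roughly 48dB, so content below 80Hz and above 7kHz is effectively removed.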
Sorry guys, I loaded the wrong PIR for the M16. Here's how they look when the 1kHz-10kHz range is matched:

View attachment 54112

Thanks, now it looks identical to mine.
I’m the one who performed that blind test, and I agree — I couldn’t find any obvious major coloration differences once bass was normalized. They sounded remarkably similar actually, in terms of frequency response. But they did not sound similar in terms of “fidelity”, for lack of a better word.
This objective and subjective review from Amir is really fascinating to me, because of how closely it aligns with my KEF R3 experience: I initially bought the R3 because the published specs seemed to indicate a speaker that should outperform just about anything else out there — and now thanks to Amir, we’ve confirmed that KEF was not sugarcoating the measurements. The KEF R3 really does measure fantastically well.
And yet in my blind test, they didn’t really dominate, despite their phenomenal measurements suggesting that they should. They were clearly fantastic speakers, but the truth is when compared side by side, they lost pretty severely to the Ascend Sierra 2EX in a bass-normalized blind test. And this result is mirrored in Amir’s subjective listening where the R3 loses subjectively to another speaker (the Revel), which purely according to measurements shouldn’t be happening. And there are many other accounts you will find online mirroring this.
As for what could explain this? I do not know. Many of us suspect dispersion breadth to be the likely culprit, but we need more data to establish exactly how much more weight wide dispersion deserves than current preference scores give it. And thanks to Amir's equipment and hard work, many of us are happy to look forward to much more valuable data.
FWIW, I ultimately ended up returning the KEF R3s, but I still have my Revel F206 and all my Ascend flagship speakers. I still have not done a blind test between the Revel F206 and Ascend RAAL Towers, but I do plan to get around to it eventually.
Sorry guys, I loaded the wrong PIR for the M16. Here's how they look when the 1kHz-10kHz range is matched:

View attachment 54112

That certainly looks like the M16 would sound better, i.e. more pleasing, but the R3 more accurate. And I think we can agree by now that as a speaker gets more accurate it gets more boring.