"Emotiva 6s" is a copy-paste error in the review in one place.

Yes, it certainly looks like a copy-and-paste error.
What I'm seeing of the formula vs measurements so far doesn't look at all like an 86% correlation. Maybe @MZKM could address it in its own thread, as he seems to have spent the most time on the formula and has an idea of what lowers a score and what surprisingly does not. If we could get Todd Welti, Sean Olive and Floyd Toole to participate, it would spread some needed understanding, I think.
Hey Joe, do you have a public album available of all your measurements like the one SmackDaddy posted: https://photos.google.com/share/AF1...?key=U0pKaFBJRkU3bzVYX0tOdnNDaFBFbTRZYnVvN3Bn
I don't. I'm not here to plug my own channel, but instead to just clarify something in a comment mentioning me directly. I will just say that if you Google "Speaker Leaderboard" you will find a ranking of all the speakers I've reviewed on my channel. I almost always do a simple in-room frequency response measurement so viewers have a sense of what I was hearing. It's usually at the end of each video during the sound demo.
I just want to emphasize that in-room measurements are quite useless as an indicator of sound quality. If you measure a flat response in-room from a few meters out, the speaker will either be extremely directional or have elevated treble, assuming the room is not acoustically very large.
Review both.

The Pioneers in the same price range sound leagues better imo (Amir may need to review that too).
@twelti @Floyd Toole
No, you can make relative comparisons between the steady-state response in your room, but that doesn't tell you much about what the speaker is doing.

I'm aware, but if the method is consistent, a reasonable relative comparison does have value (i.e. Speaker A has a bump at 5k-10k vs. Speaker B which has a dip), and you can draw some conclusions from the high-frequency measurements, can't you?
A huge y-axis and 10 dB increments don't help either; the green is ~5 dB lower at 3 kHz, and that's audible.
Here is an example of in-room measurements of the Klipsch RP-160m vs the Kii Three, after adding subs and near-field EQ on the Klipsch to make the listening window more flat. They don't sound nearly the same even if the graphs are quite similar. Ignore the ripples from 2-4 kHz on the red; that's reflections from the microphone stand, because I'm a moron.
View attachment 48658
Wrong here and there as well.

Measurements of the Emotiva 6s was performed using the Klippel Near-field Scanner (NFS).
Edit: I was a bit swift. The Kiis are a controlled-directivity design with controlled/constant dispersion down to about 100 Hz, while the Klipsch is a normal speaker with a small horn giving controlled dispersion over a small range. The Kii measures near perfect while the Klipsch is riddled with nastiness. Still, depending on where you measure (in this case on the sofa near a back wall), the room will dominate the steady-state response to such a degree that it isn't reliable data for determining speaker sound quality.
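On the earlier point about relative comparisons: if the measurement method is held consistent, the practical move is to put both curves on a common log-frequency grid and inspect the difference. A hypothetical numpy sketch (function and variable names are mine, not from any tool):

```python
import numpy as np

def difference_curve(f_a, spl_a, f_b, spl_b, n_points=200):
    """Interpolate responses A and B onto a shared log-frequency grid over
    their overlapping range and return (grid_hz, A_minus_B_db)."""
    lo = max(f_a.min(), f_b.min())
    hi = min(f_a.max(), f_b.max())
    grid = np.geomspace(lo, hi, n_points)
    # interpolate in log-frequency so octaves are evenly weighted
    a = np.interp(np.log10(grid), np.log10(f_a), spl_a)
    b = np.interp(np.log10(grid), np.log10(f_b), spl_b)
    return grid, a - b
```

A broad plateau in the difference curve (say, +5 dB from 2-4 kHz) is the kind of speaker-inherent tonal difference that tends to survive the room; narrow ripple is usually just the room or the mic stand.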
We don’t know the gradient of the scoring: how much better is a 6 vs a 6.5 vs a 7? It’s likely exponential/logarithmic in nature (matching listener-described preference ratings). So I wouldn’t argue about why two speakers are within ~0.25 of each other; it would be more appropriate to ask why a speaker scored much lower/higher than others that measured somewhat similarly. People likely think the LS50 sounds better than the Harbeth, and the LS50 indeed scores better than it, so that’s all good; if you think the Harbeth should be closer in score to lower-scoring speakers, that’s a discussion worth having.
But, as stated, look at my graphs for both the LS50 and RB42; they are pretty similar (except for vertical performance). Again, one limitation is the lack of frequency weighting: +3 dB in the presence region is treated the same as +3 dB in the mid-bass or upper treble.
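For anyone following along, the model being discussed is Olive's 2004 preference-rating regression. A rough numpy sketch, where the band edges and the exact NBD averaging are my simplifications rather than the paper's verbatim procedure:

```python
import numpy as np

def nbd(freqs, spl, f_lo=100.0, f_hi=12000.0, n_bands=20):
    """Narrow-band deviation: average, over ~1/2-octave bands from
    100 Hz to 12 kHz, of the mean absolute deviation (dB) from each
    band's average level."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    devs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spl[(freqs >= lo) & (freqs < hi)]
        if band.size:
            devs.append(np.mean(np.abs(band - band.mean())))
    return float(np.mean(devs))

def preference_rating(nbd_on, nbd_pir, lfx, sm_pir):
    """Olive's published regression, as I understand it:
    12.69 - 2.49*NBD_ON - 2.99*NBD_PIR - 4.31*LFX + 2.32*SM_PIR,
    where LFX is log10 of the -6 dB low-frequency extension in Hz
    and SM_PIR is the smoothness (r^2) of the PIR regression line."""
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir
```

Note that a perfectly flat speaker with 20 Hz extension only lands around 9.4, and a fixed dB error costs the same number of points anywhere in the band, which is exactly the no-frequency-weighting limitation mentioned above.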
It doesn't tell you the most important bit, which is what/how our ears hear. Toole is pretty adamant about this.
The smoothing is also important.
This would just be tonal balance, though; it doesn't tell you a lot in regards to imaging, soundstage, image width, etc.
I wouldn't say those speakers are anywhere close to measuring the same, or even similarly. I would, however, conclude that the red one is much hotter from 2 kHz to 4 kHz and may sound harsher and brighter than the blue.
You might have a point if you insist on measuring at the back wall, so the room is maximally involved in the result. You don't have to measure there, you know. Measuring closer (especially very close) definitely can show tonal differences inherent to the speaker above the Schroeder frequency.
Can easily be fixed; here both are Audiolensed. Still a significant difference in sound even though the overall tonality is the same:
View attachment 48660
The point, however, is the same: in-room measurements are useless for the purpose of demonstrating any type of speaker sound quality.
I am going to experiment with the slope for the NBD_PIR score.

Comparing @MZKM's graphs, if we look at the Listening Window, the RB42 does hold its own surprisingly well compared to the LS50. However, the PIR doesn't look as good on the RB42. I'm wondering if this is related to the slope issue.
To me the most glaring problem with the RB42 is its loss of directivity control around 5 kHz, where the reflections sound too bright relative to the direct sound. The LS50 and 305P, which have a similar score, do not have this kind of problem. The Control 1 kinda has this problem around 7 kHz. To me, the graphs look like these four speakers should be split into two categories, with the LS50 and 305P in the top category and the Control 1 and RB42 demoted to the bottom category due to their directivity abnormalities. But the current formula disagrees, showing them all to be equivalent. Like others, I am struggling to make sense of this.
It has failed 15-20% of the time in some of the formal experiments, hasn't it?
So this beats out the KEF LS50? Clearly the formula needs some work?