
Micca RB42 Bookshelf Speaker Review

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,682
Likes
37,394
It has failed 15-20% of the time in some of the formal experiments, hasn't it?




Hey Joe, do you have a public album available of all your measurements like the one SmackDaddy posted: https://photos.google.com/share/AF1...?key=U0pKaFBJRkU3bzVYX0tOdnNDaFBFbTRZYnVvN3Bn
What I'm seeing of the formula vs measurements so far doesn't look at all like 86% correlation. Maybe @MZKM could address it in its own thread, as he seems to have spent the most time on the formula and has an idea of what lowers a score and what surprisingly does not. If we could get Todd Welti, Sean Olive and Floyd Toole to participate, it would spread some needed understanding, I think.
@twelti @Floyd Toole
 

Haint

Senior Member
Joined
Jan 26, 2020
Messages
347
Likes
453
What I'm seeing of the formula vs measurements so far doesn't look at all like 86% correlation. Maybe @MZKM could address it in its own thread, as he seems to have spent the most time on the formula and has an idea of what lowers a score and what surprisingly does not.

There were a number of questions raised about how the Harbeth and Neumann are scored more or less equally (a 0.25 difference) when the raw measurements appear to suggest the Neumann should be a significantly better loudspeaker.
 
Last edited:

joentell

Active Member
Reviewer
Joined
Feb 4, 2020
Messages
240
Likes
770
Location
Los Angeles
It has failed 15-20% of the time in some of the formal experiments, hasn't it?




Hey Joe, do you have a public album available of all your measurements like the one SmackDaddy posted: https://photos.google.com/share/AF1...?key=U0pKaFBJRkU3bzVYX0tOdnNDaFBFbTRZYnVvN3Bn
I don't. I'm not here to plug my own channel, but instead to just clarify something in a comment mentioning me directly. I will just say that if you Google "Speaker Leaderboard" you will find a ranking of all the speakers I've reviewed on my channel. I almost always do a simple in-room frequency response measurement so viewers have a sense of what I was hearing. It's usually at the end of each video during the sound demo.
 

Haint

Senior Member
Joined
Jan 26, 2020
Messages
347
Likes
453
I don't. I'm not here to plug my own channel, but instead to just clarify something in a comment mentioning me directly. I will just say that if you Google "Speaker Leaderboard" you will find a ranking of all the speakers I've reviewed on my channel. I almost always do a simple in-room frequency response measurement so viewers have a sense of what I was hearing. It's usually at the end of each video during the sound demo.

I'm aware, and I enjoy the channel, but a more formal database of those measurements would make it much easier to compare a variety of speakers directly vs. scrubbing through the YT videos. Like your recent Debut Reference review A/B'ed with the UB5, it would be useful to refer to an album to A/B the Reference with, say, the B6.2, or whatever.
 

Absolute

Major Contributor
Forum Donor
Joined
Feb 5, 2017
Messages
1,085
Likes
2,131
I just want to emphasize that in-room measurements are quite useless as an indicator of sound quality. If you measure a flat response in-room from a few meters out, the speaker will either have extremely directional dispersion or elevated treble (if the room is not acoustically very large, of course).
 

Haint

Senior Member
Joined
Jan 26, 2020
Messages
347
Likes
453
I just want to emphasize that in-room measurements are quite useless as an indicator of sound quality. If you measure a flat response in-room from a few meters out, the speaker will either have extremely directional dispersion or elevated treble (if the room is not acoustically very large, of course).

I'm aware, but if the method is consistent, a reasonable relative comparison does have value (e.g. Speaker A has a bump at 5k-10kHz vs. Speaker B, which has a dip), and you can draw some conclusions from the high-frequency measurements, can't you?
 

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,551
Location
Land O’ Lakes, FL
What I'm seeing of the formula vs measurements so far doesn't look at all like 86% correlation. Maybe @MZKM could address it in its own thread, as he seems to have spent the most time on the formula and has an idea of what lowers a score and what surprisingly does not. If we could get Todd Welti, Sean Olive and Floyd Toole to participate, it would spread some needed understanding, I think.
@twelti @Floyd Toole
We don’t know the gradient of the scoring (how much better is a 6 vs a 6.5 vs a 7?); it’s likely exponential/logarithmic in nature (matched to listener-described preference ratings). So I wouldn’t argue about why two speakers are within ~0.25 of each other; it would be more appropriate to ask why a speaker scored much lower/higher than others that measured somewhat similarly. People likely think the LS50 sounds better than the Harbeth, and the LS50 indeed scores better than it, so that’s all good; if you think the Harbeth should be closer in score to lower-scoring speakers, that’s a discussion worth having.

But, as stated, look at my graphs for both the LS50 and RB42; they are pretty similar (except for vertical performance). Again, one limitation is the lack of frequency weighting: +3dB in the presence region is treated the same as +3dB in the mid-bass or upper treble.
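
For anyone unfamiliar with the model, here is a rough Python sketch of how the four terms are usually said to combine; the coefficients are the ones commonly quoted from Olive's 2004 Part II paper, and the function name and comments are illustrative only, so treat this as a sketch rather than the exact published code:

# Rough sketch of the commonly quoted Olive preference-rating model
# (Olive 2004, Part II). Note there is no frequency weighting anywhere:
# an NBD contribution counts the same whether the deviation sits in the
# presence region, the mid-bass, or the upper treble.
def preference_rating(nbd_on, nbd_pir, lfx, sm_pir):
    # nbd_on : narrow-band deviation of the on-axis response (dB)
    # nbd_pir: narrow-band deviation of the predicted in-room response (dB)
    # lfx    : log10 of the low-frequency extension frequency (Hz)
    # sm_pir : smoothness (r^2) of the PIR regression line, 0..1
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir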
 

Absolute

Major Contributor
Forum Donor
Joined
Feb 5, 2017
Messages
1,085
Likes
2,131
I'm aware, but if the method is consistent, a reasonable relative comparison does have value (e.g. Speaker A has a bump at 5k-10kHz vs. Speaker B, which has a dip), and you can draw some conclusions from the high-frequency measurements, can't you?
No, you can make relative comparisons between the steady-state response in your room, but that doesn't tell you much about what the speaker is doing.

Here is an example of in-room measurements of the Klipsch RP-160m vs the Kii Three after inclusion of subs and near-field EQ on the Klipsch to make the listening window flatter. They don't sound nearly the same even if the graphs are quite similar. Ignore the ripples from 2-4kHz on the red; those are reflections from the microphone stand because I'm a moron :)

[Attached image: Kii L vs Klipsch L.jpg]


Edit: I was a bit swift. The Kiis are a controlled-directivity design with controlled/constant dispersion down to about 100 Hz, while the Klipsch is a normal speaker with a small horn to give controlled dispersion over a small range. The Kii measures near perfectly while the Klipsch is riddled with nastiness. Still, depending on where you measure (in this case on the sofa near a back wall), the room will dominate the steady-state response to such a degree that it isn't reliable data for determining speaker sound quality.
 
Last edited:

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,551
Location
Land O’ Lakes, FL
No, you can make relative comparisons between the steady-state response in your room, but that doesn't tell you much about what the speaker is doing.

Here is an example of in-room measurements of the Klipsch RP-160m vs the Kii Three after inclusion of subs and near-field EQ on the Klipsch to make the listening window flatter. They don't sound nearly the same even if the graphs are quite similar. Ignore the ripples from 2-4kHz on the red; those are reflections from the microphone stand because I'm a moron :)

View attachment 48658
The huge y-axis and 10dB increments don’t help either; the green is ~5dB lower at 3kHz, and that’s audible.
The smoothing is also important.
This would just be tonal balance though; it doesn’t tell a lot in regards to imaging, soundstage, image width, etc.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
What I'm seeing of the formula vs measurements so far doesn't look at all like 86% correlation. Maybe @MZKM could address it in its own thread, as he seems to have spent the most time on the formula and has an idea of what lowers a score and what surprisingly does not.

Comparing @MZKM 's graphs, if we look at the Listening Window, the RB42 does hold its own surprisingly well compared to the LS50. However, the PIR doesn't look as good on the RB42. I'm wondering if this is related to the slope issue.

To me the most glaring problem with the RB42 is its loss of directivity control around 5 kHz, where the reflections sound too bright relative to the direct sound. The LS50 and 305P which have a similar score do not have this kind of problem. The Control 1 kinda has this problem around 7 kHz. To me the graphs look like these four speakers should be split into two categories, with the LS50 and 305P in the top category, and the Control 1 and RB42 demoted to the bottom category due to their directivity abnormalities. But the current formula disagrees, showing them to be all equivalent. Like others I am struggling to make sense of this.
 

Haint

Senior Member
Joined
Jan 26, 2020
Messages
347
Likes
453
No, you can make relative comparisons between the steady-state response in your room, but that doesn't tell you much about what the speaker is doing.

Here is an example of in-room measurements of the Klipsch RP-160m vs the Kii Three after inclusion of subs and near-field EQ on the Klipsch to make the listening window flatter. They don't sound nearly the same even if the graphs are quite similar. Ignore the ripples from 2-4kHz on the red; those are reflections from the microphone stand because I'm a moron :)

View attachment 48658

Edit: I was a bit swift. The Kiis are a controlled-directivity design with controlled/constant dispersion down to about 100 Hz, while the Klipsch is a normal speaker with a small horn to give controlled dispersion over a small range. The Kii measures near perfectly while the Klipsch is riddled with nastiness. Still, depending on where you measure (in this case on the sofa near a back wall), the room will dominate the steady-state response to such a degree that it isn't reliable data for determining speaker sound quality.

I wouldn't say those speakers are anywhere close to measuring the same, or even similarly. I would, however, conclude that the red one is much hotter from 2kHz to 4kHz and may sound harsher and brighter than the blue. I understand that below 1kHz is hugely impacted by the room, so I wouldn't draw any conclusions from that. I am, however, curious whether, if you hot-swapped similarly sized bookshelves and took back-to-back measurements, it would expose any trends in a speaker's raw response below 1kHz relative to the comparison speaker.

We don’t know the gradient of the scoring (how much better is a 6 vs a 6.5 vs a 7?); it’s likely exponential/logarithmic in nature (matched to listener-described preference ratings). So I wouldn’t argue about why two speakers are within ~0.25 of each other; it would be more appropriate to ask why a speaker scored much lower/higher than others that measured somewhat similarly. People likely think the LS50 sounds better than the Harbeth, and the LS50 indeed scores better than it, so that’s all good; if you think the Harbeth should be closer in score to lower-scoring speakers, that’s a discussion worth having.

But, as stated, look at my graphs for both the LS50 and RB42; they are pretty similar (except for vertical performance). Again, one limitation is the lack of frequency weighting: +3dB in the presence region is treated the same as +3dB in the mid-bass or upper treble.

That was my suspicion.
 
Last edited:

Absolute

Major Contributor
Forum Donor
Joined
Feb 5, 2017
Messages
1,085
Likes
2,131
The huge y-axis and 10dB increments don’t help either; the green is ~5dB lower at 3kHz, and that’s audible.
The smoothing is also important.
This would just be tonal balance though; it doesn’t tell a lot in regards to imaging, soundstage, image width, etc.
It doesn't tell you the most important bit, which is what/how our ears hear. Toole is pretty adamant about this.

I wouldn't say those speakers are anywhere close to measuring the same, or even similarly. I would, however, conclude that the red one is much hotter from 2kHz to 4kHz and may sound harsher and brighter than the blue.
That can easily be fixed; here both are Audiolensed. There's still a significant difference in sound even though the overall tonality is the same:

[Attached image: Kii L vs Klipsch L.jpg]


The point, however, is the same: in-room measurements are useless for the purpose of demonstrating any type of speaker sound quality.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,682
Likes
37,394
It doesn't tell you the most important bit, which is what/how our ears hear. Toole is pretty adamant about this.


That can easily be fixed; here both are Audiolensed. There's still a significant difference in sound even though the overall tonality is the same:

View attachment 48660

The point, however, is the same: in-room measurements are useless for the purpose of demonstrating any type of speaker sound quality.
You might have a point if you insist on measuring at the back wall, so the room is maximally involved in the result. You don't have to measure there, you know. Measuring closer (especially very close) definitely can show tonal differences inherent to the speaker above the Schroeder frequency.
 

Absolute

Major Contributor
Forum Donor
Joined
Feb 5, 2017
Messages
1,085
Likes
2,131
Not my point; I stole it from Dr. Toole. I tested it to check if it was true. It was. Otherwise, room EQ would be all you would need.
 

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,551
Location
Land O’ Lakes, FL
Comparing @MZKM 's graphs, if we look at the Listening Window, the RB42 does hold its own surprisingly well compared to the LS50. However, the PIR doesn't look as good on the RB42. I'm wondering if this is related to the slope issue.

To me the most glaring problem with the RB42 is its loss of directivity control around 5 kHz, where the reflections sound too bright relative to the direct sound. The LS50 and 305P which have a similar score do not have this kind of problem. The Control 1 kinda has this problem around 7 kHz. To me the graphs look like these four speakers should be split into two categories, with the LS50 and 305P in the top category, and the Control 1 and RB42 demoted to the bottom category due to their directivity abnormalities. But the current formula disagrees, showing them to be all equivalent. Like others I am struggling to make sense of this.
I am going to experiment with the slope for the NBD_PIR score.
The main difference in their scores is the narrow-band deviation of the on-axis response and the smoothness of the predicted in-room response; the Micca wins the former and loses the latter.

On-axis
Micca:
[Attached image: Screen Shot 2020-02-04 at 5.58.44 PM.png]


KEF:
[Attached image: Screen Shot 2020-02-04 at 5.58.20 PM.png]


It's very apparent that the Micca is better (if frequencies are not weighted).

Smoothness PIR
Micca:
[Attached image: Screen Shot 2020-02-04 at 5.55.58 PM.png]


KEF:
[Attached image: Screen Shot 2020-02-04 at 5.55.27 PM.png]


The KEF follows the red line more closely. The bass boost of the Micca hurts its score in this instance, but it will fake the notion of having deep bass; ignoring the bass boost and the slope (this metric is just how smooth the curve is), they aren't that far apart (load them both up and quick-switch between them).
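
To make those two metrics concrete, here is a rough Python sketch of how NBD and SM are typically computed; the half-octave bands over 100 Hz-12 kHz for NBD and the log-frequency regression over 100 Hz-16 kHz for SM follow my reading of Olive's paper, and the freq/spl arrays are just a hypothetical measured curve, so treat it as an illustration rather than the exact published procedure:

import numpy as np

def nbd(freq, spl, f_lo=100.0, f_hi=12000.0):
    # Narrow-band deviation: within each half-octave band, take the mean
    # absolute deviation from that band's average level, then average
    # across bands (result in dB; lower is better).
    deviations = []
    f = f_lo
    while f < f_hi:
        band = (freq >= f) & (freq < f * 2 ** 0.5)
        if band.any():
            deviations.append(np.mean(np.abs(spl[band] - spl[band].mean())))
        f *= 2 ** 0.5
    return float(np.mean(deviations))

def sm(freq, spl, f_lo=100.0, f_hi=16000.0):
    # Smoothness: r^2 of a straight-line fit of level vs. log-frequency,
    # i.e. how closely the curve follows its own regression line (higher is better).
    sel = (freq >= f_lo) & (freq <= f_hi)
    r = np.corrcoef(np.log10(freq[sel]), spl[sel])[0, 1]
    return float(r ** 2)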
 
Last edited:

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
It has failed 15-20% of the time in some of the formal experiments, hasn't it?

It doesn't really work like that. The correlation between the formula's predicted preference ratings and the actual average preference ratings people gave the speakers during the formal blind tests was 0.86. This doesn't mean the ratings matched 86% of the time and failed to match 14% of the time. The 0.86 number specifically refers to the (sample) Pearson correlation coefficient, a measure of the linear correlation between the predicted and actual preference ratings (1 being perfect linear correlation, 0 no correlation at all). Effectively, it's a measure based on how far, on average, the data points are from the linear regression line (line of best fit) through them. In Sean Olive's paper, A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part II - Development of the Model (scroll down in that link for the correct paper), the correlation between predicted and actual preference ratings can be seen in this graph:

[Attached image: correlation.png]


The fact that the line of best fit is also a y = x line shows that on average the predicted and actual preference ratings correspond one to one, and the fact that most of the data points are close to this line shows that the correlation is high (0.86).
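
If it helps to see the computation, here's a small Python sketch of how a Pearson correlation coefficient and a line of best fit are calculated for a set of predicted vs. measured ratings; the numbers are made up purely for illustration and are not Olive's data:

import numpy as np

# Made-up predicted vs. measured preference ratings, purely for illustration.
predicted = np.array([4.2, 5.0, 5.8, 6.1, 6.5, 7.0, 7.4])
measured = np.array([4.6, 4.7, 6.2, 5.8, 6.9, 6.8, 7.6])

r = np.corrcoef(predicted, measured)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(predicted, measured, 1)  # line of best fit

print(f"r = {r:.2f}; best fit: y = {slope:.2f}x + {intercept:+.2f}")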
 