
Micca RB42 Bookshelf Speaker Review

Haint

Senior Member
Joined
Jan 26, 2020
Messages
347
Likes
453
It doesn't really work like that. The correlation between the formula's predicted preference ratings and the actual average preference ratings people gave the speakers during the formal blind tests was 0.86. This doesn't mean 86% of the time the ratings matched, and 14% of the time they didn't. The 0.86 number specifically refers to the (sample) Pearson correlation coefficient - a measure of the linear correlation between the predicted and actual preference ratings (1 being perfect linear correlation, 0 no correlation at all). Effectively it's a measure based on how far on average the data points are away from the linear regression line (line of best-fit) through them. In Sean Olive's paper, A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part II - Development of the Model (scroll down in that link for the correct paper), the correlation between predicted and actual preference ratings can be seen by looking at this graph:

View attachment 48661

The fact that the line of best fit is also the y = x line shows that on average the predicted and actual preference ratings correspond one-to-one, and the fact that most of the data points are close to this line shows that the correlation is high (0.86).
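
To make the statistic concrete, here is a minimal sketch of how the correlation coefficient and the best-fit line are computed; the ratings below are made-up numbers, not Olive's data:

```python
import numpy as np

# Hypothetical predicted vs. measured preference ratings (made-up, not Olive's data)
predicted = np.array([3.2, 4.1, 5.0, 5.8, 6.5, 7.2, 4.6, 5.5])
measured = np.array([3.0, 4.5, 4.8, 6.1, 6.2, 7.5, 4.0, 5.9])

# Sample Pearson correlation coefficient (1 = perfect linear correlation, 0 = none)
r = np.corrcoef(predicted, measured)[0, 1]

# Least-squares line of best fit: measured ~= slope * predicted + intercept
slope, intercept = np.polyfit(predicted, measured, 1)

print(f"Pearson r = {r:.3f}")
print(f"best-fit line: measured = {slope:.2f} * predicted + {intercept:.2f}")
```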

I see, thanks for the explanation. So it looks like some of the outliers are way off from the prediction. Is the "Measured Preference Rating" the score the test subjects actually awarded? So for example a predicted 4 was actually a 1.5, or a predicted 5 was actually a 7?
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,595
Location
Philadelphia area
Having spent a lot of time with the RB42 myself, this review feels overly harsh.

Full disclosure: Micca did send me a set of them to review a while back. Check out the review if you want to see some teardown pics.

The estimated in-room response of the RB42 does not look bad at all to me and is not too dissimilar to what was measured for the well-behaved JBL 305P. The RB42's response has a downward slope, which is generally what we like to see, and it stays within a few dB of that slope. The RB42 did not sound bright to me at all, and I don't think the measurements match up with the listening impressions in this review w.r.t. brightness.

I don't think the RB42s are perfect, obviously. With cheap bookshelf speakers like these, it's always a game of compromises. I don't think their clarity, particularly in the midrange, can compare with even the good old Pioneer BS22. They are tiny, and the build quality is impressive for the size and price range (aside from the flecks on Amir's pair!) ...these things are built like mini-tanks from thick MDF. And the off-axis response is of course more ragged than we'd like to see.

That said, I think they are unique and would be a good choice in this budget price range for anyone who wants bass from small speakers without a subwoofer - they could be great for a dorm room or apartment.
 

Sgt. Ear Ache

Major Contributor
Joined
Jun 18, 2019
Messages
1,895
Likes
4,162
Location
Winnipeg Canada
I suspect QC must be a bit of an issue with these cheap speakers (not surprisingly). My pair appear to have perfectly clean and well-glued drivers. I guess mine are even closer to passing for 4-500 dollar speakers! :)
 

Sancus

Major Contributor
Forum Donor
Joined
Nov 30, 2018
Messages
2,926
Likes
7,643
Location
Canada
So this beats out the KEF LS50? Clearly the formula needs some work?

It is possible that the KEF's much superior vertical directivity matters more in the desktop listening scenario, as it won't produce bad desktop/console bounces.

The preference rating, much like the listening window, completely ignores vertical response outside +/- 10 degrees, I believe. So if you are in a scenario where that matters it's already not going to be that accurate.

It's certainly better than nothing, but it's pretty far from a "one size fits all" ranking system.
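
For context: as I understand CTA-2034, the listening window is the spatial average of nine curves (on-axis, +/-10° vertical, and +/-10°/20°/30° horizontal), which is why vertical behaviour beyond +/-10° never enters it. A rough sketch of that average, using made-up curves (implementations also differ on whether they average in dB or in pressure/energy; this uses an RMS pressure average):

```python
import numpy as np

# Hypothetical SPL curves (dB) on the same frequency grid; in a real spinorama these
# would be the measured on-axis, +/-10 deg vertical and +/-10/20/30 deg horizontal data.
freqs = np.array([100, 1_000, 5_000, 10_000])  # Hz (toy grid)
curves_db = np.array([
    [85.0, 86.0, 84.0, 82.0],   # on-axis
    [85.0, 85.5, 83.0, 80.0],   # +10 deg vertical
    [84.5, 85.0, 82.5, 79.0],   # -10 deg vertical
    [85.0, 86.0, 83.5, 81.0],   # +10 deg horizontal
    [85.0, 86.0, 83.5, 81.0],   # -10 deg horizontal
    [84.5, 85.5, 82.5, 79.5],   # +20 deg horizontal
    [84.5, 85.5, 82.5, 79.5],   # -20 deg horizontal
    [84.0, 85.0, 81.5, 78.0],   # +30 deg horizontal
    [84.0, 85.0, 81.5, 78.0],   # -30 deg horizontal
])

# RMS (energy) average of the nine curves, then back to dB
pressure = 10 ** (curves_db / 20)
listening_window = 20 * np.log10(np.sqrt((pressure ** 2).mean(axis=0)))

print(dict(zip(freqs.tolist(), listening_window.round(2).tolist())))
```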
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,684
Likes
241,195
Location
Seattle Area
I am now super interested to see the MB42X. I had those for a few years and never understood the high praise. The pioneers in the same price range sound leagues better imo (Amir may need to review that too).
I have bought the Pioneers as well.
 

spacevector

Addicted to Fun and Learning
Forum Donor
Joined
Dec 3, 2019
Messages
553
Likes
1,003
Location
Bayrea
The RB42 did not sound bright to me at all, and I don't think the measurements match up with the listening impressions in this review w.r.t. brightness.
I just read the review you linked, it does say: "We’re hearing lots of treble here. Too much. Subjectively, it’s not bad and some (especially with older ears) will prefer it."

So they did sound bright to you?

I do agree that the decapitated panther may be a bit too harsh for the product. Perhaps the shrugging panther is better suited - especially considering the price.
 

dwkdnvr

Senior Member
Joined
Nov 2, 2018
Messages
418
Likes
698
To me the most glaring problem with the RB42 is its loss of directivity control around 5 kHz, where the reflections sound too bright relative to the direct sound.

That does seem to be the most glaring problem - the bass hump isn't ideal but not likely to be as objectionable. Makes me very curious about the Dayton C-Note kit. Supposedly it uses more or less the same drivers, but the C-Note adds a waveguide on the tweeter, which should make the response >2.5k very smooth and uniform. $100/pr, although out of stock for a while.
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,146
Likes
8,718
Location
NYC
I think the reason for the LS50s being so close to the Miccas is pretty simple from the data.

The weighting of components is:
  • Narrow-band deviation of the on-axis curve (NBD_ON): 31.5%
  • Bass extension (LFX): 30.5%
  • Narrow-band deviation of the predicted in-room response (NBD_PIR): 20.5%
  • Smoothness of the PIR (SM_PIR): 17.5%
The Micca has better NBD_ON and very slightly better NBD_PIR, but the LS50 has significantly better SM_PIR. The SM_PIR is proportionally less important than the other factors though, so the Micca ends up quite close.
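
For anyone who wants to play with the numbers, here's a minimal sketch of the published Part II model those weightings come from (coefficients as printed in Olive's paper; this is an illustration, not necessarily the exact implementation behind the scores in this thread):

```python
import math

def olive_preference_rating(nbd_on, nbd_pir, lfx, sm_pir):
    """Predicted preference per Olive's published Part II model.

    nbd_on  - narrow-band deviation of the on-axis response (dB)
    nbd_pir - narrow-band deviation of the predicted in-room response (dB)
    lfx     - roughly log10 of the low-frequency extension frequency (Hz)
    sm_pir  - smoothness (regression r^2) of the predicted in-room response
    """
    return 12.69 - 2.49 * nbd_on - 2.73 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Toy example: modest deviations, ~50 Hz extension, fairly smooth PIR -> score around 5.4
print(round(olive_preference_rating(nbd_on=0.4, nbd_pir=0.3, lfx=math.log10(50), sm_pir=0.8), 2))
```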

Micca Breakdown:

LS50 breakdown:

Bass extension is nearly the same for both, so it doesn't really matter whether we include or ignore LFX.

It's important to note that the formula does allow manufacturers to balance on-axis errors with "fixes" in the off-axis sound. By the nature of the formula, the very best speakers will have both good directivity and good direct sound, because that leads to the smoothest on-axis and PIR; but lower in the preference range, there's a fair bit of room for balancing the direct and off-axis sound. You can make a "pretty good" speaker with excellent on-axis and decent PIR, or one with decent on-axis and excellent PIR. To be truly great, you need both.

So we can argue about why the formula might have issues, but based on the information we have, it doesn't surprise me that the Micca isn't far behind.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
We don’t know the gradient of the scoring: how much better is a 6 vs. a 6.5 vs. a 7? It’s likely exponential/logarithmic in nature (matched to listener-described preference ratings). So I wouldn’t argue about why two speakers are within ~0.25 of each other; it would be more appropriate to ask why a speaker scored much lower/higher than others that measured somewhat similarly. People likely think the LS50 sounds better than the Harbeth, and the LS50 indeed scores better than it, so that’s all good; if you think the Harbeth should be closer in score to lower-scoring speakers, that’s a discussion worth having.

I think the scoring gradient is close enough to linear (just imagine how you would rate the sound quality of speakers). However, there may be a 'contraction bias' in the ratings. On page 111 of this presentation by Olive and Welti, they say due to contraction bias "listeners tend not use the top and bottom of the preference scale". Now this was in relation to headphone preference ratings, but I don't see why this would be much different with speaker ratings. This contraction bias can be seen in the distribution of the reported ratings I posted above, most being above 2 and below 8 (a similar pattern can be seen in pretty much all of Olive's blind listening tests).

They adjusted for this by scaling the reported ratings from 0-100 (they asked for a rating out of 100 for headphones, instead of 10 as we presume Olive chose for speakers). Page 110 of that presentation shows the data and formula before this scaling. Maybe you could do something similar for the speaker preference ratings, scaling them from 0-10? I'm not sure what method they used to scale though - it doesn't look like just a linear scaling to me. Maybe you can work it out.

By the way, I think you mixed up the Harbeth and LS50 - the former scores higher than the latter (5.08 to 4.48 respectively, and 7.49 to 6.61 with an ideal sub).
 

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,556
Location
Land O’ Lakes, FL
I think the scoring gradient is close enough to linear (just imagine how you would rate the sound quality of speakers). However, there may be a 'contraction bias' in the ratings. On page 111 of this presentation by Olive and Welti, they say due to contraction bias "listeners tend not use the top and bottom of the preference scale". Now this was in relation to headphone preference ratings, but I don't see why this would be much different with speaker ratings. This contraction bias can be seen in the distribution of the reported ratings I posted above, most being above 2 and below 8 (a similar pattern can be seen in pretty much all of Olive's blind listening tests).

They adjusted for this by scaling the reported ratings from 0-100 (they asked for a rating out of 100 for headphones, instead of 10 as we presume Olive chose for speakers). Page 110 of that presentation shows the data and formula before this scaling. Maybe you could do something similar for the speaker preference ratings, scaling them from 0-10? I'm not sure what method they used to scale though - it doesn't look like just a linear scaling to me. Maybe you can work it out.

By the way, I think you mixed up the Harbeth and LS50 - the former scores higher than the latter (5.08 to 4.48 respectively, and 7.49 to 6.61 with an ideal sub).
Whoops, meant the Harbeth and Neumann, not LS50.

I can try scaling; they actually altered the formula and the respective weighting of each aspect. I may be able to work out something less destructive (which of course would have to be dynamic as new data is added, especially if a new speaker tops/bottoms out the rankings). There is likely a formula for it (ceiling, trend, rank, etc.). Mine would be a linear scaling, but that looks like what they used anyway; I don't see why you said it doesn't look like it.
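
For what it's worth, here's a minimal sketch of that kind of linear (min-max) rescaling with made-up scores. It also shows why the result has to be dynamic, since any new top or bottom speaker shifts the whole mapping:

```python
def rescale_linear(scores, new_min=0.0, new_max=10.0):
    """Linearly map the current min..max of the scores onto new_min..new_max."""
    scores = list(scores)
    lo, hi = min(scores), max(scores)
    return [new_min + (s - lo) * (new_max - new_min) / (hi - lo) for s in scores]

# Hypothetical preference ratings (not the actual computed scores)
ratings = {"Speaker A": 2.1, "Speaker B": 4.5, "Speaker C": 5.1, "Speaker D": 6.8}
rescaled = rescale_linear(ratings.values())
print({name: round(x, 2) for name, x in zip(ratings, rescaled)})
# A new best or worst speaker changes lo/hi and therefore every rescaled value,
# which is why this suits a completed data set better than a growing one.
```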
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,595
Location
Philadelphia area
I just read the review you linked, it does say: "We’re hearing lots of treble here. Too much. Subjectively, it’s not bad and some (especially with older ears) will prefer it."

So they did sound bright to you?

I do agree that the decapitated panther may be a bit too harsh for the product. Perhaps the shrugging panther is better suited - especially considering the price.
Objectively? As Amir's measurements show, we do have a few extra dB of treble. My (much less professional) measurements showed this as well.

Subjectively? I certainly didn't find them as face-meltingly bright as Amir did. I guess it depends on how you feel about those few dB... "bright" is a subjective term; everybody can make up their own mind about how much extra treble equals "bright."

In hindsight I think my perspective in the review was slightly skewed by comparing them to the Pioneer BS22 which are a bit dark. That's something I'll be correcting if I do future reviews.

FWIW, Micca's representative passed this along to me with regards to the treble. Note that he says this applies to the "first production batch" and he wrote this to me back in March 2019. Whether or not there have been tweaks to the design (something Micca is known to do) in the meantime is unknown. So YMMV. As you can see from my photos the crossover is extremely easy to access, so I suppose it would be easy to compare a more recent set with my photos to see if there were revisions.

If you want to lower the output of the tweeter, there is an easy way to do it if you have a soldering iron handy. This first production batch is level matched by reducing one of the resistors to 0-ohms. You can see this in your second crossover picture you posted, where there is a long solder drop next to one of the 4.0uF capacitors. This solder drop shorts out the resistor. Removing this solder drop short will decrease the tweeter output by about 3dB. You are welcome to give this a try, though we don't recommend this for everyone.
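
To put rough numbers on that ~3 dB figure: with the short removed, the previously bypassed resistor goes back in series with the tweeter branch, and for a purely resistive load (a simplification; real tweeter impedance varies with frequency, and the actual resistor value here is unknown) the loss is just a voltage divider, 20·log10(Z/(Z+R)):

```python
import math

def series_resistor_attenuation_db(load_ohms, series_ohms):
    """Voltage-divider loss from a series resistor feeding a (simplified) resistive load."""
    return 20 * math.log10(load_ohms / (load_ohms + series_ohms))

# Hypothetical values: a nominally 6-ohm tweeter branch with a ~2.5-ohm series resistor
print(round(series_resistor_attenuation_db(6.0, 2.5), 2))  # about -3 dB
```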
 

JustIntonation

Senior Member
Joined
Jun 20, 2018
Messages
480
Likes
293

As we can see clearly in the plot above, it's not so much the woofer starting to beam (with no waveguide on the tweeter to match) that hurts the directivity. It is mainly edge diffraction producing the uneven horizontal response. This affects the vertical response as well, and on top of that the center-to-center (CTC) distance is too large for such a high crossover point, so the cancellation axis comes in at a relatively small off-axis angle.
This is pretty standard in such designs: too high a crossover (needed for most 1" or smaller tweeters, especially one this cheap) and no edge roundover.
Fairly easy to fix in a design:
1: Add a waveguide/horn to allow a lower crossover and a larger woofer, and match directivity (not a good option in my personal opinion, as waveguides make speakers way too dark off-axis for my taste).
2: Get a bigger or better tweeter and cross lower, with as close a CTC as you can get.
3: As big a roundover as possible.
2+3 get my vote in a 3-way or 2-way + sub(s), though I'm not aware of any cheap commercial products that do this well.
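
To put a rough number on that CTC point: the first vertical cancellation falls roughly where the path-length difference between the two drivers reaches half a wavelength, i.e. where d·sin(θ) ≈ λ/2 (a far-field, equal-level simplification; the spacing and crossover below are made-up values, not measured RB42 figures):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def first_null_angle_deg(ctc_m, crossover_hz):
    """Off-axis angle where the path difference between two drivers reaches half a wavelength."""
    wavelength = SPEED_OF_SOUND / crossover_hz
    return math.degrees(math.asin(wavelength / (2 * ctc_m)))

# Hypothetical numbers: 0.12 m center-to-center spacing, 3 kHz crossover
print(round(first_null_angle_deg(0.12, 3_000), 1))  # roughly 28 degrees off axis
```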
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,595
Location
Philadelphia area
That does seem to be the most glaring problem - the bass hump isn't ideal but not likely to be as objectionable. Makes me very curious about the Dayton C-Note kit. Supposedly using more or less the same drivers, but the C-Note uses a waveguide on the tweeter which should make the response >2.5k very smooth and uniform. $100/pr, although out of stock for a while.

I think you're thinking of Dayton's MKxxx series of speakers, which appear to use the same woofer as the Micca RB42. The tweeter on Dayton's speakers is superficially similar but I don't know if it's actually the same. The Miccas certainly have a more elaborate crossover, for whatever that's worth.

The C-Note kit speakers have completely different drivers: a 1" dome tweeter instead of 0.75", and a 5" aluminum-cone woofer.


The C-Notes are absolutely wonderful. At $100/pair, I have zero hesitation in calling them the greatest value in the speaker world. That product page has a build video, and the PDF manual lists the tools you'll need, if that's something anybody's interested in tackling once they're back in stock.

Flat response, great dispersion characteristics, and strong output down to the 40 Hz range. I find their sound very similar to the JBL 305s, perhaps a bit mellower and perhaps with wider dispersion. To me their only real downside is that (like the 305s) they don't get super loud and their max output might not satisfy in a larger room.

There's nearly $85 worth of drivers in a pair of the C-Notes; it's almost like you're getting the cabinet and crossover parts for free.
 

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,556
Location
Land O’ Lakes, FL
Maybe you could do something similar for the speaker preference ratings, scaling them from 0-10?...Maybe you can work it out.
Here's what I got:
[Attachment: preference ratings rescaled to 0-10]


This type of ranking was proposed earlier; my comment regarding it is that it could confuse many into thinking the Neumann is the best speaker out there and the 104 the absolute worst, whereas in actuality it's a dynamic ranking based on the current data.
It's good for a completed data set, not one that's continually added to.
 

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
21,621
Likes
21,900
Location
Canada
Hot dang. That KEF LS50 is looking better all the time in Canada!
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
Is the "Measured Preference Rating" the score the test subjects actually awarded?

Yes. Below is what Olive said about the lower 0.86 correlation for the final formula (applied to all the speaker tests) compared to the initial formula (applied only to one listening test), which had a higher correlation coefficient of 0.995 (~1, i.e. almost perfect):

Our final limitation is the accuracy of our subjective data. The model from Test One produced a correlation of 1. Test One was very tightly designed with 13 sessions to control contextual effects. Our model’s correlation slipped to 0.86 when applied to our listening test data gathered from 18 independent tests. We believe that the lower correlation is, in part, due to differences in how well the listening tests were controlled. The lack of a calibrated and anchored preference scale across the 18 tests most likely resulted in contextual effects and other associated scaling errors in the listener ratings.

Here's the correlation graph for the first formula applied to Test One:

[Attachment: correlation plot for the Test One model]


This may look great, but consider the speaker sample size is only 13, compared to 70 for the final formula. Plus I believe all these 13 speakers were stand-mounted designs, whereas the full 70 included all different types and sizes of speakers, so the first formula may not work as well for other speaker types.

@MZKM this might open up a can of worms, and be a lot more work, but I think it may be useful to calculate the preference ratings for the speakers tested so far using Olive's first formula for Test One, and compare those scores with the ones already computed using the final formula. If the scores are closely matched, I think this would be further evidence that they're correct, as the first formula had a higher correlation coefficient, and was made using data from stand-mounted speakers, which all of the ones Amir has tested so far are. (This would be even better evidence for the scores being correct if Olive had highlighted the speakers from Test One in the correlation plot for all the speakers, and these highlighted speakers were close to the regression line. Alas he didn't, so we don't know exactly how closely the initial and final formulas' preference ratings should match. I think the fact they both show high correlation with actual ratings suggests they should be fairly close though.)
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
Here's what I got:
View attachment 48704

This type of ranking was proposed earlier; my comment regarding it is that it could confuse many into thinking the Neumann is the best speaker out there and the 104 the absolute worst, whereas in actuality it's a dynamic ranking based on the current data.
It's good for a completed data set, not one that's continually added to.

Thanks! Yeah I agree though, this scaling is only useful for complete data, or at least a data set that has a solid low and high anchor (Amazon Echo Dot and the JBL M2 / @amirm's Revel Salon 2 perhaps? ;))

I think the rating scaling works OK as it is really, as long as you think of a 10 on the scale not as the best speaker currently in the world, but as the best sound reproduction theoretically possible, i.e. audibly indistinguishable from actually hearing the music live. And in the same vein, a 0 would be no correlation with the music at all (so just silence or white noise). Maybe a 1 would be a phone speaker :D
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,684
Likes
241,195
Location
Seattle Area
Here's what I got:
View attachment 48704

This type of ranking was proposed earlier; my comment regarding it is that it could confuse many into thinking the Neumann is the best speaker out there and the 104 the absolute worst, whereas in actuality it's a dynamic ranking based on the current data.
It's good for a completed data set, not one that's continually added to.
From a presentation point of view, we need to eliminate the negative ratings. I have one speaker that I can test that hopefully will be the negative anchor, and we can then offset from that...
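
A trivial sketch of that kind of anchoring, with hypothetical scores (the idea is just to shift everything up by the lowest, anchor score so nothing goes negative):

```python
def offset_from_anchor(scores):
    """Shift all scores so the lowest ('negative anchor') score becomes 0."""
    anchor = min(scores)
    return [round(s - anchor, 2) for s in scores]

# Hypothetical ratings including a negative one from a (bad) anchor speaker
print(offset_from_anchor([-1.2, 0.8, 3.4, 5.0]))  # [0.0, 2.0, 4.6, 6.2]
```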
 