
RSL Outsider II Outdoor Speaker Review

Here2Learn

Active Member
Joined
Jan 7, 2020
Messages
112
Likes
113
I'm confused. I'd get it if you don't agree with the review's conclusions -- this speaker's treble seems like it would toast my ears off -- but I'm not really sure what part of the measurements you're objecting to.

As you said yourself, outdoors you hear more of the direct sound. So then, why not just look at the data for the direct sound? Amir hasn't removed that from the spin or anything. I can look at the spinorama and still get an idea of how it might sound outdoors. I focus more on the listening window, and use the off-axis data to get an idea of how the speaker might sound if I move off axis. Moreover, the 70 angles comprising the spinorama data are available for download, and you can use those to more reasonably estimate how it might sound in a particular setup.
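For anyone who does download those angle curves, here's a minimal sketch (mine, not the site's tooling) of how the CTA-2034 listening window could be assembled from them. The curve labels and the energy-averaging convention are my assumptions about the data format, not the actual export:

```python
import numpy as np

def spatial_average(curves_db):
    """Average several SPL curves (in dB) on an energy basis,
    which is how CTA-2034 forms its spatial averages."""
    curves = np.asarray(curves_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (curves / 10.0), axis=0))

def listening_window(spin):
    """Listening window = average of on-axis, +/-10 deg vertical,
    and +/-10/20/30 deg horizontal curves.

    `spin` maps angle labels to SPL arrays; the label strings here
    are assumptions about how the exported curves are named."""
    labels = ["On-Axis", "10V", "-10V",
              "10H", "-10H", "20H", "-20H", "30H", "-30H"]
    return spatial_average([spin[label] for label in labels])
```

A quick sanity check on the averaging: identical input curves must come back unchanged, and the energy average of two flat curves must land between them.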

So what's the problem? The only data that are a 'sim' of indoor conditions are the early reflections and predicted in-room response curves. Feel free to ignore those if you'd like. The rest of the info, even the early reflections components, are still useful.

It's also worth remembering that our hearing, to a significant degree, adjusts for the space we are in. The research shows that the best speakers are generally preferred regardless of the room they are placed in -- large, small, asymmetrical, what have you. Speakers sound different outdoors than indoors, yes, but not that different, considering the direct sound is already perceptually dominant. It's kind of like hearing a speaker nearfield vs. farfield.

And yes, some might still want to use an outdoor speaker indoors, like some might want to use a studio monitor in their living room. If it's good, it's good!

Not the measurements -- the juxtaposition of panther type, where a poorer objective result sometimes yields a better panther based on personal preferences. If it's just the objective data and a consideration of cost (point score per buck), then there's no personal bias in it.

As somebody else said, a broken-head panther can potentially have a serious effect on a company's sales, as can one of the upper-tier panthers. If those panthers reflect personal bias on a review site that is supposed to be wholly objective, then it isn't wholly objective.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,722
Likes
241,586
Location
Seattle Area
I clarified earlier that expressing a personal opinion is fine and warranted. Selecting the panther by it, since it is subjective, is another matter. Yes, I'd like to see the panther somewhat reflect the preference score in terms of VFM, similar to @MZKM's charts on his website that can show this. Points of performance score per buck are totally objective. You can still give an opinion, but I'd prefer it didn't influence the panther selected.
At the risk of stating the obvious, there is no measurement that produces a panther rating. The panther is my sum total subjective opinion of the product I am testing. I weigh everything from measurements to build, features and listening test results to give a mark. This rating cannot and will not lend itself to objective scoring.

Remember there are things I test for such as power handling, the sound at the limit, variability with seating location, ability to EQ, etc. which is not reflected in measurements as posted yet they are very important characteristics in a speaker.
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,595
Location
Philadelphia area
Here's a good start on what is important beyond the 100-1000 Hz FR. https://www.audiosciencereview.com/...ew-on-harman-blind-speaker-testing-system.33/ Poke around here. Also, @MZKM, where's a good link to how the scores are generated?
Right, I'm familiar with that. :) I offered my thought in response to the question of: "why don't calculated preference scores always line up with Amir's listening experiences and preferences?"

One obvious answer, that almost doesn't need saying, is that those preference scores are based on aggregate user preferences and aren't meant to exactly predict a particular individual's preferences.

Beyond that, I think Olive and company would agree that the model isn't perfect. It is groundbreaking, and I think quite successful, but hopefully it does not represent the final word on the matter. Hopefully, we can discover even more about what makes things sound good to people.

If I am not mistaken, Olive et al. developed that model using a relatively small selection of speakers, perhaps not covering the full spectrum of FR, dispersion, distortion at various SPLs, etc. produced by real-world speakers. So that is maybe an area for improvement.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,722
Likes
241,586
Location
Seattle Area
As we discussed at length in another thread, the Olive paper ends with a bunch of work for the future but none materialized in public. And Harman itself doesn't seem to be using the score/metric for their own use.
 

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
If I am not mistaken, Olive et al. developed that model using a relatively small selection of speakers, perhaps not covering the full spectrum of FR, dispersion, distortion at various SPLs, etc. produced by real-world speakers. So that is maybe an area for improvement.
It was 70 speakers; but nowadays, thanks in part to the work of Olive/Toole, there are a lot more well-measuring speakers out there that don't cost a fortune. Their sample of speakers didn't include any good-sounding wide-directivity speakers, and as such any speakers that are get scored lower than what actual preference may be.
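For context, the regression at the heart of those preference ratings is small enough to write out. The coefficients below are the commonly cited full-model ones from Olive's 2004 paper; the real work is computing the input metrics (NBD, LFX, SM) from a spin, which this sketch deliberately leaves out:

```python
def olive_preference(nbd_on, nbd_pir, lfx, sm_pir):
    """Predicted preference rating per Olive's 2004 regression model.

    nbd_on  - narrow-band deviation of the on-axis curve
    nbd_pir - narrow-band deviation of the predicted in-room response
    lfx     - log10 of the low-frequency extension point in Hz
              (deeper bass -> smaller value -> higher score)
    sm_pir  - smoothness (r^2) of the predicted in-room response
    """
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir
```

Note that LFX carries the largest coefficient, which is why bass extension dominates the score, and why wide-directivity speakers (which the model's training sample lacked) can be penalized in ways the formula never saw.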
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,722
Likes
241,586
Location
Seattle Area
I think we're getting the two speakers mixed up. It's ok, I did that yesterday, too. They look the same...

The RSLs are $200/pair
The Focals are $200/each
Oops. You are right. :) Sorry about that.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
As we discussed at length in another thread, the Olive paper ends with a bunch of work for the future but none materialized in public. And Harman itself doesn't seem to be using the score/metric for their own use.

Sean Olive's description of Harman's speaker development process below, from less than a year ago, talking about using correlations between objective measurements and subjective listening-test results developed from his research with Floyd Toole, sounds pretty much like the preference formula in all but name. He says that gets them about 90% there (86% maybe? ;)), and the remaining 10% is controlled double-blind tests with Harman-trained listeners.

 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,722
Likes
241,586
Location
Seattle Area
Sean Olive's description of Harman's speaker development process below, from less than a year ago, talking about using correlations between objective measurements and subjective listening-test results developed from his research with Floyd Toole, sounds pretty much like the preference formula in all but name. He says that gets them about 90% there (86% maybe? ;)), and the remaining 10% is controlled double-blind tests with Harman-trained listeners.
Not at all. What he is saying, and goes on to say right after that, is that measurements like the spinorama tell them how the speaker should be designed: flat on-axis, with directivity that is similar to on-axis. There is no mention whatsoever of computing a single number that tells them whether they have done the job or not.

Even if he meant that, the very fact that listening tests are then performed means they don't put their trust in the number; rather, they expect unknowns that are discovered in listening tests. A number of times, Harman folks have told me a speaker is taking longer to release because they are fine-tuning it post listening tests.

Harman has released countless spinoramas for their speakers. If they have the score, why do you think they have never, ever released that number for any of their current speakers?

Really, my information on what I stated comes directly from Sean himself. Please don't try to counter it by reading between the lines of a YouTube video, for heaven's sake. What you wish to be true isn't. I would love for this number to be real and the end of the story, so as to be done with these endless arguments. But when the facts are not behind me, I can't do it. Neither should you.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
I said preference formula, not score :) But anyway, the score is really just a distillation of the spinorama. If it's a fairly accurate representation of the weighted contribution of the most salient parts of the spinorama, which correlate well with subjective listener preference in controlled blind tests, it doesn't matter whether Harman are using a single such score or a combination of metrics they have found correlate well with preference (a preference formula in all but name).

Sean states the measurements get them 90% there, that sounds like a high percentage of trust to me. Of course that isn't 100%, so it isn't perfect, and I have never claimed (or even wished) it to be. Plus, the metrics and (explicit or implicit) weightings they use now may have been improved upon since Olive's preference formula paper (by an extra 4% perhaps :D). But I don't think it's a coincidence that their highest-end Revel line occupies 4 of the top 5 positions in the ranking of passive speakers measured so far according to the preference rating.

As for that last 10% of fine-tuning based on listener feedback, I'd naturally expect that to sometimes be a slow process, given the myriad of nuisance variables humans bring to the equation, in comparison to the first 90% based on well-rehearsed automated measurements from decades of acoustic research. But the scientifically determined facts indicate the latter still holds the vast majority of the weight in determining a good-sounding speaker.
 
Last edited:
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,722
Likes
241,586
Location
Seattle Area
RSL Outsider is now $199/pair! And free shipping to the continental USA
I could swear that when I wrote the review, it was per each. They must have changed that after my review.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,722
Likes
241,586
Location
Seattle Area
Sean states the measurements get them 90% there, that sounds like a high percentage of trust to me.
Measurements do. Not any kind of single value score. That is precisely why I post the measurements.

But anyway, the score is really just a distillation of the spinorama.
Well, why don't you do the distillation for a few spins and tell us what the score would be. I suggest wearing something stone proof before you state your answers. :)
 

BYRTT

Addicted to Fun and Learning
Forum Donor
Joined
Nov 2, 2018
Messages
956
Likes
2,454
Location
Denmark (Jutland)
We get spoiled again with pro data; thanks, Amir, for publishing another interesting acoustic review.

.....Temperature was 68 degrees.....

In the name of some research for low-end reach :p, and if my account is still okay ("read septic tank" :mad:), never go even one degree lower than that :)..

.....The tweeter on this speaker is offset to the right. I tried to find it but had no luck even when using a flashlight. So I used a reference point that is in the middle of the speaker and to the right of the logo.....

Thanks, that looks close enough to the tweeter axis. In the radar plot of the CAD software, you're spot on at 5 kHz in the verticals; the asymmetric miss in the horizontals is probably the missing baffle gain on the right side relative to the tweeter position. The radars below are set to 2 kHz, because in the software that looks to be the crossover point (the manufacturer spec is 2.5 kHz, and the radar tendency there is close to the same as at 2 kHz): multiple non-symmetric lobes that don't point exactly straight forward, so the acoustic slopes are probably out-of-phase types and maybe non-symmetric.

Tweeter axis.png


So far, I think it's interesting to notice that for the two reviews of speakers using grilles with small punched holes, the curves get a bit jagged with a uniform spread. The three examples below are of the outdoor category, but one did not have a grille mounted when analyzed.
Grill_comparison.png
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England
Beyond that, I think Olive and company would agree that the model isn't perfect. It is groundbreaking, and I think quite successful, but hopefully it does not represent the final word on the matter. Hopefully, we can discover even more about what makes things sound good to people.

As it stands we can only find out "about what makes things sound good to" Amir and not very accurately because he's only giving speakers a quick listen in less than optimal conditions.
 

carlosmante

Active Member
Joined
Apr 15, 2018
Messages
211
Likes
162
So here's another example of poor FR and directivity, plus really insipid measured bass (-6 dB at 55-60 Hz), yet subjectively it makes good noise. What can we learn from this?
That our enjoyment of Music and Sound is Not a "Linear System".
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,595
Location
Philadelphia area
As it stands we can only find out "about what makes things sound good to" Amir and not very accurately because he's only giving speakers a quick listen in less than optimal conditions.
Yes, absolutely. Even if he gives them a long, detailed listen, it's still anecdotal, because it's just a single listener in an uncontrolled environment. It is not a replacement for, or an extension of, previous controlled studies.

Still, I do find a lot of value in it and I'm very appreciative!
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
Measurements do. Not any kind of single value score. That is precisely why I post the measurements.

And I'm very grateful you do post all the measurements, as well as the raw data for others to analyze, which is a unique benefit of this site. As I said, though, Sean may not use a single score, but I'm sure an acoustic scientist like him would do something more reliably quantifiable than just eyeballing graphs when developing and evaluating speaker performance, like using a set of metrics at least similar to the ones in the preference formula. (I'm also a big fan of @MZKM's 'breakdown' radar plot of these metrics, which is why I suggested he include them in his score posts, so readers get a more detailed idea of the speaker's abilities than a single score can provide.)

Well, why don't you do the distillation for a few spins and tell us what the score would be.

I think combined with your excellent measurements, @MZKM is already doing a brilliant job of that ;)
 
Last edited: