
Sony SS-CS5 3-way Speaker Review

Tim Link

Addicted to Fun and Learning
Forum Donor
Joined
Apr 10, 2020
Messages
771
Likes
659
Location
Eugene, OR
Very interesting review for me, since I own a couple of pairs of these and find them easy and enjoyable to listen to at low to moderate listening levels, and with subwoofer support. I do hear the brightness, but it doesn't strike me as piercing or harsh at all. More light and airy; perhaps wispy is the word.
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,595
Location
Philadelphia area
I love the work Amir is doing and I support ASR with a Patreon subscription. (I questioned his conclusion on these speakers, but IMO that sort of thing is part of a healthy and respectful community)
Soon the ASR speaker measurement database (including Amir's measurements, yours, and @hardisj's), backed by the efforts of @MZKM, @edechamps and @pierre, will become more relevant than Olive's. The collective knowledge and experience of this forum's users is large enough to challenge any AES research made so far. As @amirm and you two will be testing more and more speakers, IMHO we will soon have enough accurate data to draw our own conclusions, and we'll be able to challenge any research that doesn't fit with it. ;)
Absolutely not.

The value of Olive's research is the large sample size of listeners who evaluated the speakers under blind, controlled conditions. These preferences were then correlated with Spin-o-rama measurements.

ASR's work is much different. The spin measurement part is essentially the same, but even if Amir or somebody increases the sample size of speakers and tests 10,000 speakers in this way... we are still talking about sighted, uncontrolled listening impressions by a single listener. Such a thing would have a lot of value but could never expand upon or replace any of the things you're talking about.

ASR's work helps to keep manufacturers honest and it helps consumers to know which gear is performing well at a technical and objective level. That is extremely valuable! But, it's not research that helps us to understand listener preferences (well, besides Amir's personal preferences) or audio reproduction itself.
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,146
Likes
8,717
Location
NYC
@edechamps @MZKM @QMuse

I've gone ahead and digitized the spinoramas for all 13 speakers from Olive Paper 1/Test 1 should y'all want to mess around with that data. Zip attached.

You can import all 13 .mdats at once by dragging them into REW, or see the individual text files per curve in the 'text' folder. For some reason VituixCAD exported the files with phase information, which you can obviously ignore.

Also tagging @amirm as an FYI since I know you were looking more into test one.
 

Attachments

  • Olive Test One 13 Spinoramas.zip
    1.9 MB · Views: 112
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,663
Likes
241,004
Location
Seattle Area
The value of Olive's research is the large sample size of listeners who evaluated the speakers under blind, controlled conditions. These preferences were then correlated with Spin-o-rama measurements.
There is A LOT more to the study than this. I know I am guilty of summarizing it this way in the past. :) But it is not that simple. A set of speakers was compared against each other in the listening tests, so in that sense the scores given were relative to the set. I have taken the test myself, as I have reported before, and what I did there is not what I am doing now, since I am not sitting through a 4-way comparison.

Importantly, each listener preference was converted to a numerical value, which has inherent errors in it. I ran into this as I tried to score speakers during the Harman tests: I started with one set of numbers and, as the listening tests went on, I regretted the scaling I used.

There are other considerations which I will detail at some future date.
 

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,556
Location
Land O’ Lakes, FL
I've gone ahead and digitized the spinoramas for all 13 speakers from Olive Paper 1/Test 1 should y'all want to mess around with that data. Zip attached.
No PIR data though, and that needs all 70 measurements to be calculated...
 

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,556
Location
Land O’ Lakes, FL
You're not able to calculate it by weighting the resulting LW/ER/SP curves?
Not like that; it took me a few minutes to figure it out (I think). You first have to take the data and convert it from dB to pressure via:
e^( SPL · (ln(2) + ln(5)) / 20 ) / 50000

or directly to squared pressure via:
2^(SPL/10 − 8) · 5^(SPL/10 − 10)
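For reference, both expressions are just the standard dB-SPL conversion with a 20 µPa reference pressure (ln 2 + ln 5 = ln 10, so the first is 10^(SPL/20)/50000 and the second is 10^(SPL/10)/2.5×10⁹). A quick Python sketch of my own (function names are mine, not MZKM's code):

```python
import math


def spl_to_pressure(spl_db):
    """Convert dB SPL to pressure in pascals (reference 20 uPa).

    10^(SPL/20) / 50000 is the same as 20e-6 * 10^(SPL/20).
    """
    return 10 ** (spl_db / 20) / 50000


def spl_to_pressure_squared(spl_db):
    """Convert dB SPL directly to squared pressure (Pa^2).

    2^(SPL/10 - 8) * 5^(SPL/10 - 10) is the same as 10^(SPL/10) / 2.5e9.
    """
    return 2 ** (spl_db / 10 - 8) * 5 ** (spl_db / 10 - 10)


# Sanity check: 94 dB SPL is very close to 1 Pa.
print(round(spl_to_pressure(94.0), 3))  # -> 1.002
```

The second function is handy because CEA-2034A's in-room estimate averages squared pressures, so you can skip the intermediate pressure step.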
 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,336
Likes
6,705
Well, let's look into your mind first. Give a score of 1 to 10 to this speaker and defend the value you picked. Then sample someone else to give us a score 1 to 10 just as well and defend that.

I'm sure I would do much worse than you do, as I'm much younger, with much less experience and much less training. You do your listening before seeing the scores that MZKM releases, which is something I'm a huge fan of, but it makes it much more difficult to keep your view in line with the science. I don't doubt that you really disliked this speaker, but I'm curious why. The science says this is one of the best-value speakers yet, but subjective opinion strongly disagrees. What's causing that difference?

I truly believe that this site and your research will one day be comparable in importance to Olive and Toole's work, and that's why I'm here.
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,146
Likes
8,717
Location
NYC
Not like that, took me a few minutes to figure it out (I think); you first have to take the data and convert it via:
e^( (SPL * (ln(2)+ln(5) ) / 20 ) ) / 50000

CEA-2034A says this:
"The Estimated In-Room Response shall be calculated using the directivity data acquired in Section 5 or Section 6. It shall be comprised of a weighted average of 12 % Listening Window, 44 % Early Reflections, and 44 % Sound Power. The sound pressure levels shall be converted to squared pressure values prior to the weighting and summation. After the weightings have been applied and the squared pressure values summed, they shall be converted back to sound pressure levels."
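In other words, the weighting happens in the squared-pressure domain, not on the dB values directly. A rough Python sketch of just that weighting step (the function name is mine, and the LW/ER/SP values at each frequency are assumed to be already computed from the full measurement set):

```python
import math


def estimated_in_room_response(lw_db, er_db, sp_db):
    """CEA-2034A Estimated In-Room Response at one frequency point:
    12% Listening Window + 44% Early Reflections + 44% Sound Power,
    averaged as squared pressures, then converted back to dB.

    Because the weights sum to 1.0, the pressure reference cancels and we
    can work with 10^(dB/10) directly instead of absolute Pa^2 values.
    """
    weighted = (0.12 * 10 ** (lw_db / 10)
                + 0.44 * 10 ** (er_db / 10)
                + 0.44 * 10 ** (sp_db / 10))
    return 10 * math.log10(weighted)


# If all three curves agree at a frequency, the estimate equals them:
print(round(estimated_in_room_response(80.0, 80.0, 80.0), 6))  # -> 80.0
```

Averaging the dB values directly would give a different (incorrect) result whenever the three curves diverge, which is presumably the trap MZKM is pointing at.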
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,146
Likes
8,717
Location
NYC
I've moved the discussion of Olive Paper 1 to MZKM's score thread since we've been quite off topic for a while. But for those following along, I was able to find an old copy of Consumer Reports to identify the speakers used in Paper 1 here.
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,663
Likes
241,004
Location
Seattle Area
Would it be possible to ask Olive about this possible transposition?
I asked a while back about some other issues we were having with his formula. He stopped answering me at that point. In person he said his interest was all about headphones, not speakers these days. So while someone else can try, I don't think we will get cooperation from him.
 

thewas

Master Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
6,901
Likes
16,909
I asked a while back about some other issues we were having with his formula. He stopped answering me at that point. In person he said his interest was all about headphones, not speakers these days.
Quite disappointing behaviour from a scientist who publishes AES papers; it also leaves a bitter aftertaste, as if he doesn't want to face the flaws of his older work.
 

QMuse

Major Contributor
Joined
Feb 20, 2020
Messages
3,124
Likes
2,785
Trying to build a speaker scoring model purely based on measurement data but no controlled, rigorous subjective listening data is like trying to use pure math to understand the universe without running any physics experiments: it's interesting as a thought exercise, but it won't survive contact with the real world.

Very true. What I am hoping for is that the subjective part of the testing will improve. :)
 

Biblob

Addicted to Fun and Learning
Joined
Sep 13, 2018
Messages
635
Likes
603
May I suggest that the discussions of these last couple of pages get moved to their own thread? They don't really seem related to the Sony speaker, but rather to the preference rating.

I find them to be full of worthwhile information that shouldn't get lost in an 'unrelated' thread in the future.

Edit: @Thomas savage
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,595
Location
Philadelphia area
Maybe an 80-20 split or something like that. It can tip the review a bit, but not too far. That way a speaker which measures very well for the price and has a decent preference score (an estimate of a crowd vote) doesn't end up with its head cut off.
I think the objective and subjective scores should be completely separate. I liked how Amir rated DACs and amplifiers, giving (sometimes) subjective comments but ranking them based almost solely on measurements.

Of course, it's the subjective listening experience that matters, not the measurements. But for speakers, the subjective experience depends heavily on the listener and the room itself. This Sony speaker is a perfect example: it measures decently, quite a bit of effort was put into the xover, and many people, including experts, like it. But somebody skimming ASR and not reading 16+ pages of comments would think this speaker belongs in the garbage bin.

Could just be something like:

Measurement Score: B+ (or 4.8, or whatever)
Amir's Enjoyment Score: (picture of a broken panther)

...those are clumsy names, obviously we can do better, that's just a quick example
 

ROOSKIE

Major Contributor
Joined
Feb 27, 2020
Messages
1,936
Likes
3,525
Location
Minneapolis
I think the objective and subjective scores should be completely separate. I liked how Amir rated DACs and amplifiers, giving (sometimes) subjective comments but ranking them based almost solely on measurements.

Perfect idea. 2 scores.
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,595
Location
Philadelphia area
How do you propose to determine this score/number?

A few ideas come to mind. I'm sure there are 100 more.

Idea one. You already give out informal "scores" (the panther score and the "recommended" or "not recommended") for DACs and amplifiers almost entirely based on measurements. The speaker "measurement score" could be kind of a simple "pass/fail" or "recommended/not-recommended" based on the predicted in-room response having no major anomalies.

Idea two. Like idea one, but instead of a simple "recommended" it could be "recommended for on-axis listening" or "recommended for off-axis listening" or both (or neither).

Idea three. We use the formula from Olive's paper that MZKM currently calculates and posts. I know that formula is not perfect but it is decent.

One thing I think you mentioned elsewhere was worrying about contradictions - "Hey, this speaker measures well but I don't recommend it." That kind of thing.

I don't think that would look silly. For example, a lot of studio monitors measure flat but aren't always "enjoyable." We know that good measurements typically = enjoyment but that doesn't mean it's always true.

Above all else... thank you for your hard work.
 