
Sony SS-CS5 3-way Speaker Review

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,399
@andreasmaaan
I have this thread where I go over the basics of the scores for anyone wanting a quick overview (with a link to the patent as well). The AES paper is also freely viewable and downloadable, and it is a bit less confusing in some places (for example, the patent says the Sound Power curve "may be used" for LFX, whereas the AES paper says it "is used", so it's not an optional aspect).
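
For reference, this is the preference model as it is usually quoted from the AES paper; a minimal sketch in Python (the function and argument names are mine, not the paper's notation):

Code:
def olive_preference_rating(nbd_on, nbd_pir, lfx, sm_pir):
    # Commonly cited form of Olive's model:
    #   nbd_on  - narrow-band deviation of the on-axis curve (dB)
    #   nbd_pir - narrow-band deviation of the predicted in-room response (dB)
    #   lfx     - log10 of the low-frequency extension in Hz, read off the sound power curve
    #   sm_pir  - smoothness (r^2 of a regression line) of the predicted in-room response
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir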

Thanks @MZKM.

Was that the link to the AES paper that you intended to post? It relates to headphones. I would have suggested these two loudspeaker papers instead:
It is interesting that none of those have a PIR slope hitting -1.75, let alone past that.
He must have used a good deal of speakers with horns.

As you note, it would seem very odd if that were the case. Olive states in the second AES paper:
Test One includes mostly 2-way designs whereas the larger sample includes several 3-way and 4-way designs that tend to have wider dispersion (hence smaller negative target slopes) at mid and high frequencies.

Hardly seems that he is talking about horns here.
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,111
Likes
8,446
Location
NYC
I hope it is something trivial. For example, if you swap these 2 rows it would immediately start looking more acceptable, as PIR would be the same as ER and SP would be higher.

View attachment 65533

I never mentioned it because it hadn't come up here before, but I have basically assumed those values were accidentally swapped ever since I first read the paper. It is essentially impossible for PIR to have a steeper negative slope than the sound power across 70 speakers, lol. These target slopes are based on measured values of actual speakers, so the values presented are obviously swapped.

And there's a reason Toole always used ER as a quick analog to PIR. The curves are usually so similar that the PIR approaches redundancy.
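
For context, the predicted in-room response is typically derived as an energy-weighted blend of the listening window, early reflections and sound power curves (commonly 12% / 44% / 44%), which is why its slope typically lands between the ER and SP slopes. A minimal sketch, assuming the three curves are numpy arrays in dB on a shared frequency grid:

Code:
import numpy as np

def predicted_in_room(lw_db, er_db, sp_db):
    # Energy-weighted blend of the CTA-2034 curves; the 0.12/0.44/0.44 split is the
    # commonly used weighting, converted to power, summed, and converted back to dB.
    return 10 * np.log10(0.12 * 10**(lw_db / 10)
                         + 0.44 * 10**(er_db / 10)
                         + 0.44 * 10**(sp_db / 10))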
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,399
I never mentioned it because it hadn't come up here before, but I have basically assumed those values were accidentally swapped ever since I first read the paper. It is essentially impossible for PIR to have a steeper negative slope than the sound power across 70 speakers, lol. These target slopes are based on measured values of actual speakers, so the values presented are obviously swapped.

And there's a reason Toole always used ER as a quick analog to PIR. The curves are usually so similar that the PIR approaches redundancy.

It's amazing, actually: when I previously plotted these slopes to help visualise them for myself, I apparently accidentally/unconsciously used the ER value for both ER and PIR in my graphs:

WhatsApp Image 2020-04-05 at 23.38.23.jpeg


I didn't even notice I'd done it until going back just now to look at the graphs, thinking it was strange that I hadn't noticed anything was amiss before.

Seems I also mis-plotted SP, but in a way that looks kind of plausible.
 

infinitesymphony

Major Contributor
Joined
Nov 21, 2018
Messages
1,072
Likes
1,806
Now we're getting somewhere. Seems like any preference formula should be refined over time. Perhaps eventually we'll have enough data to ask, "What were some of your favorite speakers in the past?" Then use the metrics from those to produce recommendations suited to an individual listener's preferences.
 

QMuse

Major Contributor
Joined
Feb 20, 2020
Messages
3,124
Likes
2,785
Loudspeaker Explorer can already do that :) Just select the top 10 speakers (from @MZKM's spreadsheet) and go to the table under "Olive Preference Score", "Slope", "Calculation". It will show a table with the b values:

View attachment 65524

And from there the mean can be obtained with just one additional line of code:

Code:
speakers_slope_b.mean()
View attachment 65525

These look quite different from the target slopes in the paper! It's a very different sample though. Also note that Loudspeaker Explorer doesn't (yet) attempt to "fix" the slightly wrong weights of the ER and PIR curves in the original Klippel data, and the wrong weights do affect slopes from what @MZKM has shown, so take this with a grain of salt.
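
For anyone wanting to reproduce a slope b outside the notebook, a minimal sketch, assuming the curve is available as numpy arrays and the slope comes from a least-squares line fit of level in dB against log frequency (the function name and the 100 Hz-16 kHz default band are illustrative assumptions):

Code:
import numpy as np

def slope_b(freq_hz, level_db, f_lo=100, f_hi=16000):
    # Least-squares line fit of the curve in dB against log10(frequency);
    # the returned b is in dB per decade over the chosen band.
    band = (freq_hz >= f_lo) & (freq_hz <= f_hi)
    b, _intercept = np.polyfit(np.log10(freq_hz[band]), level_db[band], 1)
    return b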

With all due caveats, I still think this table is a great summary of the ASR measurements done so far.

Capture.JPG
 

QMuse

Major Contributor
Joined
Feb 20, 2020
Messages
3,124
Likes
2,785
It's amazing, actually: when I previously plotted these slopes to help visualise them for myself, I apparently accidentally/unconsciously used the ER value for both ER and PIR in my graphs

Ehh... We're starting to assume this was a row swap, and that does seem logical. But whenever I do that, I can't help remembering what my father used to say to me: "Remember, my son: if you look hard enough, at the root of every f*ckup you will find a wrong assumption." ;)
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,111
Likes
8,446
Location
NYC
Ehh... We're starting to assume this was a row swap, and that does seem logical. But whenever I do that, I can't help remembering what my father used to say to me: "Remember, my son: if you look hard enough, at the root of every f*ckup you will find a wrong assumption." ;)

What other explanation is there, though? Unless the data was entered completely incorrectly for two rows, it seems the only possible conclusion. The slopes for Test One are in there too, and Test One includes spins for every speaker measured. Every single speaker has a more negative SP than ER in that one. Indeed, it would take a very unusual speaker for that not to be the case. =]
 

QMuse

Major Contributor
Joined
Feb 20, 2020
Messages
3,124
Likes
2,785
What other explanation is there, though? Unless the data was entered completely incorrectly for two rows, it seems the only possible conclusion. The slopes for Test One are in there too, and Test One includes spins for every speaker measured. Every single speaker has a more negative SP than ER in that one. Indeed, it would take a very unusual speaker for that not to be the case. =]

The problem with that logic is that if we can't see another explanation, it doesn't mean one doesn't exist - it only means we can't see it.

I do however sincerely hope it was a simple data row swap.
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,111
Likes
8,446
Location
NYC
The problem with that logic is that if we can't see another explanation, it doesn't mean one doesn't exist - it only means we can't see it.

I do however sincerely hope it was a simple data row swap.

Fair enough, though again, it is clear from looking at paper 1. =] I suppose if someone wanted to bother, they could digitize the data and recalculate those target slopes to see whether it was a swap or an input error. The target slopes were determined from the top 90th percentile of speakers, which for Test One would be the top 2 speakers, maybe 3.
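
A minimal sketch of that recalculation, assuming the digitized per-speaker slopes and preference ratings end up in a pandas DataFrame (the column names are illustrative, not from the paper):

Code:
import pandas as pd

def target_slopes(df: pd.DataFrame) -> pd.Series:
    # Keep the speakers at or above the 90th percentile of preference rating,
    # then average their slopes, mirroring how the target slopes are described.
    top = df[df["preference"] >= df["preference"].quantile(0.9)]
    return top[["ER_slope", "SP_slope", "PIR_slope"]].mean()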
 

QMuse

Major Contributor
Joined
Feb 20, 2020
Messages
3,124
Likes
2,785
Fair enough, though again, it is clear from looking at paper 1. =] I suppose if someone wanted to bother, they could digitize the data and recalculate those target slopes to see whether it was a swap or an input error. The target slopes were determined from the top 90th percentile of speakers, which for Test One would be the top 2 speakers, maybe 3.

Soon the ASR speaker measurement database (including Amir's measurements, yours, and @hardisj's), backed by the efforts of @MZKM, @edechamps and @pierre, will become more relevant than Olive's. The collective knowledge and experience of this forum's users is large enough to challenge any AES research done so far. As @amirm and you two test more and more speakers, IMHO we will soon have enough accurate data to draw our own conclusions and to challenge any research that doesn't fit them. ;)
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
With all due caveats, I still think this table is a great summary of the ASR measurements done so far.

View attachment 65607

The thing that I find quite encouraging is that, with the rows properly swapped, these mean b slope values from the top 10 ASR speakers match the Olive target slopes quite well in terms of ER, PIR and SP. That's hardly a surprise since that top 10 is derived from the score itself, but it does speak to the internal consistency of Olive's approach. Good news about the Olive model for once…
 

QMuse

Major Contributor
Joined
Feb 20, 2020
Messages
3,124
Likes
2,785
The thing that I find quite encouraging is that, with the rows properly swapped, these mean b slope values from the top 10 ASR speakers match the Olive target slopes quite well in terms of ER, PIR and SP.

I fully agree - IMHO the correlation between your numbers and Olive's is more than encouraging. :cool:
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
Soon the ASR speaker measurement database (including Amir's measurements, yours, and @hardisj's), backed by the efforts of @MZKM, @edechamps and @pierre, will become more relevant than Olive's. The collective knowledge and experience of this forum's users is large enough to challenge any AES research done so far. As @amirm and you two test more and more speakers, IMHO we will soon have enough accurate data to draw our own conclusions and to challenge any research that doesn't fit them. ;)

It's not that simple, sadly. If you want to challenge the Olive model or build your own model, you will need plenty of measurements and plenty of reliable blind test results (to compare/calibrate the model against ground truth). The second part is not optional. @amirm is solving the "plenty of measurements" problem (and that's amazing), but without solid subjective listening data similar to what Olive collected in his studies, you won't get far. Sadly, such subjective listening data is very expensive/time-consuming to produce because of the sheer logistics involved - arguably even harder than measuring speakers.

Trying to build a speaker scoring model purely based on measurement data but no controlled, rigorous subjective listening data is like trying to use pure math to understand the universe without running any physics experiments: it's interesting as a thought exercise, but it won't survive contact with the real world.

The only way I can think of to kinda-sorta work around that problem when trying to build a new model is to test/calibrate the new model against the Olive model, by making the assumption that the Olive score accurately reflects the reality of listener preference for at least some known "well-behaved" speakers. But that's really more of a hack/bandaid than anything else, and such a model would stand on shaky ground for sure.
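
A minimal sketch of that kind of calibration, assuming the candidate metrics and the Olive scores for a set of "well-behaved" speakers are already available as arrays (the names are illustrative, and this is exactly the hack described above, not a validated method):

Code:
import numpy as np

def fit_against_olive(metrics, olive_scores):
    # metrics: array of shape (n_speakers, n_metrics); olive_scores: shape (n_speakers,).
    # Least-squares fit of candidate-model weights so its output tracks the Olive score,
    # i.e. treating the Olive score as a stand-in for real preference data.
    X = np.column_stack([metrics, np.ones(len(metrics))])  # add an intercept term
    weights, *_ = np.linalg.lstsq(X, olive_scores, rcond=None)
    return weights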
 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,335
Likes
6,700
@edechamps @MZKM, how about adding another field for amir's subjective panther rating? It could be an int, maybe 0-4 or 1-5, for the 4 different panther ratings he gives (headless, shrugging, lounging, golfing). I realize it's subjective data, but it's still data, and it might be fun to start correlating the other metrics, like ER, PIR, and SP, with amir's personal preference. Proper blind tests would be best to correlate with, but we don't have those yet, so maybe this is better than nothing, or maybe just for fun? I'd be willing to categorize them all for you (0-4 or 1-5) manually if you want.
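
A minimal sketch of what that correlation could look like, assuming the rating gets added as an integer column next to the existing slope columns (the file and column names are hypothetical):

Code:
import pandas as pd

# Hypothetical export of the measurement spreadsheet with a manually entered 0-4 panther rating.
df = pd.read_csv("asr_speakers.csv")
cols = ["ER_slope", "SP_slope", "PIR_slope", "panther_rating"]
# Spearman rank correlation is a reasonable choice for an ordinal 0-4 rating.
print(df[cols].corr(method="spearman")["panther_rating"])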
 

QMuse

Major Contributor
Joined
Feb 20, 2020
Messages
3,124
Likes
2,785
It's not that simple, sadly. If you want to challenge the Olive model or build your own model, you will need plenty of measurements and plenty of reliable blind test results (to compare/calibrate the model against ground truth). The second part is not optional. @amirm is solving the "plenty of measurements" problem (and that's amazing), but without solid subjective listening data similar to what Olive collected in his studies, you won't get far. Sadly, such subjective listening data is very expensive/time-consuming to produce because of the sheer logistics involved - arguably even harder than measuring speakers.

Trying to build a speaker scoring model purely based on measurement data but no controlled, rigorous subjective listening data is like trying to use pure math to understand the universe without running any physics experiments: it's interesting as a thought exercise, but it won't survive contact with the real world.

The only way I can think of to kinda-sorta work around that problem when trying to build a new model is to test/calibrate the new model against the Olive model, by making the assumption that the Olive score accurately reflects the reality of listener preference for at least some known "well-behaved" speakers. But that's really more of a hack/bandaid than anything else, and such a model would stand on shaky ground for sure.

I fully agree. Yet once the measurement database is filled with enough samples to matter, who knows what the future will bring? If you close your eyes and start dreaming, maybe even an ASR-organised blind speaker test, checking how well preference lines up with the measurement data, is not too far-fetched. ;)
 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,335
Likes
6,700
Absolutely not. Treating the subjective ranking from a single listener's informal, sighted, uncontrolled listening experience as "data" is anathema to everything I stand for.

I want to add that I do agree with you here. I think people put far too much weight on his subjective opinion (it's what makes up 90% of the discussion). The reason I asked is that I completely disagree with Amir's lack of a recommendation for this Sony SS-CS5, which measured very well for any price point, let alone for a $78 speaker. In terms of value, this speaker is at worst the second-best speaker measured to date, and I guess I'm just curious to gain more insight into Amir's mind.
 
Last edited:
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,376
Likes
234,550
Location
Seattle Area
I want to add that I do agree with you here. I think people put far too much weight on his subjective opinion (it's what makes up 90% of the discussion). The reason I asked is that I completely disagree with Amir's lack of a recommendation for this Sony SS-CS5, which measured very well for any price point, let alone for a $78 speaker. In terms of value, this speaker is at worst the second-best speaker measured to date, and I guess I'm just curious to gain more insight into Amir's mind.
Well, let's look into your mind first. Give this speaker a score of 1 to 10 and defend the value you picked. Then ask someone else to give us a score of 1 to 10 as well and defend that.
 