
SVS Ultra Bookshelf Speaker Review

aarons915

Addicted to Fun and Learning
Forum Donor
Joined
Oct 20, 2019
Messages
686
Likes
1,142
Location
Chicago, IL
Well, you don't want to go there because I can tell you from experience that trained listeners care far less for superficial aspects of audio products than everyday audiophiles. And I firmly fall in the "don't care" category compared to someone who has bought a new toy, or lacks the experience in any kind of audio evaluation.

That's not the case whatsoever. I have never said that. What I have said is that the factors that bias me are not nearly as strong as what biases you all. So you can't be dismissive of my results out of hand that way.

I haven't seen anywhere that says the Harman training makes them less biased, it simply makes them more reliable and faster at rating speakers in blind testing. From Sean Olive's blog (http://seanolive.blogspot.com/2009/04/dishonesty-of-sighted-audio-product.html) he talks about the famous Blind vs Sighted study and says:

"The experienced listeners were simply more consistent in their responses. As it turned out, the experienced listeners were no more or no less immune to the effects of visual biases than inexperienced listeners."

I personally think sighted bias has more to do with the type of person someone is, we've all seen the rabid fanboys of certain brands in some of these owner's groups around various forums. Those are the type who need to do blind testing for sure but if you regularly buy different brands of speakers or shoes, cars, etc I would bet your bias won't be as much of a factor. This doesn't mean we discount your opinion but I think many of us place more value in the measurements.
 
Last edited:

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,993
Likes
6,853
Location
UK
Regarding the score of this speaker… its two immediate neighbours (with sub) are the KEF R3 and the Revel M106, which have similar scores and similar ON/PIR breakdowns. And if we look at the spinoramas side-by-side, I would tend to agree with the score in the sense that it's not really obvious that one of these three speakers is better than the other:

View attachment 76296
View attachment 76297
View attachment 76298
View attachment 76299
They do look very similar and inoffensive. On directivity: how do you look at those graphs and determine how good the directivity is? Do you look at the two bottom curves (Early Reflections DI & Sound Power DI)? Is it where they end up at 20 kHz that's important, i.e. they're all ending up at around 10 & 5 dBr, so from that you would say they all have the same directivity? Or do you look for places on the Early Reflections curve (3rd curve) that do not match the ON in terms of direction of trend, for instance where the Early Reflections curve is rising while the ON is going down? What's the best way of interpreting these graphs to assess directivity brilliance, or lack thereof?

EDIT: answered my own question. I found this link that describes what to look for in the graphs (https://www.audiosciencereview.com/...mkii-and-control-1-pro-monitors-review.10811/ ); seeing as it's the first speaker ever reviewed, it contains more detailed explanations of what to look for. You want the Early Reflections DI curve to be smooth (no kinks), which is basically saying that the Early Reflections follow the general trend of the LW (listening window), which in turn means the Early Reflections don't deviate greatly in tone/character from the direct sound (LW); the reflections therefore enhance the sound rather than conflicting with it, and hence it will sound better. In short: directivity quality is described by the smoothness of the Early Reflections DI curve.
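For anyone who wants to poke at this numerically, here's a rough sketch of the idea. The function names and the "straight-line trend" definition of smoothness are my own illustration, not anything from the CTA-2034 standard: the DI is just listening window minus early reflections, and a "kinky" DI shows up as a large deviation from its overall trend.

```python
import numpy as np

def directivity_index(lw_db, er_db):
    """DI in dB: Listening Window curve minus Early Reflections curve."""
    return np.asarray(lw_db, float) - np.asarray(er_db, float)

def di_smoothness(freq_hz, di_db, lo=100.0, hi=12000.0):
    """RMS deviation (dB) of the DI from its linear trend on a log-frequency
    axis. Lower = smoother = the ER curve tracks the LW more closely.
    (A made-up metric for illustration, not the standard's definition.)"""
    f = np.asarray(freq_hz, float)
    d = np.asarray(di_db, float)
    band = (f >= lo) & (f <= hi)
    x = np.log10(f[band])
    slope, intercept = np.polyfit(x, d[band], 1)
    residual = d[band] - (slope * x + intercept)
    return float(np.sqrt(np.mean(residual ** 2)))
```

A perfectly smooth, steadily rising DI scores near zero; add a 3 dB kink anywhere in the band and the number jumps, which matches the visual "no kinks" rule of thumb.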
 
Last edited:

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,555
Location
Land O’ Lakes, FL
Looking forward to the LRS measurements. So bored of boxy bookshelf speakers; it'll be nice to see something different.
I can only imagine it will look erratic:
D3AE363D-8F22-4B04-AD4D-01D716B48E13.jpeg

44795C0A-9E56-4FC7-985D-3F867CCA4044.jpeg

Here’s what Toole says about the MartinLogan above:

I am really looking forward to the polar plots though.
 
Last edited:

hardisj

Major Contributor
Reviewer
Joined
Jul 18, 2019
Messages
2,907
Likes
13,914
Location
North Alabama
You absolutely should care.

The implied statement is that they are trained listeners. I'm just making the point that no one needs to shout at me for using your name. Any trained listener would fit in place of any other trained listener's name.

Again, I'm not targeting *you*. I'm simply making the point about weighing one person's opinion over an entire science. This is the entire reason you purchased the NFS; so we wouldn't have to just go off opinions. Unfortunately, in some cases it seems the opinion is being weighted more than the data without us taking the time to understand the discrepancies. Thus my "baby with the bathwater".

And to be abundantly clear: I am not targeting you, your ears, or your credentials. We could sub Jim-Bob's name for Amir. Or Bobby-Sue. And now I'm imagining Amir with shoulder-length, curly hair. Great. :D
 
Last edited:

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,250
Likes
17,186
Location
Riverview FL
Here’s what Toole says about the MartinLogan above:


I am really looking forward to the polar plots though.


Hmm...

I'm sure the polars will be ugly, but I listen on-axis.

MartinLogan - was $4500 list, paid less, vs JBL LSR 308, both speakers active (left and right) on-axis at listening position, no EQ, 1/24th smoothing, at 10 feet, at a comfortable for both level:

1596385066916.png



MartinLogan vs an M2, with EQ on both (different house curve, though), in different rooms, left and right speakers individually measured on both, at their respective listening positions:

1596386289780.png


So.
 
Last edited:

ROOSKIE

Major Contributor
Joined
Feb 27, 2020
Messages
1,936
Likes
3,525
Location
Minneapolis
The more sensible conclusion from the mismatch between measurements and Amir's impression is that Amir might have limitations and reliability issues, not the method. He does mention that he might have bias towards Revel before every review of their speakers, so I choose to go with that explanation rather than some mysterious unknown objectively affecting actual performance ;)
Of course Amir has limitations and biases, there are zero humans that do not.
However he also has preferences, there are zero humans that do not.

Draw a line wherever you want to; the data only take us so far right now. If they did not, we would all buy speakers with zero need to audition them. I don't feel much if any need to audition amps, DACs and such, but speakers? H@ll no, I definitely want to listen and need to listen.
Beyond more data collection (which may eventually reveal even more guidance), the sensible explanation for me is that there is a subjective element that can never be removed. Sighted, trained, untrained, blind, double-blind or whatever: these systems do need to be auditioned by the interested human, and the differences, however subtle (or gross), between various presentations of the sound reproduction eventually require making a choice.
The data is a guide not a rule.
IMHO here on this site of course the data is quite an accomplished guide, yet it isn't a rule.

Of course not everyone will agree that my take is sensible nor do I have any reason to convince. Just more of my two cents for the sake of enjoying the conversation.
 

ROOSKIE

Major Contributor
Joined
Feb 27, 2020
Messages
1,936
Likes
3,525
Location
Minneapolis
How can a speaker like this get an overwhelmingly negative review, whereas the Polk T15 didn't?
I don't see this as an overwhelmingly negative review.
Quite the opposite as it has great measurements presented.
So what if Amir didn't like it all that much?
The measurements indicate it will sound good to many, like all current speakers it will NEVER work for all. Maybe Amir is in the minority here.
The discussion (and language of the review post) make this possibility clear.
Try it via the return policy.
 

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,555
Location
Land O’ Lakes, FL
Hmm...

I'm sure the polars will be ugly, but I listen on-axis.

MartinLogan - was $4500 list, paid less, vs JBL LSR 308, both speakers active (left and right) on-axis at listening position, no EQ, 1/24th smoothing, at 10 feet, at a comfortable for both level:

View attachment 76308


MartinLogan vs an M2, with EQ on both (different house curve, though), in different rooms, left and right speakers individually measured on both, at their respective listening positions:

View attachment 76311

So.
The measured in-room response versus the predicted in-room response is something I would like to see for an electrostatic.
 
Joined
Jun 13, 2020
Messages
57
Likes
76
I don't see this as an overwhelmingly negative review.
Quite the opposite as it has great measurements presented.
So what if Amir didn't like it all that much?
The measurements indicate it will sound good to many, like all current speakers it will NEVER work for all. Maybe Amir is in the minority here.
The discussion (and language of the review post) make this possibility clear.
Try it via the return policy.

While I don't disagree with YOUR interpretation, it's becoming clear that others jump right to the subjective remarks as the end truth.

Comparatively, these are fine enough speakers for their price point, but I've never seen Amir make a distinction on price point. Comparing this review with the Polk T15 and Micca RB reviews, you'd think the Polks were surprisingly good and the Miccas were trash.
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
Quoting my earlier post because I feel like you're looking to start a fight that isn't there and that I've already addressed. Bolding the part that matters the most so we don't keep going rounds on this.



It's not about Amir anymore than it would be about *anyone* providing the subjective data. Even if it were myself. I would not advise anyone to simply take my word over the data. Again, reference the bolded.

Peace.

I do not see the bolded text that you refer to, however the "part that matters" is the part that I will quote for you below, using italics so as not to appear to be shouting.

You accuse me of "looking to start a fight", which of course is your opinion, but I think it is a bit bizarre in light of the fact that it was you who dredged up a fight that has already consumed far too much energy on Amir's AudioScienceReview site/forum. To your way of thinking you did not make it specifically about Amir, and you have written subsequent posts where you said plainly that it isn't about Amir specifically. Is this consistent with what you actually wrote? Here is what you wrote, in response to richard12511's lament:

" ... have you considered that Amir simply isn't the trained listener it's assumed he is? No disrespect, and lord knows I don't need a scolding and paragraphs on Amir's listening sessions with Harman listed as reference, but let's be real here: you are throwing out the baby with the bathwater by taking Amir's personal subjective reaction in a sighted test over some people's near-lifetime of work to quantify preference based on measurements."

When I read richard12511's lament, my inclination was to point out that science is never guaranteed to be perfect, and that this is something that all good scientists know and accept. Had I responded to it I would also have mentioned that scientists frequently take exception to the accepted theory and that this is an accepted part of the scientific process.
 
Last edited:

Xyrium

Addicted to Fun and Learning
Forum Donor
Joined
Aug 3, 2018
Messages
574
Likes
493
Not bad overall. The FR is a little uneven, but the distortion is actually very, very good. In fact, it appears superior to the Revel M106 (below 500 Hz, of course):
https://www.audiosciencereview.com/...e-distortion-thd-audio-meaurements-png.70821/

It appears that these would be reasonably simple to integrate with subs from a distortion perspective, for those fearful of crossing over above 80 Hz. You would just need to deal with some of those resonances above, and the peak just below, the 1 kHz range.
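As a concrete illustration of the kind of crossover being discussed, here is a sketch of a 4th-order Linkwitz-Riley high-pass at 80 Hz, built the usual way as two cascaded 2nd-order Butterworth sections. The 80 Hz corner, 48 kHz sample rate, and LR4 order are my assumptions for the example, not anything specified in the review:

```python
import numpy as np
from scipy import signal

def lr4_highpass(fc_hz: float, fs_hz: float) -> np.ndarray:
    """Second-order sections for a Linkwitz-Riley 4th-order high-pass.

    An LR4 filter is two identical 2nd-order Butterworth filters in
    cascade, giving -6 dB at the corner frequency so that the summed
    sub + speaker response is flat through the crossover.
    """
    sos_butter2 = signal.butter(2, fc_hz, btype="highpass",
                                fs=fs_hz, output="sos")
    return np.vstack([sos_butter2, sos_butter2])  # cascade = LR4

# Speaker-side high-pass for an 80 Hz crossover to a subwoofer:
sos = lr4_highpass(80.0, 48000.0)
w, h = signal.sosfreqz(sos, worN=8192, fs=48000.0)  # w in Hz
```

The defining sanity check is that the magnitude response sits at about -6 dB at 80 Hz and is essentially flat well above the corner, which keeps the speaker out of the region where its distortion rises.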
 

jhaider

Major Contributor
Forum Donor
Joined
Jun 5, 2016
Messages
2,871
Likes
4,667
And if we look at the spinoramas side-by-side, I would tend to agree with the score in the sense that it's not really obvious that one of these three speakers is better than the other:

View attachment 76296

Look at DI. KEF clearly measures head-and-shoulders better than the other two (as in, KEF is the Gherkin and the other two are two-level single-family homes), with a very smooth composite (horizontal + vertical) DI, and KEF is more directional from 3-10 kHz. Revel measures better than SVS as well, with less dispersion disruption and similar directivity.

Therefore, in terms of measured performance that I consider, the relative performance ranking is clear:
KEF >>>>>>>>>>> Revel > SVS.

People with a strong preference for wider directivity may have a different preference ranking if they listen to the three speakers in succession.

Comparatively, these are fine enough speakers for their price point,

You don't think that attention to dispersion through the crossover is necessary at $1000/pr? NHT offers a similarly-sized 3-way with superb acoustic engineering at the same price (C3). Revel offers a similarly-sized 6.5" 2-way with attention to dispersion through the crossover for less (M16). So what's the excuse here?
 

napilopez

Major Contributor
Forum Donor
Joined
Oct 17, 2018
Messages
2,146
Likes
8,715
Location
NYC
A few points:
  • I was never quite sure how or why anyone expected to perfectly correlate a score developed under double-blind, controlled conditions with over 200 listeners and 84 percent accuracy to the impressions of a single listener doing brief sighted listening tests. I don't care who the listener is. It could be Floyd Toole or Sean Olive themselves and I wouldn't expect the score to match their listening impressions perfectly, nor that every great-measuring speaker would sound at least good to them.
  • The fact that we know from much research on directivity and reflections that people can have strong preference differences for different directivity patterns should be enough to tell us the score cannot reliably predict preferences for a single listener, only an average.
  • However, the score is still a bit useful, because as far as I can tell, it's probably at least as likely to give a good recommendation as any written review. The fact that, overall, Amir tends to like good-scoring speakers is about all that we could have really asked for.
  • It's also worth noting that for some of the speakers Amir has tested, I've had completely different opinions. Not just on the 'goodness' of a speaker, where I'm admittedly more positive overall (there are few speakers I've measured that I flat out dislike), but in how a speaker sounds. The Q Acoustics 3020i, for instance, is a speaker I used for months in two homes and three setups and would have never called bright. This is okay too, and doesn't negate any of the value of the spinorama or Toole/Olive research. Some variability was always to be expected over a small sample.
  • The most important part of the Toole/Olive research is emphasizing the relationship of frequency response and directivity characteristics over everything else. The spinorama condenses this substantially. The problem is that many seem to take the score as an authoritative and complete summary for interpreting the spinorama and directivity data, when it's really not. Yes, there's a good correlation between a good spin and a good score. But the score also often misses details that I consider crucial.
  • I read this review and knew immediately it would score over a 7.5. At the same time, my interpretation of the measurements was that this is a good speaker, but the large, broad scoop in the listening window combined with directivity issues (esp in the horizontal plane) don't make the spin itself deserving of being in quite the tier it is. Its horizontal directivity and listening window performance, for instance, is not up to par with the other top performers like the Revel and R3. It seems to average out into excellent performance without actually being quite that good.
  • As @richard12511 pointed out, my biggest issue with the spinorama is the melding of horizontal and vertical data. In practice though, this isn't a huge deal, since these days the most common sources for spinorama-format data are Amir, audioholics, hardisj, and me - and we all provide detailed, separate horizontal and vertical data anyway.
  • Not to toot my own horn, but I do trust my interpretation of the data more than the scores. So does Harman, it seems, considering different authorities there have stated many times they care more about the listening window than the on-axis. And that should be the goal for everyone here - interpreting the data well enough on their own that they don't need a score.
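Since the thread keeps circling around "the score", it may help to see how small the model actually is. The commonly cited form of Olive's 2004 regression can be sketched in a few lines; the coefficients below are the widely published ones, but the input metrics (NBD, LFX, SM) are assumed to be computed elsewhere per the paper's definitions:

```python
def olive_preference(nbd_on: float, nbd_pir: float,
                     lfx_log10_hz: float, sm_pir: float) -> float:
    """Sketch of the Olive (2004) preference rating model as commonly cited.

    nbd_on       -- narrow-band deviation of the on-axis curve (dB)
    nbd_pir      -- narrow-band deviation of the predicted in-room response
    lfx_log10_hz -- log10 of the -6 dB low-frequency extension (Hz)
    sm_pir       -- smoothness (regression R^2) of the PIR

    Coefficients are the published regression values; everything feeding
    them is assumed precomputed per the paper.
    """
    return (12.69
            - 2.49 * nbd_on
            - 2.99 * nbd_pir
            - 4.31 * lfx_log10_hz
            + 2.32 * sm_pir)
```

The point is visible right in the coefficients: only four summary statistics survive, so two quite different spins (say, a broad listening-window scoop versus a smooth one) can land on nearly the same number. That is exactly why the score averages away the details discussed above.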
 
Last edited:
Joined
Jun 13, 2020
Messages
57
Likes
76
You don't think that attention to dispersion through the crossover is necessary at $1000/pr? NHT offers a similarly-sized 3-way with superb acoustic engineering at the same price (C3). Revel offers a similarly-sized 6.5" 2-way with attention to dispersion through the crossover for less (M16). So what's the excuse here?

Yes. There are exceptional speakers in this price bracket. That was never in question.

The NHT are doubly so.

The Revel vs SVS is a MUCH closer call. The comparison has already been made here.

You can argue that one is good and one is very good. But it's a ridiculous notion that one is bad.
 

jazzendapus

Member
Joined
Apr 25, 2019
Messages
71
Likes
150
Btw, I have a challenge proposal for you @amirm that might be fun - how about you listen to the speakers before measuring them and draw a frequency response you think they have (on axis I guess, unless you feel you can approximate off axis as well), and then compare what you drew with what Klippel comes out with.
 

hardisj

Major Contributor
Reviewer
Joined
Jul 18, 2019
Messages
2,907
Likes
13,914
Location
North Alabama
This is patently disingenuous. Whether you did or did not specifically mention the score isn't relevant. What you did do, and ought not try to deny, was to place fault with Amir for sharing an overall, not-fully-objective assessment of the speaker when his assessment was not in sync with the objective measurements. And furthermore, in your original reply you wrote a couple of sentences that put Amir squarely in opposition with Olive, which implicitly makes it about Amir vs. the score. So even though you might not have referenced "the score" explicitly, you did so in an implicit way that for all intents and purposes was as clear as if you had said it explicitly. Which is why I pointed out, correctly and honestly, that what you wrote here is disingenuous. It is.

Never once have I said or implied the score. Not one single time. I am talking about the CTA dataset and its correlation with preference, not a single-value metric, which I already said I am not fond of. You are trying to pit me vs Amir and I won't take part in that. I have respect for Amir and what he is doing here. I only replied to the post where an individual put more emphasis on Amir's subjective assessment than the data without further consideration as to the faults of that notion. Amir just happened to be the reviewer. I don't care who it is. I have made that abundantly clear in numerous posts. The fellow I was replying to understood my concern. I'm not going to continue this unnecessary back and forth.

edit: FWIW, if someone said the same about me I would say “Fair point. That’s why we have data. So we can discuss these differences.” So feel free to sub my name for Amir’s if it makes the point I am trying to make more clear.
 
Last edited:

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
And I don't think it should care about that. Maybe you're not claiming that it should either, but in any event the point of the research was to explore what factors correlate with listener preferences using speakers as designed, as consumers would. What I think might be a much more important limitation is one you identified--the equal weighting of vertical and horizontal dispersion (at least I think that's a feature of the score). I absolutely agree, based only on my listening and designing experience, that horizontal dispersion characteristics are much more important than vertical. But I'm still not sure why. Do you have any insight on this?

There is no question that horizontal and vertical dispersion should not be weighted the same. The question is what the ideal relative weighting for them is. With any attempt to answer that question objectively, the underlying problem becomes immediately apparent: there just isn't any fully objective way to do it. The answer would necessarily reflect the assumptions that were made, and not much more than the assumptions.

And when I ponder this, it causes me to ponder what other assumptions were made in deciding how to weight the off-axis response curves relative to the on-axis response curve. Surely this cannot be done absent assumptions. What the specific assumptions were I do not know, but I know that there had to have been some, and as such, any attempt to reduce the whole thing to a single score, even one concerned only with the room response, is a sort of fool's errand.

Personally, I don't doubt one bit that the off-axis response is very important. But I don't agree with the emphasis on the directivity index. I know from personal experience that, so far as my own preference is concerned, when a choice has to be made between a smooth overall off-axis response that slopes downward and one that isn't smooth but achieves better balance between bass and treble, I much prefer the one with better tonal balance even if its overall response curve isn't very smooth. When I look at the off-axis response curve for a particular speaker, I look for a good overall balance between bass and treble, i.e. for it not to have excessive downward slope, and I pay less attention to the directivity error at the crossover point. This goes right to the heart of the Toole philosophy, and is the reason I have taken the whole business with many grains of salt since I first encountered it, which seems like it was nearly forty years ago.
I suspect that if someone were inclined to challenge this aspect of the Toole philosophy, they would be able to put the whole thing on shaky ground by performing tests using speakers that sound good in spite of having non-smooth off-axis response curves, as opposed to using speakers where it is already known that the smoothness of the off-axis response correlates well with the overall quality of sound.
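One concrete place where the horizontal/vertical mixing happens is the Early Reflections curve itself. Per CTA-2034's component definitions, the floor and ceiling bounces come from vertical off-axis measurements while the front, side, and rear wall bounces come from horizontal ones, so averaging them bakes an implicit plane weighting into the spin. A hedged sketch (the toy data and function names are mine; only the component grouping follows the standard's text):

```python
import numpy as np

def spl_average(curves_db):
    """Average SPL curves in the pressure domain, then convert back to dB.
    (Averaging raw dB values instead would understate louder components.)"""
    pressures = 10.0 ** (np.asarray(curves_db, float) / 20.0)
    return 20.0 * np.log10(pressures.mean(axis=0))

def early_reflections(floor_db, ceiling_db, front_db, side_db, rear_db):
    """ER curve: floor/ceiling bounces are vertical-plane measurements,
    front/side/rear wall bounces are horizontal-plane measurements, so
    the composite implicitly weights both planes."""
    return spl_average([floor_db, ceiling_db, front_db, side_db, rear_db])
```

With two of the five components coming from the vertical plane, a vertical-plane flaw is diluted roughly 2:3 against the horizontal data, which is the kind of baked-in weighting assumption being questioned above.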
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
Interesting. In the same way, truly omnidirectional speakers (MBL, let's say) will have great directivity, but as you said they can impact everything to sound in a similar way (as the Magnepans). For some kinds of jazz, acoustic, liver or orchestral music that's just perfect for my tastes, but maybe not so much for other kinds of music.
Did you ever get to test omnidirectionals at Harman, or are they just too rare of a speaker type to be researched?

Please forgive, for I must say that I too am fond of liver music, especially the liver-with-onions kind. I humbly beg your indulgence.
 