
SVS Ultra Bookshelf Speaker Review

jhaider

Major Contributor
Forum Donor
Joined
Jun 5, 2016
Messages
2,822
Likes
4,514
If this is a 'huge' midrange dispersion disruption, then the Revel M106 has
a 'very large' (but not quite 'huge') midrange dispersion disruption

The measurements between the two are remarkably similar in the CEA format, in the horizontal directivity, and in the horizontal directivity normalized plots.

While I wouldn't consider either one to have smooth directivity, the SVS is considerably worse. Compare beamwidth at 3 kHz referenced to the common eyeball average of 50º. (N.B. the extra bump in the M106's beamwidth after the crossover disruption is due to a tweeter resonance.)

I don't have an opinion on M106, as I've never heard one. (M105 is a very nice speaker.) I also haven't heard this iteration of SVS Ultra, though I have heard a previous one with a different (more expensive, not necessarily better) tweeter and have a pair of the old model's woofers, which were a similar Peerless/Tymphany HDS but with a metal cone.

My only reaction was to assertions that the measurements are anywhere near the Neumann/Genelec/KEF R ballpark. I don't see anything remotely close.
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
To my way of thinking, a small bookshelf speaker needs to be pretty special to earn a $1000/pair price tag. I would put this speaker with the Elac bookshelf speakers that sell for less. The only strong reason I see to buy an SVS speaker (I'm not talking about subwoofers) would be to get a true three-way speaker, for someone who wants one, and of course that means one of the towers.

I think that I may have recently acquired a new skill at recognizing the signature of diffraction effects in the on-axis and off-axis response curves. I suppose this is what I get for getting too deeply involved in the analysis of the GR-Research kit speaker. Given the width of the baffle (the full, outer baffle), the first peak in the diffraction ripple in the on-axis response will occur at about 1.6 kHz. At this frequency the opposite, i.e., a dip, will occur in the off-axis response. At twice this frequency, 3.2 kHz, the on-axis and off-axis responses swap roles. At 1.6 kHz there are other things going on, which makes it difficult to see the effect of diffraction per se. But at 3.2 kHz there is a distinct dip in the on-axis response (which Amir labeled) and a distinct rise in the off-axis response at the same frequency. I think it is all but certain that this is diffraction ripple.

So why do I point this out? Because there is nothing to watch on TV except Ben Stiller. Okay, I wanted to mention that these diffraction effects could have been largely avoided by moving the tweeter to one side so that it isn't the same distance from both side edges. (And the distance to the top edge should be different from the distance to either side edge.) A related observation that may be worth pointing out is that the bevel applied to the corners of the baffle does not seem to have had much benefit in suppressing the diffraction effect. Presumably this is because the bevel isn't sufficiently pronounced in relation to the wavelength (4.2 inches at 3.2 kHz).
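For anyone who wants to sanity-check the arithmetic above, here is a minimal sketch of the simple half-wavelength edge-diffraction model I'm using. The 8.5-inch outer baffle width and the centered-tweeter geometry are my assumptions (neither is stated in this thread), and a real cabinet with bevels and driver offsets will shift these numbers:

```python
# Minimal sketch of the first-order edge-diffraction estimate discussed above.
# Assumptions (mine, not from the thread): tweeter centered on the baffle,
# outer baffle width of 8.5 inches, speed of sound ~343 m/s (~13500 in/s).

SPEED_OF_SOUND = 13500.0  # inches per second, roughly 343 m/s

def first_ripple_peak_hz(baffle_width_in: float) -> float:
    """On-axis peak where the polarity-inverted edge wave arrives half a wavelength late."""
    edge_distance = baffle_width_in / 2.0  # centered tweeter: same distance to each side edge
    return SPEED_OF_SOUND / (2.0 * edge_distance)

width = 8.5                            # assumed outer baffle width, inches
f1 = first_ripple_peak_hz(width)       # ~1.6 kHz: on-axis peak, off-axis dip
f2 = 2.0 * f1                          # ~3.2 kHz: on-axis and off-axis swap roles
wavelength = SPEED_OF_SOUND / f2       # ~4.2 inches: why the modest bevel barely helps here
print(f"first peak ~ {f1:.0f} Hz, role swap ~ {f2:.0f} Hz, wavelength ~ {wavelength:.1f} in")
```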

The best feature of this speaker, so far as I can tell, is that it is clean enough to be properly crossed to a subwoofer a bit below 100 Hz, probably at 80 Hz. This sets it apart from many of the similar speakers Amir has tested, and to my way of thinking it is the one thing that might justify the cost of this speaker.
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,579
Likes
38,280
Location
Gold Coast, Queensland, Australia
SVS's logo looks like a blatant copy of DALI's to me.

[attached image: the SVS and DALI logos side by side]
 

maverickronin

Major Contributor
Forum Donor
Joined
Jul 19, 2018
Messages
2,527
Likes
3,308
Location
Midwest, USA
It is. It came as a challenge to Consumer Reports, which had decided power response was the measure of merit. That made Harman speakers not look so good, so Sean decided to see if he could develop a better metric, and the score we have is that.

There were supposed to be follow-on chapters analyzing the role of distortion, dispersion, etc., but nothing was published.

Wow. I was under the impression they tested it at least once.

Makes me wonder how people here ended up taking it so seriously. You can almost always make a model that fits one particular set of data, and even if it looks totally reasonable it can completely faceplant on a different set of data.

I'll make a distinction between data mining and what Olive did. Modern data mining is more about looking for correlations in raw data and eliciting the existence of correlations. Olive went into the study with a clear idea about what the correlated data were, and what figures of merit to use. The statistical work was finding the factor to apply to these numbers to yield ordering of preference. This is much more conventional statistical analysis than data mining. "Data mining" seems to have become a catch-all phrase, and I would not like to see it applied when there is arguably much more solid backing to the work than just letting a bunch of data mining tools loose over the data to see what turns up. There is much more science here than that.

I get what you mean in that Olive obviously knew where to start looking and didn't just throw all the data into an algorithm looking for any kind of correlation, but I still think the shoe fits, because the best you can ever do by going over old data and looking for correlations is form a new hypothesis, not demonstrate it.
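To make the earlier "fits one dataset, faceplants on another" point concrete, here is a toy sketch (my illustration, not a model of the Olive work): a polynomial with enough free parameters nails the points it was fit to and falls apart on fresh samples from the same underlying relationship.

```python
# Overfitting in miniature: 8 noisy points from a linear relationship,
# fit with a 7th-degree polynomial (enough terms to pass through every point).
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = x_train + rng.normal(0, 0.05, size=8)    # noisy samples of y = x

coeffs = np.polyfit(x_train, y_train, deg=7)       # interpolates the training data

x_test = np.linspace(0, 1, 50)
y_test = x_test + rng.normal(0, 0.05, size=50)     # new samples, same relationship

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(f"train MSE: {train_mse:.2e}, test MSE: {test_mse:.2e}")  # test error blows up
```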
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592


First off, I haven't had the chance to analyze the data, but this reply smacked me upside the head.

This, IMHO, is a significant overreaction to one guy's subjective impressions. You're talking about changing your mindset from believing a science that has reliably predicted preference (to better than a coin flip) to no longer believing it because one person's subjective opinion didn't align a few times? I always listen first, take notes, and then look at the data. In nearly every case I have found points in the data that explained why I heard what I heard. And in the cases where I haven't, either I didn't know what to look for or I heard something that wasn't there. I feel strongly about that.

** Flame suit on... ** I hate to be that guy, but have you considered that Amir simply isn't the trained listener it's assumed he is? No disrespect, and lord knows I don't need a scolding and paragraphs on Amir's listening sessions with Harman listed as reference, but let's be real here: you are throwing out the baby with the bathwater by taking Amir's personal subjective reaction in a sighted test over some people's near-lifetime of work to quantify preference based on measurements. The only people I would trust implicitly when it comes to subjective feedback are the people who made the music: they decided what it was going to sound like going out the door and, hopefully, had a very neutral set of speakers to mix on. I appreciate reading others' takes more as recreation, but certainly not something I would put significant stock in, especially if they can't help me correlate it to the data. **... Flame suit off **

I am not saying it can all be bottled up into a single set of charts *for every speaker*, because there may very well be cases that step outside the norm (the Philharmonic BMR is an example, with its wide horizontal dispersion impacting the predicted in-room response). But I just think you're taking a leap that is too far without trying to look into the reasons. There may be other data points that explain the differences. And maybe this data isn't as good as it may seem at first glance. Or there may simply be the human factor, which no one (myself included) is impervious to.

I think we should not have this same discussion every time Amir's subjective opinion of a speaker seems out of synch with the measurements. I don't think Amir ever said that whenever this happens that everyone should pay attention to his subjective assessment and ignore the measurements. Can you imagine the uproar if he had ever said that? But he didn't.
 

GD Fan

Addicted to Fun and Learning
Forum Donor
Joined
Jan 7, 2020
Messages
932
Likes
1,699
Location
NY, NY USA
Perhaps not germane to this review, but I used SVS Prime Satellites as surrounds for about a year. They caused the worst ear fatigue I've ever experienced. It was just awful. They're still lying around here in a box awaiting their future, which will not be here.
 

hardisj

Major Contributor
Reviewer
Joined
Jul 18, 2019
Messages
2,907
Likes
13,908
Location
North Alabama
I think we should not have this same discussion every time Amir's subjective opinion of a speaker seems out of synch with the measurements. I don't think Amir ever said that whenever this happens that everyone should pay attention to his subjective assessment and ignore the measurements. Can you imagine the uproar if he had ever said that? But he didn't.

I didn’t say he did. I was replying to someone else’s take on the resolution of the discrepancy.
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
I'll make a distinction between data mining and what Olive did. Modern data mining is more about looking for correlations in raw data and eliciting the existence of correlations. Olive went into the study with a clear idea about what the correlated data were, and what figures of merit to use. The statistical work was finding the factor to apply to these numbers to yield ordering of preference. This is much more conventional statistical analysis than data mining. "Data mining" seems to have become a catch-all phrase, and I would not like to see it applied when there is arguably much more solid backing to the work than just letting a bunch of data mining tools loose over the data to see what turns up. There is much more science here than that.

Data mining, big data, and cloud computing are all just catch phrases made popular by people with fetishes for catch phrases. The same even goes for "client-server". Maybe even for "Jim Cramer".
 

confucius_zero

Addicted to Fun and Learning
Joined
Nov 22, 2018
Messages
541
Likes
345
I suspect measurement score will be good, making me look bad. So be it! :)
Test bench update to be more in line with taste?
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
A) Another of my points. He's one guy. With an opinion. Throwing out a whole science based on one guy's opinion is... yea...
B) A well-trained listener, someone who has shown time and again they can identify tones correctly, compression, level changes to within a small degree, etc., wouldn't be affected by this. Of course, there's no one I can think of off the top of my head who I know passes these criteria. I know people whose opinions seem to align with mine. But most of the time I look at the data and skip any subjective impressions other than for recreational use, unless they are specifically trying to tie their subjective commentary to the data in some way, or at least acknowledging areas where things don't seem to align.


I don't want this to be an "Amir can't have an opinion" spiral. For the love of God. I am saying: taking one guy's subjective opinion over decades of science and tossing the science aside is crazy talk. No matter who the "one guy" is. :D

I thought the reason he was becoming disenchanted with the objective process is that the objective process isn't as perfect as he had expected (or possibly had been led to expect). Amir's part in it was only to point out that the objective process isn't perfect. As such, you seem to be arguing that when Amir doesn't agree with the score, the score is right and Amir is wrong. Are you certain of this, and if you aren't certain of this (how could you possibly be...), then what motivated you to spin it the way you did? If you had simply said not to toss out the baby with the bathwater just because the baby isn't perfect (like in the Seinfeld episode...), you wouldn't have said anything controversial.
 

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,240
Likes
11,463
Location
Land O’ Lakes, FL
You can almost always make a model that fits one particular set of data, and even if it looks totally reasonable it can completely faceplant on a different set of data.
That data being 70 different speakers evaluated by a number of listeners.

The main issues are the favoring of narrow directivity, and that the formula is stronger with mid-level performers, as they didn't have too many excellent speakers in the mix; as the saying (roughly) goes: cheap things are getting good and good things are getting cheap.
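For readers who haven't seen it, the Olive (2004) score being discussed is just a linear combination of four spinorama-derived metrics. A sketch, with the coefficients as I recall them from the published paper (treat the exact values as my recollection; the structure is the point):

```python
def olive_preference(nbd_on: float, nbd_pir: float, lfx: float, sm_pir: float) -> float:
    """Predicted preference rating (roughly a 0-10 scale).

    nbd_on / nbd_pir: narrow-band deviations of the on-axis and predicted
    in-room responses; lfx: log10 of the low-frequency extension in Hz;
    sm_pir: smoothness (r^2) of the predicted in-room response.
    """
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir
```

Note that directivity only enters indirectly, through the predicted in-room response terms, which is presumably where the narrow-directivity bias mentioned above creeps in.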
 

Billy Budapest

Major Contributor
Forum Donor
Joined
Oct 11, 2019
Messages
1,810
Likes
2,674
Here's a teardown from a noob.
Oh, this “New Record Day” guy is not a noob. He has been at his YouTube review gig for a few years. He believes all the usual audiophile myths, though. He had a video that in large part focused on his complaints about sand-cast resistors.
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,494
There is not a lot of clearance to unscrew/screw the binding posts, even for my narrow fingers.

Why do companies do this? Like all the time?

I am not trusting the above graph much as it seems to show distortion is higher at lower levels??? I did have to re-calibrate my system so maybe something went wrong there.

Isn't distortion from the source sometimes lower when you output less power anyway?
 

Billy Budapest

Major Contributor
Forum Donor
Joined
Oct 11, 2019
Messages
1,810
Likes
2,674
This was probably discussed ad nauseam, but is there a reason why the listening tests are done after the measurements are taken instead of before?
In this way, you go into a listening session knowing what to expect, at least in part.
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
I didn’t say he did. I was replying to someone else’s take on the resolution of the discrepancy.

I didn't say that you said that. I was only pointing out what he would need to have said in order for all the commotion that we've seen on multiple occasions to be justified. Only if he had told everyone to ignore the measurements and pay attention only to his subjective assessment would all the commotion be justified. For your part, you did make it about Amir, which wasn't necessary. You could just as easily have said, "Don't become disenchanted just because the objective process isn't perfect. Just because the baby isn't perfect isn't a reason to throw out the baby with the bathwater." There wasn't any reason for you to have made it about Amir in the way that you did, unless your intent was to argue that the data is always right and that when Amir doesn't agree with the data, Amir is wrong. If that was your intent, then it was reasonable to make it about Amir the way you did; if it was not, then there was no reason to.
 

bigguyca

Senior Member
Joined
Jul 6, 2019
Messages
477
Likes
617
The rating is not bad, but it looks like with a better crossover (less messy and simpler? Meaning fewer components but of summed specs, and maybe with a higher order) and a little EQ it could seriously work, despite the vertical spin (but don't we usually set the tweeter at ear height when we're doing some proper listening?).


How do you propose to build a higher order passive crossover with fewer components?

Also, a well-designed, say, 4th-order crossover will require better, more tightly controlled parts in both the crossover and the drivers, and a tighter production process, which will add to cost.
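A rough way to see why "higher order with fewer components" can't work for a passive network: the filter order itself puts a floor under the parts count, before you add any impedance-compensation or padding parts. A sketch of that bookkeeping (my illustration):

```python
# An nth-order passive filter needs at least n reactive elements (inductors
# or capacitors) per driver branch, so order alone sets a minimum parts count.
def min_reactive_parts(order: int, driver_branches: int = 2) -> int:
    """Lower bound on L + C count for a passive crossover."""
    return order * driver_branches

for order in (1, 2, 3, 4):
    print(f"order {order}, two-way: at least {min_reactive_parts(order)} reactive parts")
```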

In summary, you seem to be saying that if the speaker were redesigned, it could be good.
 

hardisj

Major Contributor
Reviewer
Joined
Jul 18, 2019
Messages
2,907
Likes
13,908
Location
North Alabama
I didn't say that you said that. I was only pointing out what he would need to have said in order for all the commotion that we've seen on multiple occasions to be justified. Only if he had told everyone to ignore the measurements and pay attention only to his subjective assessment would all the commotion be justified. For your part, you did make it about Amir, which wasn't necessary. You could just as easily have said, "Don't become disenchanted just because the objective process isn't perfect. Just because the baby isn't perfect isn't a reason to throw out the baby with the bathwater." There wasn't any reason for you to have made it about Amir in the way that you did, unless your intent was to argue that the data is always right and that when Amir doesn't agree with the data, Amir is wrong. If that was your intent, then it was reasonable to make it about Amir the way you did; if it was not, then there was no reason to.

Quoting my earlier post because I feel like you're looking to start a fight that isn't there and that I've already addressed. I'm bolding the part that matters most so we don't keep going rounds on this.

Before this goes way off the rails, let's right the ship, because I don't want to have to dig through 5 pages of you defending yourself because my original reply was misconstrued. So, getting back on topic, this was the quote I was replying to:

richard12511 said:
I expected this site to reinforce my beliefs in the science of Toole/Olive, but it's kinda done the opposite. I was the guy over on AVS constantly arguing with the subjectivists who said "you can't tell if a speaker sounds bad just by looking at measurements" or "you can't tell if a speaker is good just by looking at measurements", and it frustrated me to no end how much they seemed to ignore the established science. However, in an odd twist of fate, this site - Audio Science - has kinda, in a way, proved the subjectivists right. Spinorama measurements really aren't sufficient to characterize the quality of a loudspeaker (contrary to what Toole had led me to believe). It really is possible to have a speaker with an excellent spinorama that sounds bad (SVS Ultra), and it really is possible to have a speaker with a terrible spinorama that sounds excellent (Revel M55XC).

This left me with the impression that @richard12511 puts more weight on your opinion than on the data. That, when the two don't align, it's because the data isn't telling the right story.

And that's where I disagree. It can be that a) the subjective impression isn't necessarily accurate, b) we don't have all of the data to make the complete picture, c) we don't understand the data, or d) we're analyzing the data wrong. Or any mix of those, to varying degrees.

I cannot imagine you, @amirm, would want someone reading your data to come to the conclusion that when the two don't align, the subjective opinion is the default answer. That's my argument. It's not that you can't hear shit. It's the mentality of throwing the baby out with the bathwater. That's all.

Erin

It's not about Amir any more than it would be about *anyone* providing the subjective data. Even if it were myself. I would not advise anyone to simply take my word over the data. Again, reference the bolded.

Peace.
 

hardisj

Major Contributor
Reviewer
Joined
Jul 18, 2019
Messages
2,907
Likes
13,908
Location
North Alabama
This was probably discussed ad nauseam, but is there a reason why the listening tests are done after the measurements are taken instead of before?

It has been discussed a lot. Or at least brought up.

I don't personally care how others choose to do it. I have my opinions but it's not something I find necessary to make a fuss over.

I choose to conduct my listening evaluations first because I know myself well enough to know that if I see the data first I am likely to be listening for the things that stood out to me in the data. But that's me. If someone else trusts themselves enough to not be influenced by the data then that's fine. And, hey, maybe we need someone to look for those things so they can report on how it affected their impressions.
 

hardisj

Major Contributor
Reviewer
Joined
Jul 18, 2019
Messages
2,907
Likes
13,908
Location
North Alabama
I thought the reason he was becoming disenchanted with the objective process is that the objective process isn't as perfect as he had expected (or possibly had been led to expect). Amir's part in it was only to point out that the objective process isn't perfect. As such, you seem to be arguing that when Amir doesn't agree with the score, the score is right and Amir is wrong. Are you certain of this, and if you aren't certain of this (how could you possibly be...), then what motivated you to spin it the way you did? If you had simply said not to toss out the baby with the bathwater just because the baby isn't perfect (like in the Seinfeld episode...), you wouldn't have said anything controversial.

I never, not one single time, referenced "the score". Please read my above post.

I didn't even talk about the score until later when I said I don't even look at it because I don't believe a single score should represent performance. A set of data is more important to me than a single value assigned to represent the data. I still don't know what the score for this speaker was. And I don't care enough to find out. ;)


Alright, I think I've explained my original reply enough. Bottom line: if one person doesn't like a speaker but the data suggests it's a well performing speaker, we shouldn't just take the subjective evaluation of "not like" over the data. We need to do more work.
 