
SVS Ultra Bookshelf Speaker Review

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,555
Location
Land O’ Lakes, FL
Spent a bit more time listening and trying to equalize the SVS. Filling in the hole where there is directivity error helps. Pulling down where there are resonances in the waterfall display also helped. But after half an hour of messing with it, I just could not get it to sound great. To make sure I am not in a "bad sound mood," I replaced it with the Revel M106. Oh man. What a relief. The smoothness and balance of this speaker are on another planet.
Others will ask you to use a non-Harman speaker :p
 
  • Like
Reactions: PSO
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,641
Likes
240,754
Location
Seattle Area
Has it? I've seen comparisons to M106, but not the newer and less expensive M16.
I read your post so I decided to switch to the M16 from the M106. It too sounds wonderful and way cleaner than the SVS Ultra.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,641
Likes
240,754
Location
Seattle Area
Others will ask you to use a non-Harman speaker :p
I don't have those anymore to compare. I do have the GR Research. It is not as clear cut here, but I can tell you the highs are pleasant on the GR whereas the ones on the SVS give me a headache. It is definitely more efficient, though, and plays at a significantly higher level.

At this point I am biased against the sound of the SVS enough that it is impacting my judgement.
 

Matias

Master Contributor
Forum Donor
Joined
Jan 1, 2019
Messages
5,072
Likes
10,924
Location
São Paulo, Brazil
I read your post so I decided to switch to the M16 from the M106. It too sounds wonderful and way cleaner than the SVS Ultra.
Amir, for the "resident" speakers, would you mind revisiting them and measuring multi-tone distortion, compression and the other new tests? I know there are other speakers in line but some of the previous ones, like the M16 for instance, are references/benchmarks for the new ones to be compared to. Thanks!
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,641
Likes
240,754
Location
Seattle Area
Amir, for the "resident" speakers, would you mind revisiting them and measuring multi-tone distortion, compression and the other new tests?
Will do once I finalize the tests. I already spent a day running compression tests on all of them, only to change the spec. I am still not very happy with the current test suite. Results are hard to interpret.

The challenge with this task is the annoying noise speakers often make at very high levels, which upsets my wife and the dogs. I am thinking of sticking the speakers in the attic to cut down on the noise. But first I have to clean it out to make room, and second, it is too hot in there right now.
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
I'm not trying to propose a meaningful hypothesis; I'm just asking a question based on an idea I had...

You could similarly dismiss 99.9% of your own (or anyone's) forum posts by this same standard. Very few posts have good data to support them.

Granted, most of us share opinions about ideas for which we have no proof, but none of the opinions I've shared have been nearly as interesting as what you have suggested. Your hypothesis (or whatever you'd rather I call it) is particularly interesting because of the implications. If your hypothesis is true for Amir, it is also likely true for other trained listeners, possibly the majority, possibly all of them. This question has been raised before, but the rebuttal, so far as I could gather, is that if they are trained, they are trained to like what the majority likes.

To prove that there has been no training is inherently difficult, so in its stead we are shown a demonstrable and strong correlation between the preferences of the trained listeners and the preferences of the majority of untrained listeners. Unfortunately, things that are easy to do are generally also easy to do in a way that is less than honest. I'm not suggesting any dishonesty, only pointing out one of the things that good scientists need to be wary of. The establishment of this correlation would necessarily have been done using a particular collection of speakers, which raises the question of whether the speakers selected for the purpose of establishing this correlation were speakers that were known to support the correlation. Again, I'm only pointing out something that good scientists need to be wary of (I'm also ending sentences with prepositions).

More specifically, a speaker that sounded especially good could have been deemed desirable for this purpose if and only if it had a smooth off-axis response, while a speaker that sounded bad could have been deemed desirable if and only if it had a non-smooth off-axis response. This could easily have happened without the people doing it having any conscious awareness that they had done it. At some point, a speaker was identified as a good example of a speaker with an irregular off-axis response that sounded bad. At some other point, a speaker was identified as an example of a speaker with a smooth off-axis response that sounded good. At other points, a couple of other speakers were chosen that weren't extreme cases but were middle-of-the-road cases that fit the premise and the correlation that was believed to exist. Now, I'm not suggesting that something of this sort happened, only pointing out something that they needed to be wary of and might not have been as sensitive to as they should have been. I am only pointing out the uncertainty as to whether the same correlation would have been found if a different set of speakers had been used. In fact, I would almost bet that, given the appropriate resources, I could find a set of speakers that would support a different correlation and a different notion of what people prefer: that people prefer a balance of bass and treble in the off-axis response even if the cost is that the off-axis response resembles a rollercoaster.
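To make that statistical worry concrete, here is a toy simulation (entirely made-up numbers, not real speaker data) showing how hand-picking the sample can manufacture a strong correlation out of a weak one:

```python
# Toy illustration of the selection concern above: two weakly related
# variables, then a "selected" subset that happens to fit the premise.
import numpy as np

rng = np.random.default_rng(0)
smoothness = rng.uniform(0, 1, 500)                       # stand-in for off-axis smoothness
preference = 0.2 * smoothness + rng.normal(0, 0.3, 500)   # weak true relationship

r_all = np.corrcoef(smoothness, preference)[0, 1]

# Keep only speakers that fit the premise: smooth ones that scored well
# and rough ones that scored poorly.
fits_premise = ((smoothness > 0.7) & (preference > 0.25)) | \
               ((smoothness < 0.3) & (preference < 0.0))
r_selected = np.corrcoef(smoothness[fits_premise], preference[fits_premise])[0, 1]

print(f"r over all samples: {r_all:.2f}, r over the hand-picked subset: {r_selected:.2f}")
```

The cherry-picked subset shows a much stronger correlation than the full population, which is the only point I am making here.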

In any case, my point was not so much to be dismissive of your suggestion as to simply say that it would be nearly impossible to prove, and that since you couldn't prove it, there isn't any point to putting it out there.
 

xarkkon

Active Member
Joined
May 26, 2019
Messages
228
Likes
338
I don't have those anymore to compare. I do have the GR Research. It is not as clear cut here, but I can tell you the highs are pleasant on the GR whereas the ones on the SVS give me a headache. It is definitely more efficient, though, and plays at a significantly higher level.

At this point I am biased against the sound of the SVS enough that it is impacting my judgement.
As someone else commented earlier, the most interesting reviews are the ones where the preference rating doesn't tie in with the subjective review. For these to score 8.1 with a sub and subjectively perform worse than the XLS Encore DIY speakers is very interesting!

My main takeaway from this review is how V-shaped this thing looks in the FR, and yet all the YouTube reviewers keep going on and on about how neutral and true these things are... And your review came out on the same day as DMS' YouTube review. Goes to show how wrong some of these purely subjective reviews can be.
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
There's a simpler solution - avoid posting shoddy and poorly substantiated data to begin with.


Because there's a very high likelihood that it's misleading. Are we in agreement that posting misleading info is not a good practice?

A simpler solution to what exactly? I am honestly not following you. And I am not going to answer the question because given the context in which it is asked, I would be implicitly agreeing that Amir has posted misleading information. I have no reason to believe he did. You obviously believe strongly that he did, but is this something you can prove?
 

Ron Texas

Master Contributor
Forum Donor
Joined
Jun 10, 2018
Messages
6,222
Likes
9,343
Here we go, a speaker with a high preference score and no recommendation. Someone please explain this to me.
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
It is not just that. I have no speaker shuffler to replicate Harman tests. Anything less will be criticized just as well if it doesn't agree with someone's agenda. "Oh speakers were not in the same spot. How did you match levels? How about your ears? Aren't you too old? What amplifiers did you use? Maybe they favor one speaker over the other. Where you put the speaker is not where Harman put it so your tests don't work."

My current testing absolutely follows best practices. I listen in mono. Every speaker is in the same spot and I sit in the identical spot. EQ is used to dial out the major room mode, which was demonstrated to be a factor early on.

Wait ... you don't wear that hat do you?
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
The psychological inclination to conform to respected theory can explain this easily. The fact that you listen and give a final grade to speakers after measuring them makes this correlation useless.

As long as I have no certainty that your subjective grading of the speakers is not affected by how well your dinner agreed with you, or your mood, or a million other possible factors unrelated to sound, I see no reason to put any value in those subjective evaluations, and I recommend others do the same. The funny thing is that you know all this very well yourself, but once someone doubts your superhuman listening abilities that obviously nullify all possible biases at once, you get into this half-obnoxious, half-pathetic juggernaut mode, destroying everything in your path along with your own dignity. I think I'm very close to being hit with a banhammer, so I'd better step out of this...


I'm not the one who claimed it's ~70% better than M106 with zero serious evidence.

It is a very sure bet that everyone who reads this forum possesses the intelligence and experience and whatever else is needed to decide for themselves how much stock to place in Amir's subjective assessments. You have no reason to be concerned about people being misled, and there is no reason for you to sound the alarm and recommend to other people that they shouldn't put any value in those subjective assessments. Each person on this forum makes up his or her own mind as to the value of his subjective assessments. The glaring question here, the one that begs to be answered, is why you are on this quixotic quest to warn people not to pay attention to his subjective assessments. Why? What is your purpose? What is it that you are trying to accomplish?
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
Here we go, a speaker with a high preference score and no recommendation. Someone please explain this to me.

Among the possible explanations, one that is conspicuous is that the preference score is an imperfect estimate of a speaker's sound quality. Of course there are other, equally obvious explanations, but I was just concerned that this one might get overlooked. It raises the question of who claimed that the preference score was infallible and why they made that claim. Did Olive say that it is infallible? Or did he provide a statistical argument to the effect that a ranking of speakers based on his algorithm would be very similar to the ranking you would get if a bunch of people rated a bunch of speakers and their ratings were averaged together?
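For reference, and going from memory (so treat the exact numbers as approximate), the commonly cited form of Olive's 2004 model is a plain linear regression on four spinorama-derived metrics:

$$\text{Preference Rating} \approx 12.69 - 2.49\,\mathrm{NBD_{ON}} - 2.99\,\mathrm{NBD_{PIR}} - 4.31\,\mathrm{LFX} + 2.32\,\mathrm{SM_{PIR}}$$

where NBD is the narrow-band deviation of the on-axis and predicted in-room responses, LFX is the (logarithm of the) low-frequency extension, and SM is the smoothness of the predicted in-room response. A regression fit of that kind only claims to track the average rating of a listening panel over the speakers it was trained on; nothing about it implies infallibility for any one speaker or any one listener.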
 

spacevector

Addicted to Fun and Learning
Forum Donor
Joined
Dec 3, 2019
Messages
553
Likes
1,003
Location
Bayrea
Hi amirm, has an EQ been offered up yet to try with this speaker? I glanced through the thread, but it has become rather dense with several tangential posts, so I may have missed it.
 

Alexanderc

Addicted to Fun and Learning
Forum Donor
Joined
Jun 11, 2019
Messages
641
Likes
1,018
Location
Florida, USA
I’ve been reading this thread off and on all afternoon. Without speaking to any individual or addressing any particular issue, I think it is worth remembering what an enormous contribution this forum is making to audio. By “this forum” I naturally mean mostly Amir. It is mostly his sweat, his research, his experience, and his generosity in sharing this with everyone for free that allows us to talk about these things at all. We can only question and commend, argue and congratulate because he invests in the equipment and does the work so we can see what’s there. I’m not saying we should all just assume Amir is always right, period, but sometimes I lose perspective, and I’m sure I’m not the only one. At those times it’s good to take a step back and look from another angle. There is some pioneering happening here and some growing pains as we figure out how best to use the data generated.
 

ROOSKIE

Major Contributor
Joined
Feb 27, 2020
Messages
1,936
Likes
3,525
Location
Minneapolis
I'm not the one who claimed it's ~70% better than M106 with zero serious evidence.
I don't totally disagree with all of your points; however, rather than get into the bits of that rabbit hole, how about this food for thought...

Based on the data collected, and even based on the subjective experiences of Amir, you have every reason to buy both the Revel M106 and the SVS Ultra (maybe even add in an ELAC DBR62; then you'd have a $500 pair, a $1,000 pair and a $2,000 pair). Now listen for yourself at home. Because they both measure well and received a similar Harman score, yet somehow did not satisfy Amir equally in his personal listening sessions, the M106 and the SVS Ultra Bookshelf would make for a great longer-term comparison test and a great excuse for some really fun listening time.
Buy both with a 45-60 day return window and go to town.
This, to me, is the very point of this site: to inspire smart and informed action. It would be very smart to personally try multiple speakers and then settle on a final purchase.
 
Joined
Jun 13, 2020
Messages
57
Likes
76
That was also never asked. The question I posed to you was: "what's the excuse [for lack of attention to dispersion through the crossover] here?"

Because it's not an uncommon issue with speakers regardless of price.

Just look at how some of the $1k+ speakers measured here actually perform.

In this measurement, the SVS appears better than the Buchardt S400, which is 2x the cost. People are jerking off to it.

Given the price point, I am willing to accept none, given that several competitors find a way to engineer loudspeakers with smooth dispersion through the woofer-tweeter handoff. At a quarter of the price, I can forgive a desire to balance output capability and fidelity. Maybe the money isn't there for a smallish company to amortize the design and tooling costs of a custom tweeter faceplate over a relatively small production run. However, in a thousand-dollar speaker, a custom tweeter faceplate should be doable within a series-production speaker's BOM. Compared to the next price level, reasonable cost-saving measures at $1k/pr would be, for example, omitting notch filters for woofer breakup that is already low enough not to present a clearly audible problem from first listen, and other such items that matter several orders of magnitude less than getting dispersion through the crossover right.

OK. Cool for you man.

SVS is probably overpricing the speaker a little bit.

The Revel is better, but not that much better. SVS also doesn't have the same design facilities that HK does.
 

ROOSKIE

Major Contributor
Joined
Feb 27, 2020
Messages
1,936
Likes
3,525
Location
Minneapolis
This is highly relevant. Two speakers with very similar scores sound different such that you prefer one over the other. Surely it is apparent from just this simple observation that the score is not a perfect predictor of how satisfied an individual will be with a given speaker.

By the way, I am interested in knowing why you preferred the R162 over the B6.2. Did one sound "brighter" than the other? Did the bass in one seem adequate, or possibly "boomy"? I've heard the B6.2, and I thought it sounded quite good, although in need of subwoofer augmentation, as I expect the R162 would be as well. But I haven't had an opportunity to hear the R162, and haven't even seen any Infinity home speakers in a store since about the time that Circuit City folded. So I'm curious to know more about how one sounded different from the other.

Howdy. Alright, even though this is pretty off topic, here goes, and just to remind you, as I said, for this comparison they are both EQ'd to my house curve.
That curve is FLAT from 200 Hz to 1,000 Hz, then drops slightly more than 1 dB per octave to 10 kHz, with a faster drop from 10 kHz to 20 kHz. Bass rises from 200 Hz down to 50 Hz and is 6 dB higher at 50 Hz, with no further rise below 50 Hz.
My room is not lively.
Additionally, both speakers are high-passed at 35 Hz, and the R162 is low-passed at 18 kHz (6 dB down at 18 kHz).
The speakers are 9 feet apart and I am 9-10 feet away. I have never used either in the nearfield.
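If it helps to see it laid out, here is a rough sketch of that target in code. This is only my reading of the curve described above; the exact slopes (especially the roll-off above 10 kHz) are approximations, not my actual correction filters.

```python
import numpy as np

def house_curve_db(f):
    """Approximate target level in dB, relative to the flat 200 Hz - 1 kHz band."""
    f = np.asarray(f, dtype=float)
    out = np.zeros_like(f)
    # Bass: rises from 200 Hz down to 50 Hz, reaching +6 dB at 50 Hz, flat below that.
    lo = f < 200
    out[lo] = 3.0 * np.log2(200.0 / np.clip(f[lo], 50.0, None))
    # Treble: a bit more than -1 dB/octave from 1 kHz to 10 kHz.
    mid = (f > 1000) & (f <= 10000)
    out[mid] = -1.1 * np.log2(f[mid] / 1000.0)
    # Faster drop from 10 kHz to 20 kHz (this slope is a guess).
    hi = f > 10000
    out[hi] = -1.1 * np.log2(10.0) - 3.0 * np.log2(f[hi] / 10000.0)
    return out

# Example: house_curve_db([50, 100, 500, 2000, 10000, 20000])
# -> roughly [+6, +3, 0, -1.1, -3.7, -6.7] dB
```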

By the way, I have never listened to the ELAC without EQ. I have listened a great deal to the R162 without EQ.


Both speakers would benefit from a subwoofer. In fact, I have never had a two-way speaker that would not. It simply cleans up the bass, and I don't ultimately want a 6" woofer trying to hit 40 Hz at 95 dB. That said, both speakers sound great in the bass department sans sub. I am very impressed.
Neither speaker is "boomy." Yes, I have room modes to tame; this is not the speakers' fault. There is no boom, and the PEQ cuts the modes back.
The bass in both is articulate and slightly warm.

The frequency response is very similar due to the EQ; neither speaker is currently EQ'd above 800 Hz. (Well, the R162 is low-passed at 18 kHz, so essentially that does EQ out that slight something hard to describe up there... ringing? The ELAC is falling by itself up there.)

Anyway, both speakers have a Harman score of around 5 (EQ brings them both up).

In a nutshell, my GF said it best. We listened to the ELAC together for a while. She loves listening, and she really enjoyed the session. I asked her how the speakers sounded, and she said they did a great job and she loved the tunes, except they don't really have a soul like some of my others, actually pointing to the R162 as an example of a speaker with "soul."

IMHPO, that is what I get as well: they sound excellent but somehow inexplicably have this "going through the hi-fi motions" kind of sound. The system is very neutral (tonally) and well presented and balanced (frequency-wise), but somehow it just doesn't hit my heart. Additionally, there is this etched quality, a quality that is very subtle, but when another speaker doesn't have it, it stands out a bit. I actually think a lot of folks will quite like that "etched" quality, by the way, and I don't mind it, but it plays second fiddle to the R162 for my tastes.

The R162 is more lively in the very upper treble. That is why I cut it there. It seems like that might not do much, but it does, IMHO; some bright edge, a nearly impossible-to-hear edge, is now gone. (I realize some folks call this the "air" zone.)
The speaker is very detailed and in these subtle ways "flows" where the ELAC is "etched."

The ELAC seems to congest in the lower mids and upper bass when played loud. The R162 does not do this, and honestly it seems to handle high volume much better. That is important for me, as I listen fairly loud, or at least medium-high (depending on mood, somewhere between 82-90 dB C-weighted pink noise). The R162 begs to be turned up and up.

"Je ne sais quoi" So what is up here? I can not really say. I know having been able to hear both I found a meaningful difference and I am not sure where that difference shows up in the data.
They are both beyond decent speakers that really, really surprise for the costs on sale ($200 ELAC and $160 R162) and neither is a fully mature speaker.
I really think 10 people out of 20 would pick each speaker set up how I have them.

I had the same experience with the JBL 530 vs. the Revel M105. Both are excellent and rated well by the Harman score here (Amir did not like the 530 much, though), but I picked the 530, and honestly, as good as the M105 was, IMHPO it did not have that "je ne sais quoi" that the 530 somehow has for me.

Now, this is all subtle AND very important to my enjoyment, and I rate all 4 mentioned speakers as EASILY worth giving a shot.

How this relates to Amir truly disliking the SVS speaker and loving the Revel gear, I don't know. I hope in some small way it does. My apologies, guys.
 
Last edited:

Haint

Senior Member
Joined
Jan 26, 2020
Messages
347
Likes
453
OK. Cool for you man.

SVS is probably overpricing the speaker a little bit.

The Revel is better, but not that much better. SVS also doesn't have the same design facilities that HK does.

"A little bit" may be an understatement. Best Buy Employee pricing on the Ultra Bookshelves is just over $500 a pair (~$260/ea IIRC), and both BB and SVS are likely still turning a decent profit at that. SVS's prices are definitely bloated across the board by their return policy, warranty, and no questions asked customer service, but these bookshelves are high even by their standards. I wouldn't be surprised if margins are 70-80% on a direct full price sell through their website.
 
Last edited:
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,641
Likes
240,754
Location
Seattle Area
Hi amirm, has an EQ been offered up yet to try with this speaker? I glanced through the thread, but it has become rather dense with several tangential posts, so I may have missed it.
I did not offer one because I could not find filters that fixed the issues I had with the speaker. That's not to say you can't improve it. You can, by filling in the mid-range with EQ and perhaps pulling down the peaking from a few hundred hertz to 2 kHz. In other words, make the on-axis response flatter.
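If you want to experiment yourself, a minimal sketch of that kind of correction in Python is below. To be clear, these are not filters I have dialed in for the SVS Ultra: the center frequencies, gains, and Q values are placeholders for illustration only (standard RBJ cookbook peaking biquads), and you would need to set them against your own measurements.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(f0, gain_db, q, fs=48000):
    """RBJ audio-EQ-cookbook peaking filter; returns normalized (b, a)."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def apply_eq(x, filters, fs=48000):
    """Run a signal through a chain of peaking filters in series."""
    for f0, gain_db, q in filters:
        b, a = peaking_biquad(f0, gain_db, q, fs)
        x = lfilter(b, a, x)
    return x

# Placeholder filters (illustration only, not a tuned correction):
# a broad fill for the mid-range/crossover dip and a gentle cut over
# the few-hundred-hertz-to-2-kHz peaking region.
example_filters = [
    (2500.0, +2.0, 1.0),
    (800.0,  -2.0, 0.7),
]
# usage: corrected = apply_eq(audio_samples, example_filters, fs=48000)
```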
 