
SVS Ultra Bookshelf Speaker Review

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,335
Likes
6,700
Uh, is it reasonable to ask Amir to do double-blind listening tests each time he encounters a speaker that doesn't measure the same as it sounds to him? It seems to me that he is doing an awful lot of work for likely very little compensation already.

This is my main complaint against those who bash Amir's subjective listening. The amount of work necessary to do proper double-blind tests (i.e., where Amir has no idea which speakers are even in the test) is huge, and would slow testing down considerably. Would you rather see double-blind tests, or would you rather see five times as many spinoramas with sighted tests?
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
It could be that Amir's reference for what sounds "right" is how close it sounds to the top Revel sound (Salon 2). After all, all of his training is from Harman. It could be that they trained him to prefer the Harman sound. Maybe all Revel speakers sound more similar to the Salon 2 (which is Amir's reference), despite their measurement shortcomings. That could help explain why a speaker with terrible measurements (Revel M55X6) could sound good enough to get a golfing panther, and a speaker with excellent measurements (SVS Ultra Bookshelf) could sound bad. Perhaps speakers are subjectively being judged on how close they sound to the best Revel speaker.

On the surface this has strong appeal. However, there are all manner of hypotheses that people can come up with to explain all manner of things: hypotheses that seem plausible but are meaningless without supporting data. This hypothesis is meaningless unless and until you can figure out some way to back it up with data. Of course that will be essentially impossible to do, but the fact remains that without data to back it up there is no good reason for anyone to entertain it seriously. That doesn't mean that no one will, unfortunately.
 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,335
Likes
6,700
On the surface this has strong appeal. However, there are all manner of hypotheses that people can come up with to explain all manner of things: hypotheses that seem plausible but are meaningless without supporting data. This hypothesis is meaningless unless and until you can figure out some way to back it up with data. Of course that will be essentially impossible to do, but the fact remains that without data to back it up there is no good reason for anyone to entertain it seriously. That doesn't mean that no one will, unfortunately.

I'm not trying to propose a meaningful hypothesis, I'm just asking a question based on an idea I had...

You could similarly dismiss 99.9% of your own (or anyone's) forum posts by this same standard. Very few posts have good data to support them.
 

jazzendapus

Member
Joined
Apr 25, 2019
Messages
71
Likes
149
Uh, is it reasonable to ask Amir to do double-blind listening tests each time he encounters a speaker that doesn't measure the same as it sounds to him?

There's a simpler solution - avoid posting shoddy and poorly substantiated data to begin with.

Why would the fact that he has taken it upon himself to perform all these measurements mean that he should have to give up the right to share his opinion on whether a speaker sounds good?
Because there's a very high likelihood that it's misleading. Are we in agreement that posting misleading info is not a good practice?
 

Foxxy

Member
Forum Donor
Joined
Jul 31, 2020
Messages
24
Likes
43
Location
Austria
There's a simpler solution - avoid posting shoddy and poorly substantiated data to begin with.
Because there's a very high likelihood that it's misleading. Are we in agreement that posting misleading info is not a good practice?
Look, if you can't call out a speaker for being sibilant, you can't call out a speaker for ANYTHING. It's so offensive and jarring to the trained ear that it's ridiculous.

Like I said, we don't have the full testing battery. We are stuck with spinorama, distortion and waterfall. Those are NOT complete.
This is what the complete battery would look like:
http://www.klippel.de/products/rd-system.html
The SCN (Scanning Vibrometer System) would be the minimum addon we might need to fully put stuff like this into measurements. And yes, that would also mean ripping speakers apart which are only on loan to Amir.

We try our best with the stuff we have. Let's be grateful we have an NFS at our disposal, which already goes far beyond any hobbyist's objective means.
 

Chromatischism

Major Contributor
Forum Donor
Joined
Jun 5, 2020
Messages
4,765
Likes
3,703
The main issue that I believe is bugging most of the people (including me) here is that you're doing very good measuring and very poor listening tests, and instead of trying to deal with the issue head on, ie, do proper listening tests, you choose quite desperately to prove your point with very circumstantial evidence - your (glorious) past successes in various and kinda unrelated tests.

So unless there's some serious objective data, there's no reason to suspect that SVS Ultra is worse than M106 that measures quite similarly (and costs double), while you rated the former 3/5 and the latter 5/5 (unless I'm missing some pink panther gradation). Subjective and poorly conceived listening tests shouldn't really have a place in those reviews. This kind of stuff is the modus operandi of Stereophile that poisoned this field for decades with its mishmash of somewhat useful objective measurements that were then mixed with subjective crap/payola and diluted anything of actual value.
Amir's listening tests are one data point. And I've seen enough data points regarding the SVS Ultra that there is a trend, and enough of the M106 that there is a trend. Since Amir's impressions of both speakers don't seem different than what I've seen out there, I don't think the results here are erroneous.

Also, I wouldn't say they both measure similarly unless you're only looking at the predicted in-room. If you look at the deviation in directivity throughout their range, you will see the objective differences that describe most of the subjective differences. The Revel speakers have a much more consistent directivity (less deviation from a mean per frequency).
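The "deviation from a mean per frequency" idea above can be sketched numerically. Everything below is illustrative (synthetic data, made-up function names), not output from any measurement tool:

```python
# Rough sketch of judging directivity consistency: normalize each off-axis
# response to the on-axis curve, then ask how much that directivity shape
# wanders across frequency. A speaker with consistent directivity has a
# small spread; a crossover flare shows up as a large one.
import numpy as np

def directivity_consistency(off_axis_db):
    """off_axis_db: 2-D array, shape (n_freqs, n_angles), where each value
    is SPL at that angle minus on-axis SPL, in dB.
    Returns the RMS deviation (dB) from the average directivity curve."""
    mean_curve = off_axis_db.mean(axis=0)       # average directivity shape
    deviation = off_axis_db - mean_curve        # per-frequency departure
    return float(np.sqrt((deviation ** 2).mean()))

# Synthetic example: a smoothly narrowing speaker vs. one with a
# dispersion bump near the crossover region.
freqs = np.linspace(200, 10000, 50)
angles = np.linspace(0, 90, 10)
smooth = -0.001 * np.outer(freqs, angles) / 100   # gently narrowing with freq
flawed = smooth.copy()
flawed[20:25] += 3.0                              # flare near the crossover

assert directivity_consistency(flawed) > directivity_consistency(smooth)
```

The metric is deliberately crude; real spin-based metrics weight angles and frequency bands differently, but the intuition is the same.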
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
The main issue that I believe is bugging most of the people (including me) here is that you're doing very good measuring and very poor listening tests, and instead of trying to deal with the issue head on, ie, do proper listening tests, you choose quite desperately to prove your point with very circumstantial evidence - your (glorious) past successes in various and kinda unrelated tests.
I can just see you challenging your doctor: "you don't know anything about medicine... and don't tell me about your diploma and how long you have been practicing medicine. it is all in the past...."

And what past successes? Here is my current success. If you look up speaker reviews that got an Olive score of 3.0 or lower you get this:

[Attached image: table of reviewed speakers with Olive scores of 3.0 or lower]


Putting aside the one subwoofer, there are 16 reviews there. I gave NO to 14 out of 16 for a score of 88% agreement.

As I showed earlier, 77% of the time my recommendation also agrees with a speaker getting a grade of 5.0 or higher.
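Those percentages are plain ratios from the counts in this post; a trivial check:

```python
# Sanity check of the agreement figure quoted above. The counts come
# from the post itself (subwoofer excluded).
no_count = 14   # speakers with Olive score <= 3.0 that got a "NO"
total = 16      # reviewed speakers with Olive score <= 3.0

agreement = no_count / total
assert abs(agreement - 0.875) < 1e-12   # 87.5%, i.e. 88% rounded
```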

And you are up in arms over this? You think crappy listening tests generate this kind of correlation?

The problem you have is that you haven't understood the research. You've never conducted a listening test where you get scored in every situation like I do, nor do you have any familiarity with the topic at hand. Add a good bit of emotional inferiority that I get to score speakers with my ears and you don't, and we have posts like yours. Information-free nonsense.
 

Chromatischism

Major Contributor
Forum Donor
Joined
Jun 5, 2020
Messages
4,765
Likes
3,703
When I added the high pass filter, it did reduce distortion some but also took away useful bass so I did not leave it on. And that is part of the problem with distortion in low frequencies: it actually increases the bass energy to some extent so subjective feeling may be positive.
This is really interesting, and gets into subwoofers, too. There is a large number of people who prefer a more meaty-sounding subwoofer, and there is also a large number of people who prefer a cleaner, more articulate-sounding subwoofer. Different harmonic distortions, hysteresis distortions, etc have been explored to try to explain why.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
This is my main complaint against those who bash Amir's subjective listening. The amount of work necessary to do proper double-blind tests (i.e., where Amir has no idea which speakers are even in the test) is huge, and would slow testing down considerably. Would you rather see double-blind tests, or would you rather see five times as many spinoramas with sighted tests?
It is not just that. I have no speaker shuffler to replicate Harman's tests. Anything less will be criticized anyway if it doesn't agree with someone's agenda: "Oh, the speakers were not in the same spot. How did you match levels? How about your ears? Aren't you too old? What amplifiers did you use? Maybe they favor one speaker over the other. Where you put the speaker is not where Harman put it, so your tests don't work."

My current testing absolutely follows best practices. I listen in mono. Every speaker is in the same spot and I sit in an identical spot. EQ is used to dial out the major room mode, which was demonstrated to be a factor early on.
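The room-mode EQ step can be illustrated with a standard parametric cut. This is a generic sketch using the well-known RBJ Audio-EQ-Cookbook peaking biquad; the 55 Hz mode frequency, Q, and depth are made-up illustration values, not the actual settings used here:

```python
# Generic RBJ Audio-EQ-Cookbook peaking-EQ biquad, used to sketch
# "dialing out" a room mode with a single parametric cut.
import math

def peaking_eq(fs, f0, q, gain_db):
    """Biquad coefficients (b0, b1, b2, a0, a1, a2) for a peaking EQ."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    return (1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a,
            1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a)

def magnitude_at(coeffs, fs, f):
    """Evaluate |H| of the biquad at frequency f (Hz)."""
    b0, b1, b2, a0, a1, a2 = coeffs
    z = complex(math.cos(2 * math.pi * f / fs), math.sin(2 * math.pi * f / fs))
    return abs((b0 + b1 / z + b2 / z ** 2) / (a0 + a1 / z + a2 / z ** 2))

# Example: a 6 dB cut at a hypothetical 55 Hz room mode.
coeffs = peaking_eq(fs=48000, f0=55, q=4, gain_db=-6.0)
# At the mode frequency the filter applies exactly the requested cut:
assert abs(magnitude_at(coeffs, 48000, 55) - 10 ** (-6.0 / 20)) < 1e-9
```

A narrow cut like this flattens the modal peak at the listening position without disturbing the rest of the response, which is why it is standard practice before comparative listening.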
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
This is really interesting, and gets into subwoofers, too. There is a large number of people who prefer a more meaty-sounding subwoofer, and there is also a large number of people who prefer a cleaner, more articulate-sounding subwoofer. Different harmonic distortions, hysteresis distortions, etc have been explored to try to explain why.
Definitely. There is a standard trick in small active speakers: deliberately add the harmonic distortion that the speaker can create in place of the bass notes that it can't.

I am finding that when I do a sweep and the speaker distorts so badly that the sound it produces has nothing to do with the low-frequency note, a high-pass filter makes a big difference. In cases like the SVS, where that didn't quite occur, it becomes tricky, and subjective testing is necessary to evaluate whether it is a good idea or not.
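For readers unfamiliar with what the high-pass is doing: it rolls off content below the speaker's usable range so the driver isn't asked to (badly) reproduce it. A sketch using the magnitude response of an ideal 2nd-order Butterworth high-pass; the 80 Hz corner is an assumed illustration, not the filter actually used:

```python
# Magnitude response of an ideal 2nd-order Butterworth high-pass.
# H(s) = s^2 / (s^2 + sqrt(2)*s + 1) with s = j*f/fc simplifies to
# x / sqrt(1 + x^2), where x = (f/fc)^2.
import math

def butter2_highpass_gain(f, fc):
    """|H(f)| for a 2nd-order Butterworth high-pass with corner fc (Hz)."""
    x = (f / fc) ** 2
    return x / math.sqrt(1 + x ** 2)

fc = 80.0  # illustrative corner frequency
# Deep bass the speaker can only distort is rolled off hard...
print(20 * math.log10(butter2_highpass_gain(20, fc)))    # ~ -24 dB
# ...the corner sits at the classic -3 dB point...
print(20 * math.log10(butter2_highpass_gain(80, fc)))    # ~ -3 dB
# ...and the midrange passes essentially untouched.
print(20 * math.log10(butter2_highpass_gain(1000, fc)))  # ~ 0 dB
```

The trade-off Amir describes follows directly: the same slope that removes the distortion-producing excursion also removes whatever usable bass the speaker had near the corner.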
 

Chromatischism

Major Contributor
Forum Donor
Joined
Jun 5, 2020
Messages
4,765
Likes
3,703
It could be that Amir's reference for what sounds "right" is how close it sounds to the top Revel sound(Salon 2). After all, all of his training is from Harman. It could be that they trained him to prefer the Harman sound. Maybe all Revel speakers have a more similar sound to the Salon 2(which is Amir's reference), despite their measurement shortcomings. That could help explain why a speaker with terrible measurements(Revel M55X6) could sound good enough to get a golf panther, and a speaker with excellent measurements(SVS Ultra Bookshelf) could sound bad. Perhaps speakers are subjectively being judged on how close they sound to the best Revel speaker.
There isn't really a "Harman sound" other than speakers that sound like what the majority of listeners prefer.

So I'm not sure if Amir is aware that he was brainwashed into liking the same sound as the statistical mean of listeners and is now unknowingly peddling Harman speakers :D
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
There's a simpler solution - avoid posting shoddy and poorly substantiated data to begin with.
What you just said is shoddy and poorly substantiated. You have double blind tests of SVS Ultra against some other speakers showing it to be a winner?
 

jazzendapus

Member
Joined
Apr 25, 2019
Messages
71
Likes
149
And you are up in arms over this? You think crappy listening tests generate this kind of correlation?
The psychological inclination to conform to respected theory can explain this easily. The fact that you listen and give final grade to speakers after measuring them makes this correlation useless.

As long as I have no certainty that your subjective grading of the speakers is not affected by the degree your dinner agreed with you or your mood or a million other possible factors unrelated to sound, I see no reason to put any value in those subjective evaluations, and recommend others to do so as well. The funny thing is that you know all this very well yourself, but once someone doubts your superhuman listening abilities that obviously nullify all possible biases at once, you get into this half-obnoxious half-pathetic juggernaut mode destroying everything on your path along with your own dignity. I think I'm very close to being hit with a banhammer, so I better step out of this...

What you just said is shoddy and poorly substantiated. You have double blind tests of SVS Ultra against some other speakers showing it to be a winner?
I'm not the one who claimed it's ~70% better than M106 with zero serious evidence.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
The psychological inclination to conform to respected theory can explain this easily. The fact that you listen and give final grade to speakers after measuring them makes this correlation useless.
Then when they don't, as is the case here, you should sit up and pay attention. That bias factor is then not in play.

Your theory is wrong anyway, as I don't see the scores until I have finished and posted the review. So what you ask for is what is occurring.
 

Xyrium

Addicted to Fun and Learning
Forum Donor
Joined
Aug 3, 2018
Messages
574
Likes
493
The psychological inclination to conform to respected theory can explain this easily. The fact that you listen and give final grade to speakers after measuring them makes this correlation useless.

As long as I have no certainty that your subjective grading of the speakers is not affected by the degree your dinner agreed with you or your mood or a million other possible factors unrelated to sound, I see no reason to put any value in those subjective evaluations, and recommend others to do so as well. The funny thing is that you know all this very well yourself, but once someone doubts your superhuman listening abilities that obviously nullify all possible biases at once, you get into this half-obnoxious half-pathetic juggernaut mode destroying everything on your path along with your own dignity. I think I'm very close to being hit with a banhammer, so I better step out of this...

You're probably getting closer to reality with this post. Anyone's subjective opinion is just that: an opinion. There are far too many factors that go into what your brain actually hears; even the shape of your pinna can alter the soundstage and, obviously, localization. However, if you learn what a reviewer seems to prefer, then you know what they are generally hearing and can compare that to your own preferences.

Meanwhile, you get the advantage of having a ton of data back it up in sites like these. So, you can take out of it what you'd like. Hang around and I'm sure you'll grow to appreciate these reviews, and how they evolve. Perhaps you'll consider a contribution as well, at some point! :)
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
As long as I have no certainty that your subjective grading of the speakers is not affected by the degree your dinner agreed with you or your mood or a million other possible factors unrelated to sound, I see no reason to put any value in those subjective evaluations, and recommend others to do so as well.
So? There are many people who think measurements are stupid. Should I stop measuring as well? Go tell people what you want; just don't be insulting toward me and waste my time with information-free posts.

The funny thing is that you know all this very well yourself, but once someone doubts your superhuman listening abilities that obviously nullify all possible biases at once, you get into this half-obnoxious half-pathetic juggernaut mode destroying everything on your path along with your own dignity. I think I'm very close to being hit with a banhammer, so I better step out of this...
The bolded part is what is so annoying about these interactions. It is not funny. It shows such a lack of common sense to pretend you know more than I do about the most basic concepts in audio research.

As for my dignity: please leave out the inferiority complex. Training in audio research is a real thing. I suggest you get yourself trained and see where you land a few months later. Then come back and be dismissive of what that does for you. Until then you are just repeating the talking points of audio research and objectivity without any real feel for what they mean.
 

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,554
Likes
1,701
Location
California
I'm really surprised and appalled by some of the responses I've read here. I can tell there are some strong opinions about these SVS speakers and the ability of spin data to predict loudspeaker preferences.

I'd like to chime in and say that in my opinion, the replies here that criticize Amir's listener observations are very anti-science. Like it or not, the auditory impressions provided by a reviewer are data. They're the same type of data we would get from the gold-standard test (a blinded A/B listening comparison under otherwise identical conditions), except without all of the laborious measures to eliminate bias. A good scientist never dismisses data simply because it doesn't "agree" with his/her prevailing theory. Like it or not, Amir's listening impressions ARE data - and should be treated as such.

The other thing I'd like to point out is that there are apparently some individuals who think they can eyeball spin charts and make magical predictions that Speaker A will sound better than Speaker B with 100% certainty. As far as I know, there is NO agreed upon and objective way to convert spin charts to preference predictions, and NO evidence that one's ability to "eyeball" a series of spin charts is superior to Olive's regression formula, which we know uses a deliberate SUBSET of spin data, requires complex math, AND only explains 74% of the variability in listener preferences in a closed set of 70 speakers. So, the notion that spin data is a highly reliable way to gauge a speaker's sound quality without listener validation is unfounded and unsupported by evidence - unless someone has something to share.
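For reference, Olive's regression combines four spin-derived metrics into one predicted rating. A sketch, using the coefficients as commonly cited from the 2004 paper; they should be verified against the paper before relying on them:

```python
# Olive's (2004) preference-rating regression, as commonly cited.
# Coefficients are reproduced from secondary sources and should be
# double-checked against the paper; inputs here are made-up values.
def olive_preference(nbd_on, nbd_pir, lfx, sm_pir):
    """Predicted preference rating (roughly a 0-10 scale).
    nbd_on  - narrow-band deviation of the on-axis response
    nbd_pir - narrow-band deviation of the predicted in-room response
    lfx     - log10 of the low-frequency extension (Hz)
    sm_pir  - smoothness (r^2) of the predicted in-room response
    """
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Example with illustrative metric values:
score = olive_preference(nbd_on=0.3, nbd_pir=0.2, lfx=1.5, sm_pir=0.8)
```

Note that this is a weighted sum of a deliberate subset of the spin data, which is exactly why "eyeballing" a full set of spin charts is not obviously equivalent to it.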
 

jhaider

Major Contributor
Forum Donor
Joined
Jun 5, 2016
Messages
2,822
Likes
4,514
Yes. There are exceptional speakers in this price bracket. That was never in question.

That was also never asked. The question I posed to you was: "what's the excuse [for lack of attention to dispersion through the crossover] here?"

Given the price point, I am willing to accept none, given that several competitors find a way to engineer loudspeakers with smooth dispersion through the woofer-tweeter handoff. At a quarter of the price, I can forgive a desire to balance output capability and fidelity. Maybe the money isn't there for a smallish company to amortize the design and tooling costs of a custom tweeter faceplate over a relatively small production run. However, in a thousand-dollar speaker, a custom tweeter faceplate should be doable within a series-production speaker's BOM. Compared to the next price level, reasonable cost-saving measures at $1k/pr would be, for example, omitting notch filters for woofer breakup that is already low enough not to present a clearly audible problem on first listen, and other such items that are several orders of magnitude below getting dispersion through the crossover right.

The Revel vs. SVS is a MUCH closer call. The comparison has already been made here.

Has it? I've seen comparisons to M106, but not the newer and less expensive M16. It is possible that I missed that comparison. Comparing SVS to M16, I expect the excellent Tymphany woofer SVS uses allows higher output, but M16 is better engineered in every other way. Here is the dispersion width and smoothness possible from a well-engineered 6.5"-7" 2-way loudspeaker.

[Attached image: spinorama showing the dispersion width and smoothness of a well-engineered 6.5"-7" two-way loudspeaker]


You can argue that one is good and one is very good. But it's a ridiculous notion that one is bad.

Is it? Compare the expected-in-this-price-class performance above with the following:

[Attached images: SVS Ultra Bookshelf measurements showing the large dispersion disruption where the woofer hands off to the tweeter]


Geometry is not destiny in loudspeakers, but it does set a fidelity ceiling. Here, poor geometry leads to objectively poor performance - large dispersion disruption where woofer hands off to tweeter. I think better should be expected.
 

NDC

Member
Joined
Jul 18, 2020
Messages
86
Likes
115
Location
Sydney, Australia
The psychological inclination to conform to respected theory can explain this easily. The fact that you listen and give final grade to speakers after measuring them makes this correlation useless.

As long as I have no certainty that your subjective grading of the speakers is not affected by the degree your dinner agreed with you or your mood or a million other possible factors unrelated to sound, I see no reason to put any value in those subjective evaluations, and recommend others to do so as well. The funny thing is that you know all this very well yourself, but once someone doubts your superhuman listening abilities that obviously nullify all possible biases at once, you get into this half-obnoxious half-pathetic juggernaut mode destroying everything on your path along with your own dignity. I think I'm very close to being hit with a banhammer, so I better step out of this...

Dude, you need to think about how you come across. The only person appearing 'half-obnoxious half-pathetic' is yourself, reading the tone of your posts the last few pages. Sit back, take a breath, be a bit more respectful - it's not the end of the world we're discussing - it's audio!
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
Spent a bit more time listening and trying to equalize the SVS. Filling in the hole where there is a directivity error helps. Pulling down where there are resonances in the waterfall display also helped. But after half an hour of messing with it, I just could not get it to sound great. To make sure I am not in a "bad sound mood," I replaced it with the Revel M106. Oh man. What a relief. The smoothness and balance of this speaker are on another planet.
 