
SVS Ultra Bookshelf Speaker Review

KaiserSoze
I honestly can't fathom why it is a big deal when someone realizes the measurements are not a complete replacement for listening. Why is that?

I don't see where anyone threw the baby out with the bathwater; I only see someone realizing that the data does have limitations. I am not seeing any argument that makes a strong case against that, merely people trying to draw a line in the sand in a different spot. Well, don't let that spot confuse you. You draw it where you feel comfortable, but it exists: at some point the objective data must transition into the subjective experience. No sound reproducer is perfectly accurate to the "source", so we must choose.

I just listened to both the Infinity R162 and the ELAC B6.2. They have similar scores and reasonably similar data once the pluses and minuses are averaged out. Both were EQ'd closely to my house curve. Both sound pretty darn nice for the price, and most buyers in this price class would be completely shocked by the performance per dollar and happy to have either. Done deal.
Yet they do not sound the same. I would much rather have the R162. The only way for me to find this out was to listen in my room, even with all that fun and great data.
Based on the data I predicted a tie, yet I assure you that for me it is one over the other. Additionally, my money is on a 10-10 tie if twenty folks take a blind test. I can see how some will prefer the ELAC.
That (hypothetical) tie still changes nothing for me personally, because I like the R162 more.

This is highly relevant. Two speakers with very similar scores sound different such that you prefer one over the other. Surely it is apparent from just this simple observation that the score is not a perfect predictor of how satisfied an individual will be with a given speaker.

By the way, I am interested in knowing why you preferred the R162 over the B6.2. Did one sound "brighter" than the other? Did the bass in one seem adequate, or possibly "boomy"? I've heard the B6.2, and I thought it sounded quite good, although in need of subwoofer augmentation, as I expect the R162 would be as well. But I haven't had an opportunity to hear the R162, and haven't even seen any Infinity home speakers in a store since about the time Circuit City folded. So I'm curious to know more about how one sounded different from the other.
 

DDF
I played with that a lot, although I did not go that high with the corner frequency. It did not cure the sharpness that would come and go with vocals.

Here is a snapshot I took:

View attachment 76254

IME from designing and voicing many crossovers/speakers, I would anticipate that this EQ wouldn't help.

If anything, I would:
1. Bring down 2 to 4 kHz by 1 to 2 dB, and make sure they're not toed in but listened to at 30 degrees off axis. That off-axis tweeter flare is left far too untamed and will sound like what you've heard (I've heard this many, many times when listening to a first crossover prototype designed for flat on axis, without a waveguide).
2. Consider playing with a gentle 1 or 2 dB low-pass tilt. Given their flat response, the rise above 8 kHz, and the excess off-axis energy from 2 to 4 kHz, this particular bass response would no doubt make vocals sound bright at times. These speakers don't have enough bass to be used without a sub, away from a wall, as is.


3. Alternatively, if they were given some upstream EQ to arrive at roughly +4 dB at 60 Hz in room (the Olive/Toole in-room curve), that would help ameliorate the perceived brightness. But this would also reduce maximum achievable loudness, and one of this speaker's claims to fame is that "it plays loud". The Revel M106 fattens the response below 200 Hz by ~2 dB vs. the SVS; that will help reduce an impression of brightness without sapping amp power.



I think the designer made the on-axis response hot from 500 Hz to 2 kHz to try to fill in the off axis where the woofer beams, to better meet the tweeter response off axis. A bit of this is often helpful, but IME it is excessive here.

IME, the designer should instead have addressed the tweeter more directly, as mentioned above. Another option is to use a lower-order crossover with lower Q, shift the crossover frequencies up, then add some offset between the low-pass and high-pass frequencies to create a phase mismatch at crossover, which can be used to keep the on-axis response flat while filling in the dispersion dip off axis. But it seems 4th-order LRs with a maximal on-axis reverse null have become gospel, even though (IME at least) they too often result in speakers that poorly address the tweeter's off-axis dispersion flare, resulting in audible glare.
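
For anyone who wants to experiment along these lines, here is a minimal sketch in Python (assuming numpy/scipy are available; the centre frequency, gains, and Q are illustrative round numbers, not DDF's exact prescription) of the suggested 2-4 kHz cut plus a gentle treble tilt, built from standard RBJ audio-EQ-cookbook biquads:

```python
import numpy as np
from scipy.signal import sosfreqz

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ cookbook peaking EQ, returned as one SOS section."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return np.hstack([b / a[0], a / a[0]])

def high_shelf_biquad(fs, f0, gain_db, slope=1.0):
    """RBJ cookbook high shelf; a small negative gain acts as a gentle treble tilt."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    cosw = np.cos(w0)
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / slope - 1) + 2)
    b = np.array([A * ((A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
                  -2 * A * ((A - 1) + (A + 1) * cosw),
                  A * ((A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha)])
    a = np.array([(A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
                  2 * ((A - 1) - (A + 1) * cosw),
                  (A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha])
    return np.hstack([b / a[0], a / a[0]])

fs = 48000
sos = np.vstack([
    peaking_biquad(fs, 3000, -1.5, 1.4),   # ~1.5 dB cut centred on the 2-4 kHz flare
    high_shelf_biquad(fs, 4000, -1.5),     # gentle downward tilt above ~4 kHz
])

# sanity-check the net correction at a few spot frequencies
w, h = sosfreqz(sos, worN=4096, fs=fs)
for f in (1000, 2000, 3000, 8000, 16000):
    idx = np.argmin(np.abs(w - f))
    print(f"{f:>5} Hz: {20 * np.log10(abs(h[idx])):+.2f} dB")
```

The resulting SOS sections can be fed to scipy.signal.sosfilt (or transcribed into any parametric EQ) to audition the change.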
 

KaiserSoze
There are deep truths and issues here. IMHO it is a bit more nuanced, but the above cuts to the core of what is reasonable. This is a huge part of the reproducibility crisis in science. You can mine pre-existing data to demonstrate an existing hypothesis, but what you cannot do is derive both your hypothesis and your demonstration from the same data. Yet we had decades of social science, psychology, and even harder sciences doing exactly that.
Modern research protocols are designed to ensure this sort of thing does not happen. An ideal research programme would ensure that researchers pre-register their hypothesis before they get access to existing data, and that they are not allowed to move the goalposts once they start analysis. This isn't a trivial problem. Things break down when data is very expensive or otherwise hard to obtain, and pseudo-science becomes a serious risk. The academic regime of publication being linked to career success, and even to having a career at all, plus the manner in which journals are only interested in new results, especially sexy new results, makes the entire edifice rot from the core. It is being fixed, but slowly.

So, what about the Olive score? It has not been subject to what we would consider the rigour of modern best practice. Nor, I suspect, was it intended to be. Everyone likes reducing things to a simple numerical score; it avoids having to think. But often it just isn't reasonable to do so. Even just looking at the spinorama, there isn't a simple score. There is a huge amount of data and, at best, some guidance about what parts are more important than others. But imagining that there is a simple way of creating a total ordering of speakers is just plain naive. The spinorama contains the vast majority of the information about how a speaker responds. Simple numerical scores derived from the spinorama do not.

Hooray! And since it cannot reasonably be reduced to a simple numerical ranking, does this not imply that it is perfectly within reason for Amir to share his subjective overall assessment of the speaker, even when it seems not perfectly aligned with the spinorama response curves?
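
To make the information-loss point concrete, here is a toy sketch in Python (the "flatness score" is a naive metric invented purely for illustration, not the actual Olive formula): two deviation curves that earn identical scalar scores yet would sound nothing alike.

```python
import numpy as np

freqs = np.logspace(np.log10(100), np.log10(10000), 200)  # 100 Hz to 10 kHz

# Speaker A: a broad, gentle downward tilt across the whole band
tilt = np.linspace(1.0, -1.0, freqs.size)

# Speaker B: flat except for a narrow resonance near 3 kHz
peak = 5.0 * np.exp(-0.5 * ((np.log10(freqs) - np.log10(3000)) / 0.02) ** 2)
peak -= peak.mean()                 # keep only the bump, no level offset
peak *= tilt.std() / peak.std()     # force an identical "flatness score"

print(f"score A (std, dB): {tilt.std():.3f}")
print(f"score B (std, dB): {peak.std():.3f}")                # same number
print(f"but B swings {np.abs(peak).max():.1f} dB at 3 kHz")  # audible resonance
```

A benign one-decibel tilt and a sharp multi-decibel resonance can produce exactly the same scalar; only the full curve tells you which one you are buying.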
 

KaiserSoze
That's only true to a certain extent. The established hypothesis at the time of the Olive study was the @Floyd Toole hypothesis: "smooth, flat on-axis, followed by smooth directivity". If Olive had really been looking to confirm that hypothesis, he would only have used ON (or LW), ERDI, and SPDI when training the model. But that's not what he did; he threw a bunch of other variables in there, such as SP, ER and, crucially, PIR. The fact that ON+PIR is what came out of the training is causing a lot of controversy (see for example the never-ending debate between @QMuse and myself on this thread), because, according to Toole, we hear ON/LW, ERDI and SPDI, but we don't hear PIR, so the Olive model appears to contradict Toole. (My explanation is that the model is exploiting a spurious correlation. Not everyone agrees with my explanation.)
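
The spurious-correlation argument is easy to demonstrate numerically. A hedged sketch in Python (the factor structure and coefficients are invented for illustration; only the 12/44/44 PIR weighting follows CTA-2034): even when preference is generated purely from LW and ER, a derived blend like PIR inherits nearly the same correlation, so a model trained by throwing everything in can latch onto it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 70  # roughly the number of speakers in Olive's study

# latent per-speaker deviation metrics (higher = worse)
lw = rng.normal(0, 1, n)                  # listening window
er = 0.8 * lw + rng.normal(0, 0.4, n)     # early reflections track LW closely
sp = 0.8 * lw + rng.normal(0, 0.4, n)     # so does sound power
pir = 0.12 * lw + 0.44 * er + 0.44 * sp   # PIR is a fixed blend of the three

# ground truth: preference depends only on LW and ER
pref = -lw - er + rng.normal(0, 0.3, n)

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"corr(pref, LW)  = {r(pref, lw):+.2f}")
print(f"corr(pref, ER)  = {r(pref, er):+.2f}")
print(f"corr(pref, PIR) = {r(pref, pir):+.2f}")  # nearly as strong, yet derived
```

PIR correlates with the simulated preference almost as strongly as the variables that actually generated it, which is all an automated variable selection needs in order to pick it.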

As an old saying goes, "Live by the correlation, die by the correlation."
 

KaiserSoze
The implied statement is that they are trained listeners. I'm just making the point that no one needs to shout at me because I used your name; any trained listener's name would fit in place of any other's.

Again, I'm not targeting *you*. I'm simply making the point about weighing one person's opinion against an entire science. This is the entire reason you purchased the NFS: so we wouldn't have to just go off opinions. Unfortunately, in some cases it seems the opinion is being weighted more heavily than the data, without us taking the time to understand the discrepancies. Thus my "baby with the bathwater".

And to be abundantly clear: I am not targeting you, your ears, or your credentials. We could sub Jim-Bob's name for Amir. Or Bobby-Sue. And now I'm imagining Amir with shoulder-length, curly hair. Great. :D

If only you had given it this spin in your original reply to richard12511's lament ...
 

richard12511
Regarding the score of this speaker… its two immediate neighbours (with sub) are the KEF R3 and the Revel M106, which have similar scores and similar ON/PIR breakdowns. And if we look at the spinoramas side-by-side, I would tend to agree with the score in the sense that it's not really obvious that one of these three speakers is better than the other:

View attachment 76296
View attachment 76297
View attachment 76298
View attachment 76299

The Revel definitely has the most boosted bass.
 

CDMC
Our binaural hearing is set up to function in the horizontal plane. Simply put, our ears are positioned the wrong way to function well in the vertical plane.

Speak for yourself, I have a second set of ears on top of my head, but unfortunately recorded music doesn’t account for them, so they are useless.
 
amirm (OP)
@amirm , from your measurements, this speaker is almost distortion-free (for humans at least). Was that evident on listening?
When I added the high-pass filter, it did reduce distortion some, but it also took away useful bass, so I did not leave it on. And that is part of the problem with distortion in low frequencies: it actually increases the bass energy to some extent, so the subjective impression may be positive.
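
For context, the kind of filter involved is something like this minimal Python/scipy sketch (the 80 Hz corner and second-order slope are illustrative assumptions, not the exact filter used in the review):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
# 2nd-order Butterworth high-pass at 80 Hz: sheds the excursion-limited deep
# bass (and much of the distortion generated there), at the cost of less bass
sos = butter(2, 80, btype="highpass", fs=fs, output="sos")

t = np.arange(fs) / fs  # one second -> 1 Hz FFT bins
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 500 * t)
y = sosfilt(sos, x)

# the 50 Hz component (where the distortion lives) drops; 500 Hz is untouched
for label, sig in (("in ", x), ("out", y)):
    spec = np.abs(np.fft.rfft(sig)) / len(sig)
    print(f"{label}: 50 Hz = {spec[50]:.3f}, 500 Hz = {spec[500]:.3f}")
```

The trade-off is exactly as described: the content that drives cone excursion (and distortion) is attenuated, but so is the useful bass it carries.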
 
amirm (OP)
The more sensible conclusion from the mismatch between measurements and Amir's impression is that Amir might have limitations and reliability issues, not the method.
Once more, there is no mismatch between my impressions and measurements. This speaker does NOT have a ruler-flat on-axis response, nor the proper directivity that the research indicates are the signs of excellent design and highest listener preference. This is what the measurements show, and it is in agreement with me potentially not preferring the speaker.

The mismatch is with the single-value Olive score. So no, it does not follow that I am automatically wrong. The CEA-2034 specification would have included just the Olive score and none of the measurement graphs if the Olive score were all we cared about.

This is on top of the fact that we have found issues with the documentation/computation of the Olive score. Since we are the only ones trying to compute and use it, there is no way to verify where we stand (not fully, anyway).

Finally, the role of distortion was supposed to be follow-on research, which was never completed or published. This is my acuity in hearing simulated distortion from speakers: https://www.audiosciencereview.com/forum/index.php?threads/distortion-listening-test.8152/

[Attached chart: distortion listening test results]


As you can see, I was able to hear distortion that is 30 dB below what is typical in a real speaker. I could have done better if the logistics of the test had been different (see the thread).

The above simulation is for Bl(x), which is one of the top causes of distortion in bass. And here, every speaker we test has copious amounts of bass distortion with varying profiles. So it stands to reason that someone like me could be hearing distortion, which can easily cause tonality changes such as sharpness/brightness (due to harmonic spray).
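
A crude illustration of that harmonic spray in Python (the polynomial nonlinearity below is a stand-in I am assuming for a real Bl(x) curve, not a driver model): distorting a 60 Hz tone dumps energy at 120 Hz, 180 Hz and above, where the ear is considerably more sensitive, which is how bass distortion can read as added sharpness or brightness.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs                  # one second -> 1 Hz FFT bins
x = 0.9 * np.sin(2 * np.pi * 60 * t)    # a loud 60 Hz bass tone

# stand-in for an asymmetric, compressive Bl(x): even (x^2) and odd (x^3)
# terms generate 2nd and 3rd harmonics respectively
y = x + 0.05 * x**2 - 0.05 * x**3

spec = np.abs(np.fft.rfft(y)) / len(y) * 2
ref_db = 20 * np.log10(spec[60])        # bin 60 = the 60 Hz fundamental
for k in (2, 3):
    level = 20 * np.log10(spec[60 * k]) - ref_db
    print(f"H{k} ({60 * k} Hz): {level:+.1f} dB re fundamental")
```

The products land around -33 dB and -40 dB here; real drivers vary widely, but the mechanism is the point: the added energy sits an octave or more above the fundamental, where it changes perceived tonality.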

So no, you need to apply a much more sophisticated analysis to the situation, not a simple "I trust what I have read over what Amir says." What you have read is the tip of the iceberg. We are practicing the science, not just reading it.
 
amirm (OP)
Btw, I have a challenge proposal for you @amirm that might be fun - how about you listen to the speakers before measuring them and draw a frequency response you think they have (on axis I guess, unless you feel you can approximate off axis as well), and then compare what you drew with what Klippel comes out with.
Nothing that is more work for me is "fun." Why is it that you all want to prove your theories by creating more work for me?

Why don't you all do this and post your results? Buy a cheap speaker, listen to it and graph your impression of frequency response, then send it to me and I will measure. Also give me a preference score to go with that so that we can see if you match the scoring.

I have done this anyway, with none other than Dr. Sean Olive running the blind test on me:

[Photo: Harman reference listening room]


You can see the UI for the simulation program on the screen. I got up to level 5 or 6 and that was 10 years ago. As I said earlier, Sean was much better than me, and other high-end dealers much worse than me.

Really, I have paid my dues. I am doing all this work, measurements, listening tests, etc. I can't keep chasing challenges like this. You all need to step up as well. Maybe it turns out many of you won't have preferences agreeing with research. Or high acuity when it comes to evaluating speakers.
 

KaiserSoze
Never once have I said or implied the score. Not one single time. I am talking about the CTA dataset and its correlation with preference, not a single-value metric, which I have already said I am not fond of. You are trying to pit me against Amir, and I won't take part in that. I have respect for Amir and what he is doing here. I only replied to the post where an individual put more emphasis on Amir's subjective assessment than on the data, without further consideration of the faults of that notion. Amir just happened to be the reviewer; I don't care who it is. I have made that abundantly clear in numerous posts. The fellow I was replying to understood my concern. I'm not going to continue this unnecessary back and forth.

edit: FWIW, if someone said the same about me I would say “Fair point. That’s why we have data. So we can discuss these differences.” So feel free to sub my name for Amir’s if it makes the point I am trying to make more clear.

Again, whether you did or did not explicitly mention the preference score is of no relevance whatsoever.

Who was it who wrote this:

" ... have you considered that Amir simply isn't the trained listener it's assumed he is? No disrespect, and lord knows I don't need a scolding and paragraphs on Amir's listening sessions with Harman listed as reference, but let's be real here: you are throwing out the baby with the bathwater by taking Amir's personal subjective reaction in a sighted test over some people's near-lifetime of work to quantify preference based on measurements."

You and I look at this differently. You look at it, and it does not seem to you that you made it specifically about Amir. I look at it, and to my way of thinking you made it specifically about Amir, in a way that I thought unnecessary. Mostly, the reason I did not like what you did is that you dredged up the same pointless debate that this group has argued over endlessly and needlessly in the past. As Yogi Berra said, "It's deja vu all over again!" I fully expect that you or someone else will dredge up the same debate again the next time Amir shares a subjective opinion that isn't fully in agreement with the quantitative data. A more appropriate response to richard12511's lament would have been something along the lines of, "Yes, the methodology isn't perfect, but this is not an exact science, and the methodology has strong benefit notwithstanding that it is not perfect."
 

KaiserSoze
Not bad overall. The FR is a little uneven, but the distortion is actually very, very good. In fact, it appears superior to the Revel M106 (below 500 Hz, of course):
https://www.audiosciencereview.com/...e-distortion-thd-audio-meaurements-png.70821/

It appears that these would be reasonably simple to integrate with subs from a distortion perspective, for those fearful of crossing over above 80 Hz. You would just need to deal with some of those resonances above, and the peak just below, the 1 kHz range.

This is something that I had mentioned as well: this speaker is set apart from many other similar speakers by virtue of lower distortion that permits it to be crossed over to a sub below 100 Hz, possibly 80 Hz. A little bit of effort with some additional passive filtering should easily take care of most of the other flaws, although not the directivity mismatch; personally, that does not concern me much. The problem for me, though, is that for my purposes none of the speakers of this type earn a price tag of $1000 per pair. A pair of ELAC F6.2 floor-standing speakers is $300 cheaper.
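
For the sub-integration idea, a minimal Python/scipy sketch (the 80 Hz corner is the hypothetical crossover point discussed above): a 4th-order Linkwitz-Riley high-pass is simply two cascaded 2nd-order Butterworths, with a matching LR4 low-pass feeding the sub.

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 48000
fc = 80.0  # hypothetical speaker/sub crossover frequency

bw2 = butter(2, fc, btype="highpass", fs=fs, output="sos")
lr4_hp = np.vstack([bw2, bw2])  # two cascaded BW2 sections = LR4 high-pass

w, h = sosfreqz(lr4_hp, worN=8192, fs=fs)
for f in (40, 80, 160):
    idx = np.argmin(np.abs(w - f))
    print(f"{f:>3} Hz: {20 * np.log10(abs(h[idx])):+.1f} dB")
```

The roughly -6 dB reading at the corner is the Linkwitz-Riley alignment; the high-pass and low-pass legs are each -6 dB there and sum flat through the crossover region.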
 

Chromatischism
This is my feeling as well. I think the Olive score is by far the biggest weakness of the Toole/Olive science. I mentioned that in the Revel thread. The Revel actually has excellent directivity, but it's crippled by being horribly bright. The horrible brightness is easily taken care of by EQ, but the Olive score doesn't account for that.
I don't think so. Anechoic measurements of a speaker designed to be placed against a boundary are always going to look bad. It's the same for my Infinity RS152, another Harman-designed speaker: it needs the bass reinforcement from the boundary. This is designed in to prevent the boundary from over-boosting the bass.
 

KaiserSoze
Nothing that is more work for me is "fun." Why is it that you all want to prove your theories by creating more work for me?

Why don't you all do this and post your results? Buy a cheap speaker, listen to it and graph your impression of frequency response, then send it to me and I will measure. Also give me a preference score to go with that so that we can see if you match the scoring.

I have done this anyway, with none other than Dr. Sean Olive running the blind test on me:

View attachment 76325

You can see the UI for the simulation program on the screen. I got up to level 5 or 6 and that was 10 years ago. As I said earlier, Sean was much better than me, and other high-end dealers much worse than me.

Really, I have paid my dues. I am doing all this work, measurements, listening tests, etc. I can't keep chasing challenges like this. You all need to step up as well. Maybe it turns out many of you won't have preferences agreeing with research. Or high acuity when it comes to evaluating speakers.

"Why is it that you all want to prove your theories by creating more work for me?"

That is a very good question. Could you please bring me a glass of water?
 

jazzendapus
I am doing all this work, measurements, listening tests, etc.
The main issue that I believe is bugging most of the people here (including me) is that you're doing very good measurements and very poor listening tests, and instead of trying to deal with the issue head on, i.e., do proper listening tests, you choose quite desperately to prove your point with very circumstantial evidence: your (glorious) past successes in various and kinda unrelated tests.

So unless there's some serious objective data, there's no reason to suspect that the SVS Ultra is worse than the M106, which measures quite similarly (and costs double), while you rated the former 3/5 and the latter 5/5 (unless I'm missing some pink panther gradation). Subjective and poorly conceived listening tests shouldn't really have a place in these reviews. This kind of stuff is the modus operandi of Stereophile, which poisoned this field for decades with its mishmash of somewhat useful objective measurements mixed with subjective crap/payola, diluting anything of actual value.
 

richard12511
The main issue that I believe is bugging most of the people here (including me) is that you're doing very good measurements and very poor listening tests, and instead of trying to deal with the issue head on, i.e., do proper listening tests, you choose quite desperately to prove your point with very circumstantial evidence: your (glorious) past successes in various and kinda unrelated tests.

So unless there's some serious objective data, there's no reason to suspect that the SVS Ultra is worse than the M106, which measures quite similarly (and costs double), while you rated the former 3/5 and the latter 5/5 (unless I'm missing some pink panther gradation). Subjective and poorly conceived listening tests shouldn't really have a place in these reviews. This kind of stuff is the modus operandi of Stereophile, which poisoned this field for decades with its mishmash of somewhat useful objective measurements mixed with subjective crap/payola, diluting anything of actual value.

It could be that Amir's reference for what sounds "right" is how close it sounds to the top Revel sound (Salon 2). After all, all of his training is from Harman. It could be that they trained him to prefer the Harman sound. Maybe all Revel speakers sound more similar to the Salon 2 (which is Amir's reference), despite their measurement shortcomings. That could help explain why a speaker with terrible measurements (Revel M55X6) could sound good enough to get a golf panther, and a speaker with excellent measurements (SVS Ultra Bookshelf) could sound bad. Perhaps speakers are subjectively being judged on how close they sound to the best Revel speaker.
 

richard12511
It's also interesting to look at the comparison that @edechamps provided of the M106 vs. the SVS Ultra, and to compare the subjective listening impressions.

Both have very similar spinoramas, and I think you can argue either way about which one is better. One thing that is clear is that the M106 has much more boosted bass. Amir's main complaint with this speaker is the boosted bass, and he didn't have that same complaint with the M106, despite it having even more boosted bass.

I think one explanation may be where Revel boosts the bass, which is 120 Hz and below. My guess is that Amir's impressions are driven by frequencies higher than that.
 

KaiserSoze
The main issue that I believe is bugging most of the people here (including me) is that you're doing very good measurements and very poor listening tests, and instead of trying to deal with the issue head on, i.e., do proper listening tests, you choose quite desperately to prove your point with very circumstantial evidence: your (glorious) past successes in various and kinda unrelated tests.

So unless there's some serious objective data, there's no reason to suspect that the SVS Ultra is worse than the M106, which measures quite similarly (and costs double), while you rated the former 3/5 and the latter 5/5 (unless I'm missing some pink panther gradation). Subjective and poorly conceived listening tests shouldn't really have a place in these reviews. This kind of stuff is the modus operandi of Stereophile, which poisoned this field for decades with its mishmash of somewhat useful objective measurements mixed with subjective crap/payola, diluting anything of actual value.

Uh, is it reasonable to ask Amir to do double-blind listening tests each time he encounters a speaker that doesn't measure the same as it sounds to him? It seems to me that he is doing an awful lot of work for likely very little compensation already. And if we agree that this expectation probably is not reasonable, then would we ask Amir to refrain from sharing his opinion when a speaker doesn't measure the same as it sounds to him? Should we all exercise the same restraint with respect to our opinions? Why would the fact that he has taken it upon himself to perform all these measurements mean that he should have to give up the right to share his opinion on whether a speaker sounds good?

His past experiences are relevant and lend true credibility to his subjective assessments. There is quantitative evidence supporting this, which he has shared only because of all the heat he takes every time he shares a subjective assessment of a speaker that doesn't measure the same as it sounds to him. It is not an act of desperation for him to share that background with us, and it is patently unfair for anyone to characterize it as an act of desperation.

And I do not think it fair to compare Amir with what Stereophile has done over the years. For that matter, I'll even defend Stereophile, because notwithstanding that their articles have pushed a lot of snake oil and voodoo from time to time, their measurements were pretty much the only objective measurements of anything available for many years, until very recently.
 