
SVS Ultra Bookshelf Speaker Review

Foxxy

Member
Forum Donor
Joined
Jul 31, 2020
Messages
24
Likes
43
Location
Austria
At the same time, female vocals had very sharp extensions that were almost painful to listen to. This would come and go, of course, as the singing went along.

I played with EQ but after a while I gave up. Strangely, no matter what I did, I could not get rid of the brightness in vocals. I did bring down the upper bass and lower mid-range level, and that mostly helped, but then it exaggerated the highs.
Oh my god, it's a sibilant tweeter!! Those are my bane; I have a tendency toward tinnitus and sharp sibilants just trigger it. It's the worst sound signature there is, imo.
A sibilant tweeter is simply an evil, ringing mess. If something doesn't show up in measurements, that simply means the measurements are not exhaustive enough, not that it can't be measured. We don't have a laser interferometer, and as Bruno Putzeys said, he has a test battery of around 20 measurements that lets him exactly replicate any amp sound he wants. Our own test battery isn't even close to that. But it doesn't matter, because we aim for "specs so good the unmeasured stuff doesn't matter" anyway.
Don't forget that no speaker is "pure" (yet) so you always have sounding components.

Amir, if you still have the speaker, could you try something? Just brutalise the resonances at 1.5 and 5kHz. Like 15dB down. No mercy. Yes, it will of course completely mess up the frequency response and everything will sound wonky, but the test is: will this get rid of the sssssssst?

edit: Actually, try only the 5kHz ringing first. That should be enough. For those wondering why: we are doing what in recording is called de-essing. If you played the songs back in a DAW, you could probably load up a de-esser and treat the tracks so that they just sound right on the SVS.
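For anyone who wants to try the offline equivalent, here is a minimal sketch of the static version of that cut, as an RBJ Audio EQ Cookbook peaking biquad. The -15 dB depth and Q of 4 are just illustrative values taken from this post, not tuned settings, and a real de-esser would apply the cut dynamically, only on sibilant passages:

```python
# Minimal sketch of the static version of the experiment above: a deep
# peaking-EQ cut at 5 kHz (RBJ Audio EQ Cookbook biquad). A real de-esser
# applies such a cut dynamically, only when sibilant energy is detected.
import numpy as np
from scipy.signal import lfilter

def peaking_cut(x, fs, f0=5000.0, gain_db=-15.0, q=4.0):
    """Apply a peaking-EQ biquad to signal x. f0/gain_db/q are the
    illustrative values suggested in the post, not tuned settings."""
    a_lin = 10.0 ** (gain_db / 40.0)          # sqrt of the linear gain
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return lfilter(b / a[0], a / a[0], x)

# Example: cut a test tone sampled at 48 kHz.
fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 5000 * t)   # 5 kHz tone, right on the resonance
y = peaking_cut(x, fs)             # comes out roughly 15 dB quieter
```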
 

TimVG

Major Contributor
Joined
Sep 16, 2019
Messages
1,181
Likes
2,573
And I don't think it should care about that. Maybe you're not claiming that it should either, but in any event the point of the research was to explore what factors correlate with listener preferences using speakers as designed, as consumers would. What I think might be a much more important limitation is one you identified--the equal weighting of vertical and horizontal dispersion (at least I think that's a feature of the score). I absolutely agree, based only on my listening and designing experience, that horizontal dispersion characteristics are much more important than vertical. But I'm still not sure why. Do you have any insight on this?

Our binaural hearing is set up to function in the horizontal plane. Simply put, our ears are positioned the wrong way to function well in the vertical plane.
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,878
Likes
6,673
Location
UK
Very interesting review. I too thought this speaker would sound good with some EQ when looking at the spinorama, because the local variations are quite small (as in, it's quite smooth locally) but with some broad humps & dips... so I figured it would be super easy to EQ to a flat response. This obviously proved not to be the case when I read the listening & EQ tests... which goes to show that there are likely factors other than direct sound (ON & LW) that influence a speaker's listening preference. Over in the recent thread entitled "Equalizing loudspeakers based on anechoic measurements" (https://www.audiosciencereview.com/...nechoic-measurements-community-project.14929/) there are a number of people concluding/hypothesising that directivity is a major factor in speaker enjoyment.
 

Francis Vaughan

Addicted to Fun and Learning
Forum Donor
Joined
Dec 6, 2018
Messages
933
Likes
4,697
Location
Adelaide Australia
the best you can ever do by going over old data and looking for correlations is form a new hypothesis, not demonstrate it.
There are deep truths and issues here. IMHO it is a bit more nuanced, but the above cuts to the core of what is reasonable. This is a huge part of the reproducibility crisis in science. You can mine pre-existing data to demonstrate an existing hypothesis, but what you cannot do is derive both your hypothesis and its demonstration from the same data. Yet we had decades of social science, psychology, and even harder sciences doing exactly that.
Modern research protocols are designed to ensure this sort of thing does not happen. An ideal research programme would ensure that a researcher pre-registers their hypothesis before they get access to existing data, and that they are not allowed to move the goalposts once they start analysis. This isn't a trivial problem. Things break down when data is very expensive or otherwise hard to obtain. Pseudo-science becomes a serious risk. The academic regime of publication being linked to career success, and even to having a career at all, plus the manner in which journals are only interested in new results, especially sexy new results, makes the entire edifice rot from the core. It is being fixed, but slowly.

So, what about the Olive score? It has not been subject to what we would consider the rigour of modern best practice. Nor, I suspect, was it intended to be. Everyone likes reducing things to a simple numerical score; it avoids having to think. But often it just isn't reasonable to do so. Even just looking at the spinorama, there isn't a simple score: there is a huge amount of data and, at best, some guidance about which parts are more important than others. Imagining that there is a simple way of creating a total ordering of speakers is just plain naive. The spinorama contains the vast majority of information about how a speaker responds. Simple numerical scores derived from the spinorama do not.
 

Vini darko

Major Contributor
Joined
Jun 1, 2020
Messages
2,280
Likes
3,395
Location
Dorset England
Looking forward to the LRS measurements. So bored of poxy bookshelf speakers. It'd be nice to see something different.
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,878
Likes
6,673
Location
UK
Well said! I initially had these set up nearfield and felt they sounded pretty good, but once I moved them into a bigger room they were terrible. I sold them soon after.
I guess that goes towards proving that directivity is important and is the reason for the dislike of this speaker in Amir's listening test. Nearfield means mostly direct sound; further-afield listening means reflections plus direct sound (hence directivity's greater involvement), as the rough calculation below illustrates. Does Olive's preference score take directivity into consideration, and if it does, perhaps not to a great enough extent?
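A back-of-the-envelope way to see the nearfield/farfield difference is the critical distance, where direct and reverberant sound are equally loud. The room volumes, RT60s, and directivity factor below are made-up illustrative numbers, not measurements of any actual room:

```python
# Back-of-envelope for the direct-vs-reflected point above: the critical
# distance is where direct and reverberant sound are equally loud. Listen
# well inside it and the room (and thus off-axis directivity) matters less.
import math

def critical_distance(volume_m3, rt60_s, directivity_q=1.0):
    """d_c ~= 0.057 * sqrt(Q * V / RT60)  (Sabine-based approximation)."""
    return 0.057 * math.sqrt(directivity_q * volume_m3 / rt60_s)

# Illustrative numbers, not measurements of any actual room:
small_room = critical_distance(volume_m3=30.0, rt60_s=0.4, directivity_q=2.0)
big_room = critical_distance(volume_m3=100.0, rt60_s=0.6, directivity_q=2.0)
print(f"small room: {small_room:.2f} m, big room: {big_room:.2f} m")
# ~0.70 m vs ~1.04 m: at a 3 m seat you are far past d_c either way,
# but nearfield listening at under 1 m sits near or inside it.
```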
 

maty

Major Contributor
Joined
Dec 12, 2017
Messages
4,596
Likes
3,161
Location
Tarragona (Spain)
https://www.audioholics.com/bookshe...elf-speakers/sound-quality-tests-measurements

[Image: SVS Ultra Bookshelf frequency response (Audioholics)]

[Image: SVS Ultra Bookshelf impedance and phase (Audioholics)]
 


daftcombo

Major Contributor
Forum Donor
Joined
Feb 5, 2019
Messages
3,687
Likes
4,068
@amirm, from your measurements, this speaker is almost distortion-free (for humans at least). Was that evident on listening?

As for the harshness, I also suspect either a directivity error or bad time alignment at the crossover. If it is a directivity error at 3 or 4 kHz, a high shelf around 10 kHz could make things even drier and more fatiguing, IME. I have spent literally hundreds of hours trying to EQ my Epos K3, which have the same kind of error, and gave up. Every EQ made it worse.
 

q3cpma

Major Contributor
Joined
May 22, 2019
Messages
3,060
Likes
4,416
Location
France
I honestly can't fathom why it is a big deal when someone realizes the measurements are not a complete replacement for listening. Why is that?
It is a complete replacement, though. Interpretation and visualization, on the other hand...

There are zero perfectly accurate to the "source" sound reproducers so we must choose.
Where did you get this idea? Either you listen to the album in the studio where it was made, or you get some "perfect" speakers with controlled cancellation down to very low frequencies plus room treatment, so you hear only the recording and not the room. Seems doable these days.

I just listened to both the Infinity R162 and ELAC B6.2. They have a similar score and reasonably similar data when averaged out +'s & -'s. Both EQd closely to my house curve. Both sound pretty darn nice for the price and most buyers in the price class would be completely shocked by performance for $ and happy to have either. Done deal.
Yet they do not sound the same. I would much rather have the R162. The only way for me to find this out was to listen in my room, even with all that fun and great data.
Based on the data I predicted a tie, yet I assure you for me it is one over the other. Now additionally my money is on a tie 10-10 if twenty folks take a blind test. I can see how some will prefer the ELAC.
That (hypothetical) tie still changes nothing for me personally because I like the R162 more.
I hope you don't pretend there's some "magic" something in the air that can't be captured. Personally, I think an AP providing a weighted, masking-aware distortion metric like Gedlee's is the final hole to fill.
 

thewas

Master Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
6,758
Likes
16,225
The woofer appears to be a Peerless by Tymphany and the tweeter might be a variation of this Peerless by Tymphany
Thank you. It wouldn't surprise me if it's the same or a similar model, which also shows resonance problems around 5k; here from the manufacturer spec sheet
[Image: manufacturer spec sheet frequency response]


which also showed up in the ASR measurements (red curve)

[Image: ASR measurement, resonance visible in the red curve]


and being only 25 dB lower than the total SPL, together with the delayed decay

[Image: ASR decay measurement]


could be the reason for the heard harshness. It would have been interesting to test it with an additional RLC suction circuit in the woofer crossover around that frequency, to see if that solves the heard problem. A rough starting point for the component values is sketched below.
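As a rough starting point for such a suction circuit (a series RLC shunted across the woofer), the component values can be estimated from the target frequency plus a chosen Q and series resistance. The Q and R below are assumptions for illustration only; a real design would have to be simulated against the actual driver impedance and crossover:

```python
# Rough starting point for the RLC suction circuit idea above: a series
# R-L-C placed in parallel with the woofer shunts current around its
# resonant frequency. Values here are a first guess; a real design must
# be simulated against the actual driver impedance and crossover.
import math

def suction_circuit(f0_hz, q, r_ohm):
    """Series RLC notch: f0 = 1/(2*pi*sqrt(L*C)), Q = 2*pi*f0*L / R."""
    w0 = 2.0 * math.pi * f0_hz
    l_henry = q * r_ohm / w0
    c_farad = 1.0 / (w0 ** 2 * l_henry)
    return l_henry, c_farad

# Target the ~5 kHz resonance; Q and R are assumptions, not derived values.
L, C = suction_circuit(f0_hz=5000.0, q=3.0, r_ohm=4.0)
print(f"L = {L * 1e3:.3f} mH, C = {C * 1e6:.2f} uF")
# -> roughly 0.38 mH and 2.65 uF
```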
 

somebodyelse

Major Contributor
Joined
Dec 5, 2018
Messages
3,682
Likes
2,962
Why do companies do this? Like all the time?
Standards, once ingrained, take some effort to change. 3/4" was adopted ~100 years ago and the component manufacturers are still using it. Companies big enough to be investing in custom moldings can do something different, but everyone else is picking from what's available in the catalog.
 

ctrl

Major Contributor
Forum Donor
Joined
Jan 24, 2020
Messages
1,616
Likes
6,088
Location
.de, DE, DEU
We have already seen such outliers more than once: a good spinorama and unobtrusive HD that nevertheless got a bad sound evaluation from @amirm.
After listening to more than seventy loudspeakers in the same environment, I would definitely concede to @amirm that, at least with regard to bookshelf speakers, he has a far above-average overview of their sound characteristics.

A good spinorama of a loudspeaker is only a necessary condition for good sound, not a sufficient one. The sound tuning by the loudspeaker developer is still decisive. So +0.3dB over one or two octaves can mean the difference between "meh!" and "wow!".

So if a loudspeaker measures super well and performs less well in @amirm's hearing test (and @amirm has no hearing damage ;)), the most obvious reason is simply human error in crossover tuning (or time pressure during the final tuning of the loudspeaker).

Whether the resonance visible in the CSD at 5kHz is mainly responsible for the bad sound impression is hard to say, because the constantly changing scaling of the CSD measurements does not allow a comparison with Amir's other measurements.

It should be noted, however, that the decay window in the CSD only starts at about 1.8ms and already ends at 4ms, and the resonance at 5kHz has decayed to -20dB by then.

based only on my listening and designing experience, that horizontal dispersion characteristics are much more important than vertical.
I'd confirm that - in my experience.
 

mtmpenn

Active Member
Forum Donor
Joined
May 26, 2020
Messages
134
Likes
218
Nice measuring speaker.

I think a major takeaway, for me, is not to worry too much about the subjective part of the review.

This speaker measures well. How much its relatively minor imperfections are a problem is likely to be highly individual (with respect to the room, the listener's ears, and their preferences).

I suspect that if you listened in the same room to a stereo pair of these, the ELAC DBR-2, a set of KEFs, and some Revels, they would all sound different. Some would prefer one speaker, others a different one. All of the speakers would be doing a commendable job.

It’s great to know that there are a number of excellent options at attainable price points.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
I have added the SVS Ultra Bookshelf to Loudspeaker Explorer where it can be compared to other speakers.

Good consistency within the listening window… as long as you keep the speaker at ear height:

[Image: Loudspeaker Explorer listening window chart]


I'll note that this speaker is an example of a case where the Listening Window average looks a bit better than any response you can actually obtain at any single angle, which makes it somewhat misleading; the toy example below shows how averaging produces that effect.
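Average a handful of synthetic off-axis curves whose ripples sit at slightly shifted frequencies, and the mean comes out far flatter than any individual curve. Synthetic data, not the SVS measurements:

```python
# Toy illustration of the point above: averaging several curves whose
# ripples drift with measurement angle yields a mean that is flatter than
# anything you would measure at one angle. Synthetic data, not SVS data.
import numpy as np

freqs = np.geomspace(200, 20000, 500)
angles_deg = np.array([-30, -20, -10, 0, 10, 20, 30])

# Each angle gets a 3 dB ripple whose phase shifts with the angle, the way
# off-axis interference patterns drift as the measurement angle changes.
curves = np.array([3.0 * np.sin(2 * np.pi * np.log10(freqs) + np.deg2rad(a) * 6)
                   for a in angles_deg])

lw_average = curves.mean(axis=0)  # what the Listening Window average does
print(f"worst single-angle deviation: {np.abs(curves).max():.1f} dB")
print(f"worst deviation of the average: {np.abs(lw_average).max():.1f} dB")
# The average ends up several times flatter than any curve you could
# actually measure at one angle -- the effect described above.
```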
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
Olive went into the study with a clear idea about what the correlated data were, and what figures of merit to use. The statistical work was finding the factor to apply to these numbers to yield ordering of preference.

That's only true to a certain extent. The established hypothesis at the time of the Olive study was the @Floyd Toole hypothesis "smooth, flat on-axis, followed by smooth directivity". If Olive was really looking to confirm that hypothesis, he would only have used ON (or LW), ERDI, and SPDI when training the model. But that's not what he did - he threw a bunch of random stuff in there, such as SP, ER and, crucially, PIR. The fact that ON+PIR is what came out of the training is causing a lot of controversy (see for example the never-ending debate between @QMuse and myself on this thread) because, according to Toole, we hear ON/LW, ERDI and SPDI, but we don't hear PIR, so the Olive model appears to contradict Toole. (My explanation is that the model is exploiting a spurious correlation. Not everyone agrees with my explanation.) For reference, the commonly quoted form of the trained model is sketched below.
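This is the 2004 model as it is usually reproduced; the coefficients are quoted from memory of the paper, so verify against the original before relying on the numbers. Note which inputs survived training: only ON- and PIR-based metrics, plus low-frequency extension.

```python
# The Olive (2004) preference model as commonly quoted -- coefficients
# reproduced from memory of the paper, so verify before relying on them.
# Note that only ON- and PIR-based metrics (plus LFX) survived training.
def olive_preference_rating(nbd_on, nbd_pir, sm_pir, lfx):
    """nbd_on / nbd_pir: narrow-band deviations of the on-axis and
    predicted in-room responses (dB); sm_pir: smoothness (r^2) of a
    regression line through the PIR; lfx: log10 of the low-frequency
    extension in Hz. Higher output = higher predicted preference."""
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir + 2.32 * sm_pir - 4.31 * lfx
```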
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
Regarding the score of this speaker… its two immediate neighbours (with sub) are the KEF R3 and the Revel M106, which have similar scores and similar ON/PIR breakdowns. And if we look at the spinoramas side by side, I would tend to agree with the score, in the sense that it's not really obvious that any one of these three speakers is better than the others:

[Images: Loudspeaker Explorer spinorama comparison charts for the SVS Ultra Bookshelf, KEF R3, and Revel M106]
 

jazzendapus

Member
Joined
Apr 25, 2019
Messages
71
Likes
149
I only see someone realizing that the data does have limitations.

The more sensible conclusion from the mismatch between the measurements and Amir's impression is that Amir might have limitations and reliability issues, not the method. He does mention before every review of their speakers that he might have a bias towards Revel, so I choose to go with that explanation rather than some mysterious unknown objectively affecting actual performance ;)
 