
SVS Ultra Bookshelf Speaker Review

hardisj

Major Contributor
Reviewer
Joined
Jul 18, 2019
Messages
2,907
Likes
13,914
Location
North Alabama
I got sucked into an OT discussion about how literally to take the subjective evaluation vs. the objective one, and never even got around to looking at the dang data. So, I went back. Some things I notice:
  • Pyramid baffle; I assume to help with diffraction concerns.
  • Is that port adequate? It seems small. But maybe that's just because I'm used to seeing larger woofers with lower tuning.
  • A wide-Q bump, about an octave wide from 700-1500Hz, would make things sound a bit boomy/resonant in the mid-to-upper vocals. The 1-2kHz region is also responsible for "bite" and could be a bit harsh, depending on the content.
  • That dip/flat/dip pattern in the upper midrange through lower treble would mess with the presence region (2-5kHz).
  • The early reflections curve deviates in the 800Hz and 2kHz regions. The former will make vocals more "boxy" (echo-y), and the latter is going to affect presence even more, because what you hear first won't agree with what comes to you shortly after.
  • Port resonance ~800Hz. That explains the boost in that region, and you can bet it's gonna sound resonant with the right music to show it off. (A rough sanity check on this, and on the bump's Q, is sketched after this list.)
  • Is that a 1st-order LPF on the midwoofer?
  • The tweeter's NF doesn't jibe with the anechoic FF measurement >10kHz: the NF decreases while the FF increases above this frequency. Odd. This may be more of an artifact of the NF measurement's HF limit (discussed here if you are interested).
  • The vertical directivity plot shows the woofer playing well out to 5kHz at 100° down. Yet there's a null in the response at around 2.5kHz. That's interesting. You'd generally expect the response to be wider there than an octave above it; yet an octave above, the woofer is still radiating a lot of energy off-axis below. I'd have to think about it to visualize why this is happening. I know the shallow LPF plays a role here, but it doesn't (fully, at least) explain the observation.
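A rough sanity check on the port-resonance and bump-Q bullets above, as a minimal Python sketch. The 0.21 m port length is a made-up illustration (not SVS's actual spec), and the half-wave pipe model ignores end corrections, so treat the numbers as ballpark only:

```python
# Back-of-envelope checks for two of the bullets above.
# Assumptions: speed of sound c = 343 m/s; port modeled as a pipe
# open at both ends with no end correction (real ports resonate a
# bit lower); the 0.21 m port length is purely illustrative.

import math

c = 343.0  # speed of sound, m/s

def half_wave_pipe_resonance(length_m: float) -> float:
    """First pipe resonance of a port open at both ends: f = c / (2L)."""
    return c / (2.0 * length_m)

def q_from_band_edges(f_lo: float, f_hi: float) -> float:
    """Q of a bump from its band edges: Q = f0 / BW, with f0 = sqrt(f_lo * f_hi)."""
    return math.sqrt(f_lo * f_hi) / (f_hi - f_lo)

# A ~0.21 m port puts the first pipe resonance near 800 Hz:
print(f"{half_wave_pipe_resonance(0.21):.0f} Hz")     # ~817 Hz

# A 700-1500 Hz bump is roughly an octave wide, i.e. Q ~ 1.3:
print(f"Q ~ {q_from_band_edges(700.0, 1500.0):.2f}")  # ~1.28
```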

Feel free to agree/disagree. That's how we learn, after all.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,657
Likes
240,874
Location
Seattle Area
The NFS should effectively act like anechoic, so if what you say is true then the measurements should be suspect?
The waterfall as I have shown in this review is NOT anechoic. I can make it so, as I often do, but did not in this case.
 

jazzendapus

Member
Joined
Apr 25, 2019
Messages
71
Likes
150
It's unfortunate Amir's credentials are frequently under fire. It could be that, unlike widely recognised jobs like 'medical doctor' or 'tennis player', people just don't think about or get exposed to specialist work performed in specific industries, say odor testers or Pantone color experts.
It's not the credentials that are under attack, it's the fact that he doesn't seem to understand that they are irrelevant, as they do not include the (superhuman) ability to nullify (mostly psychological) biases that are inherent in every sighted and subjective test. His skill for hearing low level distortion and other artifacts doesn't relate in any way to the required skill of controlling for and diminishing those heaps of intervening factors unrelated to actual sound.

The objective comparison between SVS Ultra and M106 doesn't suggest anything that would knock the former's rating two panthers down, so you can fill the rest of the thread with countless hypotheses about why he heard what he heard, but considering that blind testing speakers is not on the agenda, those hypotheses will remain untested. So what's the point in going on with that?
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,657
Likes
240,874
Location
Seattle Area
It's not the credentials that are under attack, it's the fact that he doesn't seem to understand that they are irrelevant, as they do not include the (superhuman) ability to nullify (mostly psychological) biases that are inherent in every sighted and subjective test. His skill for hearing low level distortion and other artifacts doesn't relate in any way to the required skill of controlling for and diminishing those heaps of intervening factors unrelated to actual sound.
You are very confused about this as are others.

When testing for differences that are extremely small, bias plays a huge role. After all, in sighted tests many would say two DACs sound different, but once levels are matched and the test is blind, those differences vanish.
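As a minimal sketch of what "levels are matched" means in practice, here is how one might trim two devices to the same output level before a blind comparison. The array names, capture lengths, and noise stand-ins are illustrative, not from any actual test rig:

```python
# Minimal sketch of the level-matching step that precedes a blind
# DAC/amp comparison: measure each device's RMS output on the same
# test signal and trim one until the two agree (typically to within
# ~0.1 dB). The random arrays below are stand-ins for real captures.

import numpy as np

def rms_db(x: np.ndarray) -> float:
    """RMS level in dB (relative to full scale)."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(x))))

def matching_gain_db(capture_a: np.ndarray, capture_b: np.ndarray) -> float:
    """Gain (dB) to apply to device B so it matches device A's level."""
    return rms_db(capture_a) - rms_db(capture_b)

rng = np.random.default_rng(0)
capture_a = 0.50 * rng.standard_normal(48_000)
capture_b = 0.45 * rng.standard_normal(48_000)

gain = matching_gain_db(capture_a, capture_b)
capture_b_matched = capture_b * 10.0 ** (gain / 20.0)
print(f"applied {gain:+.2f} dB; residual mismatch "
      f"{rms_db(capture_a) - rms_db(capture_b_matched):+.3f} dB")
```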

When comparing speakers, the above does not remotely hold. We can guarantee that if someone says two speakers sound different sighted, they would say the same in blind testing.

So generalizations about bias absolutely do NOT hold with respect to speakers.

It is true that controlled testing of speakers has shown that bias does enter preference scores. A big and pretty tower speaker will bias lay audiophiles against a tiny Bluetooth speaker. Trained listeners, though, are immune to these kinds of extreme bias for the most part. Not perfectly, but for the most part.

This is why in industry we use trained listeners in sighted evaluation all the time. Nothing would ever get built if we set up elaborate, multi-listener controlled tests for every design decision. This is where your (and other people's) lack of experience pollutes your views. You are running with talking points about objectivity without any personal, first-hand experience in this field.

Look at the very research that advocates blind listening for speakers:

[Figure: blind vs. sighted mean loudspeaker ratings]


Look at the first speaker on the left. Blind and sighted scored it the same as the speaker to the right of it. Same for the speaker all the way to the right, where blind and sighted even got equal scores. Only speaker 3 reversed fortunes, but look at the scale: we are talking about 5.8 versus 6.3. Take the error bars into account and the difference becomes almost academic.

Notice how the scale of the graph runs from 5 to 8 to accentuate the differences shown. There is salesmanship in there, something that even Dr. Toole has observed.

All of this makes sense, of course. Speakers do sound tremendously different from each other. Telling that one has more bass than another is unlikely to be due to placebo.

Do we want to rely on the sighted impressions of untrained listeners when it comes to speakers? Probably not. They don't listen correctly, are not trained to know what to listen for, and are unaided by objective information about the sound of the speaker. But when a trained listener with those skills talks, you had better take notice. Discounting it flippantly based on primitive talking points about objectivity is just silly.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
In a nutshell: after a scientific analysis, it was determined the soup was based on water, tomato, bread, onion, salt, pepper, vinegar and some bubbles of air.
Viscosity and density were reported as top secret, not revealed.
Temperature at testing headquarters was 23 Celsius. Pressure was 907 hPa.
Serving temperature was 33 Celsius. Spoons and plates were kept at the reported ambient temperature.

Many people liked it very much, asking for another helping.
And many other people didn't enjoy it at all.

Easy, isn't it?

If you were meaning to use food as an analogy to speakers, that doesn't work. Music is akin to the flavours and textures of food/drink, not speakers. Hi-fidelity audio reproduction is a recipe to recreate fine listening (dining) at home. Follow a chef's (artist's) recipe exactly, and you'll recreate the flavours and textures (music) they intended you to experience. Salt and pepper (DSP/EQ) can optionally be added at home to taste, e.g. if the chef isn't too great, once the flavours have been tried (music heard); not by piling it in blind during cooking and destroying the balance of flavours (using DACs/amps/speakers that muck up tonality or add huge amounts of distortion to everything, differing from the ones used when mastering the music). Only blind testing (tasting!) will tell you your true preferences though, with all biases removed. Otherwise you could end up fooling yourself, and anyone else who blindly believes your (not blinded!), subconsciously biased 'preferences'. Just one of those many subconscious cognitive biases is pricing bias, which, along with innumerable others (that can combine in unpredictable ways), every single person is susceptible to; this has even been demonstrated at the neurological level, with such biases activating the reward center of the brain:

 

CDMC

Major Contributor
Forum Donor
Joined
Sep 4, 2019
Messages
1,172
Likes
2,321
I don't know if it's "universal" or a USA thing, but why is every comment or critique called an attack?
There's no war, violence, or fighting; no such thing as attacks here.

Your comments completely cross the line from constructive criticism into personal insults and attacks. The fact that you cannot or choose not to see it is your own shortcoming, but it is clear to others.
 

Cahudson42

Major Contributor
Joined
Sep 21, 2019
Messages
1,083
Likes
1,557
Well, when it comes to spending what little $$ I have, I would rather see a false negative than a false positive... therefore I will continue to look for both positive objective measurements plus a positive subjective opinion from Amir before a purchase.

There are plenty of 'speaker fish' in the ocean to choose from, so why not? :)
 

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,559
Likes
1,703
Location
California
There's a ton of off-topic postings here; can we stick to the OP and focus the discussion on the data presented, please?

If you don't value the subjective bit of the review, feel free to disregard it; it's free, after all!

Thank goodness.
Perhaps we should start a separate thread for folks who don't believe that unblinded listener impressions are valid, complete with a link to a GoFundMe so Amir can purchase an Automated Speaker Shuffler to use just for them.
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
It's not the credentials that are under attack, it's the fact that he doesn't seem to understand that they are irrelevant, as they do not include the (superhuman) ability to nullify (mostly psychological) biases that are inherent in every sighted and subjective test. His skill for hearing low level distortion and other artifacts doesn't relate in any way to the required skill of controlling for and diminishing those heaps of intervening factors unrelated to actual sound.

The objective comparison between SVS Ultra and M106 doesn't suggest anything that would knock the former's rating two panthers down, so you can fill the rest of the thread with countless hypotheses about why he heard what he heard, but considering that blind testing speakers is not on the agenda, those hypotheses will remain untested. So what's the point in going on with that?

Certainly it is true that when Joe Everyman tweaks his speakers by replacing the perfectly good capacitors with different capacitors that also are perfectly good and then invites his buddies over for a listening session, bias is going to play a strong role in what they claim they hear. In this kind of scenario the only way to avoid the bias would be to set up a double-blind test (although it isn't really possible to do it properly in any home environment). In general, double-blind testing is needed in testing fringe theories concerning the audibility of whatever. But you are saying that no evaluation of a speaker's sound quality is ever valid unless it is done according to double-blind protocol, because, you say, there will always be bias that will pollute the findings. This is a plausible hypothesis. But is it correct? What studies do you know of that have demonstrated this to be the case? Not only would this be a difficult thing to prove, but it is purely a supposition on your part. What evidence-based reason do you have for this belief?

My gut sense is that Amir is simply telling us what he heard. I have zero reason to think otherwise, and I think there is a good amount of evidence supporting this perspective, that Amir is simply and honestly telling us what he heard, and that his subjective assessment of what he heard has strong value and validity. There is a lot of evidence showing that trained listeners do have a learned ability to select speakers that correlate well with the sound that the majority of people prefer, and to detect flaws that are characteristic of speakers that are not preferred by the majority of people. In addition to evidence of this variety, there is also other evidence, both anecdotal and measurement-based, that jibes with Amir's subjective assessment of this speaker. In fact there are several people who have reported on this forum that they own this speaker or once did and that they weren't happy with the sound, for reasons that seem similar to Amir's reasons for not liking the speaker. All in all I do not find any reason to doubt his subjective assessment. I would not put the same sort of trust in the subjective reviews you find in various web forums, because I have no idea who all those people are or what agenda they may be pushing. It may be that Amir is biased to some degree toward the sound of Revel speakers, but given how Revel speakers typically measure, this is probably a good bias for him to have. And even if he has a bias toward the sound of Revel speakers, I don't think there is reason to jump to the conclusion that this would ruin his subjective assessment of this SVS speaker or most any other speaker.

The point I am trying to make is that you are insisting that Amir's subjective assessment of the speaker is corrupt on the basis that it did not follow a rigorous protocol, but you haven't provided any evidence that supports your contention that he would need to have followed that protocol before his subjective assessment would be valid. You can't just say, "Your findings are not valid because you did not follow the protocol that I think you should have followed." When there is a specific protocol that is rigorously followed when doing some specific kind of scientific inquiry within a given branch of science, that protocol is followed because the acknowledged experts within that branch of science have collectively agreed that for the kind of work they are doing it is necessary to follow a specific protocol. Nothing of this sort has happened within the field of engineering of audio recording and playback equipment, and I think that there are good and practical reasons for why nothing of this sort has happened. As someone on this forum pointed out yesterday, if Amir were to put himself through the trouble of doing a double-blind listening evaluation with every speaker he tests, the number of speakers he'd be able to test would be reduced to a fraction of what it is now. And he would have even less time to spend with his dog and the other members of his family.
 

patate91

Active Member
Joined
Apr 14, 2019
Messages
253
Likes
137
Certainly it is true that when Joe Everyman tweaks his speakers by replacing the perfectly good capacitors with different capacitors that also are perfectly good and then invites his buddies over for a listening session, bias is going to play a strong role in what they claim they hear. In this kind of scenario the only way to avoid the bias would be to set up a double-blind test (although it isn't really possible to do it properly in any home environment). In general, double-blind testing is needed in testing fringe theories concerning the audibility of whatever. But you are saying that no evaluation of a speaker's sound quality is ever valid unless it is done according to double-blind protocol, because, you say, there will always be bias that will pollute the findings. This is a plausible hypothesis. But is it correct? What studies do you know of that have demonstrated this to be the case? Not only would this be a difficult thing to prove, but it is purely a supposition on your part. What evidence-based reason do you have for this belief?

My gut sense is that Amir is simply telling us what he heard. I have zero reason to think otherwise, and I think there is a good amount of evidence supporting this perspective, that Amir is simply and honestly telling us what he heard, and that his subjective assessment of what he heard has strong value and validity. There is a lot of evidence showing that trained listeners do have a learned ability to select speakers that correlate well with the sound that the majority of people prefer, and to detect flaws that are characteristic of speakers that are not preferred by the majority of people. In addition to evidence of this variety, there is also other evidence, both anecdotal and measurement-based, that jibes with Amir's subjective assessment of this speaker. In fact there are several people who have reported on this forum that they own this speaker or once did and that they weren't happy with the sound, for reasons that seem similar to Amir's reasons for not liking the speaker. All in all I do not find any reason to doubt his subjective assessment. I would not put the same sort of trust in the subjective reviews you find in various web forums, because I have no idea who all those people are or what agenda they may be pushing. It may be that Amir is biased to some degree toward the sound of Revel speakers, but given how Revel speakers typically measure, this is probably a good bias for him to have. And even if he has a bias toward the sound of Revel speakers, I don't think there is reason to jump to the conclusion that this would ruin his subjective assessment of this SVS speaker or most any other speaker.

The point I am trying to make is that you are insisting that Amir's subjective assessment of the speaker is corrupt on the basis that it did not follow a rigorous protocol, but you haven't provided any evidence that supports your contention that he would need to have followed that protocol before his subjective assessment would be valid. You can't just say, "Your findings are not valid because you did not follow the protocol that I think you should have followed." When there is a specific protocol that is rigorously followed when doing some specific kind of scientific inquiry within a given branch of science, that protocol is followed because the acknowledged experts within that branch of science have collectively agreed that for the kind of work they are doing it is necessary to follow a specific protocol. Nothing of this sort has happened within the field of engineering of audio recording and playback equipment, and I think that there are good and practical reasons for why nothing of this sort has happened. As someone on this forum pointed out yesterday, if Amir were to put himself through the trouble of doing a double-blind listening evaluation with every speaker he tests, the number of speakers he'd be able to test would be reduced to a fraction of what it is now. And he would have even less time to spend with his dog and the other members of his family.

In fact, Floyd Toole produced a paper about blind testing and the industry (I think someone shared it in this thread).

Other fields have also shown that biases need to be attenuated as much as possible. Cognitive science has progressed in recent years, so training alone is not a valid argument (this is not specific to audio science).

Again, we understand that Amir doesn't have a corporation's budget and installation. However, a simple thing that can be done is to listen before measuring. It would also show that we trust him, because there's no way for us to verify what he's doing.

Amir has a lot of knowledge and experience, so he can elaborate on why his subjective listening impressions don't correlate with the data. If he doesn't have time to do that, it won't help the science part of ASR.
 

Thomas savage

Grand Contributor
The Watchman
Forum Donor
Joined
Feb 24, 2016
Messages
10,260
Likes
16,305
Location
uk, taunton
In fact, Floyd Toole produced a paper about blind testing and the industry (I think someone shared it in this thread).

Other fields have also shown that biases need to be attenuated as much as possible. Cognitive science has progressed in recent years, so training alone is not a valid argument (this is not specific to audio science).

Again, we understand that Amir doesn't have a corporation's budget and installation. However, a simple thing that can be done is to listen before measuring. It would also show that we trust him, because there's no way for us to verify what he's doing.

Amir has a lot of knowledge and experience, so he can elaborate on why his subjective listening impressions don't correlate with the data. If he doesn't have time to do that, it won't help the science part of ASR.
Not in this thread, though.

No more off-topic, please.
 

Matthew J Poes

Active Member
Technical Expert
Reviewer
Joined
Apr 20, 2020
Messages
159
Likes
548
The NFS should effectively act like anechoic, so if what you say is true then the measurements should be suspect? I find that pretty doubtful - it should show up on other waterfalls too if that's the case, right?

Anechoic simply means no reflections, not no room noise. This approach also has little to do with CSD, which behaves the same on the Klippel as on any other system; it's still dependent on a quiet room.

The NFS is only relevant to obtaining the frequency response minus room reflections and extrapolating the full 360-degree sphere of sound. Temporal and non-linear issues don't get special treatment; I'm unaware of any way to do that.
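For readers unfamiliar with how a CSD (waterfall) is built, here is a minimal sketch: FFT the impulse response repeatedly, sliding the window start forward in time. The slice counts and step sizes are illustrative, this is not Klippel's actual processing, and `ir` stands in for a measured impulse response:

```python
# Minimal cumulative spectral decay: one spectrum per decay time,
# each computed from the impulse-response tail starting at that time.
# A real CSD also applies a tapered window to each tail to suppress
# truncation ripple; omitted here for brevity.

import numpy as np

def csd(ir: np.ndarray, fs: float, n_slices: int = 30,
        slice_step_ms: float = 0.1, n_fft: int = 4096):
    """Return (freqs, times_ms, magnitudes_db) for a simple CSD."""
    step = int(fs * slice_step_ms / 1000.0)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)
    slices = []
    for k in range(n_slices):
        tail = ir[k * step:]
        spec = np.fft.rfft(tail, n=n_fft)   # zero-pads short tails
        slices.append(20.0 * np.log10(np.abs(spec) + 1e-12))
    times_ms = np.arange(n_slices) * slice_step_ms
    return freqs, times_ms, np.array(slices)

fs = 48_000.0
t = np.arange(int(0.02 * fs)) / fs
# Toy impulse response: a 3 kHz resonance decaying over a few ms
ir = np.exp(-t / 0.002) * np.sin(2 * np.pi * 3000.0 * t)

freqs, times_ms, mags = csd(ir, fs)
print(mags.shape)  # (30, 2049): one spectrum per decay time
```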
 

F1308

Major Contributor
Joined
May 24, 2020
Messages
1,059
Likes
917
If you were meaning to use food as as an analogy to speakers, that doesn't work. Music is akin to the flavours and textures of food/drink, not speakers. Hi-fidelity audio reproduction is a recipe to recreate fine listening (dining) at home. Follow a chef's (artist's) recipe exactly, and you'll recreate the flavours and textures (music) they intended you to experience. Salt and pepper (DSP/EQ) can be added optionally at home to taste e.g. if the chef isn't too great, once the flavours have been tried (music heard), not by piling it in blind during cooking and destroying the balance of flavours (using DACs/amps/speakers that muck up tonality or add huge amounts of distortion to everything, differing from the ones used when mastering the music). Only blind testing (tasting!) will tell you your true preferences though, with all biases removed. Otherwise you could end up fooling yourself, and anyone else who blindly believes your (not blinded!), subconsciously biased, 'preferences'. Just one of those many subconscious cognitive biases is pricing bias, that along with innumerable others (that can combine in unpredictable ways), every single person is susceptible to, even demonstrated at the neurological level via such biases activating the reward center of the brain:


I am just saying that some people like Audi and some Mercedes.
And some of them, the people going for Audi, will avoid the A6 and go for the A3 or A8.
And you can do all the testing you want, after a great build and great engineering, and regardless of how long or complex it all was, someone will nevertheless say he doesn't like the female voices coming out of it.
The moral of the story, the moral of this song, is simply that one will only keep the one one liked.
 

Shike

Member
Joined
Apr 27, 2020
Messages
65
Likes
115
The waterfall as I have shown in this review is NOT anechoic. I can make it so, as I often do, but did not in this case.

I see. Even beyond the CSD, I still think there are issues with this design around 5kHz based on the verticals. It feels like the woofer is still putting out too much energy. I'm just not seeing other issues that make me immediately think sibilance would be a problem, but maybe I'm blind.
 

Matthew J Poes

Active Member
Technical Expert
Reviewer
Joined
Apr 20, 2020
Messages
159
Likes
548
No, I am a doctor. I managed the entire division at Microsoft responsible for development of audio technologies, among others. I trained myself to hear small impairments in audio and became our top listener when it came to testing of our audio technologies. I have posted countless double-blind tests showing that I can easily hear artifacts that most people would not be able to whatsoever. I have provided some examples in this very thread.

Importantly, my job depended on being right when it came to listening tests. As were the fortunes of the company and my teams. Just the way your doctor is situated.

In this area, I am exceptionally familiar with the research and know the people behind it personally. My first exposure to it came from Dr. Toole himself. I have taken Harman training tests and increased my abilities in this area, as I reported earlier.

I don't consider myself a researcher, but neither is your doctor. I know how to listen carefully in ways that most people don't. The skill requires tons of training across countless trials, both professionally and informally.

It is the nature of subjective testing that one can't always be right. That's why we perform objective measurements. But for the most part we, the industry, rely on trained listeners for the bulk of product development. This is why Harman has trained listeners. It is not just for research work.
Amir, I have to speak in your defense on this. We swim in the same circles, and while we don't know each other, many of the people we both know have always spoken with great respect for you. Frequently, when discussing measurements with folks at Harman, NAD, etc., someone will mention a product measured by Amir. They have great trust and faith in both your measurement ability and your grasp of the subjective and objective science.

Objective measurements are worthless without a subjective context. That is the basis of Toole's and Olive's life work: to understand the relationship between objective measurements and subjective impressions. We still don't have this nailed 100%; it's a work in progress with more to learn. But we have a really good understanding of the relationship between those measurements and subjective impressions. Having said that, I appreciate your subjective impressions. Two speakers can both measure well enough and look OK, but sound quite different (note I did not say measure the same but sound different, since that is not true). The average consumer can't necessarily translate these measurement anomalies into subjective impressions, and my experience is that many have far higher beliefs in their own abilities than are warranted, misjudging a speaker's subjective performance from their erroneous interpretation of the data. That is something you are less apt to do, given your experience and knowledge of the literature.

Sighted listening tests are prone to bias and error, but that doesn't make listening tests useless. All consumer reviewers should measure and listen. For me, the biggest surprise (and why I try to develop my listening impressions blinded to measurements, by listening before I measure) is how often small errors sound awful, or seemingly large errors turn out more benign than expected.

The final point that I think needs to be made is that even Olive has noted there is evidence that preference is not totally homogeneous. Some groups of people like different things. Sean isn't the only person to see this. I recently read a white-paper report of blind headphone tests done by country/region that found pretty big differences in what was preferred. That doesn't make subjective impressions wrong, but it can mean that what one person prefers, another may not. That's why people should ultimately go with what they like. The industry should engineer competent products, but there is a lot of room for differences in those products (at least for speakers).

@amirm I appreciate your subjective impressions. They often help me feel more sane about my own subjective impressions.
 

Matthew J Poes

Active Member
Technical Expert
Reviewer
Joined
Apr 20, 2020
Messages
159
Likes
548
In fact Floyd/Tool produced a paper about blind test and the industry (I think someone shared in this thread).

Also other field proved that biaises needs to be attenuate as much as possible. Cognitiv science have progressed in the recent years, so the training is not valid argument (this is not an audio science related).

Again we understand that Amir doesn't have corporation's budget and installation. However a simple thing can be done to listen before measurements. It also proves that we trust him, because there's no way for us to verify what he's doing.

Amir has a lot of knowledge and experience, he can then elaborate on why his subjective listening impressions don't correlate with data. If je doesn't have time to do that, it won't help the science part of ASR.
I'm really not sure what you want from Amir. Best I can tell, you are blanket-criticizing subjective listening tests and want him to test himself by listening before he measures, and then what? Publish each blind to the other so we can see how accurate he is?

I don't see the point of that. It would only serve to validate the subjective utility of the objective measurements for one person, something Olive and Toole already did based on hundreds of people. I don't see this forum as being that kind of science. Amir provides consumers with information we should be able to get from manufacturers but can't.

I am unaware of any advancements in the perceptual sciences that would invalidate the training approach Harman used. In fact, quite the opposite. When it comes to perceptual listening tests, it seems nobody cares enough to further advance that field and modify our approach to such training and testing.

I just had a conversation with Sean in the last month about the sources of bias in MUSHRA- and ABX-style tests and their limitations. The main point was that most of the research around them was actually based on other senses: many of the texts establishing limits relied on tests of perception in other senses, where validation was easier. That makes it hard to establish just how sensitive these tests are.

I would argue that this science has not advanced very much and many questions remain open. The bigger problem is that nobody cares enough to do the research and address it.
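On the question of how sensitive these tests are: the standard way to quantify an ABX run is the probability of the observed score under pure guessing. A minimal sketch, with illustrative trial counts (not from any cited study):

```python
# Sensitivity of an ABX run: the chance of scoring k-or-more correct
# out of n trials by pure guessing (one-sided binomial test against
# p = 0.5). Trial counts below are illustrative.

from math import comb

def guessing_probability(k: int, n: int) -> float:
    """P(X >= k) for X ~ Binomial(n, 0.5): the ABX null hypothesis."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(f"12/16 correct: p = {guessing_probability(12, 16):.4f}")  # ~0.038
print(f"8/10 correct:  p = {guessing_probability(8, 10):.4f}")   # ~0.055
```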
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,657
Likes
240,874
Location
Seattle Area
Anechoic simply means no reflections, not no room noise. This approach also has little to do with CSD, which behaves the same on the Klippel as on any other system; it's still dependent on a quiet room.

The NFS is only relevant to obtaining the frequency response minus room reflections and extrapolating the full 360-degree sphere of sound. Temporal and non-linear issues don't get special treatment; I'm unaware of any way to do that.
Mostly correct. The NFS has an additional capability where it can create an inverse filter for the room reflections. This can then be convolved with the distortion or waterfall measurements to make them anechoic. I have shown that a number of times when I indicate a measurement is anechoic. Here is an example:

[Image: example of an anechoic-corrected measurement]


Without correction, low-frequency room modes accentuate the harmonic energy, and at high frequencies they create narrow troughs that show up as false spikes in THD percentage. And of course the on-axis response would not look as clean.

It is not a huge effect, though, so I don't always run it.
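Conceptually, the correction step is just a convolution of the measurement with the inverse filter. A minimal sketch of that step only; the identity kernel below stands in for a real NFS-derived filter, which is far more involved to construct:

```python
# Conceptual sketch of applying an inverse (room-removal) filter to a
# measurement: convolution in the time domain. The identity kernel is
# a placeholder; deriving a real inverse filter from the NFS field
# separation is a much bigger job.

import numpy as np
from scipy.signal import fftconvolve

def apply_inverse_filter(measurement: np.ndarray,
                         inverse_filter: np.ndarray) -> np.ndarray:
    """Convolve a measured response with an inverse filter, same length out."""
    return fftconvolve(measurement, inverse_filter, mode="full")[:len(measurement)]

fs = 48_000
rng = np.random.default_rng(1)
measurement = rng.standard_normal(fs)      # stand-in for a measured sweep/IR
inverse_filter = np.zeros(256)
inverse_filter[0] = 1.0                    # identity kernel placeholder

cleaned = apply_inverse_filter(measurement, inverse_filter)
print(np.allclose(cleaned, measurement))   # identity filter changes nothing
```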
 

Matthew J Poes

Active Member
Technical Expert
Reviewer
Joined
Apr 20, 2020
Messages
159
Likes
548
Mostly correct. The NFS has an additional capability where it can create an inverse filter for the room reflections. This can then be convolved with the distortion or waterfall measurements to make them anechoic. I have shown that a number of times when I indicate a measurement is anechoic. Here is an example:

[Image: example of an anechoic-corrected measurement]


Without correction, low-frequency room modes accentuate the harmonic energy, and at high frequencies they create narrow troughs that show up as false spikes in THD percentage. And of course the on-axis response would not look as clean.

It is not a huge effect, though, so I don't always run it.

I didn't realize the inverse filter could have that effect. Interesting for sure.

Usually, obtaining high-resolution distortion measurements of speakers requires very low noise in the mic, the system, and the room. I assume this is still true. Obviously, measuring max output or how distortion changes with level shouldn't matter. But typically, when you need to measure the distortion of a speaker within its linear range, you use mics with self-noise below 10dBA and rooms with a noise floor at or below 0dB. I know you can measure distortion below the noise floor, but it has a limit; I've always been told it's about 10dB below the noise floor, and that isn't accounting for mic noise, which I was told can't be addressed. It's part of why I bought such a quiet mic.

So in the CSD, those ridges look to me like noise. Is that your interpretation? I am admittedly looking at this on a phone, so I might be missing something, but I didn't see an associated peak in the response. It would seem highly unlikely to have such energy tails, as that would make the speaker non-minimum-phase.
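As a side note on pulling signals out from under the noise floor: coherent averaging of N synchronized captures lowers the noise by 10·log10(N) dB while the repeatable signal stays put. A minimal sketch with illustrative numbers (the tone here sits well below the time-domain noise floor, yet resolves in the spectrum, and more so with averaging):

```python
# Coherent averaging demo: a small "distortion" tone buried in noise
# becomes clearer as synchronized captures are averaged. Averaging N
# captures lowers the incoherent noise by 10*log10(N) dB.
# All numbers are illustrative.

import numpy as np

fs, n = 48_000, 48_000
t = np.arange(n) / fs
rng = np.random.default_rng(2)

tone = 1e-3 * np.sin(2 * np.pi * 1000.0 * t)   # repeatable signal

def tone_above_floor_db(captures: int) -> float:
    """dB margin of the 1 kHz bin over a crude median noise-floor estimate."""
    avg = np.mean([tone + 0.01 * rng.standard_normal(n)
                   for _ in range(captures)], axis=0)
    spec = np.abs(np.fft.rfft(avg * np.hanning(n)))
    bin_1k = int(1000 * n / fs)
    return 20.0 * np.log10(spec[bin_1k] / np.median(spec))

print(f"1 capture:   {tone_above_floor_db(1):.1f} dB above floor")
print(f"16 captures: {tone_above_floor_db(16):.1f} dB above floor")  # ~12 dB more
```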
 

Beave

Major Contributor
Joined
May 10, 2020
Messages
1,394
Likes
3,015
This is the Internet. You expect people with no direct experience in this field, most of whom have not even read or understood the research, to agree with people who have?

You are forgetting that people who post have their own agendas. Quoting them as evidence is not wise. If you want to disagree with your doctor, you had best quote other doctors, not lay people on the Internet.

I have provided plenty of evidence, as have others, for why you can't take a single-value Olive score as bible. Across a larger set of speakers, the correlation slipped to 86%, so you need to accept 24% of the time Harman listeners not agreeing with the score. And as it happens, that is my correlation score right now.

Remember once more: Harman itself does NOT use this scoring method. The work was abandoned, as seemingly was the research.

Does this mean the listeners were giving 110%? :p
 