
Effect of Loudspeaker Directivity Compared with In-room Measurements

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
681
Likes
785
I'm talking about cardioids in a thread where Kvalsvoll has posted interesting measurements.



So instead of engaging in a point-by-point discussion with me, you imply that I should go away?

Why can we not look at the effects of cardioid loading through the lens of what we do know about acoustics and psychoacoustics? Wouldn't THAT be what science demands, instead of telling the person who is actually doing so to go away because he doesn't have proof?

If science demanded proof before dialogue, it would not advance very fast.

Instead of going away, I invite you to respond to my previous post point-by-point.
I edited my post and did :)
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,866
Location
Princeton, Texas
I edited my post and did :)

Thank you!

I need to go work on an unproven design of my own right now, the weather's good today.
 

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
681
Likes
785
The data @Kvalsvoll posted in this thread shows the in-room results from using cardioid loading compared with a conventional loudspeaker. Maybe you don't see any evidence of worthwhile improvement in the data, but I do. Or maybe you don't accept the plausibility of audible improvement without "proof" in the form of peer-reviewed controlled blind testing regardless of the evidently improved smoothness of the off-axis energy, and that is certainly your privilege.
I did and still do find the data interesting. Interesting enough that I asked him to post the .mdat's so I could look at it in detail. Things took a bad turn when he refused to post the data. Why would he do that? He even made flimsy excuses for why he couldn't post the data. Why open a thread to show data when in the end the whole dataset is kept "confidential"? That doesn't sit well with me. Is there anything wrong with the data itself?

Then there are three manufacturers directly involved in cardioid design arguing against a single member who is simply taking the side of science. Smells fishy.
 

sigbergaudio

Major Contributor
Audio Company
Forum Donor
Joined
Aug 21, 2020
Messages
2,701
Likes
5,688
Location
Norway
I did and still do find the data interesting. Interesting enough that I asked him to post the .mdat's so I could look at it in detail. Things took a bad turn when he refused to post the data. Why would he do that? He even made flimsy excuses for why he couldn't post the data. Why open a thread to show data when in the end the whole dataset is kept "confidential"? That doesn't sit well with me. Is there anything wrong with the data itself?

Then there are three manufacturers directly involved in cardioid design arguing against a single member who is simply taking the side of science. Smells fishy.

We've posted graphs showing that our design clearly works. Is it "better"? Better in what respect? For whom? In what situation? We built the prototype in such a way that we could test it both as a sealed design and as a cardioid design. To me the improvement is obvious. Will you or every consumer out there come to the same conclusion? I have no idea.

Expecting anyone who claims a loudspeaker, or any other product created for enjoyment or recreation, is better than their previous model or the competition to prove it in a controlled test simply isn't how the world works; everyone makes these kinds of claims all day long in their marketing. So it's difficult to argue in a meaningful manner with someone who holds you to that standard.

The combination of measurements, listening tests, and loudspeaker theory makes me pretty confident this is a good approach to good-sounding speakers, and I suspect the same is the case for Kvalsvoll. And I also don't think any of us has claimed anything else.
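As background for readers following along: the directivity gain under discussion can be quantified with the textbook idealized patterns. The sketch below compares an ideal first-order cardioid against an omnidirectional (sealed-box, low-frequency) source; it is a generic illustration, not a model of any speaker in this thread.

```python
import math

def directivity_index_db(pattern, steps=100_000):
    """Directivity index of an axisymmetric pattern D(theta) relative to
    on-axis: DI = 10*log10(4*pi / integral over the sphere of |D|^2)."""
    dtheta = math.pi / steps
    total = 0.0
    for i in range(steps):
        theta = (i + 0.5) * dtheta          # midpoint rule over [0, pi]
        total += pattern(theta) ** 2 * math.sin(theta) * dtheta
    total *= 2 * math.pi                    # axisymmetric: integrate azimuth
    return 10 * math.log10(4 * math.pi / total)

omni = lambda theta: 1.0                              # monopole (sealed box at LF)
cardioid = lambda theta: 0.5 * (1 + math.cos(theta))  # ideal first-order cardioid

print(f"omni DI     ~ {directivity_index_db(omni):.1f} dB")      # ~0.0 dB
print(f"cardioid DI ~ {directivity_index_db(cardioid):.1f} dB")  # ~4.8 dB
```

The ideal cardioid's roughly 4.8 dB directivity index means it puts correspondingly less energy into the reverberant field for the same on-axis level, which is the mechanism behind the in-room decay differences discussed in this thread. Real cardioid loading only approximates this over part of the band.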
 
OP
Kvalsvoll

Addicted to Fun and Learning
Audio Company
Joined
Apr 25, 2019
Messages
888
Likes
1,657
Location
Norway
It's interesting how small companies are often held to higher standards than the large corporations who routinely make any number of claims about how some random "technology" or use of esoteric materials are superior with no proof whatsoever.
This is something worth reflecting on.

The "14 Hz" subwoofer with 6-inch drivers; another marketing brand gets a free pass for questionable measurements.

It is no longer the content that matters, it is the size of the poster.
 

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
681
Likes
785
Expecting anyone who claims a loudspeaker, or any other product created for enjoyment or recreation, is better than their previous model or the competition to prove it in a controlled test simply isn't how the world works; everyone makes these kinds of claims all day long in their marketing.
As I said earlier: that's the sorry state of affairs in audio land. https://www.stereophile.com/asweseeit/1107awsi/index.html
 
Last edited:

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
681
Likes
785
Preference rating only 85% predictive, interesting that the Salon2 and Array 1400 were reported to have performed similarly in actual blind testing at Harman
What are you alluding to? I posted that example just to show that constant directivity doesn't seem to be related to higher preference. That small speaker becomes omni pretty early.
 
Last edited:

youngho

Senior Member
Joined
Apr 21, 2019
Messages
486
Likes
800
What are you alluding to? I just posted that example to show that constant directivity doesn't seem to be related to higher preference. That small speaker becomes omni pretty early.
You were just referring to a prediction score with an accuracy of 86% (sorry, not 85%), not based on actual preference testing with blind listening
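For reference, the prediction score being debated comes from Olive's multiple-regression model. A minimal sketch of how such a score is computed, with the coefficients as they are commonly cited from the 2004 AES Part II paper; the input values in the example are made up for illustration, not measurements of any real speaker.

```python
def olive_preference(nbd_on, nbd_pir, lfx, sm_pir):
    """Olive's Part II preference model (coefficients as commonly cited):
    nbd_on  - narrow-band deviation of the on-axis response (dB)
    nbd_pir - narrow-band deviation of the predicted in-room response (dB)
    lfx     - log10 of the low-frequency extension frequency (Hz)
    sm_pir  - smoothness (r^2 of a regression line) of the PIR, 0..1
    """
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Made-up illustrative inputs, not a real speaker:
print(f"illustrative score: {olive_preference(0.35, 0.30, 1.6, 0.9):.2f}")
```

The point relevant here is that the ~0.86 correlation figure describes how well this formula's output tracked ratings from the body of Harman listening tests used to derive the coefficients, not a validation on independent data.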
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,866
Location
Princeton, Texas
I'd like to first go back to your post #159 above:

Neumann KH 120A with sub, preference rating 9.1 :) https://pierreaubert.github.io/spinorama/scores.html

The data for the Neumann KH120 was provided by the manufacturer. As far as I can tell it was never measured by Harman, but we can evaluate it based on what we know about the psychoacoustic implications of its measured response both on and off axis. Can we not do that with other designs as well?

[regarding the Dutch & Dutch 8c] Excellent engineering with unproven benefits :)

If they would claim superiority – which they do* – yes.

* "the 8c is above and beyond any other system in terms of neutrality, precision, and in-room tonal balance"

I don't speak for anybody's ministry of propaganda, which is why I asked whether you considered the design of the Dutch & Dutch 8c to have "validity".

Like the Neumann you mentioned above, we do not have a blind listening evaluation of the 8c by Harman. We only have measurements, from several different sources. Apparently you and I have different levels of trust in measurements.

And now to your post #162:

I did and still do find the data interesting. Interesting enough that I asked him to post the .mdat's so I could look at it in detail. Things took a bad turn when he refused to post the data.

For the record, you asked for a LOT more than just the .mdat files. From your post number 123:

Let's see the "Effect of Loudspeaker Directivity Compared with In-room Measurements", i.e. a comparison of multiple speakers with different directivities and how that changes the in-room response. Ideally in different rooms. Quite a task, as you would need to present directivity data of different speaker designs and relate that to response changes in the room. I would expect a full set of measurements showing things like RT60, filtered IR (including the Schroeder integral), clarity, spectrogram, etc.
If you do it in REW then please post the .mdat's so others can verify the results. Otherwise you open yourself up to the suspicion of having a hidden agenda.

Back to post #162:

Then there are three manufacturers directly involved in cardioid design arguing against a single member who is simply taking the side of science. Smells fishy.

If you think that I'm directly (or indirectly) involved with cardioid design, you are mistaken. I compete against it, but recognize and appreciate its merits.

If you have any actual science-based arguments against the merits of cardioid midbass enclosures, I'd like to hear them. Surely you can think of valid acoustic or psychoacoustic objections to the design. Surely you have something more scientific to offer than complaints about the design being "unproven" and insinuations of duplicity. Show me that there really is some science on your side.

And for the record, I do not appreciate your insinuations of duplicity.
 

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
681
Likes
785
You were just referring to a prediction score with an accuracy of 86% (sorry, not 85%), not based on actual preference testing with blind listening
True, but does that take away from the rationale?
 

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
681
Likes
785
I'd like to first go back to your post #159 above:



The data for the Neumann KH120 was provided by the manufacturer. As far as I can tell it was never measured by Harman, but we can evaluate it based on what we know about the psychoacoustic implications of its measured response both on and off axis. Can we not do that with other designs as well?



I don't speak for anybody's ministry of propaganda, which is why I asked whether you considered the design of the Dutch & Dutch 8c to have "validity".

Like the Neumann you mentioned above, we do not have a blind listening evaluation of the 8c by Harman. We only have measurements, from several different sources. Apparently you and I have different levels of trust in measurements.

And now to your post #162:



For the record, you asked for a LOT more than just the .mdat files. From your post number 123:



Back to post #162:



If you think that I'm directly (or indirectly) involved with cardioid design, you are mistaken. I compete against it, but recognize and appreciate its merits.

If you have any actual science-based arguments against the merits of cardioid midbass enclosures, I'd like to hear them. Surely you can think of valid acoustic or psychoacoustic objections to the design. Surely you have something more scientific to offer than complaints about the design being "unproven" and insinuations of duplicity. Show me that there really is some science on your side.

And for the record, I do not appreciate your insinuations of duplicity.
It gets repetitive and tiresome. No bite.
 

youngho

Senior Member
Joined
Apr 21, 2019
Messages
486
Likes
800
True, but does that take away from the rationale?
I'm not sure what you're actually saying by "rationale." Can you explain further?

In the meantime, let me outline my scientific concerns about how you (and really most of ASR) seem to be using this score (which is already stretched beyond its original formulation by adding EQ and/or a subwoofer). Toole himself notes "predicted preference ratings correlated with those from listening tests with a correlation of 0.86, with a very high statistical significance (p = <0.0001)." Scores like this, derived from regression analyses, will show their best results on the data sets from which they were derived.

Here is an example from the medical field: https://pubmed.ncbi.nlm.nih.gov/10608376/. Clinical prediction rules require follow-up validation studies with a new set of data: https://pubmed.ncbi.nlm.nih.gov/15292409/. Note "Clinical prediction rules typically demonstrate diminished performance in a new patient population because they are optimally modeled to the original data set." Here is another paper showing different results: https://pubmed.ncbi.nlm.nih.gov/15118038/. Nonetheless, Kocher's prediction rule quickly became referred to as "Kocher criteria," as seen here: https://pubmed.ncbi.nlm.nih.gov/30950940/.

Validation data for Olive's second preference prediction model was never published. From Toole's second edition of Sound Reproduction, "Figure 18.17 shows samples of two excellent, high-priced loudspeakers, that do almost everything well. To these should be added loudspeakers “R” and “I” in Figure 18.14. Collectively, these are examples of the present-day “kings of the hill.” There are others, of course, but the measurements do not look very different. When they are put against each other in double-blind tests, the audible differences are small, somewhat program dependent, and listener ratings tend to vary slightly and randomly around a high number. In the end there may be no absolute winner that is revealed with any statistical confidence; the differences in opinion are of the same size as those that could occur by chance."

Since these two speakers are the Salon2 and Array 1400, which have different preference prediction scores, Toole himself notes that they performed similarly in double-blind listening tests. I think this absolutely takes away from any rationale of using the score to predict that one speaker with a slightly higher prediction score WILL outperform another with a slightly lower one in double-blind listening tests (in rooms configured similarly to Harman's, etc.).

Young-Ho
 

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
681
Likes
785
I'm not sure what you're actually saying by "rationale." Can you explain further?

In the meantime, let me outline my scientific concerns about how you (and really most of ASR) seem to be using this score (which is already stretched beyond its original formulation by adding EQ and/or a subwoofer). Toole himself notes "predicted preference ratings correlated with those from listening tests with a correlation of 0.86, with a very high statistical significance (p = <0.0001)." Scores like this, derived from regression analyses, will show their best results on the data sets from which they were derived.

Here is an example from the medical field: https://pubmed.ncbi.nlm.nih.gov/10608376/. Clinical prediction rules require follow-up validation studies with a new set of data: https://pubmed.ncbi.nlm.nih.gov/15292409/. Note "Clinical prediction rules typically demonstrate diminished performance in a new patient population because they are optimally modeled to the original data set." Here is another paper showing different results: https://pubmed.ncbi.nlm.nih.gov/15118038/. Nonetheless, Kocher's prediction rule quickly became referred to as "Kocher criteria," as seen here: https://pubmed.ncbi.nlm.nih.gov/30950940/.

Validation data for Olive's second preference prediction model was never published. From Toole's second edition of Sound Reproduction, "Figure 18.17 shows samples of two excellent, high-priced loudspeakers, that do almost everything well. To these should be added loudspeakers “R” and “I” in Figure 18.14. Collectively, these are examples of the present-day “kings of the hill.” There are others, of course, but the measurements do not look very different. When they are put against each other in double-blind tests, the audible differences are small, somewhat program dependent, and listener ratings tend to vary slightly and randomly around a high number. In the end there may be no absolute winner that is revealed with any statistical confidence; the differences in opinion are of the same size as those that could occur by chance."

Since these two speakers are the Salon2 and Array 1400, which have different preference prediction scores, Toole himself notes that they performed similarly in double-blind listening tests. I think this absolutely takes away from any rationale of using the score to predict that one speaker with a slightly higher prediction score WILL outperform another with a slightly lower one in double-blind listening tests (in rooms configured similarly to Harman's, etc.).

Young-Ho
Thanks for making my point :) Anyway, I'm not a fan of preference tests. They are a band-aid in light of audio's circle of confusion.
 

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,866
Location
Princeton, Texas
Thanks for making my point :) Anyway, I'm not a fan of preference tests. They are a band-aid in light of audio's circle of confusion.

What are you a fan of, regarding loudspeaker evaluation?
 

ernestcarl

Major Contributor
Joined
Sep 4, 2019
Messages
3,110
Likes
2,327
Location
Canada
I don't own any cardioid-type speakers, but I do have the KH120 (since it was already mentioned), which is easy to test side by side with the Sceptre S8.

The KH120 has wider directivity above 2 kHz owing to its waveguide, but also less controlled directivity in the mids due to its smaller driver size and baffle.

Both monitors are placed in a very compromised position near the immediate sidewall, and somewhat in the middle of a hallway -- absolutely necessary due to space constraints in the room I'm in. Both monitors are pointing exactly on-axis to the center MLP of my main listening couch.

[attached image]


While I am certain the KH120 will have a higher preference rating score, we know that doesn't account for everything.

DISTORTION
[attached image]

KH120 ~vs~ S8

There's probably a 1 dB (maybe less?) loss of efficiency in the low end of the Sceptre S8 due to my having added extra damping inside. The improvement in simple measured distortion sweeps is negligible in reality, but it did help reduce some resonance between 400-500 Hz (as evidenced by nearfield wavelet and burst decay spectrograms compared in the past), so I kept the modification.

For distances of 2 meters or more, I prefer the Sceptres due to their better low-end performance -- both monitors use limiters that flash alarmingly bright red when active. Needless to say, the Neumann KH120 starts limiting earlier and is a bit unsatisfactory when it comes to loud listening; its bass subjectively came across as "boomier" and a little "hyped" even before I added the extra damping to the S8.

You may have noticed the steeper high-frequency slope of the KH120 above 12 kHz or so... destructive interference with the adjacent sidewall seems to be unavoidable in this position.

[attached image]

*note the last two (60-degree horizontal) traces

The Sceptre S8's narrowing directivity above 12 kHz helps avoid combing/cancellation in the aforementioned region. I also think the narrowing directivity and dips (normally considered "bad") make this speaker sound less "harsh" to my ears -- notwithstanding other disadvantages like the high-mid excess and off-axis "bunching" between 6-11 kHz -- which can be reduced with EQ, according to taste.

With and without 5-cycle windowing applied
[attached image]

*note the somewhat worse performance between 400-1,300 Hz of the KH120 with its smaller, less directive mid-woofer


[attached image]

the effect of bunching between 6-11 kHz from the S8's small horn waveguide is evident in this spectrogram view

If you look closely above, you can see that between 400-1,300 Hz the S8 actually has better time-domain performance -- at the final MLP.

ZOOMED-IN VIEW
[attached image]

more coherent or controlled mid-energy decay readily evident from the S8

That's all...


BTW, I just want to ask: why the constant insinuation of some nefarious intent behind the data (before and after spectrograms) and explanations previously posted? I don't really understand why @markus keeps insisting on getting the .mdat files. I mean, it would be nice, but not totally necessary. The example graphs @Kvalsvoll provided already seem pretty self-evident to me: controlled directivity can help improve some speakers' performance in a real room. People's actual in-room situations may vary from the specific examples provided, of course -- as with this particular little side note of mine.
 
Last edited:

Duke

Major Contributor
Audio Company
Forum Donor
Joined
Apr 22, 2016
Messages
1,558
Likes
3,866
Location
Princeton, Texas
Performance data.

You mean like the on and off axis frequency response data multiple sources have posted for the Dutch & Dutch 8c? Or the actual in-room decay performance data Kvalsvoll posted in this thread comparing a cardioid enclosure with a conventional enclosure?
 
Last edited:

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
681
Likes
785
You mean like the on and off axis frequency response data multiple sources have posted for the Dutch & Dutch 8c? Or the in-room decay data Kvalsvoll posted in this thread comparing a cardioid enclosure with a conventional enclosure?
Both are interesting. A full anechoic balloon data set should be standard. With the Klippel NFS getting into the hands of enthusiasts, this doesn't seem to be a pipe dream anymore.
 