
Speaker Equivalent SINAD Discussion

OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,551
Location
Land O’ Lakes, FL
Corrected. Doesn't really change any of my arguments though.
What say you about people that don’t want a subwoofer (or a certain someone won’t allow one), but they can choose between bookshelves or towers?

If a secondary list for subwoofer users (with actual crossovers) were made where LFX is disregarded (14.5Hz used), then that would of course be fine; but the main ranking list should stay as is (or improved ;)).
 

DDF

Addicted to Fun and Learning
Joined
Dec 31, 2018
Messages
617
Likes
1,360
THD is simply not a useful descriptor of sound quality. ...See Geddes' papers on the subject

The following was brought to my attention, calling for some caution when using Earl's distortion audibility threshold results. This needs further investigation, but ER4's were used in his study as "research" class headphones:
https://www.diyaudio.com/forums/multi-way/139046-tweeter-distortion-audible-14.html#post1764992

If the following measurements are accurate, these earphones produce enough distortion to limit the resolution of the test, invalidating some of the more sensitive outcomes:
https://clarityfidelity.blogspot.com/2015/08/etymotic-research-er-4s-iem.html?q=etymotic

This could explain why his thresholds are higher than those from some previous audibility research, such as the values in the attached chart (Distortion Thresholds.gif):

Back in '95, I saved this post from Sean Olive who studied this topic in detail:

"The best study -according to Cabot- on THD is by Bryan and Parbrook (1960). Using a 357 Hz sine wave they plotted thresholds of individual harmonics as a function of the fundamental's level. Their thresholds were much lower than what is generally expected, although Cabot points out that unlike other studies, they kept the distortion of their system below their measured thresholds. Their lowest threshold measured was for the fourth harmonic: 0.05% at a listening level of 70 dB. The thresholds of the harmonics closely follow the equal-loudness contours."

Italics mine. This indicates that by raising the playback level, even lower levels of harmonic distortion may become detectable, though it depends on the ear's inherent nonlinear response and how that plays into the outcome. This is also with sines; music will provide some inherent masking. Still, it would set a lower limit in the interest of ultimate fidelity.
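For intuition, the percent figures quoted above map to dB below the fundamental via 20·log10(d/100). A quick sketch in plain Python:

```python
import math

def thd_percent_to_db(percent):
    """Convert a distortion percentage to dB relative to the fundamental."""
    return 20 * math.log10(percent / 100.0)

# Bryan and Parbrook's lowest threshold: 0.05% for the 4th harmonic
# at a 70 dB SPL listening level.
threshold_db = thd_percent_to_db(0.05)  # about -66 dB re: fundamental
print(f"0.05% distortion sits {threshold_db:.1f} dB below the fundamental")

# At a 70 dB SPL fundamental, that harmonic sits at roughly 4 dB SPL --
# near the threshold of hearing, consistent with the equal-loudness point.
print(f"Absolute level of the harmonic: {70 + threshold_db:.1f} dB SPL")
```

This is why the thresholds tracking the equal-loudness contours makes sense: the harmonic only needs to clear audibility at its own frequency.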
 

MediumRare

Major Contributor
Forum Donor
Joined
Sep 17, 2019
Messages
1,955
Likes
2,283
Location
Chicago
Whatever metrics are chosen, they should cover, in a mutually exclusive and comprehensive way, the issues that drive preference. So "tighter, faster bass" might be a subjectively suspect term, but it does capture some description of sensory experience that should be measurable, separate from simple FR. A waterfall chart tells us a lot, even starting with a flat FR line. Same for something like "precise, clean transients". I'm not claiming expertise or that these are the best examples, but we should be able to have measurements that correlate to a DBT-generated qualitative description. My first effort at such a list is here, but I gather we need something better than my #3. https://www.audiosciencereview.com/...ntrol-1-pro-monitors-review.10811/post-301852
 

q3cpma

Major Contributor
Joined
May 22, 2019
Messages
3,060
Likes
4,417
Location
France
I think trying to lump it into one metric is a fool's errand, honestly. Separating the whole into:
* Frequency response: on- and off-axis; maybe do something to integrate cardioid-response speakers?
* LF extension, which is really a separate matter.
* Time domain: group delay (has anyone read the AES paper from Genelec on the audibility of group delay?) and cumulative spectral decay.
* Max SPL at 1%, 3%, and 10% THD below 300Hz.
* Price?
to make a nice five-branch star would work well.
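To sketch that five-branch star, the axes can be kept separate rather than collapsed into one number. The value ranges and the example speaker below are placeholders made up for illustration, not established thresholds:

```python
# Sketch of the "five-branch star" idea: report five normalized axes
# instead of one combined score. All worst/best ranges are assumptions.

def normalize(value, worst, best):
    """Map a raw measurement onto 0..1, clamped; direction is set by
    which end is passed as worst vs. best."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

speaker = {
    "fr_flatness_db":  1.8,    # +/- deviation, smaller is better
    "lf_extension_hz": 42.0,   # -6 dB point, lower is better
    "group_delay_ms":  1.2,    # worst case below 1 kHz, smaller is better
    "max_spl_db":      104.0,  # at 3% THD below 300 Hz, higher is better
    "price_usd":       900.0,  # lower is better
}

star = {
    "Frequency response": normalize(speaker["fr_flatness_db"], 6.0, 0.0),
    "LF extension":       normalize(speaker["lf_extension_hz"], 100.0, 20.0),
    "Time domain":        normalize(speaker["group_delay_ms"], 10.0, 0.0),
    "Max SPL":            normalize(speaker["max_spl_db"], 85.0, 115.0),
    "Price":              normalize(speaker["price_usd"], 5000.0, 100.0),
}

for axis, score in star.items():
    print(f"{axis:20s} {score:.2f}")
```

Plotted on a radar chart, the shape of the star shows trade-offs that a single averaged number would hide.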
 

NTomokawa

Addicted to Fun and Learning
Forum Donor
Joined
Jan 14, 2019
Messages
779
Likes
1,334
Location
Canada
* Time domain: group delay (has anyone read the AES paper from Genelec on the audibility of group delay?) and cumulative spectral decay.
I really want to see more of this. Very few manufacturers seem to provide waterfall graphs...
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
What say you about people that don’t want a subwoofer (or a certain someone won’t allow one), but they can choose between bookshelves or towers?

If a secondary list for subwoofer users (with actual crossovers) were made where LFX is disregarded (14.5Hz used), then that would of course be fine; but the main ranking list should stay as is (or improved ;)).

That's precisely what I was suggesting. Two rankings, for the two major speaker set-ups - one using Sean Olive's full formula for speakers without subs, and another for speakers paired with a subwoofer, using the same formula but with the omission of the LFX variable (as this would be inaccurate when the speaker is used with a sub). Simple. Everyone's happy (including that certain someone).
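For concreteness, the two rankings could share one function, with the sub-assisted list substituting a fixed low LFX value (the 14.5 Hz mentioned above) for the measured extension. A minimal Python sketch, assuming the commonly quoted coefficients from Olive's Part II paper and made-up input values:

```python
import math

def olive_preference(nbd_on, nbd_pir, lfx_hz, sm_pir):
    """Predicted preference per Olive's Part II model, using the commonly
    quoted coefficients; LFX enters as log10 of the -6 dB extension in Hz."""
    return (12.69 - 2.49 * nbd_on - 2.99 * nbd_pir
            - 4.31 * math.log10(lfx_hz) + 2.32 * sm_pir)

# Made-up measurements for one hypothetical bookshelf speaker:
full_range = olive_preference(nbd_on=0.35, nbd_pir=0.30,
                              lfx_hz=45.0, sm_pir=0.85)

# Sub-assisted ranking: substitute a fixed 14.5 Hz for the measured LFX,
# so the bass-extension term no longer penalizes a speaker that will be
# crossed over to a subwoofer anyway.
with_sub = olive_preference(nbd_on=0.35, nbd_pir=0.30,
                            lfx_hz=14.5, sm_pir=0.85)

print(f"full-range: {full_range:.2f}  with sub: {with_sub:.2f}")
```

The same speaker scores higher on the sub-assisted list, which is exactly the intent: bass extension stops being a differentiator once a subwoofer handles it.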
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,551
Location
Land O’ Lakes, FL
That's precisely what I was suggesting. Two rankings
one that encompasses the full 20Hz to 20kHz, and another that has a greater lower limit (the standard subwoofer crossover frequency of 80Hz seems like a good choice)

If just disregarding LFX, then sure. I was initially thinking you meant altering all parameters to disregard bass, and I was thinking how that could mess up the formula too much; then I realized the other parameters only go down to 100Hz anyway. :facepalm:
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,678
Likes
38,772
Location
Gold Coast, Queensland, Australia
Amir wants to think of this as large signal testing, so other factors must be addressed in the reviews.

With passive speakers, the impedance needs to be figured into the score ranking, as wildly fluctuating or diabolical impedance curves limit the amplifiers that will "play nice" with such speakers.

Power handling (can be done with tone-bursts), efficiency, and port noise are also extremely important and must also play into the scorecard.

I cannot see how loudspeakers can be ranked on a recommended scorecard-like list with Klippel testing alone.
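On the power-handling point: a shaped tone burst delivers high peak power without the sustained thermal load of a long sine, which is why it suits this kind of test. A minimal sketch of generating one (Hann-windowed sine is my choice here, not a published test standard):

```python
import math

def tone_burst(freq_hz, cycles, sample_rate=48000, amplitude=1.0):
    """Generate a Hann-windowed sine burst of the given number of cycles --
    a short stimulus usable for power-handling tests."""
    n_samples = int(sample_rate * cycles / freq_hz)
    burst = []
    for i in range(n_samples):
        t = i / sample_rate
        # Raised-cosine window: fades in and out, avoiding clicks
        window = 0.5 * (1 - math.cos(2 * math.pi * i / (n_samples - 1)))
        burst.append(amplitude * window * math.sin(2 * math.pi * freq_hz * t))
    return burst

# 10-cycle burst at 1 kHz: 480 samples at 48 kHz, about 10 ms of signal
b = tone_burst(1000, 10)
print(len(b), max(abs(s) for s in b))
```

Repeating the burst at increasing amplitudes while watching THD gives a power-handling figure without cooking the voice coil.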
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,551
Location
Land O’ Lakes, FL
With passive speakers, the impedance needs to figured into the score ranking, as wildly fluctuating speakers or diabolical impedance curves limit the amplifiers that will "play nice" with such speakers.
I don’t think sensitivity, max SPL, how difficult it is to drive, etc. should be factored into a main score, as those “limitations” change with the amplifier being used and other factors. For someone with a cheap low-wattage amp, an 85dB 4-ohm speaker with difficult phase angles would not be ideal, but for someone with a really good high-wattage amp, that speaker is not at all a difficult load.

Sure, if all the other parameters are the same/similar, the speaker with the easier load would be suitable to more systems, but that type of scenario is rare.

Amir should still present this data and comment on it, but I don’t think it should be factored into the main score.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,599
Location
Seattle Area
I think we should have an independent score for how difficult a passive speaker is to drive. The best work I have seen is in Hifi News by Paul Miller. Alas, it is not well documented. Essentially the phase angle and impedance are used with a survey of music SPL levels to compute a new single-value equivalent impedance. In my survey of my library, I found 40 Hz to be the highest frequency. We could use that (and a region around it) to compute an effective impedance based on impedance+phase. What say you? :)
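Since Miller's exact method is not documented, the following is only a crude stand-in for the idea: scan the impedance sweep in a band around 40 Hz and take the worst-case resistive component |Z|·cos(φ) as a difficulty figure. The band edges and the example sweep below are assumptions:

```python
import math

def load_difficulty(points, f_lo=28.0, f_hi=80.0):
    """points: list of (freq_hz, z_ohms, phase_deg). Returns the minimum
    resistive component of the impedance within [f_lo, f_hi] -- a low
    value flags a combination of low |Z| and large phase angle that
    makes the speaker hard to drive where music energy peaks."""
    worst = None
    for f, z, phase in points:
        if f_lo <= f <= f_hi:
            r_eff = z * math.cos(math.radians(phase))
            if worst is None or r_eff < worst:
                worst = r_eff
    return worst

# Hypothetical impedance sweep for a 4-ohm-nominal speaker:
sweep = [(20, 7.0, 30), (40, 3.6, -45), (60, 4.2, -50), (100, 6.5, 10)]
print(f"worst-case effective resistance: {load_difficulty(sweep):.2f} ohms")
```

Here the 3.6-ohm dip at 40 Hz combined with a 45-degree phase angle yields an effective resistance around 2.5 ohms, which is what the amplifier actually has to contend with.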
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,599
Location
Seattle Area
FYI I heard back from Sean Olive. Good news is that we can use his formula without worrying about licensing the patent. Bad news is that no one computed the value inside Harman so he has no code to give us.

I am starting to warm up to the idea of three or four scores for each speaker. One could be difficulty of drive per above. Another could be low frequency extension/power. Another smoothness of on-axis and another, off-axis. We could just give it scores A to F so that people don't zoom in too much on what they mean.
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,250
Likes
11,551
Location
Land O’ Lakes, FL
I think we should have an independent score for how difficult a passive speaker is to drive. The best work I have seen is in Hifi News by Paul Miller. Alas, it is not well documented. Essentially the phase angle and impedance are used with a survey of music SPL levels to compute a new single-value equivalent impedance. In my survey of my library, I found 40 Hz to be the highest frequency. We could use that (and a region around it) to compute an effective impedance based on impedance+phase. What say you? :)
I absolutely agree that how hard a speaker is to drive should be documented; just not influencing the Preference score, as having a great amp alleviates the issue. Sensitivity & max SPL (the point where THD shoots up, or some other metric) should also be documented.
We could just give it scores A to F so that people don't zoom in too much on what they mean.
Don‘t think for a minute people won’t criticize this or how it’s determined ;).
Another smoothness of on-axis and another, off-axis.
Have you decided how to generate the line the smoothness would be referenced to? I have a mathematics degree but am no statistician, so I have no clue how to generate a linear regression line that's based on the data, and no clue whether the software can generate one. I know Rtings uses one for soundbar tonality, so it must not be as complex as I am thinking.

EDIT: Looks like there is a plug-in for Excel, well that makes it easy.
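For reference, the line itself is just an ordinary least-squares fit of level against log10(frequency), which is what Excel's trendline (and numpy.polyfit) computes. A plain-Python sketch with made-up data:

```python
import math

def regression_line(freqs_hz, levels_db):
    """Fit levels_db = slope * log10(f) + intercept by least squares."""
    xs = [math.log10(f) for f in freqs_hz]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(levels_db) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, levels_db))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Made-up on-axis levels at octave-spaced frequencies:
freqs = [100, 200, 400, 800, 1600, 3200, 6400, 12800]
levels = [85.0, 85.5, 84.8, 85.2, 84.9, 85.1, 84.6, 84.3]
slope, intercept = regression_line(freqs, levels)
print(f"slope {slope:.3f} dB/decade, intercept {intercept:.1f} dB")
```

Deviation of each measured point from this fitted line is then what a smoothness metric sums up.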

Also, if using NBD, I was looking over the formula and for
y_b is the amplitude value of band b within the 1/2-octave band n
I have no clue what “band b” is and it isn’t mentioned. Do you know or could you ask Sean?

Oh, and 100Hz-12kHz can’t be separated into whole 1/2 octave bands, so not sure what happens with that:
100Hz-150Hz
150Hz-200Hz
200Hz-300Hz
300Hz-400Hz
400Hz-600Hz
600Hz-800Hz
800Hz-1200Hz
1200Hz-1600Hz
1600Hz-2400Hz
2400Hz-3200Hz
3200Hz-4800Hz
4800Hz-6400Hz
6400Hz-9600Hz

9600Hz- ?
Does he just end it at 12000Hz instead of 12800Hz, or does he do something else?
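One common reading of the formula, though not confirmed by the paper's text, is that "band b" indexes the individual measured frequency points falling inside half-octave band n, with the last band simply truncated at 12 kHz. A Python sketch under those assumptions:

```python
import math

# Assumed interpretation of NBD: for each half-octave band between
# 100 Hz and 12 kHz, average the absolute deviation of every measured
# point from that band's mean level, then average across the bands.
# The final band is truncated at 12 kHz (also an assumption).

def nbd(freqs_hz, levels_db, f_lo=100.0, f_hi=12000.0):
    bands = []
    f = f_lo
    while f < f_hi:
        bands.append((f, min(f * math.sqrt(2), f_hi)))  # half-octave: ratio sqrt(2)
        f *= math.sqrt(2)
    deviations = []
    for lo, hi in bands:
        pts = [y for fq, y in zip(freqs_hz, levels_db) if lo <= fq < hi]
        if pts:
            mean = sum(pts) / len(pts)
            deviations.append(sum(abs(y - mean) for y in pts) / len(pts))
    return sum(deviations) / len(deviations)

# A perfectly flat response has NBD = 0:
freqs = [100 * 2 ** (i / 24) for i in range(24 * 7)]  # ~100 Hz to 12 kHz+
print(nbd(freqs, [85.0] * len(freqs)))  # -> 0.0
```

Note that a true half-octave step is a ratio of √2 ≈ 1.414 per band, so the edges land at 100, 141, 200, 283 Hz and so on, which is why the span does not divide into whole bands ending exactly at 12 kHz.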
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,678
Likes
38,772
Location
Gold Coast, Queensland, Australia
I think we should have an independent score for how difficult a passive speaker is to drive. The best work I have seen is in Hifi News by Paul Miller. Alas, it is not well documented. Essentially the phase angle and impedance are used with a survey of music SPL levels to compute a new single-value equivalent impedance. In my survey of my library, I found 40 Hz to be the highest frequency. We could use that (and a region around it) to compute an effective impedance based on impedance+phase. What say you?

A good start, Amir. Where speakers exhibit low impedances at high frequencies, perhaps that should also be considered. The impedance/phase plot could be run, but only commented upon or included if it is diabolical.

Efficiency would be calculated by Klippel in any case when (self) calibrating before each run?
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
You can deal with a room mode at the 2nd (or any other) harmonic with precise EQ at the location of the measurement microphone, but the real problem is that folks refuse to understand that, according to the work of Toole & Olive, THD was found to be a non-issue. The average forum member finds this hard to accept after the obsession with SINAD in DAC and amp measurements. They have been told many times that 0.0005% THD is much better than 0.005%, but now Toole is telling them that "in loudspeakers it is fortunate that distortion is something that normally does not become obvious until devices are driven close to or into some limiting condition", in spite of the fact that the THD of the best speakers on the market is in the range of 0.1%-0.5% when playing at approx. 90dB at a 2m distance, and many times larger still below 200Hz.

This indeed does sound tricky to explain..
 

xr100

Addicted to Fun and Learning
Joined
Jan 6, 2020
Messages
518
Likes
237
Location
London, UK
Thanks. Still, I was hoping he had rated his own speakers using that metric so we had an example to look at.

IIRC, Earl Geddes did post on a forum somewhere that in his own blind tests it made no difference whether the "stock" B&C DE250 compression driver was used, or swapped with JBL or TAD parts (presumably in his Summa speaker's waveguide.)

His work on waveguides involved developing formal analysis and design methods that yielded a reduction of insidious "higher-order modes." It might be suggested that THD metrics rather miss the point...
 

JJB70

Major Contributor
Forum Donor
Joined
Aug 17, 2018
Messages
2,905
Likes
6,151
Location
Singapore
The problem with any attempt to reduce performance to a metric or group of metrics is that it risks reducing things too much and distorting perception unfairly; unless people really understand how the metrics are calculated and what they mean, they can end up of questionable net benefit. I already think that SINAD values are causing issues for the perception of electronic components.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,599
Location
Seattle Area
I think some serious context is missed here. The SINAD graph for electronics has created huge value for consumers. Company after company is going back to proper engineering, redesigning their audio products and producing far lower distortion and noise (hence higher SINAD). All of this has occurred at zero cost to consumers. Take the ESS IMD Hump. Adjustment of a couple of passive components results in reduction of some 20 dB in intermodulation distortion! Schiit has produced the same $99 DAC and Amps before with 20 to 30 dB improvement in SINAD.

At the same time, extremely low SINAD values in the 40s and 50s are shining a huge light on how bad some devices are. Think of the Pass ACA amplifier kit. Many people were shocked at how bad those results were. Yet Nelson Pass had already posted a bunch of graphed measurements showing the same, but because they were not summarized in a simple-to-read comparison graph like SINAD, no one took notice. Even I didn't until I tested it.

Stereophile has been testing speakers and electronics for decades. Yet it has not had the effect that our measurements here have. Again, the fault is that the data lacks any kind of summary that shows comparative performance.

Here, the situation is far, far better than with SINAD. The work by Sean Olive is 100% based on double-blind controlled testing to make sure the score correlates with listening test results. This is not the simple 50-year-old measurement that SINAD is. Here are some bits from the paper:
A Multiple Regression Model for Predicting
Loudspeaker Preference Using Objective
Measurements: Part II - Development of the Model

Sean E. Olive, AES Fellow

1578860891016.png


A correlation of 1.0 would mean perfection, i.e. the model predicting listening test results with 100% accuracy. Here is the actual graph relative to results:
1578861003319.png


As you can see, the experimental results closely hug the linear prediction.

1578861154897.png


And how the tests were conducted:
1578861275421.png


If this kind of scoring is not good enough for you all, I don't know what is. This research is a gift. I highly suggest reading it in detail before scoffing at it.

And it is not like we can go without. I just measured another speaker that I will post soon. I am sitting here, seeing some anomalies in the measurements but with no way of characterizing them on a relative scale against what I have measured before.

In the end, the scale may just be good for showing the best and the worst. That is what SINAD is doing, and it is a great service and outcome. I don't care if someone argues between a speaker that gets a score of 6 or 7. I care about clearly identifying the dogs and heroes.

As with speaker testing, there are many reasons not to do something. Get on board to solve this problem. The consumer needs a scale. It doesn't have to be perfect; it is not like he has any scale whatsoever to use right now. A compass is not as good as GPS, but it can sure tell you more or less which way to walk if you are lost.
 

DDF

Addicted to Fun and Learning
Joined
Dec 31, 2018
Messages
617
Likes
1,360
Get on board to solve this problem......

Studies from Bech (mid '90s), Clarke ('83) and others have highlighted the outsized contribution of the floor reflection to perceived timbre and quality. I'm puzzled why the Harman model doesn't account for this explicitly, and I think any ASR metric should. I'm knee-deep in 20+ technical papers on reflection audibility, trying to parse this out for my own curiosity.
 