
Speaker Equivalent SINAD Discussion

OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
@amirm
Can the program export the data files just for PIR/LW/Sound Power, or just vertical/horizontal?

Since I believe I have all the formulation done (but using just on-axis data; score of ~5.4 for the NHT), I can try my hand at the whole formulation if you can upload the other data file(s).

EDIT: For LFX, do we want closest Hz less than -6dB, closest Hz greater than, or closest Hz regardless?
 

Costas EAR

Active Member
Forum Donor
Joined
Jan 15, 2020
Messages
157
Likes
348
Location
Greece
Congratulations on the excellent tonality measurement procedure we can now enjoy for speakers!

Let's pause for a moment and look at the facts about loudspeaker priorities in real-life high fidelity.

As already stated, low-frequency response is crucial to total perceived fidelity.
So multiple subwoofers are the only way to go, regardless of which speakers are used in any setup.

Multichannel audio, as Dr. Floyd Toole stated, is much superior.
So, in terms of high fidelity, an immersive setup is the best possible way to go.
All speakers should be considered for this use, if high fidelity is the goal.

Room RT60 is of crucial importance, even more than the speakers used.
So, in terms of high fidelity, the reverberation time across the frequency range should be stable at 0.2-0.3 s.

Each speaker is only useful within a certain listening-distance range, for many reasons, the most important being SPL capability.
So, in terms of high fidelity, the gold standard of reference level should be used in every measurement methodology, always at the sweet spot: 85 dB, plus the well-known target curve and the dynamic range of the popular multichannel audio formats.

THD? Of course it matters, even at low frequencies. Just try the big boys from Genelec and Neumann and you will understand the difference; they state 1% THD for the levels provided.

Step response? In terms of high fidelity? Of course. Bruno is not building speakers with excellent step response for nothing. The time domain is of equal importance to the frequency domain.

And now the big bang theory of audio: dynamic range and dynamic headroom. Do you remember dynamic compression and dynamic distortion? The Achilles' heel of loudspeakers, the most crucial element, the holy grail of sound. White papers are still available, even from Westlake Audio.

Loudspeaker measurements demand some basics, and as a starting point I would choose a well-treated room with proper RT60 values across the frequency range, similar to what everyone should do if high fidelity is the goal.

SPL capability at given THD levels is crucial. Neumann is the leader in published measurements, along with Genelec.
That's why their speakers sound so damn good.

These are the facts.

I gather that a great step in speaker measurements is starting at ASR, and I would really like to see these well-known basics kept in mind.

I would like to know the optimum listening distance for each speaker, for proper reproduction of the SPL needed for reference level.

I am interested in tonality, of course, from 80 Hz and up. There are four subs waiting to reproduce the first two octaves; I don't need anything further from the speakers.

I need to know the dynamic range of the speaker.

I don't care about THD at 60 dB; it is meaningless. Per the equal-loudness curves, the speaker setup should reproduce the 80-85 phon curve. That's the goal.

High fidelity. Please. In speakers, not only in DACs. Is a tonality curve in a DAC alone really useful?

Just my 2c.


Again, congrats on the great new era at ASR!
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
Multichannel audio, as Dr. Floyd Toole stated, is much superior. [...] Room RT60 is of crucial importance, even more than the speakers used. [...] Step response? In terms of high fidelity? Of course. Bruno is not building speakers with excellent step response for nothing.
Some notes:

Toole also shows that the more speakers you have, the less critical you are of their performance, which is why mono listening is used (it's also easier).

RT60 is not valid in small rooms. You can still measure decay, just not using RT60.

Step response doesn't tell you anything that other measurements don't already show.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,699
Likes
241,363
Location
Seattle Area
@amirm
Can the program export the data files just for PIR/LW/Sound Power, or just vertical/horizontal?

Since I believe I have all the formulation done (but using just on-axis data; score of ~5.4 for the NHT), I can try my hand at the whole formulation if you can upload the other data file(s).

EDIT: For LFX, do we want closest Hz less than -6dB, closest Hz greater than, or closest Hz regardless?
I can export the graph data in any of the graphs you see posted.

Here is the PIR for NHT:
 

Attachments

  • NHT M-00 Estimated In-Room Response.txt
    3.5 KB · Views: 119
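For anyone following along with the attached export, here is a minimal Python sketch for loading such a curve. It assumes a plain two-column text file (frequency in Hz, level in dB); the real export may have header rows or a different delimiter, so adjust accordingly:

```python
import numpy as np

# Hypothetical helper: load an exported curve such as the attached
# "NHT M-00 Estimated In-Room Response.txt". Assumes a plain two-column
# text export (frequency in Hz, level in dB); adjust skiprows/delimiter
# to match the actual file format.
def load_curve(path, skip_header_rows=0):
    data = np.loadtxt(path, skiprows=skip_header_rows)
    freq_hz = data[:, 0]
    spl_db = data[:, 1]
    return freq_hz, spl_db

# Example: freq, spl = load_curve("NHT M-00 Estimated In-Room Response.txt")
```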

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,699
Likes
241,363
Location
Seattle Area
I don't care about THD at 60 dB; it is meaningless.
Those SPL numbers were not correct. I have now calibrated the microphone and its settings, and they are much closer to reality. I still need to confirm, though. The likely level was 80 to 90 dB for those tests, not 60. There is a requirement in the standard for SPL, so I will need to follow that (more or less).

I am working on compression tests. It is a work in progress. Here is a sample where I started with 0.25 volt input to the speaker and kept doubling it:

[attached graph: response sweeps at successively doubled input levels]


I don't see a sign of compression. Alas, once I doubled the input again, the unit hit severe protection and I got garbled output.
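For reference, each doubling of drive voltage should raise the output by 20·log10(2) ≈ 6.02 dB everywhere, so compression shows up as the higher-level curves falling short of that step. A minimal Python sketch of that check, assuming the sweeps share one frequency grid (the function and data layout are illustrative, not Amir's actual workflow):

```python
import numpy as np

DOUBLING_GAIN_DB = 20 * np.log10(2)  # ≈ 6.02 dB per doubling of drive voltage

def compression_shortfall_db(sweeps_db):
    """Given SPL curves (in dB, on the same frequency grid) measured at
    0.25 V, 0.5 V, 1 V, ... in order, return how far each higher-level
    sweep falls short of the ideal +6.02 dB-per-doubling gain relative to
    the lowest-level sweep. Positive values indicate compression."""
    reference = np.asarray(sweeps_db[0], dtype=float)
    shortfalls = []
    for n, sweep in enumerate(sweeps_db[1:], start=1):
        expected = reference + n * DOUBLING_GAIN_DB
        shortfalls.append(expected - np.asarray(sweep, dtype=float))
    return shortfalls
```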
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,699
Likes
241,363
Location
Seattle Area
Thanks, can you post LW & SP so I can try doing the full calculation?
That should be in the second post. You just have to extract it from the other graphs in the spinorama.

Also, in the paper it mentions being able to use SP instead of LW for LFX. What say you?
Does it make a difference in those low frequencies?
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,069
Location
Zg, Cro
Those SPL numbers were not correct. I have now calibrated the microphone and its settings, and they are much closer to reality. [...] I am working on compression tests. Here is a sample where I started with 0.25 volt input to the speaker and kept doubling it. [...] I don't see a sign of compression.

Shouldn't THD be measured with at least 2.83 V? Like when you measure sensitivity at 1 m.

AFAIK Soundstage measures at 90 dB at 2 m distance, which takes more than 2.83 V.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,699
Likes
241,363
Location
Seattle Area
Shouldn't THD be measured with at least 2.83 V? Like when you measure sensitivity at 1 m.

AFAIK Soundstage measures at 90 dB at 2 m distance, which takes more than 2.83 V.
These are active speakers so you can't go by that.
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,069
Location
Zg, Cro
Toole is wrong. You become more tolerant and more critical.

This reminds me of something I once saw written on a toilet wall in a pub in Berlin. It went like this:

In one handwriting: "Nietzsche: God is dead"

In another handwriting: "God: Nietzsche is dead". :D
 

LTig

Master Contributor
Forum Donor
Joined
Feb 27, 2019
Messages
5,843
Likes
9,586
Location
Europe
I am working on compression tests. It is a work in progress. Here is a sample where I started with 0.25 volt input to the speaker and kept doubling it. [...]
Which speaker?
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,699
Likes
241,363
Location
Seattle Area
Toole is wrong if he claims that. You become more tolerant and more critical.
It is not a claim. It is a controlled test with published results.
Comparison of Loudspeaker-Room Equalization Preference for Multichannel, Stereo, and Mono Reproductions: Are Listeners More Discriminating in Mono?
Sean E. Olive, Sean M. Hess, and Allan Devantier, Harman International, Northridge, CA 91329, USA

[two attached figures from the paper showing the preference results]


Take the no-EQ mode. It was barely differentiated in surround. But in mono, listeners definitely did not like it as much.
 
OP

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,557
Location
Land O’ Lakes, FL
That should be in the second post. You just have to extract it from the other graphs in the spinorama.


Does it make a difference in those low frequencies?
OK, I think I have things mostly done.

I am still not at all sure about the smoothness (SM) metric. Not the calculation itself (except that the sum of X comes out exactly the same as the sum of Y), but the linear regression generated by Sheets: for instance, comparing the SM of the on-axis curve against the SM of the sound power curve, the latter looks much closer to its regression line, yet the numbers say the former is smoother.
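For cross-checking what Sheets produces, here is a minimal Python sketch of the SM term as I read the paper: the r² of a linear regression of the curve against log frequency over 100 Hz-16 kHz (the curve names and grid handling are assumptions):

```python
import numpy as np

def smoothness_sm(freq_hz, spl_db, f_lo=100.0, f_hi=16000.0):
    """Sketch of the SM term: r^2 of a linear regression of the curve (dB)
    against log10(frequency) over 100 Hz-16 kHz. A higher r^2 means the
    curve hugs its own regression line more tightly."""
    freq_hz = np.asarray(freq_hz, dtype=float)
    spl_db = np.asarray(spl_db, dtype=float)
    band = (freq_hz >= f_lo) & (freq_hz <= f_hi)
    x = np.log10(freq_hz[band])
    y = spl_db[band]
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2
```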

I also computed three scores: as-is, using LFX_SP, and ignoring LFX.

There are also different options for LFX: using the closest frequency below the -6 dB point, the closest above it, or the closest regardless.

As noted earlier, the exact frequency ranges to use are not stated, so:

101.807 Hz is the starting frequency instead of 98.1445 Hz
12,126 Hz is the ending frequency instead of 11,712.90, 12,553.70, or 12,996.10 Hz
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
I don’t think you can apply the same formula. Any small deviations in FR would be inconsequential compared to room modes.

You need to do 3 main things:
  • CEA-2010
  • FR before compression
  • Group delay
Despite not having added any entries for a while, Data-bass.com is currently the best place for subwoofer reviews.
Measurements of the Rythmik F18 (interactive graphs are a plus; you can also sort all measured subwoofers by max SPL at a chosen frequency, as movie watchers want max SPL at 20 Hz or lower, whereas many music-only listeners usually only care about 35-40 Hz)

Measuring all the adjustments like Data-bass does would be a cherry on top, but not absolutely necessary.

As stated by Amir previously, matters of room acoustics like modes are a separate issue. We have to assume best case scenarios of room acoustics in order to judge speakers’ and subwoofers’ maximum potential, and finely differentiate between similarly performing products. And in those ideal situations, small deviations in subwoofer frequency response may indeed be noticeable. As I said in my previous post, the LFQ variable had close to a 20% contribution to predicting speaker preference in Olive’s first model, so there’s a significant probability it would have a consequential influence on subwoofer preference.

Of course, the other metrics you mentioned and that Data-bass measure (great site by the way!) in addition to frequency response could also be important and would be nice for Amir to measure and present in reviews, but I don’t think they should be included in a preference rating, as I’m not aware of large scientific studies that have proven them to be significant in predicting sound quality preference in the same way Olive’s has proven LFX and LFQ are. Data-bass themselves say in their ‘Bass Myths’ article:
There is also group delay, energy decay, etc. Most studies show that people are insensitive to even moderate amounts of energy delay in the bass range. Generally what causes these types of subjective terms to be used are differences in frequency response
(my emphasis)

The whole point of these ratings we’re proposing is to have scores for speakers and subwoofers using metrics that have been proven to correlate with sound quality preference in controlled, scientific studies, and LFX and LFQ are the best such metrics I’m aware of that meet these criteria for subwoofers. Anyway, we’re probably getting ahead of ourselves – let’s get the speaker ratings accurately nailed down first :) Then we can move on to subs.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
In the paper it mentions being able to use SP instead of LW for LFX if rear-ported speakers. What say you?

Amazing work with the spreadsheet calculations, thanks! I think you’ve misunderstood what Olive meant in his LFX description though. In the full AES paper (scroll down for the full paper - much easier to follow than the patent application), he says:
LFX is the log10 of the first frequency x_SP below 300 Hz in the sound power curve, that is -6 dB relative to the mean level y_LW measured in listening window (LW) between 300 Hz-10 kHz. LFX is log-transformed to produce a linear relationship between the variable LFX and preference rating. The sound power curve (SP) is used for the calculation because it better defines the true bass output of the loudspeaker, particularly speakers that have rear-firing ports.

The patent application says “The sound power curve (SP) may be used for the calculation” where the AES paper says “is used for the calculation” (my emphasis). I believe ‘calculation’ in both refers to the -6 dB point of the sound power curve only, and ‘may’ was used in the patent application as it’s describing techniques that may be used to calculate predicted preference ratings. (You’ll notice he uses ‘may’ instead of ‘is’ throughout much of the patent application – this might be to do with legal wording, which may have to be very technically precise for a patent application.) So, I’m pretty sure you need to use the mean level of the listening window between 300 Hz and 10 kHz as the reference level for the LFX calculation, as stated in the actual AES paper (and in the LFX equations in both the paper and the patent application).

For LFX, do we want closest Hz less than -6dB, closest Hz greater than, or closest Hz regardless?
Definitely not the last option, as Olive defines it as “the first frequency x_SP below 300 Hz” (not 'nearest' or 'closest') so it must be the same side of the -6 dB point every time. I would say closest Hz less than the -6 dB point is correct, as the next part of the definition, “that is -6 dB relative to the mean level y_LW”, I believe should be read as ‘at least 6 dB less than’ i.e. the ‘first’ frequency you ‘hit’ moving down the SP curve from 300 Hz that has the condition of being at least 6 dB less than y_LW. Otherwise, taking the closest Hz greater than the -6 dB point would mean the low extension frequency not meeting the condition of being -6 dB relative to y_LW, which would be incorrect according to the LFX definition and formula presented.
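Under that reading, a minimal Python sketch of the LFX calculation (variable names are illustrative, and it assumes all curves share one ascending frequency grid):

```python
import numpy as np

def lfx(freq_hz, sp_db, lw_db, ref_lo=300.0, ref_hi=10000.0):
    """y_LW = mean of the listening-window curve between 300 Hz and 10 kHz;
    x_SP = first frequency below 300 Hz (scanning downward) whose sound-power
    level is at least 6 dB below y_LW. Returns log10(x_SP)."""
    freq_hz = np.asarray(freq_hz, dtype=float)
    sp_db = np.asarray(sp_db, dtype=float)
    lw_db = np.asarray(lw_db, dtype=float)

    y_lw = lw_db[(freq_hz >= ref_lo) & (freq_hz <= ref_hi)].mean()

    below = freq_hz < ref_lo
    # Walk downward in frequency from just under 300 Hz.
    for f, level in zip(freq_hz[below][::-1], sp_db[below][::-1]):
        if level <= y_lw - 6.0:
            return np.log10(f)
    # If the response never drops 6 dB below y_LW in the measured range,
    # fall back to the lowest measured frequency.
    return np.log10(freq_hz[0])
```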

As noted earlier, the exact frequency ranges to use are not stated, so:

101.807 Hz is the starting frequency instead of 98.1445 Hz
12,126 Hz is the ending frequency instead of 11,712.90, 12,553.70, or 12,996.10 Hz

Olive states in the NBD definition:
N is the total number of ½-octave bands between 100 Hz-12 kHz
(my emphasis)

This would suggest 11,712.90 Hz should be used as the upper bound as it is within the range 100 Hz-12 kHz, whereas 12,126 Hz is outside this range. The former is also more consistent with the lower bound chosen (101.807 Hz), which is also within, not outside, the prescribed range.

Having said this, are we certain Olive is referring to the lower and upper bounds, and not the center frequencies of the lowest and highest bands, as I previously suggested? I have more reason to think this after seeing some excerpts from Part 1 of his paper, A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements: Part I - Listening Test Results. I don’t have access to the full paper, but found excerpts on this blog by a Chinese acoustic engineer. Here, in reference to this chart from the paper, he quotes Olive as saying:
In band 2 (centered on 64 Hz) there was a wider variance in scores indicating speakers differed more in this range. Many loudspeakers were judged to have too much energy in bands 5 (2.9 kHz) and 6 (10.1 kHz).

So at least in these listening tests, Olive has defined bands by their center frequencies, not their lower and upper bounds. This might suggest he did the same in the second paper when devising the preference formula, and so “bands between 100 Hz-12 kHz” actually means ‘bands with center frequencies between 100 Hz-12 kHz’. Maybe @amirm can clarify this with Sean Olive?
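In the meantime, here is a minimal Python sketch of NBD under the 'center frequency' reading (the half-octave band placement starting at 100 Hz is an assumption, not Olive's published grid):

```python
import numpy as np

def nbd(freq_hz, spl_db, f_start=100.0, f_stop=12000.0):
    """Sketch of NBD under the 'center frequency' reading: half-octave bands
    whose centers lie between 100 Hz and 12 kHz, each band's mean absolute
    deviation from its own average level, averaged across bands."""
    freq_hz = np.asarray(freq_hz, dtype=float)
    spl_db = np.asarray(spl_db, dtype=float)

    deviations = []
    center = f_start
    while center <= f_stop:
        lo, hi = center / 2 ** 0.25, center * 2 ** 0.25  # half-octave edges
        band = spl_db[(freq_hz >= lo) & (freq_hz < hi)]
        if band.size:
            deviations.append(np.mean(np.abs(band - band.mean())))
        center *= 2 ** 0.5  # step to the next half-octave center
    return float(np.mean(deviations))
```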
 