
Speakers measurements anatomy

alter4

Member
Joined
Mar 27, 2021
Messages
33
Likes
3
Hi folks, I'm sorry if this is the wrong place to ask, but I would like to understand speaker measurement theory.
I understand DAC and amplifier measurements really well, but those "3D directivity plots" look like magic. Could you please point me to some good articles or references? Language doesn't matter, Google Translate works.
 

JSmith

Master Contributor
Joined
Feb 8, 2021
Messages
5,251
Likes
13,597
Location
Algol Perseus
speakers measurement theory


JSmith
 

alex-z

Addicted to Fun and Learning
Joined
Feb 19, 2021
Messages
915
Likes
1,697
Location
Canada
If you mean the CEA-2034 measurements, they are a culmination of the total radiated energy of the speaker. You take many off-axis measurements, both horizontally and vertically, to get this data. It can be presented in graph format, as a 2D polar plot, or as a 3D balloon image.

IIRC, the minimum measurement resolution to meet the CEA-2034 standard is every 10 degrees, while the Klippel NFS operates at 1-degree resolution.

In the case of the Klippel NFS system, some complex math is added to remove the influence of room reflections, allowing anechoic data in a normal household space. If you own an anechoic chamber, the measurements can be used directly with no processing. If you lack both the NFS and a chamber, the third option is time gating: looking at the impulse response and cutting off the measurement before the reflections arrive at the microphone. Unlike the Klippel system, this causes a loss of measurement accuracy, because the data is truncated rather than reconstructed/filtered.
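A minimal sketch of that gating step (pure Python; the impulse response and reflection point below are invented purely for illustration, and a real tool would also apply a window fade and an FFT to get the frequency response):

```python
# Time-gate an impulse response: keep only samples that arrive
# before the first room reflection, discarding everything after.
# The impulse response and gate position are made up for this demo.

def time_gate(impulse_response, gate_samples):
    """Zero out everything at or after the gate point."""
    return [
        h if i < gate_samples else 0.0
        for i, h in enumerate(impulse_response)
    ]

# Toy impulse response: direct sound, then a "reflection" at sample 6.
ir = [0.0, 1.0, 0.5, 0.2, 0.1, 0.0, 0.4, 0.3, 0.1]
gated = time_gate(ir, gate_samples=6)  # cut just before the reflection
print(gated)  # [0.0, 1.0, 0.5, 0.2, 0.1, 0.0, 0.0, 0.0, 0.0]
```

The accuracy loss mentioned above falls out of this directly: the shorter the gate time, the less low-frequency information survives (roughly, nothing below 1/gate_time is reliable), which is why gated measurements struggle in the bass.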
 

Koeitje

Major Contributor
Joined
Oct 10, 2019
Messages
2,309
Likes
3,976
Erin’s Audio Corner YouTube channel. He has a playlist explaining speaker measurements.

Amir made several YouTube videos as well.
Erin's video series is really good.
 
Joined
Oct 5, 2022
Messages
83
Likes
103
3D directivity plots are just the regular frequency response plotted at various angles. One axis of the graph represents the angle of the microphone relative to the speaker, the other axis is frequency, and the color is the amplitude (loudness). It's not hard to understand; if you're having trouble, I'm probably not explaining it well.

Personally, I prefer to look at the spinorama graphs. They make the directivity easier to understand.

You may also be interested in this classic whitepaper (now 20 years old!) from Harman that explains spinorama measurements and why you should care about directivity:
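To make the "angle × frequency × color" idea concrete, here is a toy sketch (pure Python; the two-point-source model and spacing are hypothetical, not any real speaker's data) that builds the kind of matrix those plots display:

```python
import math

# Build a toy directivity matrix: rows = microphone angle, columns =
# frequency, values = relative level. The "speaker" is a hypothetical
# pair of in-phase point sources 10 cm apart -- invented numbers,
# just enough to show how angle and frequency form the two plot axes.

C = 343.0  # speed of sound, m/s
D = 0.10   # source spacing, m (made up)

def relative_level(angle_deg, freq_hz):
    """Magnitude response of two in-phase point sources at an angle."""
    path_diff = D * math.sin(math.radians(angle_deg))
    return abs(math.cos(math.pi * freq_hz * path_diff / C))

angles = range(-90, 91, 10)          # every 10 degrees, like CEA-2034
freqs = [500, 1000, 2000, 4000, 8000]
matrix = [[relative_level(a, f) for f in freqs] for a in angles]

# On-axis (0 degrees), the sources sum perfectly at every frequency:
on_axis = matrix[9]  # index 9 of range(-90, 91, 10) is 0 degrees
print(on_axis)       # [1.0, 1.0, 1.0, 1.0, 1.0]
```

In a real plot you would color-map `matrix`; the level dropping at wide angles and high frequencies is the "beaming" that the spinorama curves summarize into sound power and directivity index.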

 

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,564
Likes
1,710
Location
California
To save you a lot of time, spin measurements have only been demonstrated to have good (but not excellent) correlation with subjective listener preferences, and this is using computerized analysis. Some people think they can simply eyeball a series of spin measurements and determine with confidence how that speaker will sound - no they cannot. Took me a while to figure that one out.

Now, "2D" measurements of solid-state devices (i.e. amps and DACs) are state of the art, meaning they can be used to determine transparency, or to confirm that devices cannot be differentiated under controlled listening conditions. Some people think that this capability for using measurements to predict SS sound quality (or more precisely, transparency) transfers over to loudspeakers (and other transducers). It does not.

Have fun.
 

ferrellms

Senior Member
Joined
Mar 24, 2019
Messages
300
Likes
260
To save you a lot of time, spin measurements have only been demonstrated to have good (but not excellent) correlation with subjective listener preferences, and this is using computerized analysis. Some people think they can simply eyeball a series of spin measurements and determine with confidence how that speaker will sound - no they cannot. Took me a while to figure that one out.

Now, "2D" measurements of solid-state devices (i.e. amps and DACs) are state of the art, meaning they can be used to determine transparency, or to confirm that devices cannot be differentiated under controlled listening conditions. Some people think that this capability for using measurements to predict SS sound quality (or more precisely, transparency) transfers over to loudspeakers (and other transducers). It does not.

Have fun.
I believe that you can look at test results and figure out how a speaker will sound. Spinorama does a better job than any other measurement.

In fact, I'd go so far as to say spinorama is a better indicator of how a speaker will sound across different listening environments than listening to the speakers yourself in a particular location. If you get a chance to listen to a bunch of speakers in your own environment, well, good for you, go for it!
 

tuga

Major Contributor
Joined
Feb 5, 2020
Messages
3,984
Likes
4,285
Location
Oxford, England

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,564
Likes
1,710
Location
California
I believe that you can look at test results and figure out how a speaker will sound. Spinorama does a better job than any other measurement.

In fact, I'd go so far as to say spinorama is a better indicator of how a speaker will sound across different listening environments than listening to the speakers yourself in a particular location.
I appreciate your opinion. However, Harman's own published research does not support your belief; in fact, it quantifies just how much computerized analysis of a spinorama does and does not predict listener preferences. And that's computerized analysis. Eyeballing a set of charts is going to be far less accurate.
I'll also add that in a different thread on ASR, I posted partial spin-equivalent data for 3 speakers and asked participants to interpret and rank each of them, and guess what, I got widely varying answers for ranking and interpretations of how each speaker would sound.
While spinoramas are better than, say, a single on-axis FR plot, the information they contain, combined with our ability to "eyeball" and interpret the data, do not allow us to make reliable predictions of how the speaker will sound.
But I know that a lot of people believe they have "the gift" of being able to convince themselves and others that they can magically interpret a series of 3-D measurements, analyze them in their heads, and tell you reliably how that speaker will sound with more accuracy than Harman's computerized analysis can. I mean come on, let's turn back on the critical thinking.
 

phoenixdogfan

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
3,348
Likes
5,292
Location
Nashville
Amir made an absolutely excellent one in his first review, the JBL 305. It explains each measurement and its significance.
 

thewas

Master Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
6,907
Likes
16,972
I'll also add that in a different thread on ASR, I posted partial spin-equivalent data for 3 speakers and asked participants to interpret and rank each of them, and guess what, I got widely varying answers for ranking and interpretations of how each speaker would sound.
First of all, those plots you posted in that thread cannot really be compared to a spinorama: https://www.audiosciencereview.com/...the-preferred-loudspeaker-fun-exercise.14742/

Second, getting different answers from people with different levels of experience is a poor argument that no one can interpret spinoramas quite well.

While spinoramas are better than, say, a single on-axis FR plot, the information they contain, combined with our ability to "eyeball" and interpret the data, do not allow us to make reliable predictions of how the speaker will sound.
You shouldn't extrapolate your experience to others. Initially in the above thread you posted only the listening-window plots, and added the horizontal directivities later after objections from others, which says quite a bit about your understanding of this topic.

In the end, a full set of measurements like those from S&R, ASR and Erin will show very well a loudspeaker's problems and deviations from neutrality, as well as its tonal and radiation character. As individual preference is always subject to individual fluctuation, it will not tell you definitively whether listener X will prefer loudspeaker A or B, but it will tell you the probability that someone will and, most importantly, distinguish a well-engineered neutral loudspeaker from a more flawed one, and that is what is important.
 
Last edited:

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,564
Likes
1,710
Location
California
First of all, those plots you posted in that thread cannot really be compared to a spinorama: https://www.audiosciencereview.com/...the-preferred-loudspeaker-fun-exercise.14742/

Second, getting different answers from people with different levels of experience is a poor argument that no one can interpret spinoramas quite well.


You shouldn't extrapolate your experience to others. Initially in the above thread you posted only the listening-window plots, and added the horizontal directivities later after objections from others, which says quite a bit about your understanding of this topic.

In the end, a full set of measurements like those from S&R, ASR and Erin will show very well a loudspeaker's problems and deviations from neutrality, as well as its tonal and radiation character. As individual preference is always subject to individual fluctuation, it will not tell you definitively whether listener X will prefer loudspeaker A or B, but it will tell you the probability that someone will and, most importantly, distinguish a well-engineered neutral loudspeaker from a more flawed one, and that is what is important.
A couple of strawmen in there, and I'm going to assume they were not intentionally constructed.

Just so there's no confusion this time, let me be specific. Harman's own published and peer-reviewed research demonstrates that a computerized analysis of spinorama data can predict 74% of the variation in listener preferences in a sample of 70 loudspeakers playing 3 tracks of rock music. 74% is pretty good, but it's not 100%. But we're not done there.
What people are claiming here is that they can eyeball a series of spin charts and do better than the computer. This is the equivalent of saying that a human being can do a better job simply by glancing at the same series of charts that correspond to FR measurements at various off-axis angles, then weighing the hundreds of different imperfections (including +/- dB FR deviations of varying bandwidth on the various charts), and coming up with a better "interpretation" than a computer.
But I'm not done yet. The 74% correlation was constructed based on a restricted selection of music (rock), on a defined sample of 70 loudspeaker models. As should be obvious, extrapolating outside the conditions of the original correlation experiment is going to lower your accuracy. Put simply, on ASR, we're typically talking about interpreting spins on loudspeakers outside that original sample AND for listener preferences that invariably include a wide range of music genres for playback.

So how good do you think the correlation is now? It ain't 74% by any stretch. Maybe 50%? 30%? This isn't rocket science. It just requires an individual to think independently instead of assuming people whom you have never met on an internet forum possess skills that are too good to be true.
 
Last edited:

MZKM

Major Contributor
Forum Donor
Joined
Dec 1, 2018
Messages
4,251
Likes
11,560
Location
Land O’ Lakes, FL
but those "3d direction plots"
I’ll just copy/paste from the Audioholics article:

[image]


[image]


Same as above but rotated:
[image]


Now colored by SPL loudness:
[image]


Now a top-down view:
[image]


Here it is as a 360° projection:
[image]
 

thewas

Master Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
6,907
Likes
16,972
A couple of strawmen in there, and I'm going to assume they were not intentionally constructed.
...
What people are claiming here is that they can eyeball a series of spin charts and do better than the computer.
I see strawmen in your argumentation. Which people exactly claimed that they can do better than a computer, and where exactly? Please, with links.

As I wrote above, for most of the knowledgeable people here the spinorama is a very good descriptor of FR/radiation/tonal issues, unlike just the LW you initially used in your thread. Personally, I prefer to additionally have the full horizontal and vertical directivity plots, as the standard CEA-2034 plot alone can, for example, hide some individual horizontal issues if they are compensated vertically.
 
Last edited:

fluid

Addicted to Fun and Learning
Joined
Apr 19, 2021
Messages
694
Likes
1,198
Just so there's no confusion this time, let me be specific. Harman's own published and peer-reviewed research demonstrates that a computerized analysis of spinorama data can predict 74% of the variation in listener preferences in a sample of 70 loudspeakers playing 3 tracks of rock music. 74% is pretty good, but it's not 100%.
The specifics are slightly different; the link below has the paper, which is for speakers despite the title.

https://www.researchgate.net/publication/332210798_A_Statistical_Model_that_Predicts_Listeners'_Preference_Ratings_of_Around-Ear_and_On-Ear_Headphones

"A new model has been developed that accurately predicts preference ratings of loudspeakers based on their anechoic measured frequency response. Our model produced near-perfect correlation (r = 0.995) with measured preferences based on a sample of 13 loudspeakers reported in Part One. Our generalized model produced a correlation of 0.86 using a sample of 70 loudspeakers evaluated in 19 listening tests. Higher correlations may be possible as we improve the accuracy and resolution of our subjective measurements, which is a current limiting factor."

There were two studies, and it can be confusing as they quote figures for different types of measurements, but the ones above are for 1/20-octave anechoic measurements. In the second study, the original 13 speakers were able to have their preference predicted to a very high degree of confidence. So when the speakers are all very similar in size and makeup, the confounding factors are reduced and the outcome is much more predictable. When the sample was expanded to 70, the speakers varied much more and the prediction became less accurate, but 0.86 is still a fairly good correlation.
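For what it's worth, the generalized model from that research is a simple linear formula. Here's a sketch from memory (the coefficients are as I recall them from Olive's published regression, so double-check against the paper before relying on them; the sample input values are entirely invented):

```python
# Olive's generalized loudspeaker preference model, quoted from
# memory -- verify the coefficients against the paper. Inputs are
# derived from the anechoic spinorama data:
#   nbd_on  -- narrow-band deviation of the on-axis response
#   nbd_pir -- narrow-band deviation of the predicted in-room response
#   lfx     -- log10 of the low-frequency extension in Hz
#   sm_pir  -- smoothness (r^2 of a regression line) of the PIR

def predicted_preference(nbd_on, nbd_pir, lfx, sm_pir):
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# Invented example values, just to show the formula's behavior:
# flat, smooth, deep-bass speaker vs. a lumpy one with weak extension.
good = predicted_preference(nbd_on=0.2, nbd_pir=0.2, lfx=1.5, sm_pir=0.95)
bad = predicted_preference(nbd_on=0.6, nbd_pir=0.6, lfx=1.9, sm_pir=0.70)
print(round(good, 2), round(bad, 2))  # 7.33 2.84
```

Note how mechanical it is: four numbers, fixed weights, no judgment. That is the "computerized analysis" being discussed above, and it is exactly what eyeballing a spin is implicitly trying to approximate.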

What people are claiming here is that they can eyeball a series of spin charts and do better than the computer. This is the equivalent of saying that a human being can do a better job simply by glancing at the same series of charts that correspond to FR measurements at various off-axis angles, then weighing the hundreds of different imperfections (including +/- dB FR deviations of varying bandwidth on the various charts), and coming up with a better "interpretation" than a computer.
I would agree that in reality it is quite difficult to make good quantitative comparisons between spin graphs at a quick look, particularly if they are presented in different aspect ratios. If the graphs are created in the same way, then it is easier to gauge at a glance whether something is likely to sound good or has obvious measurable response issues.
But I'm not done yet. The 74% correlation was constructed based on a restricted selection of music (rock), on a defined sample of 70 loudspeaker models. As should be obvious, extrapolating outside the conditions of the original correlation experiment is going to lower your accuracy. Put simply, on ASR, we're typically talking about interpreting spins on loudspeakers outside that original sample AND for listener preferences that invariably include a wide range of music genres for playback.
I think it is reasonable that any correlation with variables such as the music used will not necessarily apply to all other situations, but I don't think it would reduce the confidence to a level worse than pure chance.

Spin graphs are much more useful at picking the good from the bad than at picking the good from the great.
 

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,564
Likes
1,710
Location
California
I see strawmen in your argumentation. Which people exactly claimed that they can do better than a computer, and where exactly? Please, with links.
There you go again! This one appears to be intentional.

So are you acknowledging that a person eyeballing a series of spin charts cannot possibly exceed the accuracy of Harman's computerized analysis in being able to predict loudspeaker listener preferences?
 

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,564
Likes
1,710
Location
California
The specifics are slightly different; the link below has the paper, which is for speakers despite the title.

https://www.researchgate.net/publication/332210798_A_Statistical_Model_that_Predicts_Listeners'_Preference_Ratings_of_Around-Ear_and_On-Ear_Headphones

"A new model has been developed that accurately predicts preference ratings of loudspeakers based on their anechoic measured frequency response. Our model produced near-perfect correlation (r = 0.995) with measured preferences based on a sample of 13 loudspeakers reported in Part One. Our generalized model produced a correlation of 0.86 using a sample of 70 loudspeakers evaluated in 19 listening tests. Higher correlations may be possible as we improve the accuracy and resolution of our subjective measurements, which is a current limiting factor."

There were two studies, and it can be confusing as they quote figures for different types of measurements, but the ones above are for 1/20-octave anechoic measurements. In the second study, the original 13 speakers were able to have their preference predicted to a very high degree of confidence. So when the speakers are all very similar in size and makeup, the confounding factors are reduced and the outcome is much more predictable. When the sample was expanded to 70, the speakers varied much more and the prediction became less accurate, but 0.86 is still a fairly good correlation.

Correct. The original 13 were used to derive an original regression formula as a proof of concept, and it was possible to create a regression with r = 0.995. However, that same formula did not fit very well when applied to a larger sample, so a new regression formula was developed for the larger sample of 70 loudspeakers. The r for the final regression formula was 0.86. That's an r-squared of 0.74, which in plain language means that the regression formula, derived from computerized analysis of the measured performance, only accounted for 74% of the variation in blind listener preferences.

Put very simply, r = 0.86 is really good for arguing that the Harman model of measurement interpretation correlates well with listener preferences compared to prior models (like the Consumer Reports model). BUT when it comes to accurately predicting those listener preferences, you have to go further and understand that the r-squared is only 0.74. And if you look at the actual-vs-predicted score chart, it's very clear how much uncertainty that actually describes.
Spin graphs are much more useful at picking the good from the bad than at picking the good from the great.

THIS, I completely agree with. And the data in the Harman paper you linked supports it (meaning if you look at the variation in blind listener preferences at low predicted scores vs. high predicted scores, there's quite a bit of differentiation). But when trying to differentiate between a "5" and a "6", forget it; there's too much overlap.
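The r-to-r² arithmetic being argued about in this thread is worth making explicit (plain Python, nothing speaker-specific):

```python
# Correlation r vs. explained variance r^2: the headline r = 0.86
# sounds better than the fraction of preference variation it
# actually accounts for.
r = 0.86
r_squared = r ** 2
print(round(r_squared, 4))   # 0.7396, i.e. ~74% of variance explained

unexplained = 1 - r_squared
print(round(unexplained, 4))  # 0.2604 of the variance left unexplained
```

That leftover quarter of the variance is exactly the gap between "correlates well with preference" and "tells you which speaker a given listener will prefer."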
 