
Predicted In Room Response?

Dave Zan

Active Member
Joined
Nov 19, 2019
Messages
172
Likes
496
Location
Canberra, Australia
I have seen a lot of plots of the Predicted (or Estimated) In Room Response.
Presumably created automatically by the Klippel software.
But I can't seem to find a definitive statement of the precise calculation from the Spinorama data.
Maybe it has been posted and I just can't find it in all the other posts that come up in a search.
If so, can someone point me to the definition?
Otherwise, since it is such an important metric, maybe it would be helpful to place it and similar information in a "sticky" post at the top of this forum.
It may help reduce some of the debate I noticed as I searched the topic.

David

Update: Thanks to MZKM
Does anyone know how this was derived or the paper that published it?
 
From Amir’s first speaker review:
https://www.audiosciencereview.com/...mkii-and-control-1-pro-monitors-review.10811/

[attached image from the linked review]



For a really good speaker, the PIR should thus be fairly close to the Early Reflections curve, but will likely be a bit lower in level depending on how wide or narrow the dispersion is.
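For reference, here is my understanding of the weighting behind the PIR. Treat the exact weights as an assumption on my part (my reading of CTA-2034-A, not taken from the attachment above): an energy-domain average of 12% Listening Window, 44% Early Reflections and 44% Sound Power.

```python
import math

def predicted_in_room(lw_db, er_db, sp_db):
    """Sketch of the Estimated/Predicted In-Room Response at one frequency.

    Assumed weighting (my reading of CTA-2034-A, not verified here):
    12% Listening Window, 44% Early Reflections, 44% Sound Power,
    combined in the energy domain rather than directly in dB.
    """
    energy = (0.12 * 10 ** (lw_db / 10) +
              0.44 * 10 ** (er_db / 10) +
              0.44 * 10 ** (sp_db / 10))
    return 10 * math.log10(energy)
```

Because the weights sum to 1, equal inputs pass straight through unchanged, and with typical curves the result sits between the three inputs, weighted toward Early Reflections, which is why the PIR hugs that curve.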
 
...where for example the KEF Reference 5 has a Smoothness (PIR) of 0.99:

That is actually a pretty pertinent question, and well worth a thread resurrection.
I note that the data for the KEF Reference 5 is manufacturer provided.
I don't want to be cynical merely on principle but that Smoothness number looks a bit suspicious to me.
The context is that the Preference Score is calculated from the Predicted In-room Response.
The statistics used to create that calculation are not entirely above question.
There are inevitable simplifications, for instance a linear model is used that clearly breaks down eventually even if it works OK within some limits.
There is also a non-intuitive interaction between Smoothness and the other components of the preference score.
This is the specific reason I am a little suss on the 0.99 KEF claim.
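For context, the published regression model as I recall it from Olive's papers (the coefficients are from memory, so worth double-checking against the originals): the score mixes Smoothness with narrow-band deviation and low-frequency extension terms, which is where the non-intuitive interactions come from.

```python
def olive_preference(nbd_on, nbd_pir, sm_pir, lfx):
    """Olive's predicted preference rating, coefficients from memory
    (verify against the paper/patent before relying on them):
      nbd_on  - narrow-band deviation of the on-axis curve (dB)
      nbd_pir - narrow-band deviation of the PIR (dB)
      sm_pir  - smoothness (r^2) of the PIR, in [0, 1]
      lfx     - log10 of the low-frequency extension (Hz)
    """
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir + 4.31 * sm_pir - 2.32 * lfx
```

If those coefficients are right, a jump in SM from 0.84 to 0.99 is worth about 0.65 points on its own, which is why a suspiciously high SM moves the headline number noticeably.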
The issue has already been discussed in at least one of the Preference Score threads on this site, https://www.audiosciencereview.com/...er-preference-ratings-for-loudspeakers.11091/, which is well worth a read.
AFAIK it was never determined whether Sean Olive, the creator of the score, was actually even aware of the issue.
It looked possible that he simply took what dropped out of his automated comparison software because it was quite usable, whatever the theoretical problems.
An outline of the Spinorama is discussed in Toole's book.
Most of the actual Olive papers on the calculations are available on-line.
Does that answer your questions?

Best wishes
David
 
Thanks David - I was more trying to understand how the Smoothness PIR is calculated and if it relates to the graph below:

View attachment 108108

Is the maximum value 1.0?

Is a higher number 'better'?

The Revel Salon 2 for example has a Smoothness of 0.84 but the KEF R3 has a value of 0.89. I love my KEFs but I think we all know which is the better speaker so what does this single number represent and how do we use it?
 
The definition of smoothness (SM) is given in the "Olive score" patent.

[attached image: SM.PNG, the smoothness (SM) definition from the patent]


My comments/opinions/criticisms on it are that the squared Pearson correlation coefficient (r^2) seems an arbitrary choice. There are multiple ways to quantify smoothness (or error) relative to a linear regression fit, so why this particular one? It also doesn't appear to be based on any psychoacoustic understanding.
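In code, my reading of that patent definition looks roughly like this. The 100 Hz to 16 kHz band limits and the fit of level against log-frequency are my assumptions about how the regression is set up:

```python
import math

def smoothness(freqs_hz, spl_db, f_lo=100.0, f_hi=16000.0):
    """SM as I read the patent: the squared Pearson correlation (r^2)
    of a linear regression of level (dB) on log10(frequency),
    restricted to an assumed 100 Hz - 16 kHz band.
    """
    pts = [(math.log10(f), y) for f, y in zip(freqs_hz, spl_db)
           if f_lo <= f <= f_hi]
    xs, ys = zip(*pts)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # r^2, in [0, 1]; undefined (0/0) if the curve is perfectly flat
    return (sxy * sxy) / (sxx * syy)
```

A perfectly straight tilted line scores 1.0; adding any resonance bump to it pulls the score below 1.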

Below is from research by Drs Toole and Olive when they were at the NRC in Canada. It shows resonances of different Qs at levels that are just detectable using symphonic music (figure source).

[attached image: Different Qs.PNG, just-detectable resonances of different Qs]


When I calculated the smoothness of my simulation of these 3 resonances, I got very different smoothness numbers. [I haven't verified my narrow band deviation (NBD) calculations, so my numbers may not be correct.]

[attached image: Resonance.jpg, simulated resonances and their smoothness numbers]


The low-Q curve is much "smoother" than the high-Q ones. Granted, they are all very small numbers, but the relative differences in magnitude are huge. I don't know whether one can say these resonances are just detectable, but some are a lot more objectionable than others.

The Pearson's r metric penalizes narrow but tall peaks more, and broad but shallow bumps less. However, Toole has said that narrow spikes may be less serious than broad bumps. The 708P's FR curves have lots of little spikes and dips, and therefore lower Olive scores, but these seem not to be very audible defects.
 
A problem with Pearson's r is that it is somewhat slope-dependent. I was a bit baffled that the r^2 numbers in my previous post were so small. When I applied a slope to the curves, the SM (= r^2) numbers changed drastically: they are now all >0.9 (i.e. much "smoother" curves).

[attached image: Resonance2.jpg, the same resonances with a slope applied]


Dr Olive's study clearly stated that the slope of the PIR matters little. But in the contrived case of a near-zero slope, a very low SM score may result. (A perfectly flat horizontal line blows up the Pearson's r calculation with a 0 divided by 0.)
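Here is a quick sketch of that slope sensitivity, using toy numbers of my own (the bump amplitude, width and tilt are invented for illustration, not taken from my plots): the same narrow resonance scores a tiny r^2 on a flat baseline, and >0.9 once a downward tilt is added.

```python
import math

def r_squared(xs, ys):
    """Squared Pearson correlation of a straight-line fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return (sxy * sxy) / (sxx * syy)

# log10(f) axis from 100 Hz to ~16 kHz, plus a narrow 3 dB resonance at 1 kHz
logf = [2.0 + 2.2 * k / 200 for k in range(201)]
bump = [3.0 * math.exp(-((x - 3.0) ** 2) / (2 * 0.05 ** 2)) for x in logf]

flat_curve = bump                                                  # flat baseline + resonance
tilted_curve = [-8.0 * (x - 2.0) + b for x, b in zip(logf, bump)]  # -8 dB/decade tilt + same resonance

sm_flat = r_squared(logf, flat_curve)    # tiny: the bump is all the fit "sees"
sm_tilt = r_squared(logf, tilted_curve)  # > 0.9: the tilt dominates the fit
```

Same defect, wildly different SM, which matches the jump to >0.9 described above.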
 