I don't think so. A -30° slope and a +30° slope should result in the same correlation coefficient magnitude; just the sign would be opposite.
So yeah, I think in other words you are confirming what I understood to be the case, i.e. that a speaker with a steeply upward-sloping PIR would score higher (all else equal) than a speaker with a flat or shallowly downward-sloping PIR.
Not necessarily, it all depends on how the correlation coefficient is used. An upward-sloping PIR may well score worse than one with a flat PIR.
Ok, so do we think that Olive deliberately built in SM_PIR such that it would advantage speakers with the steepest possible downward-sloping PIR?
If so, it seems hard not to conclude that, on the basis of the data set, listeners tended to prefer speakers with steeply narrowing directivity over speakers with wider/more constant directivity.
This simply seems wrong to me: once the PIR reaches a certain negative slope that is perceived as optimal, making the slope even more negative should sound less optimal, IMHO.
Would you be able to do it with 70 speakers and only 4 of these variables though?
Here's an excerpt from section 2.1 of the paper:

"The Mallow’s CP value is 4 indicating that the model is not too over-fitted for the number of variables used"

And, later in section 4 (this is about Test One though):

"The lower correlation was likely related to the model being too tightly fitted to the small sample (13 loudspeakers) and/or the loss of precision from combining subjective data from 18 unrelated tests."

1) Up to this point, the model has been tested in one listening room.
2) The model doesn’t include variables that account for nonlinear distortion (and to a lesser extent, perceived spatial attributes).
3) The model is limited to the specific types of loudspeakers in our sample of 70.
4) The model’s accuracy is limited by the accuracy of the subjective measurements.

So it does look like Olive was well aware of the over-fitting problem. I do agree that the model being evaluated against the same dataset it was trained on is worrying, though.

We know the model is far from perfect. But it's the best we've got right now, and it's still miles ahead of anything that has ever been attempted with regard to producing reliable objective ratings of loudspeakers.
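For readers unfamiliar with the statistic: Mallows' Cp compares a candidate model's residual error against the error variance of a full model, and a value close to the number of fitted parameters suggests the model is neither badly over- nor under-fitted. A hedged sketch on synthetic data (the paper's raw data isn't public, so the predictors and sample here are invented):

```python
import numpy as np

def mallows_cp(X_sub, X_full, y):
    """Mallows' Cp: SSE_p / s^2 - n + 2*p, with s^2 estimated from the
    full model. Columns of X include the intercept."""
    n = len(y)
    def sse(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r)
    p_full = X_full.shape[1]
    s2 = sse(X_full) / (n - p_full)      # error variance from the full model
    p = X_sub.shape[1]
    return sse(X_sub) / s2 - n + 2 * p

rng = np.random.default_rng(0)
n = 70                                   # same sample size as the 70-speaker set
x = rng.normal(size=(n, 4))              # 4 hypothetical predictors
y = x @ np.array([1.0, -0.5, 0.3, 0.2]) + rng.normal(scale=0.5, size=n)
X_full = np.column_stack([np.ones(n), x])

# For the full model itself, Cp equals p (5 here, incl. intercept) by construction.
print(round(mallows_cp(X_full, X_full, y), 2))  # 5.0
```

Dropping a genuinely useful predictor inflates Cp well above p, which is the under-fitting signal the statistic is designed to catch.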
It seems wrong to me too, yet a difficult conclusion to escape.
How many buyers of this inexpensive speaker have the inclination, ability and means to EQ this speaker?
How many audiophiles would bother?
Just askin'. View attachment 65320
The other thing I'm struggling to understand is why he used on-axis and not LW.
Because on-axis had higher correlation with the sampled preference data. As I've mentioned in other threads, this is the expected result, as the on-axis is likely more representative of the direct sound, because every speaker was listened to directly on-axis from a single position. This is not representative of real world use, however, where we know most people listen off-axis, where people tend to move more, and where some speakers are meant to be heard off axis.
But the paper was not interested in optimizing the tested speaker's performance, finding out which specific speakers are the best, or what real world use is like. It was interested in correlating listening impressions to measured frequency responses.
At best, it maybe just tells us that for the speakers tested, a 60-degree-wide and 20-degree-tall listening window is not as representative of preference for a speaker auditioned on-axis from a single chair as the actual on-axis curve is. That's not a surprise. I'd be willing to bet good money a smaller listening window, maybe +/- 10 degrees, would've fared better, but we don't have the raw data to know.
For our purposes of trying to figure out which speakers are better than others, imo the listening window is still more useful. Harman engineers themselves seem to prefer optimizing for the listening window, as can be seen in so many spinoramas. Kevin Voecks of Revel says in this thread: "As our research has long indicated, the listening window is a far better indicator of direct sound quality than is any on-axis curve."
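For context, a listening-window curve is just an average of several off-axis measurements, so a narrower hypothetical window could be computed the same way if the per-angle data were available. A minimal sketch; the dB-domain averaging and the toy angle set are my simplifying assumptions, not the exact CTA-2034 recipe:

```python
def window_average(curves_db):
    """Point-by-point mean of several magnitude responses given in dB."""
    return [sum(vals) / len(vals) for vals in zip(*curves_db)]

# Invented 3-bin responses: on-axis plus two hypothetical +/-10 degree curves.
on_axis = [0.0, 0.0, -1.0]
h10 = [0.0, -0.5, -2.0]
v10 = [-0.5, -0.5, -2.5]

narrow_window = window_average([on_axis, h10, v10])
```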
Well, it's also my understanding that waveguides inherently cause issues on-axis that are not seen even a few degrees off-axis. Since Harman uses waveguides, it would benefit them to focus on the listening window.
Maybe @amirm can help us test it.
I created another filter which increases the slope of the PIR, and I have attached it, so if you have time for another listening test we would greatly appreciate it.
PIR - original in red, initial correction in blue, increased-slope correction in green. The filter is attached (.txt -> .wav). Please also apply your room EQ 100 Hz filter as you did with the 1st test.
View attachment 65384
This won’t prove much. You need to compare speakers that are near identical in every aspect except the directivity. Increasing the slope via EQ will bring down the on-axis and listening window so they are no longer neutral.
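To make the objection concrete: an EQ tilt applied ahead of the speaker shifts every measured curve by the same amount, so steepening the PIR this way necessarily tilts the on-axis and listening window too. A toy sketch of such a tilt (the pivot frequency and slope are illustrative, not the filter attached above):

```python
import math

def tilt_db(freq_hz, slope_db_per_octave, ref_hz=1000.0):
    """Gain in dB at freq_hz for a constant dB/octave tilt pivoting at ref_hz."""
    return slope_db_per_octave * math.log2(freq_hz / ref_hz)

# A -1 dB/octave tilt: flat at the 1 kHz pivot, falling above, rising below.
print(tilt_db(2000.0, -1.0))  # -1.0 (one octave above the pivot)
print(tilt_db(500.0, -1.0))   # 1.0 (one octave below)
```

Because this gain applies identically at every angle, the PIR slope and the on-axis slope change by exactly the same amount, which is the confound being pointed out.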
@MZKM Can you please calculate the rating for this filter for the PIR with increased slope?
I’m about to be out of the house for a while, so it’ll have to wait (I foolishly deleted the rows to match the octave smoothing rather than hide them, so I will manually have to do the adjustment again).