Then maybe they wouldn't mind releasing the data for the 57 other speakers used in the test (as we have the 13 originals). He was very incredulous and said point-blank that they were only focused on headphones.
It is worthwhile to look at how his formula changed between the 13 bookshelves and the 70 total.
13 bookshelves:
70 total:
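For reference, the 70-speaker model is commonly quoted with the coefficients below; treat them as the widely cited values rather than anything verified here. A minimal sketch:

```python
def predicted_preference(nbd_on, nbd_pir, lfx, sm_pir):
    """Predicted preference rating per the commonly quoted 70-speaker model.

    nbd_on  -- narrow-band deviation of the on-axis response
    nbd_pir -- narrow-band deviation of the predicted in-room response
    lfx     -- log10 of the low-frequency extension (-6 dB point, Hz)
    sm_pir  -- smoothness (r^2 of the regression line) of the PIR
    """
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir
```

Note how LFX carries the largest coefficient, and how smoothness of the PIR is the only smoothness term that survives.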
LFX initially carried little weight for the bookshelves, which makes sense: being bookshelf designs, their bass extensions would not differ much from one another.
LFQ (deviations in the bass) goes from nearly 20% of the model to not being factored in at all, which seems odd to me.
I have no clue why NBD_ON replaced AAD_ON. Maybe resonances became more of an audible issue.
Let's look at the correlation for all these for the 13 bookshelves:
We now know that Smoothness heavily favors tilt, such that two on-axis responses that are both neutral-ish but have different degrees of jaggedness won't score too dissimilarly. So a correlation of <0.2 makes sense, and equally so for the Listening Window, as that is usually only slightly tilted. What doesn't make sense is this: if it is so poorly correlated, why did it make up >25% of the original model, which was highly accurate?
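Checking those correlations ourselves is straightforward once the 13 metric values and subjective ratings are tabulated. The arrays below are synthetic stand-ins (the real values would come from the 13 Spins), so only the method, not the numbers, is meaningful:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 13 bookshelf data: one metric that weakly
# tracks preference plus noise, and one that is unrelated in this toy set.
preference = rng.uniform(3.0, 7.0, size=13)
sm_on = preference * 0.1 + rng.normal(0, 0.5, size=13)
lfx = rng.uniform(1.4, 1.9, size=13)

def pearson_r(x, y):
    """Pearson correlation coefficient between two metric vectors."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

print(f"SM_ON vs preference: r = {pearson_r(sm_on, preference):+.2f}")
print(f"LFX   vs preference: r = {pearson_r(lfx, preference):+.2f}")
```

Any metric whose |r| lands below ~0.2 against preference is doing little work on its own, which is what makes its 25%+ weighting in the original model so puzzling.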
Smoothness of the Sound Power was swapped for Smoothness of the PIR, which the graph above supports, but its weighting was dropped considerably. Knowing what we know, this is likely because many of the tower speakers are 3-way designs with wider directivity, which reduces the tilt and makes the metric less accurate; the paper states that ideal slopes could be tied to directivity.
Now that we have the 13 bookshelf Spins and their subjective rankings, at least some analysis can be done to see whether another formula can be made that does not rely on slope yet retains high predictive accuracy, one which in theory could be used for tower speakers as well. However, as pointed out, this Infinity was rated the best of the group of 13, yet it was only given a 6.16, so predicting the scores of better-measuring speakers is less accurate.
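Refitting such a formula is an ordinary least-squares problem. A minimal sketch, assuming a placeholder design matrix of three slope-free metrics (e.g. NBD_ON, NBD_PIR, LFX) in place of the real 13-speaker data, with `np.linalg.lstsq` standing in for the stepwise regression the papers used:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder design matrix: one row per speaker, one column per candidate
# metric -- deliberately no slope/smoothness term, per the idea in the text.
n_speakers = 13
X = rng.normal(size=(n_speakers, 3))
true_w = np.array([-2.5, -3.0, -4.3])        # fabricated "true" weights
ratings = 12.7 + X @ true_w + rng.normal(0, 0.1, size=n_speakers)

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(n_speakers), X])
coef, *_ = np.linalg.lstsq(A, ratings, rcond=None)

pred = A @ coef
r2 = 1 - np.sum((ratings - pred) ** 2) / np.sum((ratings - ratings.mean()) ** 2)
print("coefficients:", np.round(coef, 2))
print("R^2:", round(r2, 3))
```

With only 13 data points and a cluster of good-measuring speakers at the top, any such refit will struggle to discriminate among the best performers, which is exactly the Infinity problem noted above.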