It is not very subjective. Research shows otherwise.

"I can give a small example of how tastes vary: my father likes to turn the treble on his speakers all the way up, and I like it all the way down. It is very subjective."
Again, what "other people say" surely cannot count if you want to dismiss my listening tests, which have far more rigor than theirs. There is not one speaker, no matter how flawed, that doesn't have countless "other people" who say it is great.

"Other people who listened to them. I was just interested in how they measured."
Which response curve? On-axis? If so, it doesn't always point down. The predicted-in-room response does, because sound becomes directional at higher frequencies, so once you take reflections into account for that measure, the high-frequency response tilts down.
This is the last speaker I tested:
The high frequencies are actually tilting up, not down. So I don't know what you are saying there.
As I explained, if you mean the in-room prediction, then physics mandates that due to the rise in directivity as frequency goes up (see the red line above). The smaller the wavelength of sound relative to the size of the driver producing it, the more directional it becomes.
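As a rough, back-of-the-envelope illustration of that wavelength-versus-driver-size relationship (not something taken from the measurements above), here is a small Python sketch estimating where a driver starts to beam, assuming beaming onset roughly where the wavelength equals the driver's effective diameter; the driver sizes are made up for illustration:

Code:
# Rough estimate of the "beaming" frequency of a circular driver:
# directivity narrows once the wavelength becomes comparable to the
# driver's effective radiating diameter. Numbers are illustrative only.
SPEED_OF_SOUND = 343.0  # m/s, air at roughly 20 degrees C

def beaming_frequency_hz(effective_diameter_m: float) -> float:
    """Frequency where the wavelength equals the driver's effective diameter."""
    return SPEED_OF_SOUND / effective_diameter_m

for name, diameter_m in [("25 mm tweeter", 0.025),
                         ("130 mm midwoofer", 0.13),
                         ("200 mm woofer", 0.20)]:
    print(f"{name}: beaming starts around {beaming_frequency_hz(diameter_m):,.0f} Hz")

The point is simply that a larger radiator narrows earlier, which is why the predicted in-room curve can tilt down at the top even when the on-axis response does not.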
Obviously, the pure objective data provided by Amir's speaker measurements are incredibly valuable -- an unprecedented level of quality objective data on a wide range of speakers. I think many objectivists would also claim that subjective impressions from a single trained listener offer less useful information to shoppers than the objective data. Perhaps the subjective impressions are useful primarily if the reader is fully aware of (and aligns with) Amir's personal speaker preference (e.g. bass-boosted speakers capable of reaching extremely high SPL in a large room).
To be fair, I don't think Amir misrepresents the meaning of the "Recommended" status in the reviews themselves. The reviews I've seen all honestly disclose the subjective nature of such conclusions. But that doesn't mean confusion and unintentionally misleading takeaways don't result from it.
Unfortunately, I do believe that the phrasing does end up misleading readers (unintentionally, I am sure):
Specifically: I think it's quite fair to expect that a "Recommended" status on a site called "Audio Science Review" will be read as something reflective of objective measurements (or at least something resembling a scientific method). In that case, assigning the conclusion "Recommended" or "Not Recommended" to a speaker entirely based on the subjective portion of the review could be tragically misleading (even if unintentionally so), since it will inevitably lead to some shoppers missing out on speakers that may have been better for them than those on the "Recommended" list.
In contrast, a more accurate status descriptor (like "Amir's Subjective Score" or "Amir's Preference" or "Subjective Recommendation") would completely solve this problem.
This misleading effect is unfortunately made worse by otherwise very helpful compilations like this: https://www.audiosciencereview.com/forum/index.php?pages/SpeakerTestData/. When I go to a results compilation like that, especially on a site focused on audio science and objective measurements, the first thing I want to do is sort by some kind of objective ranking. You can use the bars to filter for minimum and maximum preference score, but the even more prominent filter is the unqualified "Recommendation" status, which begs the user to filter to just the "Yes" entries.
Anyone I know trying to narrow down a selection of good speakers would first filter to the "Recommended = Yes" speakers, perhaps not knowing that this has absolutely nothing to do with the objective measurements or preference scores.
IMO, the unqualified "Recommended" status in this list is perhaps more dangerously misleading than anything else on this site, but it's not really the "fault" of the compilation: compilations will always exist. This is why I want to emphasize how misleading the unqualified descriptor "Recommended" is, at least outside the context of the review write-up itself.
The objective part is the preference score, which is calculated and covers far more shades of gray than a Yes/No recommendation. The merit of the formula is openly discussed as well.

"I believe the recommend vs. not recommend should be purely based on objective factors such as price and measurements. I would prefer if you did that, and perhaps also had a separate subjective recommendation that includes your listening impressions, the aesthetics of the speakers, form factor, warranty, etc."
What precisely about the measurements should lead me to that?

"I believe the recommend vs. not recommend should be purely based on objective factors such as price and measurements."
Am I missing something, or does that compilation disprove what you're saying, along with the notion put forth by so many that his recommendations are wildly inconsistent and often disagree with the measured performance?
If you sort those speakers by preference score (calculated from objective measurements), the bottom 20 speakers get zero recommendations. Of the top 20 speakers, 15 get recommendations. That seems to be a decent correlation to me.
Price    Pref.   Pref. (w/ sub)   Recommended?
$75      4.4     6.6              No
$78      4.5     7.2              No
$550     5.0     7.2              No
$700     5.6     7.4              No
$1000    4.7     6.6              Yes
Those are the only five I could find that seem really off to me, which, given the number of reviews, is impressive. I think I may have been too critical in the past; I was focusing way too much on the bad apples and not seeing the big picture.
I'm interested in why those five are so off, though. Perhaps they are doing something really wrong (or right) that the Olive preference score doesn't account for?
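For anyone who wants to repeat the top-20/bottom-20 check against the published compilation, here is a minimal Python sketch; the CSV file name and column names (pref_score, recommended) are assumptions for illustration, since the actual export format may differ:

Code:
# Minimal sketch of the top/bottom-20 check described above.
# Assumes a hypothetical CSV export of the review compilation with
# columns "speaker", "pref_score", and "recommended" ("Yes"/"No").
import csv

with open("speaker_data.csv", newline="") as f:
    rows = [r for r in csv.DictReader(f) if r["pref_score"]]

# Sort ascending by the calculated (objective) preference score.
rows.sort(key=lambda r: float(r["pref_score"]))

def recommended_count(subset):
    return sum(1 for r in subset if r["recommended"].strip().lower() == "yes")

bottom_20, top_20 = rows[:20], rows[-20:]
print("Recommended among bottom 20 by preference score:", recommended_count(bottom_20))
print("Recommended among top 20 by preference score:", recommended_count(top_20))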
This isn't as much about taste as it is about what happens to our hearing as we get older. As we age, we gradually lose the ability to hear high frequencies, so we try to compensate by cranking up the treble knob.

"I can give a small example of how tastes vary: my father likes to turn the treble on his speakers all the way up, and I like it all the way down. It is very subjective."
And secondly, it doesn't include directivity, so narrow directivity, especially around important frequency ranges, is going to hurt more.
The Olive Preference Rating does take directivity into account through the balance of NBD_PIR (which values wide directivity) and SM_PIR (which values narrow directivity). It does so in a contrived, indirect, difficult-to-interpret way, but still, all else being equal, a speaker with different directivity will get a different score.
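For context, this is the commonly cited form of the Olive model that the preference score is based on; the coefficients below are reproduced as the formula is generally quoted, so treat them as an assumption and double-check against the original paper before relying on them:

Code:
# Olive preference model in the form it is usually quoted; verify the
# coefficients against the original paper before relying on them.
# NBD = narrow-band deviation, LFX = low-frequency extension (log10 of Hz),
# SM = smoothness of the curve; ON = on-axis, PIR = predicted in-room response.
def olive_preference(nbd_on, nbd_pir, lfx, sm_pir):
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

Directivity only enters indirectly, through how it shapes NBD_PIR and SM_PIR, which is the "contrived, indirect" point made above.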
Isn't that appropriate, given:

"Adding: Maybe a better way to state it would be 'does not sufficiently take into account any preferences regarding directivity.'"

How do you put it in a preference score if you don't know what's preferred? And honestly, I don't think the data in the Olive study is remotely sufficient to say anything about directivity width and preference either.
Stop right there. Become a forum donor.

"I want to say that I really like that this site..."
"That's one of my guesses, that higher-sensitivity speakers have higher distortion. And you may like higher-sensitivity speakers."

I was not calibrating output levels at that time, so you can't go by that. Newer reviews are done at 86 or 96 dB SPL, enabling proper comparisons. Prior reviews kept the input voltage constant, which works for speakers with identical sensitivity but not otherwise.
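As a small illustration of why a constant drive voltage isn't a fair comparison across speakers (the two speakers and their sensitivities below are made up), here is a Python sketch comparing the SPL each produces at a fixed voltage with the voltage each needs to reach a common 86 dB target:

Code:
# Why constant input voltage isn't a fair comparison across speakers.
# Sensitivity is specified in dB SPL at 1 m for a 2.83 V input;
# the speaker names and numbers below are made up for illustration.
import math

speakers = {"Speaker A": 84.0, "Speaker B": 90.0}  # dB / 2.83 V / 1 m
drive_voltage = 2.83   # volts, held constant (old method)
target_spl = 86.0      # dB SPL, calibrated target (new method)

for name, sensitivity in speakers.items():
    spl_at_fixed_v = sensitivity + 20 * math.log10(drive_voltage / 2.83)
    v_for_target = 2.83 * 10 ** ((target_spl - sensitivity) / 20)
    print(f"{name}: {spl_at_fixed_v:.1f} dB at {drive_voltage} V; "
          f"needs {v_for_target:.2f} V to reach {target_spl} dB")

At the same voltage the two speakers play about 6 dB apart, so distortion and compression comparisons at a fixed voltage favor the more sensitive one; calibrating to a common SPL removes that bias.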