
Amir recommendation criticism

Status
Not open for further replies.

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
I can give a small example of how tastes vary.
My father likes to turn the treble on speakers all the way up and I like it all the way down.
It is very subjective.
It is not very subjective. Research shows otherwise.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
Other people who listened to them. I was just interested in how they measured.
Again, what "other people say" surely cannot count if you want to dismiss my listening tests which have far more rigor than theirs. There is not one speaker, no matter how flawed, that doesn't have countless "other people" who say they are great.

Indeed, I provide my recommendation with zero care for what other people say. Their assessment has no basis to be right whatsoever. It is just random votes by people with no understanding of the research, no training, no measurements, no protocol, etc. It is just entertainment to read their feedback and that is that. Imaging this. Detail that. Bass this other thing. It is all nonsense that you can't possibly integrate together.
 

TWhitcombe3

New Member
Joined
Jun 5, 2020
Messages
1
Likes
8
Location
NY
I commend Amir on his evaluative process. I have followed many of these threads from a distance as a newer hobbyist and have found the information extremely helpful in dissecting and evaluating what makes a speaker or DAC a quality performer. Just as informative have been the points presented in thread responses to the reviews, and it is apparent people judge for themselves based on the actual measurements, not one listener's personal recommendation. Any ear has individual bias, and after a few reviews we can dissect a person's idiosyncratic tastes and weigh them against our own empirical experience as a moving baseline. If he had said he liked the speaker, as opposed to using "recommend", would this be an issue at all? Let's try to focus on the bigger picture here: the raw data presented here is astounding. Criticism should be constructive, not destructive. Don't lose sight of the forest for the trees; this platform is a wonderful wealth of shared information and passionate individuals.

Keep up the good work Amir, and blessings to everyone!
 

Snoochers

Active Member
Joined
Mar 28, 2020
Messages
187
Likes
70
I believe the recommend vs. not-recommend decision should be based purely on objective factors such as price and measurements. I would prefer if you did that, and perhaps also had a separate subjective recommendation that includes your listening impressions, the aesthetics of the speakers, form factor, warranty, etc.
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
Which response curve? On-axis? If so, it doesn't always point down. The predicted-in-room curve does, because sound becomes directional at higher frequencies, so when you take reflections into account for that measure, the high-frequency response tilts down.

I wasn't looking closely enough. The curves that are generally downward-sloping toward high frequencies are the predicted "in-room response" curves, and obviously they will slope downward given that the on-axis response is ideally flat and the directivity narrows at high frequencies.
 

KaiserSoze

Addicted to Fun and Learning
Joined
Jun 8, 2020
Messages
699
Likes
592
This is the last speaker I tested:

[attached measurement chart]


The high frequencies are actually tilting up, not down. So I don't know what you are saying there.

As I explained, if you mean the in-room prediction, then physics mandates the downward tilt because directivity increases as frequencies go up (see the red line above). The smaller the wavelength of sound relative to the size of the driver producing it, the more directional the sound becomes.

I appreciate this explanation and have to apologize because I was simply confused. I was looking at the predicted room response curve.
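
To make the wavelength-versus-driver-size point concrete, here is a minimal sketch (my own illustration of the common rule of thumb, not part of the review methodology): it takes the onset of beaming as the frequency where the reproduced wavelength shrinks to roughly the driver's effective diameter. The diameters below are hypothetical examples.

Code:
# Rule-of-thumb sketch: a driver becomes noticeably directional once the
# wavelength it reproduces approaches its own effective diameter.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

def approx_beaming_frequency_hz(diameter_m: float) -> float:
    """Frequency at which the wavelength equals the driver diameter."""
    return SPEED_OF_SOUND / diameter_m

# Hypothetical effective diameters, for illustration only.
for name, d in [("1-inch tweeter", 0.025), ("5-inch midwoofer", 0.13), ("8-inch woofer", 0.20)]:
    print(f"{name}: beaming sets in around {approx_beaming_frequency_hz(d):.0f} Hz")

This is why the predicted in-room curve tilts down even when the on-axis response is flat: above those frequencies less energy is radiated off-axis, so the reflected sound arriving at the listening position falls off.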
 

txbdan

Active Member
Joined
Apr 21, 2020
Messages
213
Likes
198
Obviously, the pure objective data provided by Amir's speaker measurements are incredibly valuable -- an unprecedented level of quality objective data on a wide range of speakers. I think many objectivists would also claim that subjective impressions from a single trained listener offer less useful information to shoppers than the objective data. Perhaps the subjective impressions are useful primarily if the reader is fully aware of (and aligns with) Amir's personal speaker preferences (e.g. bass-boosted speakers capable of reaching extremely high SPL in a large room).

To be fair, I don't think Amir misrepresents the meaning of the "Recommended" status in the reviews themselves. The reviews I've seen all honestly disclose the subjective nature of such conclusions. But that doesn't mean confusion and unintentionally misleading conclusions can't result from it.

Unfortunately, I do believe that the phrasing does end up misleading readers (unintentionally, I am sure):

Specifically: I think it's quite fair to expect that a "Recommended" status on a site called "Audio Science Review" will be read as reflecting the objective measurements (or at least something resembling a scientific method). In that case, assigning the conclusion "Recommended" or "Not Recommended" to a speaker entirely based on the subjective review portion could be tragically misleading (even if unintentionally so), since it will inevitably lead to some shoppers missing out on speakers that may have been better for them than those on the "Recommended" list.

In contrast, a more accurate status descriptor (like "Amir's Subjective Score" or "Amir's Preference" or "Subjective Recommendation") would completely solve this problem.

This misleading effect is unfortunately made worse by otherwise very helpful compilations like this: https://www.audiosciencereview.com/forum/index.php?pages/SpeakerTestData/. When I go to results compilations like the above, especially on a site focused on audio science and objective measurements, the first thing I want to do is sort by some kind of objective ranking! You can use the bars to filter on a minimum and maximum preference score, but the even more prominent filter here is the unqualified "Recommendation" status, which begs the user to filter to just the "Yes" entries.

Anyone I know trying to narrow down a selection of good speakers would first filter to the "Recommended = Yes" speakers, perhaps not knowing that this has absolutely nothing to do with the objective measurements or preference scores.

IMO the unqualified status "Recommended" in this list is perhaps more dangerously misleading than anything else on this site, but it's not really the "fault" of the compilation: compilations will always exist. This is why I want to emphasize how misleading the unqualified descriptor "Recommended" is, at least outside the context of the review write-up itself.

Exactly what I was going to add. The Recommendation status simply carries too much weight in how it's presented.
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
I believe the recommend vs. not-recommend decision should be based purely on objective factors such as price and measurements. I would prefer if you did that, and perhaps also had a separate subjective recommendation that includes your listening impressions, the aesthetics of the speakers, form factor, warranty, etc.
The objective part is the preference score, which is calculated from the measurements and covers far more shades of gray than a yes/no recommendation. The merit of the formula is openly discussed as well.

Anyone looking to make a final purchasing decision should look at the data and then read the listening impressions and ensuing conversations, which have been very productive.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
I believe the recommend vs. not-recommend decision should be based purely on objective factors such as price and measurements.
What precisely about the measurements should lead me to that?
 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,335
Likes
6,700
Am I missing something, or does that compilation disprove what you're saying, along with the notion put forth by so many that his recommendations are wildly inconsistent and often disagree with the measured performance?

If you sort those speakers by preference score (calculated from objective measurements), the bottom 20 speakers get zero recommendations. Of the top 20 speakers, 15 get recommendations. That seems to be a decent correlation to me.

Playing around with this more, there actually is a pretty good correlation between the Olive score and the subjective listening verdict.

Sorting by Preference Score (desc):
9 of the top 10 are recommended
11 of the top 15 are recommended
15 of the top 20 are recommended

Sorting by Preference Score (asc):
20 of the bottom 20 are not recommended
22 of the bottom 25 are not recommended

Sorting by the preference score with sub is less correlated, but that makes perfect sense given that he listens without subs.

I had never seen this page before. It's fun to look at the data like this. His subjective scores seem to be much more in line with the measurements than I thought. I think the few really wrong examples were warping my perception, but seeing the data like this makes it much easier to see the big picture, and the big picture actually shows good correlation.

Amir's weakness seems to be assessing value, mainly for the super cheap speakers, which makes sense when he's comparing them to what he's heard before.

Sorting by Price (desc):
6 of the top 10 are recommended
10 of the top 15 are recommended
14 of the top 20 are recommended

Sorting by Price (asc):
2 of the cheapest 16 are recommended
5 of the cheapest 20 are recommended
6 of the cheapest 25 are recommended

With a reviewer who is a perfect assessor of value, I would expect to see about a 50% recommendation rate (with a large enough sample size), so these numbers (especially for the cheap ones) are a bit off.
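
(As an aside, this kind of tally is easy to reproduce if the compilation is exported to a spreadsheet. The sketch below assumes a hypothetical CSV with made-up column names, since the page's actual export format isn't specified here.)

Code:
import pandas as pd

# Hypothetical export of the speaker data compilation; the column names are
# assumptions for illustration, not the page's actual schema.
df = pd.read_csv("speaker_data.csv")  # columns: speaker, price, pref_score, recommended

def recommendation_rate(frame: pd.DataFrame, sort_col: str, ascending: bool, top_n: int) -> float:
    """Share of the first top_n rows, after sorting, that carry a 'Yes' recommendation."""
    head = frame.sort_values(sort_col, ascending=ascending).head(top_n)
    return float((head["recommended"] == "Yes").mean())

for n in (10, 15, 20):
    print(f"Top {n} by preference score: {recommendation_rate(df, 'pref_score', False, n):.0%} recommended")
for n in (16, 20, 25):
    print(f"Cheapest {n} by price: {recommendation_rate(df, 'price', True, n):.0%} recommended")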

There are a few really odd examples that didn't make that list, like the little Sony SS-CS5 or the KEF Q350.

There are some examples where the listening impressions deviate wildly, like:

Rich (BB code):
Price   Pref.  Pref. w/ sub  Recommend?
$75     4.4    6.6           No
$78     4.5    7.2           No
$550    5.0    7.2           No
$700    5.6    7.4           No

$1000   4.7    6.6           Yes

Those are all the ones that seem really off to me. Those are the only 5 I could find, which, given the number of reviews, is impressive. I think I may have been too critical in the past. I was focusing way too much on the bad apples and not seeing the big picture.

I'm interested in why those 5 are so off, though. Perhaps they are doing something really wrong (or right) that the Olive preference score doesn't account for?
 

Sancus

Major Contributor
Forum Donor
Joined
Nov 30, 2018
Messages
2,923
Likes
7,616
Location
Canada
Rich (BB code):
Price   Pref.  Pref. w/ sub  Recommend?
$75     4.4    6.6           No
$78     4.5    7.2           No
$550    5.0    7.2           No
$700    5.6    7.4           No

$1000   4.7    6.6           Yes

Those are all the ones that seem really off to me. Those are the only 5 I could find, which, given the number of reviews, is impressive. I think I may have been too critical in the past. I was focusing way too much on the bad apples and not seeing the big picture.

I'm interested in why those 5 are so off, though. Perhaps they are doing something really wrong (or right) that the Olive preference score doesn't account for?

I suspect the two major explanations are: first, the preference score doesn't weight deviations by where in the frequency range they occur, so if a speaker is good in some ranges but bad in critical ranges like midrange vocal frequencies, it's going to sound bad no matter what the score says (the sketch at the end of this post illustrates why a frequency-blind average can hide that).

And secondly, it doesn't include directivity, so narrow directivity, especially around important frequency ranges, is going to hurt more.

Finally, distortion is relevant, particularly with the two-way coaxials: based on information mentioned elsewhere, they seem very prone to IMD, which may bother some people more than others, especially people who are trained to hear it or are sensitive to it. Increasingly I believe that the two-way coaxial is just a flawed design no matter what lipstick you put on it.

Edit: Also, I should add, there is some trade-off between characteristics involved. For example, if a small speaker can't get loud without bad distortion, then it will have trouble getting a recommendation unless it's exceptional in some other way, even if it has a decent score. That seems fair to me.
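
To illustrate the frequency-blindness point above, here is a minimal NBD-style sketch (a simplified reading of the narrow-band-deviation idea, not the exact published implementation): deviations are averaged within half-octave bands, and the bands are then averaged with equal weight, so a 2 dB error centred on vocals counts no more than a 2 dB error at 10 kHz.

Code:
import numpy as np

def narrow_band_deviation(freqs_hz, spl_db, f_lo=100.0, f_hi=12000.0):
    """NBD-style metric: mean absolute deviation within each half-octave band,
    averaged with equal weight across bands from f_lo to f_hi. Every band
    counts the same, so a midrange error is not weighted more heavily than
    a treble error."""
    freqs_hz = np.asarray(freqs_hz, dtype=float)
    spl_db = np.asarray(spl_db, dtype=float)
    band_devs = []
    f = f_lo
    while f < f_hi:
        f_next = f * 2 ** 0.5  # half-octave step
        mask = (freqs_hz >= f) & (freqs_hz < f_next)
        if mask.any():
            band = spl_db[mask]
            band_devs.append(np.mean(np.abs(band - band.mean())))
        f = f_next
    return float(np.mean(band_devs))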
 

escape2

Addicted to Fun and Learning
Joined
Mar 8, 2019
Messages
883
Likes
944
Location
USA
I can give a small example how taste vary.
My father like to put the treble in speakers all the way up and I like it all the way down.
It is very subjective.
This isn't as much about taste as it is about what happens to our hearing as we get older. As we get older, we gradually lose our ability to hear high frequencies, so we try to compensate for it by cranking up the treble knob. :)
 
OP

st379

Member
Joined
Jun 5, 2020
Messages
29
Likes
24
This isn't as much about taste as it is about what happens to our hearing as we get older. As we get older, we gradually lose our ability to hear high frequencies, so we try to compensate for it by cranking up the treble knob. :)

LOL could be :)

But it still shows a 180-degree difference in sound preference. Also, we have different bass preferences, and I don't think those are affected by age.
 

edechamps

Addicted to Fun and Learning
Forum Donor
Joined
Nov 21, 2018
Messages
910
Likes
3,620
Location
London, United Kingdom
And secondly, it doesn't include directivity, so narrow directivity, especially around important frequency ranges, is going to hurt more.

The Olive Preference Rating does take directivity into account in the balance of NBD_PIR (values wide directivity) and SM_PIR (values narrow directivity). It does so in a contrived, indirect, difficult-to-interpret way, but still, all else being equal, a speaker with different directivity will get a different score.
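
For reference, here is the Olive (2004) model as it is commonly quoted (a sketch of the published formula, not necessarily ASR's exact implementation). Directivity only enters indirectly, through the two predicted-in-room terms:

Code:
def olive_preference_rating(nbd_on: float, nbd_pir: float, lfx: float, sm_pir: float) -> float:
    """Olive (2004) speaker preference model as commonly quoted.
    nbd_on  -- narrow-band deviation of the on-axis response
    nbd_pir -- narrow-band deviation of the predicted in-room response
    lfx     -- low-frequency extension term (roughly, log10 of the -6 dB
               bass cutoff relative to 300 Hz)
    sm_pir  -- smoothness (r-squared of a regression fit) of the predicted
               in-room response
    """
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

All else being equal, a change in directivity shifts the predicted in-room response, and therefore NBD_PIR and SM_PIR, which is how it moves the score.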
 

Sancus

Major Contributor
Forum Donor
Joined
Nov 30, 2018
Messages
2,923
Likes
7,616
Location
Canada
The Olive Preference Rating does take directivity into account in the balance of NBD_PIR (values wide directivity) and SM_PIR (values narrow directivity). It does so in a contrived, indirect, difficult-to-interpret way, but still, all else being equal, a speaker with different directivity will get a different score.

Maybe, but I don't think it does so in a remotely useful way, honestly.

And honestly, I don't think that the data in the Olive study is remotely sufficient to say anything about directivity width and preference either.

Adding: Maybe a better way to state it would be "does not sufficiently take into account any preferences regarding directivity."
 

Jon AA

Senior Member
Forum Donor
Joined
Feb 5, 2020
Messages
465
Likes
905
Location
Seattle Area
Adding: Maybe a better way to state it would be "does not sufficiently take into account any preferences regarding directivity."
Isn't that appropriate given:
And honestly, I don't think that the data in the Olive study is remotely sufficient to say anything about directivity width and preference either.
How do you put it in a preference score if you don't know what's preferred?

What it does do, of course, is penalize speakers with uneven directivity/directivity problems, because when directivity is uneven a good PIR score implies a bad on-axis score and vice versa. Seems about right to me.
 

Sancus

Major Contributor
Forum Donor
Joined
Nov 30, 2018
Messages
2,923
Likes
7,616
Location
Canada
Isn't that appropriate given:

How do you put it in a preference score if you don't know what's preferred?

It's not a criticism of the preference score, just one of the possible explanations for why recommendations might differ from good scores. I agree there isn't sufficient public research to include it.
 

steve29

New Member
Joined
Oct 24, 2019
Messages
2
Likes
1
Maybe I've been misunderstanding, but I always thought that Amir's recommendation of speakers was based on how room-agnostic the speakers were? That is, speakers that sound mostly the same in a variety of rooms would be recommended, but speakers that were very picky about the room (flooring, walls, ceiling, positioning, etc) would get the lesser panthers; I thought the recommendations did not have much to do with subjective sound quality.
 

JohnYang1997

Master Contributor
Technical Expert
Audio Company
Joined
Dec 28, 2018
Messages
7,175
Likes
18,292
Location
China
I was not calibrating output levels at that time, so you can't go by that. Newer reviews are at 86 or 96 dB SPL, enabling proper comparisons. Prior reviews kept the input voltage constant, which works for speakers with identical sensitivity but not otherwise.
That's one of my guesses: higher-sensitivity speakers have higher distortion, and you may like higher-sensitivity speakers.
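
As an illustration of the level-matching point in the quote above, here is a minimal sketch (my own arithmetic, not the review protocol); sensitivity is taken as dB SPL at 2.83 V / 1 m, and the example figures are hypothetical.

Code:
def drive_voltage_for_target_spl(sensitivity_db_at_2p83v: float, target_spl_db: float) -> float:
    """Voltage needed (at 1 m, ignoring compression) to reach target_spl_db for a
    speaker rated sensitivity_db_at_2p83v dB SPL with 2.83 V input."""
    return 2.83 * 10 ** ((target_spl_db - sensitivity_db_at_2p83v) / 20)

# With a constant 2.83 V drive, an 84 dB speaker plays 6 dB quieter than a
# 90 dB one; calibrating to 86 dB SPL instead requires different voltages.
for sens in (84.0, 87.0, 90.0):
    print(f"Sensitivity {sens:.0f} dB: {drive_voltage_for_target_spl(sens, 86.0):.2f} V for 86 dB SPL")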
 