You have never needed to buy curtains.
Keith
*waits for the "Caption the Photo" contest to start*
"And this, well, nevermind..."
"I had a ton of subjectivists dismiss the tests on the basis of the amplifier used in the Harman tests!"
And this needs a better amplifier.
And what was that amplifier?
You can see it in the picture I posted: a Proceed amp. So no low-end junk amplifier.
The other myth here is that we all have different preferences (hence his comment on who to listen to). That is just not so when we test speakers blind. Here again is Harman's research and preference for four speakers among different groups of listeners:
View attachment 19866
Note that the relative scoring of each speaker remains the same no matter which group is selected.
Harman-trained listeners are more picky but show the same preference order as the reviewers and academics, as I have highlighted.
"The key thing is that all groups rated the speakers in the same order. Trained listeners were more picky and their ratings varied by a larger amount, which is what one would expect."
If everyone had the same preference, then the lines would be perfectly horizontal. Just looking at the graph shows that we do have individual preferences.
I understand, but my point still stands: the graph shows that we do have individual preferences.
So untrained listeners, trained listeners, musicians, salespeople, and reviewers all picked the same kind of speaker as best. This is what you would expect. They all picked the same one as worst (unfortunately for Ray and myself). So everyone has the same ordered preference for the same qualities, but trained and experienced people are more picky about it and indicate larger differences in their evaluation of the sound.
All seems perfectly reasonable to me.
Even the order of preference on the graph is not absolute. For the Acad10 group, loudspeakers I and P are reversed compared to everyone else ("...So everyone has the same ordered preference for the same qualities..." <-- incorrect).
Which proves my point that we do have individual preferences.
Actually, the relative scoring of each speaker is different (it does not remain exactly the same); I measured the relative differences with a ruler.
"Perhaps there is more on this in the book, or in Harman's published research?"
Probably not as specifically as you would like. Chapters 13 and 17 do touch on the subject.
Hi Dragon,
@Blumlein 88 and @amirm may have been a little careless with some word choices, but the thrust of what they are saying is that "relative" choices are generally preserved between groups. "Relative" means the rank is preserved, but not the absolute value or absolute difference. The P - I distinction deserves special consideration.
One can see from the graph the significant result:
P,I >* B >** M
* for 15/16 individual groups, and the combined group (p<0.0001)
** for all individual groups, and the combined group (p<0.0001)
Even though individual scores and differences between scores vary, the rank is well preserved.
The difference between P and I is less clear. Since the tabular data equivalent to the graph is not in the paper, one has to eyeball it. But it is clear that for 9/16 groups there is no statistically significant difference. For 4/16 groups there clearly is a statistically significant difference, with 3 preferring P and 1 preferring I. For 3/16 the significance is unclear from the graph (since I don't trust pixel measurements). When all the groups are combined, the mean difference between P and I is 0.336 (from the paper), with a significance of p=0.0214.
Some will argue that is significant, but in my experience with this type of data the p=0.05 cutoff is arbitrary, other ways of viewing the data are relevant, and I would prefer further tests before drawing a conclusion about P vs. I.
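To make the rank-preservation idea concrete, here is a minimal Python sketch. The group names and scores below are made up for illustration (not the actual Harman data); it simply shows how one would check each group's preference order and see the B > M ranking hold everywhere while P and I can flip:

```python
# Hypothetical mean preference ratings for speakers P, I, B, M from a few
# listener groups. These numbers are invented, not from the Harman paper.
scores = {
    "Trained":   {"P": 6.5, "I": 6.2, "B": 5.1, "M": 3.8},
    "Reviewers": {"P": 6.0, "I": 6.1, "B": 5.3, "M": 4.2},
    "Sales":     {"P": 5.8, "I": 5.6, "B": 5.0, "M": 4.5},
}

def rank_order(group):
    # Speakers sorted from most to least preferred for one group.
    return sorted(group, key=group.get, reverse=True)

orders = {name: rank_order(g) for name, g in scores.items()}
for name, order in orders.items():
    print(name, order)

# In this toy data, B ranks above M for every group (rank preserved),
# while P vs. I flips for "Reviewers" -- the same pattern discussed above.
```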
Cheers, SAM
That's interesting.
I think what the graph shows is that all the groups showed similar preferences.
I don't have a copy of Dr. Toole's book, and in any case I'm not sure how much of the research in it explains this, but unlike @dragonspit4 my concern is not so much about the relative scores varying (sure, they do, but not very significantly, at least in the case of B vs. M vs. P/I). My concern is that the tests all took place in the same room.
I'd like to see how different rooms affected preferences, and to what extent the results hold when the room is varied.
Perhaps there is more on this in the book, or in Harman's published research?
Harman has three separate rooms where this type of research is performed. I showed the smallest one, where we sat when we tested. Here is another, where they test in-wall speakers and EQ products:
Nice pictures!
View attachment 19999
That is Dr. Olive there, and this is the session I mentioned where he tested us as a group. Behind the screen is a triangular section of the wall that rotates, with different speakers mounted on it. You can see it in this private Harman presentation:
View attachment 20002
Then there is a larger room where they test multichannel speaker systems:
View attachment 20000
View attachment 20001
All of this research is extensively documented across countless papers by Dr. Toole and Dr. Olive over a number of decades. I have quoted many of them in the past and can quote more. It all points to the same consistent story: when tested blind, most of us like the same type of speaker, one with a smooth and well-behaved frequency response. Deviate from this and, in controlled tests, listeners don't like the sound.
I have taken the blind test twice and both times voted the same as what their research indicates.
"Can we measure the frequency response of different speakers (from the same brand or different brands) and determine which one sounds the best? Can we make an objective claim that one speaker is superior in sound to the others based on frequency response?"
You can, but you need an anechoic chamber with measurements every few degrees in both the horizontal and vertical directions. You then apply a special weighting based on which sounds reach you directly versus those reflected from horizontal and vertical directions (indirect sounds). Once there, you get extremely high correlation with listening tests. It is not perfect, and listening tests need to confirm it, but it is a very good predictor.
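As a rough sketch of the direct-versus-reflected weighting idea described above: the weights and measurement values here are invented for illustration, not Harman's actual formula, but they show the shape of the computation (a weighted blend of on-axis and off-axis anechoic response curves):

```python
# Hypothetical frequency response magnitudes (dB) at four frequencies,
# from three averaged anechoic measurement sets. All numbers invented.
direct     = [88.0, 89.0, 90.0, 89.5]  # on-axis / direct sound
horizontal = [86.0, 87.5, 88.0, 86.0]  # averaged horizontal off-axis
vertical   = [84.0, 86.0, 87.0, 85.0]  # averaged vertical off-axis

# Assumed weights: direct sound counts most, reflected sound less.
WEIGHTS = {"direct": 0.5, "horizontal": 0.3, "vertical": 0.2}

# Weighted blend of the three curves, frequency by frequency.
predicted_in_room = [
    WEIGHTS["direct"] * d + WEIGHTS["horizontal"] * h + WEIGHTS["vertical"] * v
    for d, h, v in zip(direct, horizontal, vertical)
]
print([round(x, 2) for x in predicted_in_room])
```

The real procedure (as published by Harman) uses measurements every few degrees over a full sphere and standardized curve families, but the core step is this kind of weighted combination of direct and indirect response curves.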