I saw the video earlier. Replied. I was going to link it here but didn’t want to give him any publicity. Imagine my surprise when I see it posted here by none other than Sean himself.
I don't normally watch these things, but Gene Dellasala @Audioholics sent me the link, probably with the intention of getting a reaction out of me.
I listened to 30 seconds and dismissed it. Later I went back and listened to more, and heard him talking about Harman. He didn't say we don't listen, but I thought he somewhat mischaracterized or dismissed the effort we put into doing listening and doing it right.
At any rate, the main objections I have to this video are:
1) It dismisses the importance of measurements that are highly predictive of listeners' loudspeaker preference ratings.
2) It dismisses tests done under laboratory conditions as irrelevant to consumers because of variables like a) rooms, b) programs, c) hearing, and d) personal taste.
a) Rooms: As people here know, the room is mostly dominant below the room transition frequency (~200 Hz), but above that the ANSI/CTA-2034 measurements are generally good predictors of the sound that will be heard in a room (e.g. the PIR, or predicted-in-room response). Above 2-3 kHz, due to the directivity behavior of the speaker and room absorption, the listener hears mostly direct sound, which is represented by the on-axis/listening-window curves. Good off-axis response and smooth directivity ensure the reflected sounds are neutral.
b) Programs: Yes, loudspeaker-program interactions are real, but you deal with them from a statistical standpoint. Andrew gives examples of bright programs being compatible with dull speakers. What is the chance that you listen to only bright mixes or only dull mixes? On average, programs likely converge on neutral, so you make the speaker neutral to be compatible with neutral recordings and use tone controls for the bright and dull ones. Recording-industry monitors are generally converging on flat, so hopefully neutral recordings will be the trend (assuming the producer has no serious high-frequency hearing loss).
c) Hearing: We generally screen for normal hearing, although in larger studies we may include unscreened and older listeners. Does that invalidate the results for Andrew's audience, who may have significant hearing loss? So far, I have seen little evidence that people with slight-to-moderate hearing loss prefer speakers that are not neutral. I just saw a recent study where three groups of listeners (normal, slight, and moderate hearing loss) preferred the same headphones. The difference was that the more hearing loss, the noisier and less discriminating the ratings were. The hearing-loss groups used a smaller range of ratings, but their rankings, at least for the top 50th percentile of ranked headphones, were consistent with those of the normal-hearing listeners.
d) Taste: When listeners are asked to rate headphones or speakers based on preference, there is remarkable agreement on which ones are most and least preferred. Recently I looked at segmenting listeners based on headphone preferences and found three groups: 64% who prefer the Harman target curve, 21% who prefer the target with less bass, and 15% who like the target with more bass. So again, a majority of people appear to like what they consider neutral or accurate.
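To make the predicted-in-room response in point a) concrete, here is a minimal sketch of the commonly cited CTA-2034 blend of three spinorama curves: 12% listening window, 44% early reflections, 44% sound power, combined as energies. The weights and the energy-domain averaging are my reading of the standard's published definition, not something stated in this post, so verify them against the standard itself.

```python
import math

# Hedged sketch of a CTA-2034-style predicted-in-room response (PIR):
# a 12% / 44% / 44% energy-weighted blend of the listening-window,
# early-reflections, and sound-power curves (dB SPL on a shared
# frequency grid). Weights are assumed from the commonly cited
# CTA-2034 definition.
def predicted_in_room(listening_window_db, early_reflections_db, sound_power_db):
    pir_db = []
    for lw, er, sp in zip(listening_window_db, early_reflections_db, sound_power_db):
        # convert each dB value to relative energy, blend, convert back to dB
        energy = (0.12 * 10 ** (lw / 10)
                  + 0.44 * 10 ** (er / 10)
                  + 0.44 * 10 ** (sp / 10))
        pir_db.append(10 * math.log10(energy))
    return pir_db
```

A quick sanity check on the weighting: because the weights sum to 1, three curves that are all flat at 80 dB yield a PIR that is also flat at 80 dB.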
So how can 35+ years of research into the perception and measurement of loudspeakers and headphones not jibe with Andrew's reality of what he and his listeners prefer?
The simple answer is: the lack of controlled, unbiased listening. When you do not control the variables (normal hearing, listener training, loudspeaker position, double-blind presentation, loudness matching, randomized order, program, statistical analysis, hidden anchors and references), you will tend to get random, noisy, and unexpected results. A single-stimulus demonstration doesn't even give the listener the opportunity to hear what is "neutral".
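Two of the controls listed above can be sketched in a few lines: loudness matching (here approximated crudely as RMS-level equalization) and a freshly randomized presentation order for every trial. This is an illustrative sketch under my own assumptions, not Harman's actual test software; the function names and the simple RMS criterion are hypothetical.

```python
import math
import random

# Illustrative sketch (not Harman's test software): two basic controls
# from a properly run listening test.

def match_loudness(samples, target_rms):
    """Scale a signal so its RMS level equals target_rms.
    (A crude stand-in for real loudness matching.)"""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return [s * (target_rms / rms) for s in samples]

def randomized_trial_orders(speakers, n_trials, seed=None):
    """Shuffle the presentation order independently for each trial so a
    listener cannot identify a speaker by its position in the sequence."""
    rng = random.Random(seed)
    return [rng.sample(speakers, len(speakers)) for _ in range(n_trials)]
```

Without the level match, the louder speaker tends to win by default; without the reshuffle, listeners can learn positions, which is one of the biases the controlled protocol is designed to remove.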
In the absence of measurements, we are encouraged to let our ears choose what we like, but no guidance is given on how to do scientific listening so we can avoid making rookie mistakes. Instead, we are to let all the acoustic, psychological, and physiological nuisance variables and biases run amok, roll the dice, and choose whatever speaker sounds fun. Andrew even names some speakers that measure badly but sound "fun," further biasing his audience toward making mistakes.
This is *exactly* the situation where good technical measurements are needed to help consumers avoid making poor decisions under sub-optimal listening conditions (or with no listening at all, which increasingly reflects internet sales), yet the video generally dismisses their usefulness.