@Mad_Economist Thanks for the interesting presentation. It's a shame people in the audience felt the need to interrupt you so often.
Honestly, @oratory1990 and I regularly do presentations together, and I appreciated his input. The main issue is that he wasn't picked up well on the recording. I'll get a crowd mic next time if possible!
I wonder how big an issue positional variation really is. If you put the HP2 on your head and heard no bass, you would adjust the positioning until you did. I assume people who are unable to do this, because of glasses for example, would simply decide early on that the HP2 will not be right for them.
This one is kind of a known unknown - we don't have a super robust body of data for how variable in situ FR is based on positioning on the same listener's head as a function of time. We can quantify the range of variation on a dummy head, or more ideally we can measure the in situ behavior on real human heads (MIRE), but either way we're getting an approximation.
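For what it's worth, the dummy-head side of this is straightforward to quantify once you have repeated reseat measurements: just look at the per-frequency spread across seats. A minimal sketch - the reseat data here is synthesized stand-in data, not real measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: 10 reseats of the same headphone, measured at
# 200 log-spaced frequencies. Real data would come from a fixture or
# from MIRE measurements; here we synthesize it.
freqs = np.geomspace(20, 20000, 200)
base_response = 3 * np.sin(np.log10(freqs))              # stand-in "true" FR in dB
reseats = base_response + rng.normal(0, 1.5, (10, 200))  # seat-to-seat variation

# Per-frequency standard deviation across reseats: a simple measure of
# how (in)consistent the coupling is.
spread_db = reseats.std(axis=0)
print(f"mean spread: {spread_db.mean():.2f} dB, "
      f"max spread: {spread_db.max():.2f} dB")
```

In real data you'd expect the spread to be frequency-dependent (broadband in the bass, narrow-band in the treble) rather than flat like this toy version.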
I will say, I think you slightly overestimate how responsive people are to both types of variation that we see in situ (broadband bass-level variation, and narrow-band treble peaks and dips). These are certainly audible, but - particularly if only one channel has a "big" variation - you'd be surprised by what people will put up with and not even consciously identify.
Surely that is a caveat to the main judgement, which should still be how well it broadly follows the Harman OE 2018 target?
There's a separate presentation to be done on this topic, but if we look at the actual subjective preference ratings from Olive, Welti, & Khonsaripour 2018: while tracking with Harman OE2018 is unambiguously good, several headphones which very much did not track with OE2018 were scored comparably well.
Similarly, if a headphone sounds different on different heads, surely the main judgement is still how well it broadly follows the Harman OE 2018 target? In other words, isn't a headphone with low rHpTF variation but poor compliance with the Harman OE 2018 target still likely to be rated worse than a headphone with higher rHpTF variation but good compliance with the target?
In a world where EQ didn't exist, sure? In the world of EQ, however, we can quite easily make broadband adjustments to correct failings that are visible on generic fixtures (excessive treble response, insufficient bass, etc.), whereas individual-specific variations would require either in situ measurement on the individual (realistically, not happening for 99.99% of users) or correction by ear (pretty bad at fixing narrow-band issues). Like, I kinda think the proof is in the pudding with the headphones @Sean Olive chose for the experiments here: they are designs less likely to vary in situ on users, for various reasons.
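To make "broadband adjustments are easy" concrete: fixing "needs more bass on the fixture" is one parametric band. A minimal low-shelf biquad sketch using the RBJ Audio EQ Cookbook formulas - the 105 Hz / +6 dB numbers are purely illustrative, not a recommendation for any particular headphone:

```python
import numpy as np

def low_shelf_coeffs(fs, f0, gain_db, S=1.0):
    """RBJ-cookbook low-shelf biquad: boosts/cuts everything below ~f0."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / 2 * np.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cosw = np.cos(w0)
    b = np.array([
        A * ((A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
        2 * A * ((A - 1) - (A + 1) * cosw),
        A * ((A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha),
    ])
    a = np.array([
        (A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
        -2 * ((A - 1) + (A + 1) * cosw),
        (A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha,
    ])
    return b / a[0], a / a[0]   # normalized so a[0] == 1

# Illustrative: +6 dB bass shelf at 105 Hz, 48 kHz sample rate.
b, a = low_shelf_coeffs(48000, 105, 6.0)
dc_gain_db = 20 * np.log10(np.sum(b) / np.sum(a))   # shelf gain at DC
print(f"gain at DC: {dc_gain_db:.2f} dB")
```

The point is that this kind of correction needs no knowledge of the individual's head - which is exactly what narrow-band, individual-specific variation denies you.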
I can imagine an example where this isn't the case, which would be if a headphone measures exceptionally well (good compliance with target) but for some reason interacts with real people's heads so differently that it sounds nothing like expected on anyone's head. I don't know of any real examples of this.
By dint of anecdote, a number of high-Z closed designs have reports like this. This includes both cheap designs like the K371 and expensive ones like the Stealth. This is something I'd like to thoroughly document in a test which monitors real-time in situ FR on human heads using dual-channel FFTs, but I don't expect there to be any big surprises: the likely result is "people who dislike the Stealth/Expanse/371/K550/etc are getting different in-situ response than the mannequin response".
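The dual-channel FFT part is just the standard H1 transfer-function estimator: average the cross-spectrum between the reference signal and the in-ear mic signal, divide by the averaged input auto-spectrum. A self-contained sketch, with a known toy FIR standing in for the headphone-plus-ear path so the estimate can be checked (no windowing or overlap, for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
seglen, nseg = 1024, 64

# Toy "system" standing in for headphone + ear: a known 2-tap FIR.
h = np.array([1.0, 0.5])
x = rng.normal(size=seglen * nseg)   # reference (playback) signal
y = np.convolve(x, h)[: len(x)]      # simulated in-ear mic signal

# H1 estimator: H(f) = sum(Sxy) / sum(Sxx), averaged over segments.
Sxx = np.zeros(seglen // 2 + 1)
Sxy = np.zeros(seglen // 2 + 1, dtype=complex)
for i in range(nseg):
    seg = slice(i * seglen, (i + 1) * seglen)
    X, Y = np.fft.rfft(x[seg]), np.fft.rfft(y[seg])
    Sxx += (np.conj(X) * X).real
    Sxy += np.conj(X) * Y
H1 = Sxy / Sxx

# Sanity check against the true response of the toy system.
w = np.linspace(0, np.pi, seglen // 2 + 1)
H_true = 1 + 0.5 * np.exp(-1j * w)
print(f"max estimation error: {np.max(np.abs(H1 - H_true)):.3f}")
```

With a real headphone you'd play music or noise, tap the signal feeding the amp as `x`, use a miniature in-ear mic as `y`, and watch `H1` update live as the listener reseats the headphone.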
Are these not essentially ergonomic issues we already know about and should hold in mind when designing/reviewing/choosing headphones?
I mean, I guess you can frame them as ergonomic, but they're measurable acoustic effects which are, at present, not widely measured. Like, even among the real grognards, I know a lot of people who don't know that open headphones have more consistent bass response than closed designs, and while it's something we sometimes make reference to in reviews, we seldom actually, you know, quantify it. Which sticks in my craw, because it is quantifiable, and it's a quantifiable effect which can impact human preference for headphones, and those should be quantified.
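As a sketch of what "quantify it" could look like: reduce each reseat to a single broadband bass level, then report the spread of that level per headphone. All numbers below are made up to illustrate the metric - they are not real open-vs-closed data:

```python
import numpy as np

rng = np.random.default_rng(2)
freqs = np.geomspace(20, 20000, 200)
bass = freqs < 200                          # broadband bass region, Hz

def bass_level_spread(reseats_db):
    """Std dev (dB) of the mean 20-200 Hz level across reseats."""
    levels = reseats_db[:, bass].mean(axis=1)   # one bass level per reseat
    return levels.std()

# Synthetic reseat data (10 seats x 200 freqs): the "closed" set is given
# larger seat-to-seat level shifts than the "open" set, i.e. the effect
# under discussion is baked in by assumption here.
open_hp = rng.normal(0, 0.5, size=(10, 1)) + np.zeros((10, 200))
closed_hp = rng.normal(0, 3.0, size=(10, 1)) + np.zeros((10, 200))
print(f"open:   {bass_level_spread(open_hp):.2f} dB")
print(f"closed: {bass_level_spread(closed_hp):.2f} dB")
```

A single dB figure like this is the kind of thing that could sit next to a target-compliance score in a review.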