Everybody has given me a ton to think about here and it will take me a long time to fully digest everything that has been said. Thanks to everyone who has participated in the convo so far. In the meantime I want to clear a few things up, because I might not have done the best job explaining.
First of all, my idea of a "standard" has nothing to do with preference. In a way, whatever the standard ends up being is somewhat arbitrary. Take video calibration, for instance. There are standards there, but all that calibrating to those standards means is that what you see on your TV is as close as possible to what the post-production team saw on their monitors, because both are using the same standard. The standard doesn't determine what the content actually looks like. The movie team does, and you do too, should you decide to deviate from the calibration.
Another analogy: imagine loudspeakers didn't exist and the only way people listened to music was headphones. The Harman curve (the one for headphones, not loudspeakers) is not an official standard, but let's say it became one. A lot of people seem to have issues with the Harman curve, but if it became a standard, those issues would pretty much become moot. Audio engineers would mix and master music and film on headphones that followed the Harman curve, and if there were anything wrong with the curve, they would be compensating for it automatically. Eventually the industry would start remastering old recordings made before the Harman curve became a standard, to make sure they sound good on headphones that use it. Really, the standard could be the goofiest-looking curve imaginable and it would still work, as long as engineers are using it and can expect the average consumer to be using similar headphones. Of course, different headphones and EQ options would always be available to anyone whose preference doesn't line up with the average audio engineer's, or as budget options, and yet the standard would still be really helpful as a reference point.
Now that that's out of the way, I want to make it really clear what I am actually proposing.
I was watching this Audioholics video, where they discussed how certain speakers with certain directivity would sound incorrect if EQ'd to match their usual preferred frequency response. Directivity influences the mix of reflected and direct sound, which influences our perception of the frequency response in a way that the raw frequency response measured at the listening position does not show. My thought was: "It would sure be nice if there was a way of measuring frequency response that naturally compensated for this irregularity, so that instead of all that trial and error, you really could just use your usual preferred target curve and be good to go."
I was also looking at this thread by Amir and saw just how different a target curve can look for a big theater versus a small room. IIRC, this difference is largely because a larger room has a different mix of reflected and direct sound, which, again, influences our perception of the frequency response in a way that the raw frequency response doesn't show.
I'm no scientist, but couldn't we theoretically do this: make a bunch of binaural recordings with a dummy head, in as many different kinds of rooms as we could manage, using a bunch of speakers with different directivity all playing the same music. Then ask, say, 100 audio engineers to listen to these recordings on headphones and equalize every recording until they all sound like they have the same frequency balance. We could then use that data to build an algorithm that extrapolates how pretty much any mix of reflected and direct sound influences our perception of frequency balance. And once we have that, could we not create a standard curve for that perceived frequency response that all music and film studios could use?
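To make the extrapolation step concrete, here's a rough sketch of what the data analysis could look like. Everything here is my own assumption, not an established method: I'm pretending each trial gives us, per frequency band, a direct-to-reverberant energy ratio for the room/speaker combination and the dB of EQ the engineers applied to reach a common perceived balance, and I fit a simple per-band linear model on synthetic stand-in data.

```python
import numpy as np

# Hypothetical sketch of the proposed experiment's analysis step.
# Assumed inputs, per frequency band and per room/speaker trial:
#   dr_ratio - direct-to-reverberant energy ratio measured in the room (dB)
#   eq_db    - EQ correction (dB) engineers applied so the binaural
#              recording matches a common perceived frequency balance
# We fit eq_db ~ slope * dr_ratio + intercept for each band, so the model
# can predict a perceptual correction for rooms that were never measured.

rng = np.random.default_rng(0)
n_bands, n_trials = 10, 100

# Synthetic stand-in data: corrections loosely track the DR ratio.
dr_ratio = rng.uniform(-10.0, 10.0, size=(n_trials, n_bands))
true_slope = -0.3  # assumed relationship, for demonstration only
eq_db = true_slope * dr_ratio + rng.normal(0.0, 0.5, size=(n_trials, n_bands))

def fit_band(x, y):
    """Least-squares slope and intercept for one frequency band."""
    A = np.column_stack([x, np.ones_like(x)])
    (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
    return slope, intercept

models = [fit_band(dr_ratio[:, b], eq_db[:, b]) for b in range(n_bands)]

def predicted_correction(band, dr):
    """Perceptual EQ correction (dB) the model predicts for a new room."""
    slope, intercept = models[band]
    return slope * dr + intercept
```

A real version would obviously need a perceptually motivated model rather than a straight line, plus far richer acoustic descriptors than a single DR ratio, but the shape of the pipeline (perceptual judgments in, a predictive correction model out) would be the same.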
And again, I am well aware that this only standardizes one aspect of sound and doesn't mean your system would sound the same as another system using the same perceived frequency response (PFR). However, that still seems useful, just as correct screen calibration depends on the brightness of the room. No one can argue that watching a movie in a bright room will ever be the same as watching it in a dark room, yet with the right calibration the two are somewhat comparable. No one can argue that a typical projector can look like an OLED screen, yet a standard for calibration is still useful for both. A reference point is useful.
I don't mean to come in and act like I know what's best for the future of audio. There is SO much I don't know. I am just legitimately curious if this makes sense.