Because that sounds like punishment, not enjoyment.
To condense and adapt from a previous discussion: a source, especially a streaming source, is IMO the absolute worst place to put EQ. EQ here means processing to mitigate flaws in the playback system, as opposed to tone controls that adjust the program material (which do make sense at the source). EQ is an obvious benefit, but in my view it should be localized to each distinct playback chain. Putting it in the source rather than in the individual playback chain forces the listener to waste time messing with stupid software when she just wants to listen to music reproduced well.
Here's a reasonable hypothetical to illustrate why EQ should be as far down the chain as possible. Consider a home with the following:
-Three rooms for attentive listening: a family room with an immersive system, a formal salon with two good "full range" speakers, and a home office with a 2.1 channel nearfield setup as well as headphones. Plus various background audio zones (ceiling speakers, outdoor speakers, HomePods or Sonos, etc.), TWS earbuds, and so on.
-Three categories of content sources for which the audio is available on every system (streaming audio with their apps, YouTube, other streaming video services with their apps)
-Two local-only content sources: vinyl in the 2-channel salon and a digital disk spinner in the immersive family room
Here, if the transducer/room EQ resided in player software,
it would not improve the fidelity of music or other content played from 80% of the potential sources (four of the five). And even if source EQ were available in all five sources,
one would have to mess with multiple different EQ UIs in real time, duplicating the settings for each listening environment, just to equalize the transducers.
Let's assume one has low standards and only cares about fidelity for streaming audio. OK...what happens if you were listening to something on headphones that require drastic EQ in your home office, then decide to flip the AirPlay stream to the 2-channel system so you can share the track with someone else? The answer is
PLEASE MAKE IT STOP!!!! Basically you've poisoned the listening session because there's a stupid button you have to click in the software UI to change the EQ. But you
just wanted to share some music, so messing with a stupid effing computer slipped your mind. (I'll grant this problem could be avoided if the software were sophisticated enough to automatically switch EQ presets for each AirPlay endpoint, analogous to what the RME ADI-2 does for each input. But does that actually exist?)
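For what it's worth, the per-endpoint preset switching described in that parenthetical isn't hard to express in software. Here's a minimal sketch in Python; every name and class here is hypothetical (I'm not aware of a real player exposing this), it just shows the behavior I mean - correction follows the output device, not the stream:

```python
# Hypothetical sketch: EQ presets recalled per output endpoint, loosely
# analogous to how the RME ADI-2 recalls settings per input. All names
# and structure are assumptions, not a real player's API.

from dataclasses import dataclass, field


@dataclass
class EqPreset:
    name: str
    # (frequency_hz, gain_db, q) per band
    bands: list = field(default_factory=list)


FLAT = EqPreset("flat")  # no correction


class Player:
    def __init__(self):
        # Preset assigned once per endpoint, then recalled automatically.
        self.presets = {}
        self.endpoint = "office-headphones"

    def assign(self, endpoint, preset):
        self.presets[endpoint] = preset

    def switch_output(self, endpoint):
        # The point: flipping the AirPlay stream to another system never
        # carries the previous device's correction along with it.
        self.endpoint = endpoint
        return self.active_preset()

    def active_preset(self):
        # Unknown endpoints default to flat rather than inheriting the
        # last device's EQ.
        return self.presets.get(self.endpoint, FLAT)


p = Player()
p.assign("office-headphones", EqPreset("headphone-correction", [(20, 6.0, 0.7)]))
preset = p.switch_output("salon-stereo")
print(preset.name)  # "flat": the drastic headphone EQ did not follow the stream
```

Set-and-forget, in other words, but implemented in the player instead of the playback chain - which still leaves you trusting every source app to do this.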
By contrast, let's look at a more sensible approach:
-the immersive system and desktop systems have bass management and manual or automated EQ in their respective processors,
-the stereo system has manual or automated EQ built into its preamp or integrated amp, and
-the headphones are powered, with correction in their own hardware, or the endpoint/DAC/headphone amp has selectable EQ presets if you have multiple headphones
With the sensible approach, one can seamlessly move music from source to source and system to system, and experience each at the best fidelity it can offer without once having to remember to mess with software. You just set, forget, and enjoy - unless you want to change the processing for a given system. The problem is, the sensible approach requires sensible gear. And that's gear that does the necessary signal processing.