Forgive me for rehashing just a little bit in this paragraph, but it will make my point clearer. I've said that, to me, MQA boils down to two things: 1) Compression, and 2) Potential (because I'm skeptical) audio improvement over what it's compressing (24/96k, for instance). And again, I feel the compression is too mild to produce enough value to justify the ensuing baggage of MQA. However, if the compression is not compelling but the audio improvement is real and significant, an alternate implementation that would be far less intrusive on the audio chain and the industry would be to build the process into players and stream 24/96k (if that's the bar) instead of MQA.
Now, I realize an argument could be that the encoding process (for deblurring, or wherever the supposed improvement happens) needs to be tuned by an expert and can't be done effectively by automation. If so, that just doubles down on the issue with mastering engineers: someone else is deciding how the music should sound. (Obviously, it's debatable whether anything was improved if people other than those in charge of the music production are tasked with "improving" it.)
But absent that, if the deblurring and the rest were built into DACs, then mastering engineers and studios would own such DACs, and none of this would be an issue. A happy side effect is that whether or not the improvement proves real and significant, the answer will be obvious out of the gate. MQA succeeds or fails right away and we move on, without first "re-imagining" an industry.