I think you are referring to making non-listening judgments about how the system responds to such signals?
Certainly any such system should handle any audio signal. I've been building and playing with synths for 50 years; these sounds are in music, at times even naked. We can't even assume a "musical" balance of frequencies. This band and song is available on Apple Music, just as a quickly found example, but I could come up with pure white noise too. The only thing I expect is that it would sound pretty much the same through anyone's encoding; I don't care so much how it looks on analyzers, especially in frequency bands I can't hear. Granted, it would be hard for me to tell whether the clip below is encoded properly, but there is an actual noise music genre, and I know people who are very much into it, so I guess they would be able to tell if one of their favorites sounded right.
(be sure to turn your sound down...)
You partially answer your own question there. There are two misunderstandings here:
1. That the encoder needs to handle all cases perfectly. It does not. The lossy codec used to encode that video likely went nuts and distorted the clip. But either that doesn't matter in this kind of content, or it falls into the category of, "yes, it doesn't produce good quality in 0.0001% of clips. So what?"
A true mathematically lossless codec could fail on this as well and just spit out the source instead of compressing anything. That would be a failure to do its job, but people wouldn't notice.
OP wants such pathological clips to produce correct output. That is not in the cards for any compression algorithm.
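The "spit out the source" fallback is easy to picture. Here is a minimal sketch of the idea, using zlib as a stand-in for a real audio codec's entropy coder (the one-byte header scheme is purely illustrative): when "compression" would actually grow the data, the encoder stores the bytes verbatim, which is exactly what happens on random noise.

```python
import os
import zlib

def encode(data: bytes) -> bytes:
    """Compress, but fall back to storing raw if that would be larger."""
    packed = zlib.compress(data, 9)
    if len(packed) < len(data):
        return b"Z" + packed  # 1-byte header: compressed payload
    return b"R" + data        # stored verbatim -- the codec "gave up"

def decode(blob: bytes) -> bytes:
    """Invert encode() by inspecting the 1-byte header."""
    return zlib.decompress(blob[1:]) if blob[:1] == b"Z" else blob[1:]

noise = os.urandom(1 << 16)  # pure randomness: incompressible
tone = bytes(64) * 1024      # long runs of zeros: highly compressible

assert decode(encode(noise)) == noise  # still lossless...
assert encode(noise)[:1] == b"R"       # ...but no size reduction at all
assert encode(tone)[:1] == b"Z"        # normal content compresses fine
```

The round trip stays bit-exact either way; the noise input simply defeats the compression step, which is the failure mode described above.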
2. That something that sounds noise-like to a human must be the same as the test signals used by OP. You just can't make such a determination by ear or intuition; you need to measure. I just did that with the clip you posted on YouTube. I recorded about a minute of it and ran a full analysis:
We see that the spectrum nicely drops off as frequency increases -- precisely what the MQA algorithm is counting on. The level at 20 kHz is some 50 dB below the level at the lowest frequencies.
The start of this clip is far more noise-like, so let's analyze that:
Again, the same trend. There is no flat spectrum, and certainly not one that extends beyond 20 kHz. I don't care what you synthesized; I am pretty sure you did not use a signal that filled the ultrasonics at the same level.
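The measurement above is simple to reproduce. Here is a sketch of that kind of band-level check using only NumPy -- the one-pole filter and 2 kHz corner are illustrative assumptions standing in for real music content, not the actual clip or analyzer used:

```python
import numpy as np

fs = 96_000
rng = np.random.default_rng(0)
white = rng.standard_normal(fs * 4)  # 4 s of white noise at 96 kHz

def one_pole_lp(x, fc, fs):
    """Simple one-pole low-pass: ~6 dB/octave roll-off above fc."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * fc / fs)
    out = np.empty_like(x)
    y = 0.0
    for i, v in enumerate(x):
        y += alpha * (v - y)
        out[i] = y
    return out

# Two cascaded poles (~12 dB/octave) crudely mimic music-like content
# whose energy falls with frequency, as seen in the clip's spectrum.
music_like = one_pole_lp(one_pole_lp(white, 2000, fs), 2000, fs)

def band_level_db(x, f_lo, f_hi, fs):
    """Mean power (dB) in the band [f_lo, f_hi], via a plain FFT."""
    power = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (f >= f_lo) & (f <= f_hi)
    return 10.0 * np.log10(power[band].mean())

drop = band_level_db(music_like, 20, 200, fs) \
     - band_level_db(music_like, 19_000, 21_000, fs)
flat = band_level_db(white, 20, 200, fs) \
     - band_level_db(white, 19_000, 21_000, fs)
print(f"music-like roll-off at 20 kHz: {drop:.0f} dB")  # tens of dB
print(f"true white noise roll-off:     {flat:.0f} dB")  # near zero
```

The point of the comparison: anything resembling real music shows a large positive roll-off number, while genuine white noise shows essentially none -- which is why you have to measure rather than judge by ear.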
Remember, if you had a flat response, you would blow up your tweeter and your ears instantly at any decent listening level! So if you could listen to it through the little tweeters in your speakers -- tiny relative to the massive woofer -- you were not generating "pure white noise." If you do create such content, it will not be playable or listenable, and what MQA then does to it is immaterial. It could pump out a bunch of noise of its own and you wouldn't know the difference.
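The tweeter point is simple arithmetic. White noise has equal power per hertz, so over a 0-96 kHz band (assuming, for illustration, a 192 kHz sample rate) most of the total power lands above the audible range -- and almost all of the audible portion goes to the tweeter, not the woofer:

```python
# White noise spreads power uniformly per hertz. In a 0-96 kHz band
# (a 192 kHz sample rate, assumed here for illustration), the fraction
# of total power above 20 kHz -- inaudible, but still fed to the
# tweeter -- is:
band_top = 96_000
ultrasonic = (band_top - 20_000) / band_top
print(f"{ultrasonic:.0%} of the power lies above 20 kHz")  # 79%
```

Nearly four-fifths of the signal's power would be ultrasonic energy that a normal tweeter was never designed to dissipate, which is why truly flat-spectrum content at playback levels is destructive long before any codec question arises.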
Conclusion
It is OK to have corner cases where an encoder can't handle the content. This is the case with just about any compression algorithm; pathological cases are called that for a reason. If a test signal falls into this category, it cannot be used to judge the quality of the codec, because it doesn't represent 99.999% of the music out there.
Second, you need to analyze the spectrum to know. What appears noise-like is not always what it seems. And if you did create such extreme signals, they would be damaging to many things, with MQA being the least of your worries.