I'm surely nowhere near as tech-savvy as you, but it is hard to accept that I, and everyone truly interested in this matter, can understand the basics of MQA from the same publicly available information, while you don't.
One example: MQA states in its graphs and technical explanations that it captures a "triangle" of significant data. The specific shape of that triangle is analyzed by the algorithm before encoding (that's why even you mentioned the "electronic music" option for encoding), and it is based on the most fundamental harmonic profile that every musical instrument has, one that people like Bach discovered centuries ago. Its upper limit is the maximum level at each frequency (almost always decreasing the further you go up in frequency... or will you tell us that you have never seen an FFT of an instrument's note?), and its lower limit is the noise floor produced by the combined effects of the various stages of digitization equipment and, later, the reproduction chain.
Then, what's inside that triangle of significant data is captured *losslessly, there where it matters*, and what's outside is not (and that's what can be called "lossy" if you are fundamentalist enough about the word, because it purposely discards the noise and the alleged "data" above the music). The space not needed inside the triangle is reclaimed and put to better use receiving data from the upper origami folds (where the same process occurs, but in an even bolder fashion, as the true data there is even scarcer, and so the region of significant data is much smaller). That obviously means, as @filter_listener said, a bit rate that varies with frequency. Why use 24 bits of depth at, say, 15 kHz, when the significant musical data there only spans 5 or 6 bits and the rest is silence or noise? If I use the remaining 18 bits for purposes more useful than recording that noise floor, is that a sin?
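To make that concrete, here is a minimal back-of-envelope sketch in Python (my own illustration, not MQA's actual algorithm; the test signal, the 1/n^2 harmonic rolloff, and the 10th-percentile noise-floor estimate are all assumptions of mine). It measures, band by band, how many bits separate the peak spectral level from the noise floor:

```python
import numpy as np

def effective_bits_per_band(signal, fs, band_hz=5000):
    """Rough per-band estimate of the bits of dynamic range actually
    used: (peak level - noise floor) / 6.02 dB per bit."""
    windowed = signal * np.hanning(len(signal))
    db = 20 * np.log10(np.abs(np.fft.rfft(windowed)) + 1e-12)
    db -= db.max()  # normalize to 0 dB at the strongest bin
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    bands = []
    for lo in range(0, int(fs / 2), band_hz):
        sel = db[(freqs >= lo) & (freqs < lo + band_hz)]
        if sel.size:
            floor = np.percentile(sel, 10)  # crude noise-floor estimate
            bands.append((lo, max(0.0, (sel.max() - floor) / 6.02)))
    return bands

# An "instrument-like" tone: harmonics of 440 Hz rolling off as 1/n^2,
# plus a low-level noise floor standing in for the digitization chain.
fs = 96_000
t = np.arange(fs) / fs
sig = sum(np.sin(2 * np.pi * 440 * n * t) / n**2 for n in range(1, 46))
sig += 1e-4 * np.random.randn(fs)
for lo, bits in effective_bits_per_band(sig, fs):
    print(f"{lo/1000:4.1f}-{(lo + 5000)/1000:4.1f} kHz: ~{bits:4.1f} bits in use")
```

Run it and you see exactly the triangle: the low bands span many bits of real dynamic range, and the count shrinks as you go up in frequency until only the noise floor is left.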
And then, knowing all that (sorry, I simply can't believe you don't), you feed it test tones full of noise, and square waves, which obviously place a lot of high-amplitude content exactly where you already know MQA will not encode it properly, because it falls outside the region MQA is designed to encode. You treat it as if it were a generic compression algorithm that should handle images, music, or whatever. What is the purpose of that? What useful conclusion can your readers draw when you steer their opinions toward: look, what a disaster! Those rookies at MQA (none other than the same people who invented the first useful lossless compression schemes!) can't properly rebuild a square wave, because they are rendering 5 or 6 bits at that frequency instead of the 15 bits needed to properly reconstruct the square's higher harmonics there!
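And a square wave really is a pathological input for that model. A quick hedged computation (again my own illustration; the 1 kHz fundamental and the 1/n^2 comparison curve are my assumptions, not anything published by MQA) shows how much high-frequency energy a square wave carries relative to what the triangle expects:

```python
import numpy as np

# Harmonic levels (relative to the fundamental) of a 1 kHz square wave,
# whose odd harmonics fall off only as 1/n, versus a hypothetical
# "instrument-like" tone rolling off as 1/n^2.
f0 = 1000  # fundamental, Hz
for n in range(1, 20, 2):  # square waves contain odd harmonics only
    print(f"{n * f0 / 1000:4.1f} kHz: square {20*np.log10(1/n):6.1f} dB, "
          f"instrument-like {20*np.log10(1/n**2):6.1f} dB")
```

At 15 kHz the square wave sits only about 24 dB below its fundamental, while the instrument-like tone is about 47 dB down: the test signal deliberately parks high-amplitude content in exactly the region where the triangle has already handed those bits over to the origami folds.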
If, on the other hand, you really don't know those basic characteristics of the MQA process, which seems to be your current argument ("I never claimed to understand the inner workings of MQA"), how do you even dare to judge them, let alone measure it as if it were a common compression algorithm?
(sorry for my "lossy" English; I hope the idea comes through...)