With the mass replacement of Redbook on Tidal, surely a lot of these MQA files are made from 44.1 kHz or 48 kHz originals. The spectra then show a gap between the music and the fake ultrasonics, like this (from
https://audiophilestyle.com/ca/reviews/mqa-a-review-of-controversies-concerns-and-cautions-r701/)
[Attachment 125479: spectrum plot showing the gap between the music and the ultrasonics]
When you find a fact as crude as the one this graph shows, and if you are interested in learning something instead of just bashing MQA, the interesting discussion is why you think that gap is happening, and why it sits exactly at the Nyquist frequency of Redbook (other than the presumption of most people here that the MQA process was designed by absolutely incompetent engineers, of course)...
My guess: MQA claims it tries to correct the aliasing problems found in digitally sourced material (as opposed to analog sources) by applying some filtering techniques. Perhaps, when it can't fix them because of the nature of the source presented, the algorithm decides it is better to simply erase a small 1/3-octave band just above the audible range than to leave those aliasing artifacts in. Is that so bad? From a religious "losslessness" credo it is; if you are instead interested in the aural results, perhaps it is not.
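To put numbers on that guess (my own back-of-the-envelope figures, not anything from MQA's papers): a 1/3-octave band centered on Redbook Nyquist lands almost exactly where the gap sits in the graph above.

```python
# Back-of-the-envelope check: where would a 1/3-octave notch at
# Redbook Nyquist sit? My illustration, not MQA's documented behavior.
fs_redbook = 44_100          # Redbook sample rate, Hz
nyquist = fs_redbook / 2     # 22_050 Hz

# A 1/3-octave band spans a factor of 2**(1/3); its edges sit at
# center / 2**(1/6) and center * 2**(1/6).
lo = nyquist / 2 ** (1 / 6)
hi = nyquist * 2 ** (1 / 6)
print(f"1/3-octave band around Nyquist: {lo:.0f} Hz to {hi:.0f} Hz")
# -> roughly 19644 Hz to 24750 Hz, i.e. a notch right where the
#    graph shows the gap between the music and the added ultrasonics
```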
Another surprising element in these tests, which after over a thousand posts nobody has dared to discuss: if you measure the "losslessness" of a source file against an MQA process that, by definition, replaces the original noise floor with real information dithered so that it still appears as noise to a normal DAC, you will obviously get gigantic differences: the noise floor you expect to find as an exact copy doesn't exist anymore in MQA. And yet not a single person has commented on why this is. Perhaps it is not because MQA engineers are stupid, but because they are doing this on purpose: they are trying to achieve a different result (gain space for real information in what is garbage in the source file).
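Here is a toy sketch of why a bit-exact null test must fail by construction once the noise floor is repurposed. The numbers and the "payload" are hypothetical stand-ins to illustrate the measurement problem, not MQA's actual encoding:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 48_000

# "Original": music with a dithered noise floor in the bottom bits.
music = (np.sin(2 * np.pi * 1000 * np.arange(n) / 44100) * 20000).astype(np.int64)
original = music + rng.integers(-2, 3, n)   # random LSB noise (the noise floor)

# "Encoded": identical music, but the bottom bits now carry data
# shaped to *look* like noise (hypothetical stand-in for MQA's scheme).
payload = rng.integers(-2, 3, n)            # different "noise-like" bits
encoded = music + payload

# A sample-by-sample null test sees differences almost everywhere...
print("samples that differ:", np.count_nonzero(original != encoded))
# ...yet the two files only diverge by a couple of LSBs, i.e. down
# at the noise floor that was never expected to survive as a copy:
print("max difference (LSBs):", np.max(np.abs(original - encoded)))
```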
In the upper end of the audio spectrum, above the maximum amplitudes expected of music, the same thing is happening: instead of recording voids, MQA reclaims that space to store real data. Because the practical bit depth used to register content in that band is smaller, when you feed the algorithm test tones like square waves, which are totally at odds with these assumptions, the reconstruction of that content (the square wave) cannot be done. The resulting wave in GoldenEar's test shows exactly that. If anything, the only thing this test proves is that MQA is doing what their papers say it does.
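A quick illustration of why a square wave is such a pathological input here (again my own numbers, not from any MQA paper): the odd harmonics of an ideal square wave fall off only as 1/n, so a test square still carries strong energy in exactly the band where real music has almost nothing.

```python
import math

# Odd-harmonic levels of an ideal square wave fall off only as 1/n.
# For a 5 kHz square, the harmonics land well up in the band that an
# encoder budgeting few bits for "music-shaped" spectra can't carry.
fundamental = 5_000  # Hz
for n in (1, 3, 5, 7, 9):
    level_db = 20 * math.log10(1 / n)  # amplitude relative to fundamental
    print(f"{n * fundamental:>6} Hz: {level_db:6.1f} dB")
# -> the 25_000 Hz harmonic is only ~14 dB below the fundamental,
#    an amplitude no recording of real music comes close to up there
```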
If any of the people commenting here dared to read just the most general papers on the MQA process (most of them available for free elsewhere), they would be aware of the above. Then we would be discussing whether those assumptions are correct or not; instead, everyone here insists on analyzing this issue as if MQA were just a flawed new kind of pkzip algorithm...
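For contrast, this is what a pkzip-style losslessness claim actually means (a standard zlib round trip, nothing MQA-specific): the decompressed bytes must be bit-identical. That is the test being applied here, and it answers a question MQA's papers never claimed to answer.

```python
import zlib

data = bytes(range(256)) * 100                         # any byte stream
assert zlib.decompress(zlib.compress(data)) == data    # bit-exact, always
print("pkzip-style lossless: round trip is bit-identical")
# An MQA round trip is *not* bit-identical by design (noise floor and
# ultrasonic headroom are repurposed), so a bit-exact comparison only
# restates that design decision; it doesn't measure the aural result.
```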
As a crude analogy, it's like these tests are comparing a file copy of a bunch of data from an old, heavily used hard disk to a fresh new one. The files copied are "lossless" (that is what MQA claims, I agree, rather carelessly); but Archimago and GoldenEar are not checking whether those files are identical (the music), they are comparing each sector of one disk against the other. The first disk, with all its files fragmented and the remnants of old deleted files (the noise floor), is obviously different from the contiguous files on the new disk. The new disk also doesn't contain all the garbage of those hidden erased files from the old one. Thus it is "lossy"! I will close my Tidal account!! They are lying to me!!! I may even build a case for suing them!!!!