I can't speak for all 'people' here, so speaking for myself... I find
the MQA technology a lie and garbage because of
>this<. Namely, again and again I keep asking three straight, logical, relevant questions. And in response, I get numerous circular piles upon piles of either "let me tell you how great the MQA idea is" or "how dare y'all test it so incorrectly!" Without any 'correct' tests ever offered as an alternative...
For the record, I think I understand the concept behind
the MQA idea - I believe it was a neat idea conceived by audio's clever minds to solve the then-current bandwidth issue. However, (a) they never managed to implement the idea properly, and (b) the problem itself went away (e.g., as I've stated
>here<.)
So, based on the above, I declare that MQA is a scam. This is based on (1) the existing body of data comparing the performance of MQA against other open hi-res formats (e.g., 24/48 PCM); and (2) MQA's and Tidal's marketing, which continuously misleads us consumers (e.g., about losslessness, superior quality, and artists' intentions).
Now, I will gladly retract my statement if/when proof - of losslessness, superiority, etc. - is offered.
Here, I put my good name to it - Raindog.
In my view, one of the main ideas of MQA (even a paradigm shift), and one in open contradiction with the test methods used here, is that the file's data space
doesn't need to be agnostic of the content it will record: an agnostic design wastes much of that space, and it forces (via unwanted filters) a smearing of the musical signal being recorded.
If instead what is to be recorded is MUSIC,
that information occupies only a fraction of the whole possible bit space of, say, a 96 kHz x 24-bit file. That space is the famous orange triangle shown in MQA's graphics. This limited space of real music is not mere speculation: it has been backed up by statistics from thousands of recordings, by the very physics of music (the vibrational behavior of strings, surfaces, whatever), and by plenty of third-party research data.
Then, below it there is noise and the limits of your auditory system, and so you can use that wasted space in at least two ways:
- storing information there instead, dithered so it still appears as noise. What information? The real ultrasonic content.
- shaping that noise, so that the file gains headroom or a better S/N ratio.
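The noise-shaping idea in that second bullet is standard signal processing, so it can be sketched independently of MQA. Below is a minimal numpy toy (my own illustration, not MQA's actual shaper): a first-order error-feedback quantizer leaves the total quantization error the same but moves it out of the audible band.

```python
import numpy as np

FS = 96_000  # sample rate, Hz
rng = np.random.default_rng(0)
x = rng.uniform(-0.5, 0.5, FS)  # 1 s of a noise-like test signal

def quantize(signal, bits=8, shaped=False):
    """Quantize to `bits` bits; optionally feed each sample's quantization
    error back into the next sample (first-order noise shaping), which
    pushes the error energy toward high, less audible frequencies."""
    step = 2.0 ** (1 - bits)
    out = np.empty_like(signal)
    err = 0.0
    for i, s in enumerate(signal):
        v = s + (err if shaped else 0.0)
        q = np.round(v / step) * step
        out[i] = q
        err = v - q
    return out

def inband_error_rms(y):
    """RMS of the error spectrum below 20 kHz (bins are 1 Hz wide here)."""
    e = np.fft.rfft(y - x)
    return float(np.sqrt(np.mean(np.abs(e[:20_000]) ** 2)))

flat = inband_error_rms(quantize(x))
shaped = inband_error_rms(quantize(x, shaped=True))
assert shaped < flat  # same quantizer, but less error lands below 20 kHz
```

The shaped error rises above 20 kHz instead, which is exactly the "gain S/N where it matters" trade described above.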
Above it, there is
silence. That silence starts much earlier in the bandwidth than people normally think. How does MQA use it?
- Not for additional space, as that would make the file incompatible with the Redbook standard
- But instead, by applying very loooong custom filters so as to reach the Nyquist frequency (now at 48 kHz in the example) in the softest possible way. So long, in fact, that they probably even reach into the audible bandwidth, correctly assuming that the upper bands of music will not have big amplitudes at those frequencies.
- Why is that? Those very soft filters allow time coherence to be maintained at all frequencies (any signal has three components: amplitude, frequency and time, and PCM doesn't deal the best way with the last one). The phases of harmonics are then not smeared by what would otherwise be brickwall-like filters. With time coherence guaranteed and brickwalls avoided comes the most important issue: the pre- and post-ringing of the impulse response is vastly shortened, from an extension of 5000 µs for Redbook, or 500 µs for 192 kHz PCM, to 10 µs for MQA. That is what makes MQA sound better than any PCM file, at least in theory (we may disagree with that, but even so, that wouldn't make the tests any better).
- This would also explain, imho (I've not seen it described this way by MQA), why MQA resolves spatial information in a much better way: if there is less time-smearing, the signals spread across both channels are much more coherent.
- As an additional benefit, all of this avoids the aliasing frequencies that normal PCM can't avoid. This is fairly obvious.
- Please correct me if I'm wrong, but the square waves that GoldenEar has just posted as a flaw of MQA show exactly what was described above: the effect of those long custom filters when they receive information above their cutoff, which increases the ripples of the response. What you are detecting is exactly what MQA is designed for: it is not a flaw but the opposite, when the filters are applied to the data space they are intended to process.
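The filter-length/ringing trade-off claimed above is easy to sanity-check with a generic windowed-sinc low-pass in numpy. This is a sketch of the general trade-off, not MQA's actual (proprietary) filters, and the 1% threshold and tap counts are my own arbitrary choices:

```python
import numpy as np

FS = 96_000  # sample rate, Hz

def lowpass_sinc(cutoff_hz, num_taps):
    """Generic linear-phase windowed-sinc low-pass FIR (Hamming window).
    More taps -> steeper roll-off, but a longer impulse response."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(2 * cutoff_hz / FS * n) * np.hamming(num_taps)
    return h / h.sum()

def ringing_us(h):
    """Span, in microseconds, where |h| stays above 1% of its peak."""
    idx = np.nonzero(np.abs(h) > 0.01 * np.abs(h).max())[0]
    return (idx[-1] - idx[0]) / FS * 1e6

brickwall = lowpass_sinc(20_000, num_taps=1001)  # steep, Redbook-style
gentle = lowpass_sinc(20_000, num_taps=31)       # slow roll-off

# The steep filter rings for an order of magnitude longer:
assert ringing_us(brickwall) > 5 * ringing_us(gentle)
```

Whether 10 µs vs 500 µs of ringing is audible is a separate question, but the direction of the trade-off itself is uncontroversial.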
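On the aliasing point, here is what folding past Nyquist looks like in a plain numpy demo (independent of MQA): a tone above Nyquist reappears mirrored back into the band.

```python
import numpy as np

FS = 44_100
t = np.arange(FS) / FS  # 1 s of samples -> 1 Hz FFT bins

tone = np.sin(2 * np.pi * 30_000 * t)  # 30 kHz, above Nyquist (22.05 kHz)
peak_hz = int(np.argmax(np.abs(np.fft.rfft(tone))))
print(peak_hz)  # prints 14100: the tone folds back to FS - 30 kHz
```

This is why every PCM chain needs an anti-alias low-pass somewhere; the debate is only about how steep that filter has to be.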
An important issue: the ONLY purpose of higher sampling rates in normal PCM is to capture the ultrasonic harmonics of notes. Then, of course, if you believe that's the purpose, and your hearing is limited to 20 kHz, higher rates are thought to be useless. And so people, using their understanding instead of their audition, see no benefit in them: "Redbook is good enough, everything above is useless."
Instead, the theory of MQA says there is another, much more relevant reason for higher sampling rates: to gain room for the softer slopes of the low-pass filters needed, for the "deblurring" reasons explained above. That is how they achieve the time coherence they aim for.
With all that, we have the space required, the amount of data that needs to be captured, and the possibility of packaging all of this into a much smaller file. This is the origami, and it could be analyzed further here. But if we don't clear up the issues above, we can hardly take that additional step.
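For what it's worth, the basic fold/unfold mechanics can be sketched in a few lines of numpy. This is strictly my own toy illustration, not MQA's actual origami (which involves proper filtering, dithering and encryption): split a 96 kHz stream into a 48 kHz baseband, then bury a coarse version of the leftover samples in the baseband's low bits.

```python
import numpy as np

rng = np.random.default_rng(2)
x96 = rng.integers(-(1 << 23), 1 << 23, size=2048, dtype=np.int64)  # fake 96k/24-bit

base, extra = x96[0::2], x96[1::2]  # naive 2:1 split (no real filtering!)

coarse = (extra >> 16) & 0xFF                  # keep only extra's top 8 bits
folded48 = (base & ~np.int64(0xFF)) | coarse   # hide them in base's 8 LSBs

# Legacy playback: folded48 still looks like the 48 kHz baseband, disturbed
# only at the level of its 8 low bits, i.e. down in the noise floor.
assert np.max(np.abs(folded48 - base)) < 256

# Full "decode": dig the buried bits out and rebuild an approximate extra.
c = folded48 & 0xFF
approx_extra = np.where(c >= 128, c - 256, c) << 16  # undo two's-complement byte
assert np.max(np.abs(approx_extra - extra)) < (1 << 16)  # coarse, but present
```

Even this toy version shows the two properties being argued about: the folded file plays on legacy gear, and the recovered ultrasonic part is an approximation, not a bit-exact copy.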
Summing up, MQA "models" the file space to be used in a way NO PCM file has done before. This is, imho, perhaps the biggest contribution of the idea. But it is in open contradiction with the standard test methods used for a normal PCM file (like those used by Archimago and GoldenEar), because it is NOT a standard PCM file in the first place.
About the "lossless" issue:
I do recognize, as I have said many times here, that the use of that "sacred word" has been a bit careless on MQA's part. But what needs to be understood is that they refer the concept to the original MUSIC information as it was registered in the original analog master, or to a digital master with its timing issues corrected.
Not to any arbitrary information or tones you may feed the system in a test.
Nor to the intermediate entry points of any PCM. By definition, MQA assumes that PCM will contain the timing errors it is trying to fix, and so, by definition, you cannot expect the output to be equal to the input. That is the task of a back-and-forth compression algorithm, which MQA is not. MQA is instead a channel to transmit the information of the studio master to the end user (and not the other way around).
Seen this way (although I recognize it is a bit of a stretch), MQA is the only "lossless" one, because every standard PCM file will have a degree of time smearing that MQA does not.
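The "back-and-forth compression" sense of lossless mentioned above can be pinned down precisely. A minimal sketch, with zlib standing in for FLAC and a crude bit-truncation standing in for any lossy step (neither is MQA's actual pipeline):

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)
pcm = rng.integers(-(1 << 15), 1 << 15, size=4096, dtype=np.int16)
raw = pcm.tobytes()

# Lossless in the usual sense: decompress(compress(x)) == x, bit for bit.
assert zlib.decompress(zlib.compress(raw)) == raw

# Any step that discards bits fails that test, whatever it preserves of the
# "music": here, crudely zeroing the 8 low bits of each sample.
truncated = (pcm >> 8) << 8
assert not np.array_equal(truncated, pcm)
```

Whether one prefers this bit-exact definition or the "master-to-ear" sense argued above is exactly where the two camps in this thread part ways.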