Archimago and GoldenEar set up a test rooted in the most common way we know how to test audio tools for reproducing music. With your way of thinking, music has to adapt to MQA's definition of technical standards. That's not right, is it?

Because in the ultrasonic band the content of music (just harmonics of increasingly lower magnitude) is NOT 24 bits deep but, as Atkinson's post says, about 5 bits, decreasing even further the higher you move in frequency. The rest is noise below and empty bits above. So they don't need to pack 24 bits below the noise floor of the first 24 kHz band, but much, much less (and btw, this is not mysterious, hidden information; MQA explains it in their documents in annoying detail).
Unless, of course, your signal is high-amplitude white noise or a square wave, which DOES have very strong harmonics in the ultrasonic range (and in the upper audible band too). Which, btw, is why Amir's DAC tests extend the measurement bandwidth into the ultrasonic when measuring square waves. That's why MQA performs badly with that type of signal: it is not intended to process it. One of the many reasons why these tests are flawed.
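To put rough numbers on the square-wave point above, here is a minimal Python sketch (the 1 kHz fundamental is my own assumption, just for illustration): an ideal square wave's odd harmonics fall off only as 1/n, so its ultrasonic harmonics sit tens of dB above where typical music's roughly 5-bit-deep ultrasonic content lives.

```python
import math

f0 = 1000  # hypothetical 1 kHz square wave fundamental (my assumption)

# Odd harmonics of an ideal square wave have relative amplitude 1/n,
# so the rolloff with frequency is very slow.
for n in range(1, 50, 2):
    f = n * f0
    if f < 20000:
        continue  # only look at the near/above-ultrasonic harmonics
    level_db = 20 * math.log10(1 / n)  # level relative to the fundamental
    # e.g. the 21 kHz harmonic is only about -26 dB below the fundamental,
    # vastly hotter than the ~5-bit-deep ultrasonics of typical music
    print(f"{f / 1000:5.1f} kHz harmonic: {level_db:6.1f} dB re fundamental")
```

Even at ~49 kHz the harmonic is only around -34 dB down, which is nothing like the near-noise-floor ultrasonic content MQA's packing assumes.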
Btw, I can't figure out which is worse: that Archimago and GoldenEar didn't know that (almost unbelievable), or that they knew it and still prepared a test file knowing the type of failures they would get.
Remember, MQA is marketed as high resolution, not as 'some part of the audio spectrum', as we now learn by reading the specifications. So yes, I think it was correct of them to test as they did, even if they set up MQA to fail simply because of how MQA is marketed.
I make music using modular synthesizers. Colored noise and square waves are tools of the trade. Should I comply with a new audio standard that tries to establish a new, restricted definition of quality for this art form? I think not; most people would dismiss MQA and move on with their creations using tools that better suit the purpose.
With regard to JA's explanation: how would one apply dither to the two halves? Would one wait and do it in the decoder after the unfold? But then the non-unfolded version would suffer, so the encoder would have to dither this layer after truncating. But then the decoder truncates this layer again...
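The core of that dilemma can be sketched with generic TPDF dithered requantisation (assumed here purely for illustration; this is not MQA's actual encoder or decoder): each truncation stage needs its own dither, so dithering once on the encoder side does not cover a later truncation on the decoder side.

```python
import random

def tpdf_dither_truncate(x, bits):
    """Requantise a float in [-1, 1) to `bits` of depth with TPDF dither.

    A generic textbook sketch of dithered truncation, NOT MQA's scheme.
    """
    q = 2.0 ** (1 - bits)                        # quantisation step (LSB)
    d = (random.random() - random.random()) * q  # TPDF dither, 2 LSB peak-to-peak
    return round((x + d) / q) * q

random.seed(0)
x = 0.123456789

# Encoder-side: dither once when truncating to 24 bits.
y24 = tpdf_dither_truncate(x, 24)

# Decoder-side: truncating that layer again needs fresh dither of its own;
# reusing the encoder's dither would leave plain undithered truncation error.
y16 = tpdf_dither_truncate(y24, 16)
```

The error of each stage stays within a couple of LSBs at that stage's depth, which is exactly why the question of *where* each half gets dithered (encoder or decoder) matters.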
On a side note: by Qobuz's own definition, any audio file with 24-bit depth is to be considered Hi-Res content. This is from their legal pages.
Yet we find 24-bit MQA-encoded music on their platform wrongly labelled Hi-Res, without any warning that specially licensed hardware or software is needed for 'hi-res' playback.
What we are seeing is MQA sneaking its way into our audio chain like a Trojan horse, with streaming providers silently complying with the new standard even when it is actually undermining their whole business model. If anything, what MQA has taught us is that 24-bit audio is now a dated hi-res scam.