
MQA Deep Dive - I published music on Tidal to test MQA

Status
Not open for further replies.

danadam

Addicted to Fun and Learning
Joined
Jan 20, 2017
Messages
976
Likes
1,519
FLAC is excellent at moderate resolution. However, a modern DXD master (which some audiophiles increasingly demand) is heavily noise shaped, with a really high, near-full-level noise mountain from 50 kHz to 150 kHz (@amirm showed this in his hi-res explainer video). And FLAC can't compress it much, because it's random noise.
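A toy illustration of that point (a first-order predictor standing in for FLAC's actual linear-prediction stage, which is a simplifying assumption): residuals of a smooth waveform are tiny, while residuals of random noise stay as large as the noise itself, so there is nothing left to compress.

```python
import math
import random

# Toy stand-in for FLAC's prediction stage (not FLAC itself): compare
# first-order prediction residuals for a smooth tone vs. random noise.
N = 4096
smooth = [math.sin(2 * math.pi * 100 * n / 48000) for n in range(N)]
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(N)]

def mean_abs_residual(x):
    # Predict each sample from the previous one; small residuals
    # need few bits, which is where FLAC's savings come from.
    return sum(abs(x[n] - x[n - 1]) for n in range(1, len(x))) / (len(x) - 1)

print(mean_abs_residual(smooth))  # tiny: highly compressible
print(mean_abs_residual(noise))   # about as large as the signal itself
```

The same effect is why a noise-shaped DXD master, whose top octaves are essentially full-level random noise, barely shrinks under FLAC.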

MQA is certainly not perfect, but it makes some reasonable choices about what to encode and what to leave behind. Since higher-resolution files are both ultrasonically noisy and not very compressible, I think discarding some data is not an unreasonable approach.
If you want to keep at least a pretense of fairness, then compare MQA to 24/96 FLAC, not to 24/352.
 

UliBru

Active Member
Technical Expert
Joined
Jul 10, 2019
Messages
124
Likes
338
I think there's a strong risk, looking at how this thread's been going, that we'll end up missing the assumptions and working principles of MQA (edit: regardless of what they are) and be unable to adequately characterize their product. It's pretty clear that the video that started this thread is lacking in that regard. With gear testing you often know the function of the DUT ahead of time (unless we're dealing with tweaks), but MQA is new territory for us.
We know that an LPCM file at a high sample rate contains all the data, including unnecessary data like high-frequency noise which we cannot perceive.
MQA now introduces the removal of all the data which is not really required. Of course this involves assumptions (and experience) about which data must remain. The goal is to get a CD-like file size but still keep all the high-frequency data. So there are some steps (please do not nail me on the example numbers):
1. Throw away all lower bits which are not perceived. So an assumption is that we do not hear the lowest 4 bits; 20 bits are sufficient anyway.
2. Extract all high-frequency content and pack it into these lower 4 bits. This means that the 4-bit range must be sufficient to hold all the necessary high-frequency information.
3. By step 2 the maximum high-frequency amplitude must be limited. Now the MQA research has shown that natural frequency responses fall off a lot in the high-frequency range. This explains the triangle assumption, and it allows the data to be packed into the origami.
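As a purely hypothetical sketch of steps 1-2 (the 20/4 bit split is the example number above, not MQA's published layout):

```python
# Hypothetical fold/unfold of a 24-bit sample: keep the top 20 bits as
# baseband audio and reuse the bottom 4 bits as a container for a small
# ultrasonic payload. Illustrative only; not MQA's actual bitstream.
def fold(sample24, ultra4):
    if not 0 <= ultra4 < 16:
        raise OverflowError("payload exceeds the 4-bit container")
    return (sample24 & ~0xF) | ultra4

def unfold(folded):
    baseband = folded & ~0xF   # top 20 bits: the audible-band signal
    ultra = folded & 0xF       # bottom 4 bits: the ultrasonic payload
    return baseband, ultra

sample = 0x12345A              # some 24-bit sample value
packed = fold(sample, 0b1011)
base, ultra = unfold(packed)
assert base == sample & ~0xF and ultra == 0b1011
```

The OverflowError branch mirrors the failure mode of ultrasonic content that needs more than the reserved bits: it simply does not fit.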

Trying to encode a wav track which does not meet the triangle requirement immediately leads to an overflow, as there are not enough bits in the origami. This means a loss of data and/or a loss of transparency.

So the GO video may have failed here.

Anyway, it is now interesting to study some signals which are allowed in the LPCM range. I have created a triangle line with a -2 dB/kHz slope (as published by BS). Then I created some bandlimited waves with a fundamental frequency of 1.5 kHz, adjusted to the -3 dB level of the triangle.
Any harmonic with a level above the triangle means the triangle condition is violated.
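The squarewave case can be roughly re-created numerically (the 0 dB triangle origin, the 48 kHz bandwidth, and the -2 dB/kHz slope here are my assumptions for illustration, not MQA's published spec):

```python
import math

# A square wave's nth harmonic (odd n only) lies 20*log10(n) dB below
# the fundamental; the assumed triangle limit falls by 2 dB per kHz.
f0 = 1.5             # fundamental in kHz
fund_level = -3.0    # fundamental placed at the triangle's -3 dB point

worst_excess = 0.0
for n in range(1, 64, 2):        # odd harmonics of a square wave
    f = n * f0                   # harmonic frequency in kHz
    if f > 48:                   # stop at a 96k rate's Nyquist
        break
    level = fund_level - 20 * math.log10(n)
    limit = -2.0 * f             # the -2 dB/kHz triangle line
    worst_excess = max(worst_excess, level - limit)

print(f"worst overshoot above the triangle: {worst_excess:.1f} dB")
```

With these assumptions the highest harmonics overshoot the triangle by roughly 60 dB, on the order of the attenuation figure quoted for the squarewave.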

Squarewave:
[Attached image: Squarewave.png]


Triangle wave:
[Attached image: Triangle.png]


Sawtooth:
[Attached image: Sawtooth.png]


So we can clearly see that none of these waves is legal per MQA's requirements. The squarewave, e.g., needs to be attenuated by at least 60 dB to be valid, i.e. within the triangle.
Now of course it can be debated whether such signals are natural signals. Maybe not. But they are legal in the LPCM range, and all these waveforms are basic building blocks, e.g. for music synthesizers. But we must expect the MQA encoding and folding to be lossy.
 
Last edited:

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,448
Likes
4,812
So we can clearly see that none of these waves is legal per MQA's requirements. The squarewave, e.g., needs to be attenuated by at least 60 dB to be valid, i.e. within the triangle.
Now of course it can be debated whether such signals are natural signals. Maybe not. But they are legal in the LPCM range, and all these waveforms are basic building blocks, e.g. for music synthesizers. But we must expect the MQA encoding and folding to be lossy.

What's interesting is also the use of language.

Saying something like
the tester maliciously used illegal signals!

sounds very different from
the tester knew our product is intrinsically limited and does not support xxx and yet ...

even if they end up meaning roughly the same thing.

Also, maybe a stupid question here, but aren't all signals within standard range "legal" for LPCM? (I guess people could nitpick about square waves and their "lossy" representation.)
 

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
681
Likes
783
Thanks Uli for the visualizations.

So while MQA is not a completely lossless codec for all possible types of audio signals, it might indeed be lossless for most real world content.

The question is, do we need it? It seems like a solution to a problem that doesn't exist, or that doesn't exist anymore. It would have been a fantastic solution decades ago when distribution was limited to CD. These days we have completely lossless and royalty-free codecs, and the bandwidth to distribute that content to the consumer. So it seems there's no technical need for MQA.

If there's no technical need for MQA, why is it even used? Obviously someone had an idea how to make money with it. And this is where the story takes a dark turn. The use of MQA in its current form and the deceptive marketing surrounding it is not in the interest of us consumers. This definitely should be discussed. Maybe in its own thread separate from the discussion whether MQA is a technically "neat" solution or not.
 
Last edited:

Jimbob54

Grand Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
11,096
Likes
14,753
Thanks Uli for the visualizations.

So while MQA is not a completely lossless codec for all possible types of audio signals, it might indeed be lossless for most real world content.

The real question is, do we need it? It seems like a solution to a problem that doesn't exist, or that doesn't exist anymore. It would have been a fantastic solution decades ago when distribution was limited to CD. These days we have completely lossless and royalty-free codecs, and the bandwidth to distribute that content to the consumer. So it seems there's no technical need for MQA.

If there's no technical need for MQA, why is it even used? Obviously someone had an idea how to make money with it. And this is where the story takes a dark turn. The use of MQA in its current form is not in the interest of us consumers. This definitely should be discussed. Maybe in its own thread separate from the discussion whether MQA is a technically "neat" solution or not.

Yup, this is exactly the reason for a lot of the scepticism. I've said before that I am 100% convinced the pitch to Tidal and the labels (and anyone else they approach) is massively different from the spin given to the end consumer. Trying too hard to be all things to all people.
 

Music1969

Major Contributor
Joined
Feb 19, 2018
Messages
4,669
Likes
2,845
OP says the Schiit Magnius headphone amp lowers dynamic range by increasing the level of low-level sounds. If he is so good at determining this by ear, then he should have performed a bunch of listening tests with MQA using real music and educated his audience that way.

Has @GoldenOne withdrawn from the blind test he initially agreed to?
 

pjug

Major Contributor
Forum Donor
Joined
Feb 2, 2019
Messages
1,775
Likes
1,562
Yes, I have looked at this, but as discussed it seems like the triangle excludes some seemingly normal music, and as Amir pointed out, "may just be a marketing thing".
I don't really get the triangle business, except as a crude representation of the nature of the signal that has to fit into two rectangular windows. The triangle makes it look like MQA ramps down the number of bits used. But my understanding is that there are just two areas: the audible frequencies at something like 16-18 bits, and the ultrasonics in the remaining bits. Am I missing something?

Edit: So then a high frequency tone that extends outside of the orange triangle, like I've drawn below into the yellow curve, should encode without a problem. Is this correct or not?

[Attached image: 1622636975249.png]
 
Last edited:

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,727
Likes
7,987
@Raindog123 @tmtomh I had the use of language in mind. AFAIK, MQA claims that the unfolding process makes their file an efficient container for several sample rates and bit depths. So the comparison has to be done across the folded and unfolded versions with several PCM files, which means it will always show a difference in the MQA file due to changes in the sample structure. Characterizing that difference requires adopting some kind of framework, defined either as preserving information or by some other criterion.

MQA understood this very well. They moved the criterion from preserving all the information to preserving only the musically useful information; from just giving you a file in their container to certifying its contents; from creating a decoder for playback to defining how the hardware should function. All to ensure as little variation in playback as possible and to justify use of the word lossless. Or to claim that the goods are better than lossless.

It repeats the old topic of fidelity, but instead of discussing an event, its recording, and its reproduction, they focus on the file, its transmission, and playback. Instead of using the positive term fidelity they use the negative term loss. So instead of pursuing the highest fidelity they are attempting to assure the least loss. It's like an auditing exercise. Kind of a commercialization of the traditional audiophile mindset.

I think there's a strong risk, looking at how this thread's been going, that we'll end up missing the assumptions and working principles of MQA (edit: regardless of what they are) and be unable to adequately characterize their product. It's pretty clear that the video that started this thread is lacking in that regard. With gear testing you often know the function of the DUT ahead of time (unless we're dealing with tweaks), but MQA is new territory for us.

That's a great explanation and analysis, thanks! "Commercialization of the traditional audiophile mindset" - an apt way to look at MQA.
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
It seems like a solution to a problem that doesn't exist, or that doesn't exist anymore. It would have been a fantastic solution decades ago when distribution was limited to CD. These days we have completely lossless and royalty-free codecs, and the bandwidth to distribute that content to the consumer. So it seems there's no technical need for MQA.
That's true for stereo and some multichannel, but not for immersive content like ambisonics. The data requirements are huge. There FLAC isn't enough (it maxes out at 8 channels), and WavPack (255 channels) is comparatively unfamiliar to a lot of people.

Maybe they have bigger fish to fry, but this is speculation.
 

markus

Addicted to Fun and Learning
Joined
Sep 8, 2019
Messages
681
Likes
783
That's true for stereo and some multichannel, but not for immersive content like ambisonics. The data requirements are huge. There FLAC isn't enough (it maxes out at 8 channels), and WavPack (255 channels) is comparatively unfamiliar to a lot of people.

Maybe they have bigger fish to fry, but this is speculation.

Ambiwho? :) More seriously, that's another MQA-like marketing case, but one the consumer has already lost before even knowing it: Dolby Atmos. They laid the groundwork for world domination about 10 years ago. No way around it any time soon.
 


tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,727
Likes
7,987
If you want to keep at least a pretense of fairness, then compare MQA to 24/96 FLAC, not to 24/352.

Yes, this is crucially important, for several reasons:
  • MQA's communications have allowed (to put it generously) an interpretation that content with sample rates above 96k can be preserved and reconstructed by MQA. In fact no MQA file can contain any content above the Nyquist frequency of a 96k sample rate, i.e. 48 kHz. In conjunction with the "authentication" aspect, this is deeply misleading. Proposing that DXD is the closest analogue to MQA is based upon, and furthers, this false and misleading idea.
  • Claims have been made by MQA, and repeatedly by at least one person in this thread, that MQA's file size and data rate provide much-needed savings over PCM, and the comparison has often been made with DXD, whose file sizes and data rates are enormous. But again, this is not apples to apples, so it is misleading to promote an MQA-DXD comparison.
  • DXD is an overly specific, cherry-picked comparison, as MQA is comparing itself to lossless PCM in general and making its case based on its alleged superiority to all lossless PCM. So unless DXD is the dominant or most typical high-res audio format - which it most certainly is not - it is at best not a useful comparison, and at worst an intentionally bad-faith comparison that attempts to stack the deck.
Finally, while it's tedious and rather sad to have to say this yet again, I fear it is nevertheless necessary: none of the above is to say that sample rates above 96k are necessary in the first place (they're not), or to claim that music files contain frequencies above the Nyquist of 96k (they usually don't), or to claim that if any >48k frequencies do exist in some files that such frequencies are necessary (they're not) or that they're even music (they're usually not).

Moreover, just as there is no evidence that we need to preserve frequencies over 48k (or 44.1k in the case of an 88.2k sample rate), there is also no evidence that we need to preserve frequencies over 20k. It's all ultrasonic, and we can't hear any of it. (Yes of course we need headroom for filtering, but that's the opposite - to keep ultrasonics out of the audible spectrum while preserving maximum fidelity within the audible spectrum.)

Rather, it is MQA and many of its supporters, not me and not many other MQA critics and skeptics, who make the claim (or in some cases passively let the claim slide for opportunistic reasons) that "ultrasonics are important." The entire line of discussion about triangular sampling and "retaining only the music and not the noise" - and the whole "natural music" vs "not really music" debate - all proceeds from this nonsense about preserving ultrasonic frequencies based on the notion - endorsed and promoted by Bob Stuart - that we can hear or "sense" ultrasonics when we listen to music.
 
Last edited:

Raindog123

Major Contributor
Joined
Oct 23, 2020
Messages
1,599
Likes
3,555
Location
Melbourne, FL, USA
I don't really get the triangle business, except as a crude representation of the nature of the signal that has to fit into two rectangular windows. The triangle makes it look like MQA ramps down the number of bits used. But my understanding is that there are just two areas: the audible frequencies at something like 16-18 bits, and the ultrasonics in the remaining bits. Am I missing something?

Edit: So then a high frequency tone that extends outside of the orange triangle, like I've drawn below into the yellow curve, should encode without a problem. Is this correct or not?

I think what it actually does is more like this (an illustration for 16-bit "MQA-CD" in red; solid is the un-dithered area, dashed is a 3-bit dither) [using Meridian's old pic]:

[Attached image: Redbook_vs_MQA__.png]
 

DimitryZ

Addicted to Fun and Learning
Forum Donor
Joined
May 30, 2021
Messages
667
Likes
342
Location
Waltham, MA, USA
Put it this way: In another, past thread here on another topic, I stated that the resampling process of taking a 96k source down to 44.1k changed the audio data, and therefore a 96k PCM file and a downsampled 44.1k PCM file could sound different because of the non-integer sample-rate conversion, and that the sonic difference could be seen in a difference file from trying to null-compare the two.

In response, our knowledgeable and currently thread-banned friend @mansr [edit: it was @danadam actually] told me I was mistaken because my method of trying to null-compare the 96k original with the 44.1k downsampled version couldn't work. Instead, he explained, I should downsample the 96k to 44.1k, then resample the 44.1k back to 96k and compare the two 96k files. When I did so, they nulled out 100% for all frequencies up to 22.05 kHz, indicating that the audible-range information from the original 96k file could be perfectly reconstructed. So I had to admit I was mistaken in my initial claim, which I was happy to do since I had learned something: I hadn't realized that non-integer resampling was still lossless. In other words, the different Nyquist limits of course made a difference in the ultrasonics, but in the audible range the non-integer resampling was a non-issue in terms of the ability to perfectly reconstruct the content that was originally in the higher sample rate.
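That round trip can be reproduced in miniature with an idealized FFT-based resampler on a periodic test signal (a toy model assumed here for illustration, not the actual tool used in that thread):

```python
import numpy as np

# 96k -> 44.1k -> 96k round trip on a 10 ms periodic block. A 1 kHz tone
# survives the trip exactly; a 30 kHz tone (above 22.05 kHz) cannot be
# represented at 44.1k and is necessarily discarded on the way down.
fs1, N1 = 96000, 960            # 10 ms at 96k
fs2, N2 = 44100, 441            # 10 ms at 44.1k (exact 320:147 ratio)
n = np.arange(N1)
tone1k = np.sin(2 * np.pi * 1000 * n / fs1)
tone30k = 0.5 * np.sin(2 * np.pi * 30000 * n / fs1)
x = tone1k + tone30k

X = np.fft.rfft(x)
Y = X[: N2 // 2 + 1] * (N2 / N1)      # down: drop bins above 22.05 kHz
y44 = np.fft.irfft(Y, N2)             # the 44.1k version

Z = np.zeros(N1 // 2 + 1, dtype=complex)
Z[: N2 // 2 + 1] = Y * (N1 / N2)      # up: zero-pad the spectrum back
x96 = np.fft.irfft(Z, N1)

# Nulls against the audible-band content down to float roundoff.
print(np.max(np.abs(x96 - tone1k)))
```

The ultrasonic tone is gone after the trip, but everything under the 44.1k Nyquist comes back bit-for-practical-purposes identical, which is the null result described above.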

My point is that I think it obscures more than it reveals to call the well-documented limits of human hearing "perceptual" in the same way that lossy codecs' compression algorithms are perceptual - and therefore it also obscures more than it reveals to call downsampling that perfectly preserves, bit for bit, the audible-range musical data, "lossy." By that logic, a 176.4k file created from a 352.8k original is lossy. Sure, there is a clear logic by which that claim can be made - but in order to employ that logic you have to stretch the term "lossy," in the context of sound reproduction for humans, to the point where it becomes meaningless (which is not what Amir is trying to do, but which is most certainly what many promoters of MQA have attempted and are continuing to attempt to do).

We can certainly debate the relative merits of various encoding and compression algorithms independently of questions of mathematical lossiness/losslessness, and I have absolutely no problem with doing so.

But to lump something as fundamental as frequency and sample rate into the same lossy bucket as perceptual encoding - to me that is a mixing of domains, and when it comes to the discussion of MQA, a mixing of goals and purposes as well. Amir says he can pass a blind test distinguishing 320k mp3 from lossless. He would never make any parallel claim that he could do so with two files that were bit-identical except for frequencies above 22kHz - nor, I think, would he or most others here be inclined to believe such a claim made by anyone else. Returning to my prior example, the difference file between the musical data in a PCM file and an mp3 file made from that PCM file will be audible. By contrast, the difference file between the data in a 96k PCM file and a 44.1k PCM file made from that 96k original will not be audible. That's a meaningful difference.

I think at some point this becomes a philosophical, perhaps even semantic, debate. But I think it is both practically and epistemologically improper to equate (implicitly or explicitly) downsampling and perceptual encoding under a simple heading of "lossy."
When we lump MQA in with lossy codecs, we commit a methodological mistake, because we are not differentiating the very divergent design intents of the two approaches. Design intent comes first; only after that can design implementation be contextually understood. Form follows function.

Consider that the explicit goal of a lossy codec is to, well, lose as much data as possible while remaining audibly transparent, and it operates in the baseband. The goal of MQA is to retain all the musical information, including near ultrasonics. The lossy codec is actively shedding musical data, while MQA is trying to hold on to it. These are dramatically different design intents.

With such different design intents, the implementations are vastly different. A perceptual codec has a complex psychoacoustic engine that looks for and discards musical detail judged to be masked to the listener. MQA is much simpler: it identifies the ultrasonic limits of the music and encodes them into the baseband LPCM, with a bit of bit-shifting. That's it (if you set aside the "deblurring" step). Outside of the ultrasonic limit and noise floor, MQA makes no decisions about the music or its perception by the listener. A perceptual codec makes decisions about audibility a thousand times every second.

One could imagine a very low-compression-rate audiophile lossy codec that reaches into the ultrasonics and filters out the crazy noise in a DXD master to deliver exceptional sound quality. Most people would reject it out of hand.

If there is a "perceptual" goal in MQA, it's aimed not at the listener but at your equipment. They are attempting to hold any encode/decode differences from the original at or below our systems' SNR. Judging by various tests from @Archimago and others, they appear to be succeeding. At that point, it becomes practically lossless to the listener.
 
Last edited: