
MQA Deep Dive - I published music on tidal to test MQA


Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,497
Why hasn't this thread been promoted to the home page, @amirm? It should be right up there as a sticky. Kudos to @GoldenOne.

He doesn't seem to find the conclusions convincing? More likely he resonates with the pro-MQA claims that it perceptually delivers on what it purports (whether or not that's down to psychology). I also suspect he sympathizes with the format's claimed goals, since his work at Microsoft years ago pushed a proprietary codec in a somewhat similar way (though nowhere near as fantastical as the claims MQA makes on top of a codec so closed to outside scrutiny; the MS effort was, strictly speaking, more of a DRM push).

This is true. But as I understand it, GoldenOne's intent was to test MQA against its marketing claims, not as a lossy codec. As an aside, I'd be very interested to see how MQA does compared to the various other lossy codecs on the market. But until MQA encoders are easily available, that seems difficult.

That's what happens when you refuse to address basic red flags. But the thing is, MQA (Meridian) is stuck in a catch-22 due to the conjunction of claims they hold to. If you open your work to audit, or provide the encoder for scientific testing with live subjects instead of machine measurements alone, you run into a few problems depending on the sort of tests you run:

  • Compared to lossless (if MQA wins), you now have the problem that "original artist intent" becomes impossible: the encoder would be claiming to know the intent of artists it has never been exposed to, encoding lossily, with results that don't match lossless output (which is bit-perfect across lossless encoders once the PCM is decoded). That is one of MQA's more supremely fantastical claims. And if it's not perceptually lossless, it cannot by definition be the artist's intent, because the artist would have to be approving something they never had access to (it's not as if the artist produces music while monitoring an on-the-fly MQA rendition).

  • If the results are not statistically significant, they lose the main thrust of the product's claims: EVEN IF the encoder were somehow accounting for artist intent, MQA would then audibly do nothing over lossless. In that scenario you're left with an encoder with no real purpose and no way to demonstrate superiority over anything. I suppose they could still make unsubstantiated claims about being privy to every ADC's profile and the encoder magically accounting for them all regardless of the content fed to it. Regardless, the inference from there to the claim that we as users hear what the producer heard and intended still requires a formal argument, even if the universal-ADC-knowledge claim is granted.

  • Compared to lossy, you still have the artist-intent issue, but now compound it with a different kind of comparison answering an entirely different question than the lossless one. When benching against lossy, the comparison comes down to locking in bitrate, or locking in file size (or both). It's uninteresting to find one codec better if, say, one tops out at 256 kbps while another runs in the thousands of kbps, or one produces files 5x the size; such comparisons only matter if you eventually intend to bench against lossless. But let's assume that, at matched file size and bitrate, MQA beats something like Opus or Musepack to some statistically relevant degree: you're still left with the artist-intent issue. Basically, the better MQA does, the larger the artist-intent problem grows.

  • The final problem: if it beats lossy codecs in blind tests at similar bitrate and/or file size, it would draw more scrutiny aimed at revealing the pattern of its algorithmic operation. This is what the OP wanted to trend toward, but he was cut short when the third file he tried to submit was denied (he wanted to see whether the pollution of the entire file correlated with how much reconstruction of ultrasonic content was occurring, by comparing his original file with one he created containing less intense tones). In this scenario you have what is classically the start of a new line of study, one that threatens the marketing overriding any engineering department's intent (a big no-no for a company like Meridian especially). People would be looking to explain away the phenomena, and if the algorithm's patterns could be revealed to any appreciable degree, that directly threatens the MQA business model, since they could then be adopted by new codecs.
In the end, you're left either showing MQA is perceptually lossless (and then having to bench against lossy and beat it on the relevant metrics of bitrate, file size, encoder performance, etc., while also accounting for how "artist intent" is preserved), or, with no testing wins for MQA, having a much easier time peddling the artist-intent line (though not with the OP's video out in public).

One final thing: you could have a disaster scenario where MQA loses in a mishmash test that doesn't just go 1v1 against encoders but throws them all into the mix, with the highest preference score winning. In that realm, if MQA loses, it loses VERY hard, especially across multiple bitrate tests, and doubly so if it does worse than a codec running on lower system resources.

For MQA to make any sense without resorting to metaphysical or fantastical claims (like a conscious encoder that telepathically knows every artist's intent), they need to drop the claims of "original artist intent" / "master quality". Whichever way it goes with the claims MQA holds to, revealing anything, or even saying anything more than they already have, potentially works against them once all factors are considered. That is why you don't see them daring to challenge other codecs in any appreciable, controlled manner.

I made a thread not long ago asking if anyone could blind test a few MQA tracks fully unfolded on a DAC against a DAC with zero MQA compatibility, since I wasn't able to. It didn't get much traffic, to be honest, but I'm as curious as you are. Then again, I should seek out trained listeners, as I'm not one.
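(For anyone attempting such a blind test, a minimal sketch of how the results could be scored; the function name and the 12-of-16 example are illustrative assumptions, not from this thread:)

```python
# Scoring an ABX run: probability of k or more correct out of n trials
# by pure guessing (binomial, p = 0.5).
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided P(X >= correct) for X ~ Binomial(trials, 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))  # ~0.038: 12/16 beats the usual 0.05 threshold
```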
 

Jimbob54

Grand Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
11,115
Likes
14,782
I would love to see the pitch to the streamers, hardware manufacturers and labels versus the marketing to the consumer, and to be a fly on the wall at some of the training briefings within the industry.
 

sandymc

Member
Joined
Feb 17, 2021
Messages
98
Likes
230
I'd just like to point out that pushing it until it breaks is a perfectly valid way of evaluating a lossy/perceptual codec.

Yes. And the reported errors on test tracks are not encouraging for MQA in that regard. It's not at all uncommon for compression algorithms to break down when used on data they're not designed for. E.g., almost any compression mechanism will be unable to compress encrypted data; if one is used anyway, the data size can actually expand. The size expansion of one of the test tracks could be showing a similar syndrome. But the correct way to handle that eventuality is to fall back to no compression, or an alternate mechanism, not to break entirely.
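(A quick way to see the expansion effect described above, using Python's zlib as a stand-in for whatever compressor is in play; random bytes stand in for encrypted data, which looks statistically similar:)

```python
# DEFLATE applied to high-entropy bytes produces slightly LARGER
# output: stored-block and header overhead with no compressible
# structure to pay for it.
import os
import zlib

raw = os.urandom(1_000_000)
packed = zlib.compress(raw, 9)
print(len(raw), len(packed))  # packed length exceeds raw
```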
 

mansr

Major Contributor
Joined
Oct 5, 2018
Messages
4,685
Likes
10,705
Location
Hampshire
It is trivially proved that any lossless compression algorithm must produce larger output for some inputs. MQA isn't lossless, though. The reason MQA files are often larger than the plain FLAC equivalent is that they contain a lot of noise and noise-like data that the FLAC layer can't compress.
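(For reference, the counting argument behind that first sentence:)

```latex
% Pigeonhole: count inputs vs. strictly shorter outputs.
\underbrace{2^n}_{\text{inputs of length } n}
\;>\;
\underbrace{\sum_{k=0}^{n-1} 2^k}_{\text{outputs of length} < n} \;=\; 2^n - 1
% Hence an injective (lossless) encoder must map at least one
% length-n input to an output of length >= n.
```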
 

Atanasi

Addicted to Fun and Learning
Forum Donor
Joined
Jan 8, 2019
Messages
716
Likes
796
Yes. And the reported errors on test tracks are not encouraging for MQA in that regard. [...] But the correct way to handle that eventuality is to fall back to no compression, or an alternate mechanism, not to break entirely.
Lossless compression falls back to no compression; lossy compression falls back to encoding whatever garbage fits within the bitrate limit. The worst-case output of a lossless encoder is only slightly larger than the uncompressed data: the overhead could literally be a single bit saying that the following data is stored uncompressed.
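(A minimal sketch of that fallback, assuming a zlib-style compressor; the one-byte flag here is a simplification of the single flag bit described above:)

```python
# One flag per block marks "compressed" or "stored verbatim"; a whole
# byte here for simplicity, where a real format could spend one bit.
import zlib

def encode_block(block: bytes) -> bytes:
    packed = zlib.compress(block, 9)
    if len(packed) < len(block):
        return b"\x01" + packed   # compressed payload
    return b"\x00" + block        # stored: worst case is +1 flag

def decode_block(frame: bytes) -> bytes:
    flag, payload = frame[0], frame[1:]
    return zlib.decompress(payload) if flag == 1 else payload
```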
 

sandymc

Member
Joined
Feb 17, 2021
Messages
98
Likes
230
It is trivially proved that any lossless compression algorithm must produce larger output for some inputs. MQA isn't lossless, though. The reason MQA files are often larger than the plain FLAC equivalent is that they contain a lot of noise and noise-like data that the FLAC layer can't compress.

Well, if that is so, then MQA is a really, really badly thought-out design. Adding data that can't be compressed to a stream and then running it through a compressor is just stupid. First compress what can be compressed, then add the incompressible stuff. Presumably they did that to maintain backward compatibility with Redbook? If so, not clever.
 

Raindog123

Major Contributor
Joined
Oct 23, 2020
Messages
1,599
Likes
3,555
Location
Melbourne, FL, USA
then MQA is a really, really badly thought-out design. Adding data that can't be compressed to a stream and then running it through a compressor is just stupid.

Personally, I do not think that's exactly what they do. Still, the point holds: they had an idea that, upon practical consideration and implementation, no longer held water and had to be patched upon patched, turning into a Frankenstein (or Elephant Man). And those "ugly babies" tend to be hidden from the public eye, at all costs...

I tried to point that out here by quoting Lewis Carroll. (For those needing a translation, it means "doing silly shiit to cover up the silly shiit done in the first place.")
 
Last edited:

scott wurcer

Major Contributor
Audio Luminary
Technical Expert
Joined
Apr 24, 2019
Messages
1,501
Likes
2,822
I suppose they could still make unsubstantiated claims about being privy to every ADC's profile and the encoder magically accounting for them all regardless of the content fed to it.

I still consider this claim a distraction and highly speculative (if not out and out fatuous). A public demo of this capability would be easy to do and reveal nothing proprietary at all.
 

oursmagenta

Active Member
Forum Donor
Joined
Jan 19, 2021
Messages
161
Likes
187
Location
France
I wonder if there is an elephant in the room here.

a) We already stream literal s**t-loads of data via Netflix/Amazon Prime Video/Disney+/Apple TV+/HBO Max etc., all in 4K HDR with multichannel audio tracks; storage space is pretty cheap these days; and we have FLAC.
b) Some in the audio industry want to promote a new closed, DRM-loaded format that promises us audio-fools something better than audio-heaven, whereas there isn't (let's say to date, for the sake of science) any strong argument or evidence for an audible benefit of MQA over FLAC.

I have a hard time making sense of b) since we already have a) as a reality.
 
Last edited:

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,961
Likes
2,626
Location
Massachusetts
1) It can in theory represent some of the spectrum above 22.05 kHz that is completely thrown away in Redbook. We know Redbook carries zero data above its band limit, so anything that can be preserved up there can technically be called superior.

2) A signal that has the statistical content and amplitude of a suite of music. I know the Meridian/MQA team performed such an analysis and, based on it, determined how much of that spectrum they needed to represent. In the extreme case, if content just disappears into noise, they could simply generate noise in the decoder and not encode anything.

A proper encoder would analyze the ultrasonic content and determine what part of it is correlated with music and what is not. The latter can be thrown out or simply represented as noise. The former can then be heavily quantized, down to only what that correlation requires. Say the spectrum moves from -90 dB to -85 dB at 40 kHz; then all you need is a single bit to represent that dynamic range (see the worked check after this quoted passage). PCM has flat encoding: it spends the full 24 bits whether or not there is any useful content.

There are HUGE assumptions in MQA encoding this way, namely that this is how almost all music behaves. Violating them completely breaks the encoder, as it can't remotely squeeze pathological content into the container it has.

3) I expected MQA to publish such data in AES, along with more controlled testing, based on my conversations with Bob Stuart in the early days. And of course for encoders to be available for sale (not free, but available to buy). None of this happened. Whether that indicates the above goals were not met, I don't know.

What I do know is that whatever it does, MQA lights up the "high-res master" light on the DAC. To the extent people can't hear ultrasonics anyway, that may be the end goal needed to make people think MQA is better, and the job is done. :) For good measure, they could have, and most definitely have, thrown in some content they know comes from a better master, so any in-the-field comparison would be null and void in their favor.
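(A worked check of the -90 dB to -85 dB arithmetic in the quoted passage above, using the standard ~6.02 dB-per-bit rule for linear PCM:)

```latex
\text{bits needed}
= \left\lceil \frac{\Delta_{\mathrm{dB}}}{20\log_{10}2} \right\rceil
= \left\lceil \frac{5}{6.02} \right\rceil
= 1
\qquad \text{vs. the 24 bits flat PCM spends on that band.}
```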

Let me suffice it to say that "the extent people can't hear ultrasonics anyway" is absolutely zero. That is why they are called ultrasonics. ;)
Here is the dictionary definition:
ultrasonics
[ uhl-truh-son-iks ]

noun (used with a singular verb)
the branch of science that deals with the effects of sound waves above human perception.

Once you accept the definition of ultrasonics, MQA becomes as relevant as a $5000 speaker cable.
A $5000 speaker cable may not harm the sound, but some have been shown to have demonstrable degradation and deleterious properties.
MQA is lossy; a less flattering description is demonstrably deleterious.

MQA is not HD-Audio. It is a lossy folding technique, a poor reconstruction filter, and eight tons of marketing intended to simulate HD-Audio.
HD-Audio is already a tenuous proposition.
It is possible that HD-Audio performs better on some systems due to how the individual DAC handles the sample rate.
It is also possible that HD-Audio performs worse on some systems due to ultrasonics modulating down into the audible range; this would rightly be considered distortion. Perhaps HD-Audio has benefits, but MQA has nullified them all.

MQA is not HD-Audio, as it reduces the available dynamic range, operates at a lower sample rate, and has a reconstruction filter that should, at the very least, leave the panther lying flat or beheaded. ;)

Am I correct that MQA is also restrictive in that it precludes additional REQ/PEQ processing with another D/A conversion?
If that is the case, I find that even tacit support by some requires serious introspection.

- Rich
 
Last edited:

John Atkinson

Active Member
Industry Insider
Reviewer
Joined
Mar 20, 2020
Messages
168
Likes
1,089
Why not? They all follow perceptual models of hearing to get their massive compression ratios. At 128 kbps, only 8% of the bandwidth is used. There is no way you can encode pathological signals and expect anything near the quality the codec normally produces for music.

See my 2008 article on testing lossy codecs with multitone test signals to examine how they manage the limited "bit budget." (Page 2 has the actual test results for MP3 and AAC at various bitrates.) https://www.stereophile.com/features/308mp3cd/index.html

John Atkinson
Technical Editor, Stereophile
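(In the same spirit as the linked article, a sketch of one way to generate such a multitone test signal; the tone count, spacing, and levels here are arbitrary choices, not the article's:)

```python
# 30 log-spaced tones from 50 Hz to 16 kHz, equal amplitude, in one
# 16-bit file: a perceptual coder must split its bit budget across
# all of them at once.
import numpy as np
from scipy.io import wavfile

fs = 44100
t = np.arange(10 * fs) / fs                      # 10 seconds
x = sum(np.sin(2 * np.pi * f * t) for f in np.geomspace(50, 16000, 30))
x *= 0.5 / np.max(np.abs(x))                     # normalize, 6 dB headroom
wavfile.write("multitone.wav", fs, (x * 32767).astype(np.int16))
```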
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,497
I'd just like to point out that pushing it until it breaks is a perfectly valid way of evaluating a lossy/perceptual codec.

Given the constant secrecy MQA maintains, I feel this is one of the more valid ways of evaluating their codec, until the day we're given access to the encoder and decoder. Otherwise you have to jump through far more demanding hoops to extrapolate appreciable results.

I still consider this claim a distraction and highly speculative (if not out and out fatuous). A public demo of this capability would be easy to do and reveal nothing proprietary at all.

It is speculative and an obvious distraction. The only reason I say they can try to make such a claim is that it's their safest bet against revealing anything about their tech (which is obviously their goal at this point, as all benefit of the doubt for a good-faith approach went out the window long ago). They can make the claim, and since no one can disprove it, they can continue with their nonsense. When I say they can keep making unsubstantiated claims if they wish, I mean it in a derogatory sense, the same way a pathological liar can keep on lying if they wish to proceed with their nonsense.

Make no mistake, though: they take a PR hit by continuing to evade demonstrating that claim (and any other they don't offer proof for). They seemingly did the calculation, and it pays to take such a hit over actually revealing, by demonstration, whether the claim is true.
 
Last edited:

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,961
Likes
2,626
Location
Massachusetts
If you put a square wave into MP3, it sounds bad too. You can choose not to do that, or do as artists do: they don't care, as they don't think the audience is remotely that critical. I can't listen to DBS satellite radio services due to the massive amount of compression artifacts (they have bandwidth as low as 32 kbps!). Yet they have millions of customers, so clearly fidelity is not critical, as we all know. And content owners don't seem to care.

MQA would never be applied to sources with digital clipping or overages, given their rigorous standards ;)
If MQA can figure out a royalty scheme, it will be coming to MP3 and satellite radio; after all, they are already working on screwing up CDs.

- Rich
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,449
Likes
4,818
See my 2008 article on testing lossy codecs with multitone test signals to examine how they manage the limited "bit budget." (Page 2 has the actual test results for MP3 and AAC at various bitrates.) https://www.stereophile.com/features/308mp3cd/index.html

John Atkinson
Technical Editor, Stereophile

Interesting: the potential benefits of AAC vs MP3 seem obvious, and your conclusion, based on that test-signal analysis, was clearly expressed.

That really demonstrates there is a double standard as far as MQA is concerned.

On one hand, the behavior of MP3/AAC/OGG/etc. with test signals is worthy of investigation and publication, and the results are described as conclusive.

On the other hand, when that method is applied to MQA, reactions range from "oh, but you can't do that" to "this shows you don't understand anything".

That aspect of @GoldenOne's test is essentially the only one that was vehemently criticized by the MQA defenders here, because it was the one tiny thing they could use to deflect attention from all the other glaring issues (vs. the marketing claims). And yet it is exactly what you did back then, except that he had to delegate the encoding to a third party because the encoders aren't publicly available.
 

mieswall

Member
Joined
Nov 10, 2019
Messages
65
Likes
112
Let me suffice it to say that "the extent people can't hear ultrasonics anyway" is absolutely zero. That is why they are called ultrasonics. ;) [...]

- Rich
Rich,
You need to read MQA's papers to understand why they capture ultrasonics (some of the reasons a bit dubious, I recognize, such as the non-audible perception of those frequencies). And, by the way, this argument would invalidate any sample rate higher than 44.1 kHz... I bet everybody here does believe HD files sound better than Redbook. One of the purposes (but not the only one) is to move the Nyquist frequency much higher in the spectrum, so that the phase-shift anomalies of the filters applied in ADC conversion do not fall in the audible region, thus obtaining an audible band with all harmonics in full time-coherence with their fundamentals. That's why they must work from masters for a better MQA file, by the way.

BTW, if these flawed tests by Archimago or GoldenOne show anything, it is precisely that they confirm some of the things MQA says in their papers:

1- The ripple anomaly in the square wave response is exactly what you would expect once you know that MQA assumes less dynamic range at higher frequencies (as by definition happens with all music) and therefore uses a smaller effective bit rate in that part of the spectrum (the rest of the 24-bit word being used as room for origami folds). If you model a square wave in a wave simulator and then start reducing the amplitude of its higher harmonics, that's almost exactly what you should get (see the sketch after this post).

2- The fact that Archimago obtained larger MQA files when the input was 16/44 shows that the areas most compressible for FLAC (void information) may in fact be used by MQA to store information coming from the upper folds of the origami, and so compress less. That, and the file being 24-bit deep instead of 16, of course...

3- The anomaly in Archimago's spectrum surrounding 22 kHz... which happens to be the Nyquist frequency (22.05 kHz in Redbook). One of the things MQA is trying to achieve is to eliminate the aliasing that ADCs normally generate around the Nyquist frequency. If presented with flawed information, the MQA process probably prefers to simply erase a tight band around the Nyquist frequency rather than leave aliasing artifacts that would be much more harmful.

4- I find the see-saw response (completely inaudible, btw) in his pure sine wave plot extremely interesting. It may show that MQA uses some kind of sigma-delta-like process in the conversion, as that pattern is typical of a significant bit reversing polarity around a predefined margin of the signal. I haven't seen this described in what I've read of MQA. If it isn't, GoldenOne may have inadvertently disclosed an interesting fact about how MQA works. I would test this at several frequencies to see what happens at each in this respect.
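(A sketch of the simulation point 1 describes: Fourier-synthesize a square wave, then roll off the higher harmonics and compare the ripple. It only reproduces the described experiment and says nothing about whether this is what MQA actually does; all parameters are arbitrary choices:)

```python
# Band-limited square wave via its odd harmonics; `rolloff` attenuates
# the higher harmonics by the given dB per octave above the fundamental.
import numpy as np

fs, f0 = 96000, 1000
t = np.arange(fs) / fs

def square(rolloff_db_per_octave: float) -> np.ndarray:
    out = np.zeros_like(t)
    for n in range(1, 48, 2):                    # odd harmonics < fs/2
        gain = 10 ** (-rolloff_db_per_octave * np.log2(n) / 20)
        out += gain * np.sin(2 * np.pi * n * f0 * t) / n
    return out

ideal = square(0.0)   # full-amplitude harmonics: Gibbs ripple at edges
soft = square(6.0)    # attenuated highs: rounded edges, altered ripple
print(ideal.max(), soft.max())
```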
 
Last edited:

mansr

Major Contributor
Joined
Oct 5, 2018
Messages
4,685
Likes
10,705
Location
Hampshire
Rich,
You need to read MQA's papers to understand why they capture ultrasonics (some of the reasons a bit dubious, I recognize, such as the non-audible perception of those frequencies). [...]
Are you feeling OK? That amount of nonsense in one person can't be healthy.
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,497
Am I correct that MQA is also restrictive in that it precludes additional REQ/PEQ processing with another D/A conversion?
If that is the case, I find that even tacit support by some requires serious introspection.

- Rich

Yep, I don't think you can apply DSP unless you get special permission to access a fully unfolding MQA DAC's chip and perhaps add an FPGA module that interfaces with it before the analogue output. Aside from one user earlier in the thread stating there are MQA DDCs out there (in Korea, I believe) that allow digital output, I see no way to maintain full unfold capability while applying PEQ in the digital domain. Unless I'm missing something blatantly obvious here.

MQA would never be applied to sources with digital clipping or overages, given their rigorous standards ;)
If MQA can figure out a royalty scheme, it will be coming to MP3 and satellite radio; after all, they are already working on screwing up CDs.

- Rich

For the longest time I didn't know the MQA-CD disaster existed. It seems mostly relegated to the Japanese market; certainly lots of audiophiles there.
 

levimax

Major Contributor
Joined
Dec 28, 2018
Messages
2,398
Likes
3,527
Location
San Diego
Rich,
I bet everybody here does believe HD files sound better than Redbook.
I bet not. Have you ever taken a 192/24 file, dithered it down to 44.1/16, and ABX'd it level-matched in something like Foobar2000's ABX comparator, reliably telling the difference? I have tried and cannot tell a difference, so I do not believe HD sounds better than Redbook.
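(A minimal sketch of that 192/24 to 44.1/16 conversion, assuming SciPy reads the 24-bit source as left-justified int32; the file names and the choice of plain TPDF dither, rather than noise shaping, are mine:)

```python
# 192/24 -> 44.1/16: polyphase resample, then TPDF-dither to 16 bits.
import numpy as np
from scipy.io import wavfile
from scipy.signal import resample_poly

fs, x = wavfile.read("master_192k24.wav")   # assumption: SciPy returns
assert fs == 192000                         # 24-bit PCM as int32
x = x.astype(np.float64) / 2 ** 31          # left-justified -> +/-1.0
y = resample_poly(x, 147, 640, axis=0)      # 192000 * 147/640 = 44100
dither = (np.random.rand(*y.shape) - np.random.rand(*y.shape)) / 2 ** 15
y16 = np.clip(np.round((y + dither) * 32767), -32768, 32767)
wavfile.write("redbook_44k16.wav", 44100, y16.astype(np.int16))
```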
 