
MQA: A Review of controversies, concerns, and cautions


garbulky

Major Contributor
Joined
Feb 14, 2018
Messages
1,510
Likes
829
I listened to some of those sample MQA files and compared them directly to the PCM versions. Both came from the same masters, found here: http://www.2l.no/hires/ I used a Mytek Liberty. This DAC is an MQA decoder. (Note: subjective, no DBT.) I didn't hear a difference. I only tried a few songs though.
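
If anyone wants to go beyond ears on this, here's a rough null-test sketch (filenames are placeholders; the two captures would need to be time-aligned and at the same sample rate first):

```python
import numpy as np
import soundfile as sf  # pip install soundfile

# Placeholder filenames -- substitute your own decoded captures.
a, fs_a = sf.read("track_pcm.flac")
b, fs_b = sf.read("track_mqa_decoded.flac")
assert fs_a == fs_b, "resample one file first"

n = min(len(a), len(b))
diff = a[:n] - b[:n]

rms = lambda x: np.sqrt(np.mean(np.square(x)))
# Residue relative to the reference, in dB; -inf means bit-identical.
print(f"null residue: {20 * np.log10(rms(diff) / rms(a[:n])):.1f} dB")
```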
 
Last edited:

watchnerd

Grand Contributor
Joined
Dec 8, 2016
Messages
12,449
Likes
10,416
Location
Seattle Area, USA
How do I get MQA plugins for ProTools so I can master stuff in MQA?

I've never seen this advertised, so I've no idea how any new music is going to be created with it.

(Tidal's MQA remastered streams, on the other hand, are almost surely driven by scripts, given there are a million of them now.)

Answered my own question: the MQA encoder is not available for retail sale; rumor is it runs about $20k.
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
It wasn't clear from what I read, and the posters (on a recording / pro audio forum) were tight-lipped about any details, citing NDA.

I was thinking, if it’s hardware, it’s more expensive to ship to @amirm for measurements.

;)
 
Last edited:

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,867
Likes
37,882
Is this encoder hardware or software?
There was an unsubstantiated claim that MQA has a cloud-based encoder for record companies to use when processing their back catalogs to MQA. My understanding is that it is mostly software, but of course MQA doesn't want to be clear about any of this. An MQA plug-in seems unlikely. Until the cloud-based encoder appeared (if that claim is true), you had to send files to MQA and they did the encoding for you.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,867
Likes
37,882
I listened to some of those sample MQA files and compared them directly to the PCM versions using a Mytek Liberty DAC. This DAC is an MQA decoder. (Note: subjective, no DBT.) I didn't hear a difference.
The Mytek DACs, I think including the Liberty, keep the same lazy minimum-phase filters in use once you engage MQA, whether the material is MQA or non-MQA. It's okay; the listening test done at McGill University indicated no difference either.
 

garbulky

Major Contributor
Joined
Feb 14, 2018
Messages
1,510
Likes
829
The Mytek DACs, I think including the Liberty, keep the same lazy minimum-phase filters in use once you engage MQA, whether the material is MQA or non-MQA. It's okay; the listening test done at McGill University indicated no difference either.
I assume you mean this isn't the way it's supposed to work? Is it supposed to be minimum phase for MQA and something else for non-MQA?
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,867
Likes
37,882
I assume you mean this isn't the way it's supposed to work? Is it supposed to be minimum phase for MQA and something else for non-MQA?
My memory of the full details is a bit hazy, but yes, something like what you described. Also, certain steps in a certain order change the output filter, but once you play an MQA file it leaves the output filtering in MQA mode. I don't have a Mytek, but I have seen the issue discussed on another forum.
 

watchnerd

Grand Contributor
Joined
Dec 8, 2016
Messages
12,449
Likes
10,416
Location
Seattle Area, USA
The Mytek DACs, I think including the Liberty, keep the same lazy minimum-phase filters in use once you engage MQA, whether the material is MQA or non-MQA. It's okay; the listening test done at McGill University indicated no difference either.

I've only listened to MQA half-way, i.e. through the software-only first unfold, and so far, across about half a dozen A/B tests, I have been able to identify MQA with high confidence.

Is listening to only the first unfold expected to leave the file 'more different'?
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,814
Likes
8,271
RE the "lazy minimum-phase filter": Some units do keep the MQA filter engaged for non-MQA material, because they have trouble switching filters on the fly without producing audible pauses or clicks between MQA and non-MQA tracks. Other units engage MQA filtering only for MQA tracks and can switch back and forth between filters on the fly with no apparent issues.

RE MQA as software vs hardware: MQA encoding is software, meaning it can be done via software app/plugin, or of course through dedicated hardware that runs the software. MQA decoding is software and hardware. Specifically, the first "unfold" happens in software. This unfold "restores" the 44.1kHz or 48kHz MQA file to 88.2 or 96kHz by decoding the ultrasonic info that is lossy-encoded and buried in the 15th and/or 16th bit of the encoded/"folded up" MQA file. This "restoration" is lossy.
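
For illustration only, here's a toy sketch of the general "bury data in the low bits" idea - not MQA's actual codec, just the mechanism (all names and numbers made up):

```python
import numpy as np

rng = np.random.default_rng(0)
audio = rng.integers(-2**15, 2**15, size=1000, dtype=np.int16)  # fake 16-bit audio
payload = rng.integers(0, 4, size=1000, dtype=np.int16)         # 2 bits per sample

buried = (audio & ~np.int16(0b11)) | payload  # bits 15-16 now carry payload
recovered = buried & 0b11                     # decoder side: read them back
assert np.array_equal(recovered, payload)

# Cost: the bottom 2 bits of the audio are overwritten, so a legacy
# (non-MQA) player hears them as a slightly raised noise floor.
```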

The second "unfold" has to happen in hardware, aka your DAC. This step upsamples the file to the max supported by the DAC for that type of file (e.g. 176, 192, or 384kHz), and applies a digital filter that allegedly is customized to your DAC.

Now, the first unfold can also be performed by the DAC/hardware as far as I know - the system can work with the first unfold in software or hardware. But the final unfold has to happen in hardware because it (allegedly) depends upon the specifics of the DAC, since the filter is (allegedly) customized to the DAC.

I say (allegedly) because folks have tested this and found that this final "custom" filtering applies the same filter across a large number of different DACs - meaning MQA's claim of hardware-customized filtering essentially is BS.

Also, it's important to understand that the upsampling that happens in the 2nd unfold also is BS - it literally is just upsampling. In other words, if the original PCM master file is 192kHz, MQA encoding downsamples it to 96k - in a lossy process that forever throws out 1/2 the samples - and then folds that downsampled 96k signal to 48k to create the MQA file. Then when the file is "unfolded," it's unfolded back to 96k, with again some loss. And then when it's "2nd unfolded" back to 192k, MQA is literally just doubling every sample in the 96k stream - none of the extra samples/data from the original 192k file remain.

So when you start with a 192kHz PCM file, MQA preserves 1/4 of the samples losslessly (for a 48kHz sample rate), and another 1/4 of the samples partially, in lossy format (for the sample rate above 48k and up to 96kHz) - and 1/2 of the samples are thrown out and lost entirely, forever (above 96k, up to 192kHz).
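
Or as plain arithmetic (sample-rate fractions of the 192kHz master):

```python
master = 192_000
kept_lossless = 48_000 / master           # baseband, kept exactly
kept_lossy = (96_000 - 48_000) / master   # folded band, lossy-encoded
discarded = (192_000 - 96_000) / master   # gone forever
print(kept_lossless, kept_lossy, discarded)  # 0.25 0.25 0.5
```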

This is putting aside the question of bit-depth, since MQA losslessly preserves only about 14 bits of the first 16 bits, and then reconstructs bits 17-24 in a lossy manner (because it's impossible to encode/store that data losslessly in only 2 bits of the 16-bit stream).

This is why a lot of folks say that MQA is effectively about a 17-bit, 96kHz medium.
 
Last edited:

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,757
Likes
242,241
Location
Seattle Area
Thanks for the nice explanation. I will comment on this part:
Also, it's important to understand that the upsampling that happens in the 2nd unfold also is BS - it literally is just upsampling. In other words, if the original PCM master file is 192kHz, MQA encoding downsamples it to 96k - in a lossy process that forever throws out 1/2 the samples - and then folds that downsampled 96k signal to 48k to create the MQA file. Then when the file is "unfolded," it's unfolded back to 96k, with again some loss. And then when it's "2nd unfolded" back to 192k, MQA is literally just doubling every sample in the 96k stream - none of the extra samples/data from the original 192k file remain.
Thinking over all the high-res samples I have examined, I cannot recall any that had usable content beyond 48 kHz. As such, it is perfectly proper to throw away the spectrum above that, and then, in the interest of maintaining whatever characteristics the original 192 kHz sampling provided, to upsample back out.
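
Anyone can check this on their own files; a quick sketch (the filename is a placeholder, e.g. one of the 2L 24/192 downloads):

```python
import soundfile as sf
from scipy.signal import welch

x, fs = sf.read("2l_sample_24_192.flac")  # placeholder filename
if x.ndim > 1:
    x = x.mean(axis=1)  # rough mono fold

f, psd = welch(x, fs=fs, nperseg=8192)
frac = psd[f > 48_000].sum() / psd.sum()
print(f"fraction of power above 48 kHz: {frac:.2e}")
```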

So in the context of perceptually preserving high-resolution content, what they do is valid.

It would only be an issue if they claimed it to be mathematically lossless, which MQA is not.

I ask again: why would we, as the objectivist group, get up in arms over the content above 48 kHz getting thrown out? Why would we be bothered that all the junk above that spectrum is discarded?

It is as if we want to wear the hat of subjectivism while using objectivism to critique it. :)
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,757
Likes
242,241
Location
Seattle Area
This is putting aside the question of bit-depth, since MQA losslessly preserves only about 14 bits of the first 16 bits, and then reconstructs bits 17-24 in a lossy manner.
Let's agree that anything above 20 bits is pure noise. Getting rid of that is goodness, as we know it is just encoded noise from the ADC. The question then becomes what is between 17 and 20 bits, and there I am perfectly fine with perceptual analysis and lossy coding of that.
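
To put rough numbers on it:

```python
# Ideal SNR of an N-bit quantizer with a full-scale sine: 6.02*N + 1.76 dB.
for bits in (16, 17, 20, 24):
    print(f"{bits} bits -> {6.02 * bits + 1.76:.0f} dB")
# 20 bits is already ~122 dB, beyond any real ADC/DAC, so bits 21-24 of a
# capture can only hold converter noise.
```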

The original AES paper made a very good case for all of this. But somewhere they lost the plot and started to claim or imply that MQA is an archival/lossless copy of the master. If they dial that back, I don't see anything wrong with MQA as an attempt to get > 16/44.1 but without a massive bandwidth hit.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,757
Likes
242,241
Location
Seattle Area
Bandwidth is a non-issue and will continue to become cheaper. MQA may reduce costs for the streaming companies, but it is of no benefit to the end consumer, who has to buy systems supporting this proprietary format to simply (at best) stand still.
Solving the issue for the service providers is a big deal. As we have seen, they are super reluctant to linearly increase their streaming costs for a non-linear/much smaller benefit to consumers. Using the first sample in the 2L library, we see this:

[Attached image: file-size comparison for the first 2L sample - 16/44.1 FLAC, MQA, and 24/192 FLAC]


For a service provider to switch from CD quality to 24/192, they would have to increase their bandwidth by a factor of 155/19 ≈ 8. That is a big hit to their costs.

For a service provider to get the "24/192" moniker while only increasing bandwidth by 46/19 ≈ 2.4 is a major improvement.
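
Spelling out that arithmetic from the table's numbers (19, 46, and 155, same units):

```python
sizes = {"16/44.1 FLAC": 19, "MQA": 46, "24/192 FLAC": 155}
cd = sizes["16/44.1 FLAC"]
for fmt, size in sizes.items():
    print(f"{fmt}: {size / cd:.1f}x CD-quality bandwidth")
```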
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,814
Likes
8,271
Thanks for the nice explanation. I will comment on this part:

Thinking over all the high-res samples I have examined, I cannot recall any that had usable content beyond 48 kHz. As such, it is perfectly proper to throw away the spectrum above that, and then, in the interest of maintaining whatever characteristics the original 192 kHz sampling provided, to upsample back out.

So in the context of perceptually preserving high-resolution content, what they do is valid.

It would only be an issue if they claimed it to be mathematically lossless, which MQA is not.

I ask again: why would we, as the objectivist group, get up in arms over the content above 48 kHz getting thrown out? Why would we be bothered that all the junk above that spectrum is discarded?

It is as if we want to wear the hat of subjectivism while using objectivism to critique it. :)

Amir, I don't disagree with the individual statements here, but the way you have put them together does not make sense to me.

Throwing out content above 48kHz, aka downsampling from 192kHz to 96kHz: I agree with you - there's no need for those ultrasonic frequencies. But then the question is, once the samples are discarded, what possible "characteristics the original 192 kHz sampling provided" could there be? In other words, once you've downsampled from 192 to 96, what on earth is the point of re-upsampling to 192? What "perceptual" preservation are you talking about? The only "preservation" of 192k that MQA provides is the color of the little LED light on MQA DACs, and a "192k" readout for musical streams that in actuality are only 96k. To me that reads like marketing and fraud rather than "perceptual preservation."

The point here is that it is not me (and others critical of MQA) who is saying that a 192k sample rate is worthwhile - it is MQA that is saying that. Without the underlying idea that 24/192k resolution is important, MQA becomes pointless.

In other words, MQA files are, as far as I know, a little larger than 16-bit, 96kHz FLAC files and a little smaller than 24/96 FLAC files, on average. If that's the case, then why not just go with 24/48 or 24/96 FLAC files? The bandwidth/streaming requirements would be more or less the same.

In that case, the only remaining argument for MQA is that their files sound better than 24/48 or 24/96 FLAC because they "correct" for the sonic signatures of the ADCs used in the recording process, and for the specific characteristics of the DAC used during playback. The former claim makes sense only when the ADC is known, and not for a modern multitrack recording that uses multiple ADCs and multiple A-D-A-D steps (as in the very common rock mastering practice of re-amplifying/re-recording certain tracks in a multitrack). And the latter claim has been shown to be largely BS, as MQA DACs using a variety of DAC chips, from a variety of manufacturers, all have the same small number of filters applied during the final unfold - meaning the filters are not "customized" but instead merely reflect Stuart/MQA's subjective preference for minimum-phase, apodizing filters.
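
For anyone curious what a minimum-phase filter looks like next to its linear-phase parent, here's a generic scipy sketch - nothing MQA-specific, just the standard construction:

```python
from scipy.signal import firwin, minimum_phase

lin = firwin(127, 0.45)    # 127-tap linear-phase FIR lowpass, cutoff 0.45*Nyquist
minp = minimum_phase(lin)  # minimum-phase counterpart, ~half the taps

# Linear phase rings symmetrically around the center tap (pre-ringing);
# minimum phase pushes all ringing after the peak -- the trade-off behind
# Stuart/MQA's stated filter preference.
print(len(lin), len(minp))          # 127 64
print(lin.argmax(), minp.argmax())  # peak at the center vs. near the start
```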
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,814
Likes
8,271
Let's agree that anything above 20 bits is pure noise. Getting rid of that is goodness, as we know it is just encoded noise from the ADC. The question then becomes what is between 17 and 20 bits, and there I am perfectly fine with perceptual analysis and lossy coding of that.

The original AES paper made a very good case for all of this. But somewhere they lost the plot and started to claim or imply that MQA is an archival/lossless copy of the master. If they dial that back, I don't see anything wrong with MQA as an attempt to get > 16/44.1 but without a massive bandwidth hit.

You neglect to mention that MQA does not simply lossy-encode bits 17-24 - they also have to store that lossy-encoded data somewhere. And for compatibility reasons they store that encoded data within the first 16 bits of bit depth.

So what you have with an MQA file is non-random noise (in other words, something that is not dither) embedded in the 15th and 16th bits, plus lossy-encoded bits 17-24.
 

mansr

Major Contributor
Joined
Oct 5, 2018
Messages
4,685
Likes
10,705
Location
Hampshire
In other words, MQA files are, as far as I know, a little larger than 16-bit, 96kHz FLAC files and a little smaller than 24/96 FLAC files, on average. If that's the case, then why not just go with 24/48 or 24/96 FLAC files? The bandwidth/streaming requirements would be more or less the same.
MQA files are between 24/48 and 24/96 FLAC in size. 24/96 dithered to 18 bits and compressed with FLAC is somewhat smaller than MQA.
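
That comparison is easy to reproduce; a sketch (filenames are placeholders):

```python
import numpy as np
import soundfile as sf

x, fs = sf.read("master_24_96.flac")  # placeholder 24/96 source
lsb = 2.0 ** -17                      # 18-bit step size for a +/-1.0 range
# TPDF dither at +/-1 LSB, then quantize to the 18-bit grid.
tpdf = (np.random.rand(*x.shape) - np.random.rand(*x.shape)) * lsb
y = np.clip(np.round((x + tpdf) / lsb) * lsb, -1.0, 1.0 - lsb)
sf.write("dithered_18bit_96k.flac", y, fs, subtype="PCM_24")
# The bottom 6 of the 24 bits are now constant zeros; FLAC's "wasted bits"
# handling compresses them away, which is how the file can come in smaller.
```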
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
I ask again: why would we, as the objectivist group, get up in arms over the content above 48 kHz getting thrown out? Why would we be bothered that all the junk above that spectrum is discarded?

It is as if we want to wear the hat of subjectivism while using objectivism to critique it. :)

@amirm, you're close to making a straw man here. Nobody claims that they puke when listening to 48 or 44 kHz. The up-in-arms resistance movement started as a response to MQA's claims that their algorithm could improve upon existing master files (thus pissing off the mastering engineers) and their other claims targeting audiophiles, taking for granted in the process that audiophiles and audiophools are the same.

The technology has its merits. Nobody disqualifies a technology per se, but Meridian didn't stop with their initial AES paper in their attempt to market the technology.

Add to this the lack of transparency, the bullying behaviour, and the fact that record labels own MQA stock. Further, quotes about "world domination" piss off the libertarians. Looking at the technology in isolation, disregarding MQA's behaviour and goals, is counterproductive.

And at the end of the day, we already had efficient encoding techniques for minimizing bandwidth.

Today, we stream 4K TV programs. Audio is not bandwidth-hungry compared to TV. And none of the broadcasters or streamers own MQA stock, lending little support to your bandwidth hypothesis.

If an MQA discussion is about technology only, one misses the picture, which is more about industry dynamics than kHz and bits.

You should know this very well, given your knowledge of how the industry made HDMI a closed technology rather than an open one.
 

garbulky

Major Contributor
Joined
Feb 14, 2018
Messages
1,510
Likes
829
The second "unfold" has to happen in hardware, aka your DAC. This step upsamples the file to the max supported by the DAC for that type of file (e.g. 176, 192, or 384kHz), and applies a digital filter that allegedly is customized to your DAC.

Now, the first unfold can also be performed by the DAC/hardware as far as I know - the system can work with the first unfold in software or hardware. But the final unfold has to happen in hardware because it (allegedly) depends upon the specifics of the DAC, since the filter is (allegedly) customized to the DAC.

I say (allegedly) because folks have tested this and found that this final "custom" filtering applies the same filter across a large number of different DACs - meaning MQA's claim of hardware-customized filtering essentially is BS.

Also, it's important to understand that the upsampling that happens in the 2nd unfold also is BS - it literally is just upsampling. In other words, if the original PCM master file is 192kHz, MQA encoding downsamples it to 96k - in a lossy process that forever throws out 1/2 the samples - and then folds that downsampled 96k signal to 48k to create the MQA file. Then when the file is "unfolded," it's unfolded back to 96k, with again some loss. And then when it's "2nd unfolded" back to 192k, MQA is literally just doubling every sample in the 96k stream - none of the extra samples/data from the original 192k file remain.

Do you have any evidence of this regarding the second unfold?
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,757
Likes
242,241
Location
Seattle Area
Amir, I don't disagree with the individual statements here, but the way you have put them together does not make sense to me.

Throwing out content above 48kHz, aka downsampling from 192kHz to 96kHz: I agree with you - there's no need for those ultrasonic frequencies. But then the question is, once the samples are discarded, what possible "characteristics the original 192 kHz sampling provided" could there be? In other words, once you've downsampled from 192 to 96, what on earth is the point of re-upsampling to 192? What "perceptual" preservation are you talking about? The only "preservation" of 192k that MQA provides is the color of the little LED light on MQA DACs, and a "192k" readout for musical streams that in actuality are only 96k. To me that reads like marketing and fraud rather than "perceptual preservation."
It is a presentation issue. Again, the market for high-resolution audio is people who think they need high-resolution audio. Any 192 kHz content downsampled to 96 kHz would be scoffed at by that audience. By upsampling back to 192 kHz, whatever benefit people think there is in the higher sample rate is preserved.

There are sites that sell 96 kHz content at lower prices than 192 kHz, for example. At least that is what I remember, so don't challenge me on that. :)
 