
MQA Deep Dive - I published music on tidal to test MQA

Status
Not open for further replies.

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,948
Likes
2,617
Location
Massachusetts
Forget the “3 bits” that people are mentioning. It’s a red herring (or maybe the sound of axes grinding). I already described in message https://www.audiosciencereview.com/...-music-on-tidal-to-test-mqa.22549/post-759747 how, with a digital audio recording of actual music, it is possible to create a hidden data channel in the least significant bits without losing resolution or “bits.” So forget about MQA for now and consider the following thought experiment (which has nothing to do with “deblurring,” “leaky” reconstruction filters, B-splines, etc):
....
But again, to talk about “losing 3 bits” or “truncating” the audio data is incorrect.

John Atkinson
Technical Editor, Stereophile


There seems to be a blindness to the very real concern that CD quality is under assault; the true master is under assault by a company that claims to preserve the "crown jewels". We had the crown jewels, and MQA wants to remove them.

The answer seems to be: just pay the royalty, it will be fine.
Who cares about the crazy computer audiophiles; we long for the days of the record store. Good grief.

The red herrings are swimming over at MQA inc., masquerading as key features.

The proponents of MQA don't seem to acknowledge the damage that will be done if MQA becomes the only master: the loss of resolution, the added noise, the ultrasonics added to the sources. We may soon have new titles offered with less than 44.1/16 resolution for the first time since the CD became mainstream.

MQA is so bad, it doesn't even dare call itself HD Audio. That's quite a statement from a company that has proven to be a prevaricator.

Without MQA hardware/software, or with MQA bypassed to allow for processing freedom, exactly how does a file with "3 bits" used for MQA folding get those bits back?

What miracle has the exact same bit serving both functions?

I prefer my herring pickled as well as simple answers to simple questions.

- Rich
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,447
Likes
4,805
As I wrote, if the starting point is a 16-bit file, there is much less information space in which to embed a hidden data channel beneath the analog noise floor.

Well, the cost is given as 0.07 bit of DR per MQA's answers to Stereophile, provided you have a MQA decoder.
https://www.stereophile.com/content/mqa-questions-and-answers-bit-depth-mqa
That is a statement that, per the usual constraints, can't be verified anyway.
No mention of the DR loss on standard CDs, but @mansr 's estimate seems to be accurate (as a best case).

I don't think anyone disputes that you can cleverly repurpose bits in a 96kHz 24-bit file... but I don't really care about Hi-Res; the market has been shady enough (oversampled old recordings sold at a higher price) that it doesn't need another layer of shadiness imho.

Now, back to people expecting (and being promised/paying for) CD 44.1 kHz 16-bit data. They could be a bit unhappy when they only get 13 or 13.5 bits of DR on their non-MQA devices out of a stream that has been covertly converted to MQA or if MQA CD becomes the norm. Or if they are forced into MQA CDs...

If MQA becomes the standard, customers with MQA hardware will only lose 0.07 bit of DR (per marketing) but will be paying more (indirectly yes, but definitely more). Customers without MQA hardware will lose at least 2.5 bits of DR.
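For readers who want to sanity-check these figures, here is a minimal sketch (my own arithmetic, not MQA's or PierreV's) using the standard rule of thumb for an ideal PCM quantizer, SNR ≈ 6.02·N + 1.76 dB:

```python
# Back-of-envelope dynamic-range arithmetic behind the bit counts above.

def pcm_snr_db(bits: float) -> float:
    """Theoretical SNR of an ideal N-bit quantizer (full-scale sine)."""
    return 6.02 * bits + 1.76

cd = pcm_snr_db(16)              # ~98.1 dB for plain 16-bit CD audio
undecoded = pcm_snr_db(16 - 2.5) # losing ~2.5 bits on non-MQA gear
decoded = pcm_snr_db(16 - 0.07)  # MQA's claimed 0.07-bit cost when decoded

print(f"16-bit CD:     {cd:.1f} dB")
print(f"undecoded MQA: {undecoded:.1f} dB ({cd - undecoded:.1f} dB worse)")
print(f"decoded MQA:   {decoded:.1f} dB ({cd - decoded:.1f} dB worse)")
```

By that rule, losing 2.5 bits costs about 15 dB of dynamic range, which is where the "13 or 13.5 bits" complaint comes from.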

How is that progress for customers?
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,447
Likes
4,805
Note @John Atkinson makes no claims as to the efficacy of the entire convoluted MQA process in bringing about any measurable parameter improvement in the delivery of digital audio to consumers.

I am sure the answer was given at some point, but since it is the least significant factor and certainly wasn't the intent of the artist who created MQA, it has been scrambled and moved below our reading threshold.
 

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,948
Likes
2,617
Location
Massachusetts
MQA Goal: Extract royalties
Pitch to record labels: preserve the crown jewels
Pitch to audiophile press: reduced blurring and superior sound (and you wonder why we call them gullible).

But then, money was running low.
Pitch to Dorsey: control and money, the things he loves most.

Never fear, the defenders will explain why technically it's not that bad; just pay the royalties.
Now, who exactly are the rubes? ;)

- Rich
 

Sir Sanders Zingmore

Addicted to Fun and Learning
Forum Donor
Joined
May 20, 2018
Messages
970
Likes
2,003
Location
Melbourne, Australia
If you read their patents, they describe some of the DRM capabilities, but the company has since claimed they will never use them. Personally, I don't trust them and am continuing to build my personal library with cheap used CDs. My guess is streaming is going this route (DRM, higher prices, less choice, etc.) whether it is MQA or something else.

If you believe they will never use them (or at least never try), I have a bridge to sell you.
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,670
Likes
38,760
Location
Gold Coast, Queensland, Australia
Keep buying those original early CDs from the thrift stores for a few cents, gentlemen.

Stockpile those rarities to sell for a fortune to the next generation of hipsters who "rediscover" the purity of unmolested 16-bit audio made in the olden days, when motives were pure and technology improved with each step.

Trades will take place in dark alleys for the "pure stuff". The MQA police will be looking for stashes of CDs with perpetual playing rights attached. Elaborate stings will take place to get large collections out of circulation and off the secondary (royalty free) market.

Our dystopian music future. ;)
 

mieswall

Member
Joined
Nov 10, 2019
Messages
65
Likes
111
Forget the “3 bits” that people are mentioning. It’s a red herring (or maybe the sound of axes grinding). I already described in message https://www.audiosciencereview.com/...-music-on-tidal-to-test-mqa.22549/post-759747 how, with a digital audio recording of actual music, it is possible to create a hidden data channel in the least significant bits without losing resolution or “bits.” So forget about MQA for now and consider the following thought experiment (which has nothing to do with “deblurring,” “leaky” reconstruction filters, B-splines, etc):

Imagine that I have a 24-bit audio file of the music from which I extracted that room tone recording mentioned earlier, sampled at 2Fs (88.2kHz). I would like to create a version of that file that will play with a baseband sample rate (44.1kHz) in systems with antique D/A converters but also play at the original 88.2kHz sample rate in my big rig.

I take that 24-bit file and using a complementary pair of low- and high-pass digital filters, I split it into two 24-bit files: one containing content below 22.05kHz so that it can now be considered as having an effective sample rate of 44.1kHz; the other containing content from 22.05kHz to 44.1kHz. As long as the filters used are of a specific type, the band splitting will be transparent.
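As an illustration only (the specific complementary filters this would require are not reproduced here), the following toy sketch shows the defining property of a complementary pair: the high band is exactly what the low band leaves behind, so the two bands sum back to the input sample-for-sample. The `split_bands`/`recombine` names and the crude 2-tap lowpass are invented for the example.

```python
# Toy complementary band split: lp + hp reconstructs the input exactly,
# by construction. A real design would use proper halfband filters.

def split_bands(x):
    """Split samples into a crude low band and its exact complement."""
    lp = [x[0]] + [(x[i] + x[i - 1]) / 2 for i in range(1, len(x))]
    hp = [xi - li for xi, li in zip(x, lp)]   # whatever the lowpass removed
    return lp, hp

def recombine(lp, hp):
    return [li + hi for li, hi in zip(lp, hp)]

x = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
lp, hp = split_bands(x)
assert recombine(lp, hp) == x   # perfect reconstruction
```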

I examine the spectrum of the background analog noise in the baseband file and calculate that I can create a hidden data channel in the 5 LSBs (bits 19-24), which are 2 bits (12dB) below the lowest amplitude of the audio data. I then examine the spectrum of the 2Fs file. I find that, as expected, the ultrasonic content both has a self-similar spectrum that declines in amplitude with increasing frequency and is at a low level. The level is so low, in fact, that the actual quantization is close to 5 bits.

So, if I encrypt the 5-bit/2Fs data as pseudorandom noise with a spectrum identical to the background noise in the baseband file, I can bury those data in the hidden 5-bit data channel. I now have a single 24-bit file sampled at 44.1kHz that will playback with the same audio quality as the original file (other than the low-pass filtering at 22.05kHz).

For playback in the big rig, a flag that I have embedded in the file’s metadata tells the D/A processor that it has to extract and de-encrypt the 5-bit audio data in the hidden channel. It then upsamples those data to 2Fs, attenuates the data to the level in the original file – this pushes the 5-bit quantization noise below the original background noise floor – and adds the result to the baseband file that has also been upsampled to 2Fs.
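A minimal sketch of the hidden-channel round trip described in the last few paragraphs, with entirely hypothetical names and parameters (`embed`, `extract`, `PAYLOAD_BITS`, the seed, and the XOR keystream scrambling are all invented for illustration; this is not MQA's or Atkinson's actual scheme): a 5-bit payload is scrambled against a seeded pseudorandom keystream so it resembles noise, buried in the 5 LSBs of 24-bit samples, and recovered with the same key.

```python
import random

PAYLOAD_BITS = 5    # hypothetical width of the hidden channel
SEED = 0xC0FFEE     # hypothetical shared key for the noise-like scrambling
MASK = (1 << PAYLOAD_BITS) - 1

def embed(samples_24bit, payload_5bit, seed=SEED):
    """Overwrite the 5 LSBs of each 24-bit sample with scrambled payload."""
    rng = random.Random(seed)
    out = []
    for s, p in zip(samples_24bit, payload_5bit):
        scrambled = p ^ rng.getrandbits(PAYLOAD_BITS)   # looks like noise
        out.append((s & ~MASK) | scrambled)
    return out

def extract(samples_24bit, seed=SEED):
    """Recover the hidden 5-bit values using the same keystream."""
    rng = random.Random(seed)
    return [(s & MASK) ^ rng.getrandbits(PAYLOAD_BITS)
            for s in samples_24bit]

audio = [0x123456, 0x7FFFFF, 0x000020, 0x400000]
hidden = [3, 31, 0, 17]            # 5-bit "ultrasonic" data to bury
stego = embed(audio, hidden)
assert extract(stego) == hidden    # round trip recovers the payload
assert all((a >> PAYLOAD_BITS) == (s >> PAYLOAD_BITS)
           for a, s in zip(audio, stego))   # upper 19 bits untouched
```

Note the trade: without the key, the 5 LSBs are indistinguishable from noise, but a legacy decoder simply reproduces them as a slightly raised noise floor.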

In theory, I am playing back the 24-bit baseband file as if it were a 2Fs file with no loss of bandwidth or information or resolution or “bits.” (That would be the “thought experiment” equivalent of the “MQA Stereo original resolution” files in the 2L screenshot you included in your post.)

The devil, of course, lies in the details. How do I encrypt the low-bit-rate content between 22.05kHz and 44.1kHz so it resembles pseudorandom noise? I have no idea, even though I had discussions with the late Michael Gerzon about this back in the day. What if the starting point is a 16-bit file, where there is much less information space in which to embed a hidden data channel beneath the analog noise floor? (That is the “thought experiment” equivalent of the “MQA-CD” files in your 2L screenshot.) Again, I don’t know. What if the statistics of the original audio don’t conform with the self-similar spectrum that I am expecting? That, of course, is how GoldenOne “broke” the encoder.

But again, to talk about “losing 3 bits” or “truncating” the audio data is incorrect.

John Atkinson
Technical Editor, Stereophile

@John Atkinson: What a pleasure it is to read a knowledgeable, unbiased post here. A brilliant explanation, in both posts, of this nonsense of the 13-bit mantra of some. Thank you!

That was mainly about encapsulating data in the noise band of the file. The other issue that I find less clear in MQA's articles is how they make use of the recovered space in the upper region of the bandwidth and in the ultrasonics, above the maximum amplitudes of music (tiny above 20 kHz). They go in depth in their papers explaining how you see lower amplitudes in music the higher you move in frequency (based on statistical analysis of real music, though I would say any Fourier decomposition of the harmonic content of any real instrument also endorses this idea). They even graph an upper limit of captured amplitudes in that famous "orange triangle of music" they show in their articles every time. But even if they are aware that no information is to be captured in the ADC process above that dynamic envelope of music, that empty space cannot be used (at least below 24 kHz) without losing compatibility with Redbook.

You demonstrated that, because of the noise floor, fewer bits are necessary in the lower bands, while perhaps close to the full 16 bits would be required in the vicinity of 12 kHz. But then, that part of the spectrum is well beyond the highest possible note, and there is no instrument with big-amplitude harmonics there or above. So, theoretically, even more space could be regained for packing ultrasonic information... if not for the Redbook compatibility issue (or I may be missing something here, and that upper amplitude space could also be recovered in some way?)

My speculation: they may use that empty space to allow a set of special soft-slope filters (perhaps some kind of Bézier or adaptive type) that reach the now-higher Nyquist frequency (48 kHz or 96 kHz) much more gently, so as not to shift the phases of high-frequency content while also completely avoiding aliasing.
While a 300-400 dB/octave anti-aliasing filter could be required for Redbook (which I understand is nearly impossible, so some degree of aliasing must be allowed in Redbook), if instead they move this possible set of soft-slope filters well back into the audio band (without sonic penalty, as there are still no high-amplitude harmonics there), they could make room for the one or two octaves above 20 kHz (of the higher sampling rate), plus another one or two gained by this displacement into the audible region. That, paired with the controlled slope of the filters used (and the parameters of those Bézier curves may even be tuned by the statistical analysis of content prior to conversion), may totally avoid the time smearing of harmonics and, consequently, achieve the much better time response they claim, as those are the main purposes of MQA. In fact, the anomalies shown in those square-wave reconstruction tests could be showing just that: the effect of these low-pass filters when processing test tones instead of music.

MQA says that some two thousand possible processes are applied in the quantization, selected by the analysis of content prior to conversion. While some of them are obviously a function of the type of input being processed (analog, high-res digital masters, low-res digital), it is more or less evident that no simple answers are possible here. Anyway, I would love to read what you think about MQA's treatment of that region at high and ultrasonic frequencies: how they implement that "triangle of music" they intend to capture in the process.
 
Last edited:

Don Hills

Addicted to Fun and Learning
Joined
Mar 1, 2016
Messages
708
Likes
464
Location
Wellington, New Zealand
As I wrote, if the starting point is a 16-bit file, there is much less information space in which to embed a hidden data channel beneath the analog noise floor. I don’t know how that is handled. But, of course, if the starting point is sampled at 44.1kHz or 48kHz, there are no ultrasonic data to embed in a hidden data channel. ...

I would really like to see an analysis of MQA encoding of 16/44.1 files. As you point out, there's no ultrasonic data to encode. There's also no > 16 bit content to save. To my simple mind, there's no audible benefit to MQA processing of 16/44.1, so the benefits must lie elsewhere.
 

TurbulentCalm

Member
Joined
Mar 18, 2021
Messages
82
Likes
198
Location
Australia
As I wrote, if the starting point is a 16-bit file, there is much less information space in which to embed a hidden data channel beneath the analog noise floor. I don’t know how that is handled. But, of course, if the starting point is sampled at 44.1kHz or 48kHz, there are no ultrasonic data to embed in a hidden data channel.

John Atkinson
Technical Editor, Stereophile

John, thanks again for your reply and I’m still working through it but …

… if the starting point is sampled at much higher rates and also with an extra 8 bits per sample, as with the 2L tracks, I’m still at a complete loss as to how MQA could shrink all that down to an MQA-CD and still be able to decode it back to its original format without considerable loss.

To me it’s like MQA claiming that their music codec can produce infinite power because they developed a software version of perpetual motion with above-unity output.

John, I really think that, if anything, this thread has proven that MQA has never provided the information required by technically minded yet sceptical audiophiles to have peace of mind that their audio is as good as it should be.

Along with the licensing issues, the special MQA-approved DACs, and the concern that CD-quality FLAC might be replaced with MQA-CD encoded FLAC, there is real suspicion about what MQA's future plans and impact might be for all consumers of audio.

I think those of us who value the quality of our audio data have a right to remain concerned about the impact of MQA and until MQA grows a pair and opens up about what they are really doing, we can only continue to test these claims through efforts such as @GoldenOne has done at the start of this thread.
 

Rusty Shackleford

Active Member
Joined
May 16, 2018
Messages
255
Likes
550
I'm really sorry about the term "luminaries", used in a hurry and with English as my second language. You may call them "brilliant engineers", "world-class engineers", or just "smart guys". Please choose the one that is least disturbing to you. In the same fashion, what I called a "hunch" you may better call a "hypothesis".

The Audio Engineering Society, probably the world's most important organization in audio engineering, has in almost 75 years of history awarded 35 people its Gold Medal Award. You may know some names among them: Georg Neumann, Willi Studer, Claude Shannon, Ray Dolby, Floyd Toole, Rudy Van Gelder. Among them is also Michael Gerzon, the "father" of most of the foundational patents regarding MQA, and I believe also posthumously named in the MQA patent. He was a recognized genius in his time, who sadly died at the age of 50 because of a health condition.
Gerzon's close collaborator was Peter Craven (the one people here seem to believe has lost every remaining neuron in his head, with what surely must be a kind of contagious disease), with whom he co-authored most of those patents. Both were the core of the audio engineering department at Oxford University, one of the world's most renowned research institutions in audio.

The one in the center of this photo from the early 70s is Ray Dolby (*), surrounded by Craven and Gerzon in their twenties, already famous for their achievements in audio research. Dolby went to Oxford to discuss with them his patents on surround sound systems, in which Gerzon and Craven would have participated had the British government not cut research funds at the time. By then they had already done key research on noise shaping and digital systems analysis, had developed the Ambisonics field recording technology, and had invented the first Ambisonics microphone.
(*): another excuse: I called him "Thomas" instead of "Ray" in a previous post.

View attachment 126468

Once again, this is appeal to authority, not appeal to facts. Consumers aren’t paying to listen to their CVs. No one cares one whit whether a “luminary” or a rando online created a great technology. The theoretical brilliance of science is that ideas can (and should) be evaluated apart from the people who created them. It doesn’t matter what they did or didn’t do in the past.
 

Mountain Goat

Active Member
Forum Donor
Joined
Apr 10, 2020
Messages
188
Likes
295
Location
Front Range, Colorado
Keep buying those original early CDs from the thrift stores for a few cents, gentlemen.

Stockpile those rarities to sell for a fortune to the next generation of future hipsters who "rediscover" the purity of un-molested 16 bit audio made in olden days when motives were pure and technology improved with each step.

Trades will take place in dark alleys for the "pure stuff". The MQA police will be looking for stashes of CDs with perpetual playing rights attached. Elaborate stings will take place to get large collections out of circulation and off the secondary (royalty free) market.

Our dystopian music future. ;)

Now you're making me glad I kept those hundreds of CDs.
 

mieswall

Member
Joined
Nov 10, 2019
Messages
65
Likes
111
… if the starting point is sampled at much higher rates and also with an extra 8 bits per sample, as with the 2L tracks, I’m still at a complete loss as to how MQA could shrink all that down to an MQA-CD and still be able to decode it back to its original format without considerable loss.

Because in the ultrasonic band the content of music (just harmonics of increasingly lower magnitude) is NOT 24 bits deep but, as Atkinson's post says, about 5 bits, decreasing even further the higher you move in frequency. The rest is noise below and empty bits above. And so they don't need to pack 24 bits below the noise of the first 24 kHz band, but much, much less (and btw, this is not mysterious, hidden information; it is explained by MQA in their documents in almost annoying detail).
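To put rough numbers on that claim (my own back-of-envelope figures, not MQA's): at roughly 6 dB per bit, content peaking at, say, -90 dBFS over a -120 dBFS noise floor only exercises about 5 bits of quantization, however deep the container is.

```python
# How many quantization bits does content at a given peak level actually
# span above a given noise floor? Rule of thumb: ~6.02 dB per bit.

def effective_bits(peak_dbfs: float, noise_dbfs: float) -> float:
    return max(0.0, (peak_dbfs - noise_dbfs) / 6.02)

# Hypothetical ultrasonic harmonics at -90 dBFS over a -120 dBFS floor:
print(effective_bits(-90.0, -120.0))   # ~5 bits, regardless of a 24-bit container
```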

Unless, of course, your signal is large-amplitude white noise or a square wave, which DOES have very high-amplitude harmonics in the ultrasonics (and also in the upper region of the audible bandwidth). Which, btw, is the reason Amir's DAC tests extend the measurement bandwidth into the ultrasonics when measuring square waves. That's why MQA performs badly with that type of signal: it is not intended to process it. One of the many reasons why these tests are flawed.

Btw, I can't figure out which is worse: that Archimago and GoldenOne didn't know that (almost unbelievable), or that they knew it and even then prepared a test file knowing the type of failures they would get.
 
Last edited:

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,670
Likes
38,760
Location
Gold Coast, Queensland, Australia
That's why MQA performs badly with that type of signal: it is not intended to process it. One of the many reasons why these tests are flawed.

Btw, I can't figure out which is worse: that Archimago and GoldenOne didn't know that (almost unbelievable), or that they knew it and even then prepared a test file knowing the type of failures they would get.

If the MQA process cannot deal with any waveform type or shape that fits within the bandwidth it supposedly functions within, then it is, by definition, broken itself. No amount of handwaving can hide that.

PCM can accurately describe white noise, pink noise, single-sample impulses, square waves, sines, whatever you want within its rated range, pretty much perfectly. Strap on an MQA encoder and suddenly it's not fair? What a joke.

Bandwidth is not a problem in 2021. The MQA premise is obsolete, unnecessary, destructive, and just not needed (or wanted).
 
Last edited:

mieswall

Member
Joined
Nov 10, 2019
Messages
65
Likes
111
If the MQA process cannot deal with any waveform type or shape that fits within the bandwidth it supposedly functions within, then it is, by definition, broken itself. No amount of handwaving can hide that.

PCM can accurately describe white noise, pink noise, single-sample impulses, square waves, sines, whatever you want within its rated range, pretty much perfectly. Strap on an MQA encoder and suddenly it's not fair? What a joke.

Bandwidth is not a problem in 2021. The MQA premise is obsolete, unnecessary, destructive, and just not needed (or wanted).

And why do you want it to process any waveform? Do you want to use MQA to decode the photos Perseverance sent from Mars? It is MUSIC that this is intended to process. What matters to you: how it sounds, or how it measures with any possible waveform?
If you look at partial reasons, you will get wrong answers: they do it not only to pack all the relevant information (but not the irrelevant) into more or less the size of Redbook, but also to allow the use of filters that otherwise wouldn't have room to apply without smearing the data.
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,190
Likes
1,533
Location
USA
And why do you want it to process any waveform? Do you want to use MQA to decode the photos Perseverance sent from Mars? It is MUSIC that this is intended to process. What matters to you: how it sounds, or how it measures with any possible waveform?
If you look at partial reasons, you will get wrong answers: they do it not only to pack all the relevant information (but not the irrelevant) into more or less the size of Redbook, but also to allow the use of filters that otherwise wouldn't have room to apply without smearing the data.

[Irrelevant note: I think your English as a second language is extraordinarily good. Better than many educated native speakers here in the US.]

I’m still missing the point of all of this defense of MQA. Yes, its implementation is technically interesting. So what? It provides no benefits anyone has mentioned to us as consumers of music. Only added costs and limitations. No additional proven increase in fidelity. Can anyone name a single consumer benefit in defense of MQA? I’ve read this entire thread, and the detractors do a good job pointing out problems, but the supporters, if I may call you and JA that, have only explained how the unwanted innovation might work.
 

mieswall

Member
Joined
Nov 10, 2019
Messages
65
Likes
111
[Irrelevant note: I think your English as a second language is extraordinarily good. Better than many educated native speakers here in the US.]

I’m still missing the point of all of this defense of MQA. Yes, its implementation is technically interesting. So what? It provides no benefits anyone has mentioned to us as consumers of music. Only added costs and limitations. No additional proven increase in fidelity. Can anyone name a single consumer benefit in defense of MQA? I’ve read this entire thread, and the detractors do a good job pointing out problems, but the supporters, if I may call you and JA that, have only explained how the unwanted innovation might work.

Thank you! The iPhone translator helps sometimes... :)

Because what they are doing is very interesting, and it is a shame that people trash it just because they are told it could be worse (without even hearing it!), just from reading these absurdities about lossiness, 13 bits, badly rendered square waves, etc. Also because I think this encapsulation is a way to make a healthy music industry, allowing labels to deliver the best possible quality and still have their assets protected, and I really, really love music. Because with a tenth of the money I used to spend on music, I can now have an almost infinite library of even better quality (with Tidal, and I hope soon with others). But most of all... because when rendered by hardware it sounds so freakishly good that sometimes (not always, of course) it is almost unbelievable.
 