
MQA Deep Dive - I published music on tidal to test MQA


John Atkinson

Active Member
Industry Insider
Reviewer
Joined
Mar 20, 2020
Messages
168
Likes
1,089
What’s the point? Why go through all this effort for a process that (at best) will provide no audible benefits at all?

I made no comment on what MQA does with its buried data channel. I was showing that when correctly implemented, embedding a buried data channel with real-world audio recordings does not reduce the original file's audio resolution.

John Atkinson
Technical Editor, Stereophile
 

Zensō

Major Contributor
Joined
Mar 11, 2020
Messages
2,753
Likes
6,766
Location
California
Suggesting that some of these plots might be clever "Photoshopping" is truly insulting to the audience's intelligence.
It also makes them look quite unprofessional. If one didn't already reject MQA for its lack of technical merit, watching that video would certainly call the company's motives into question and put one off it.
 

scott wurcer

Major Contributor
Audio Luminary
Technical Expert
Joined
Apr 24, 2019
Messages
1,501
Likes
2,822
I made no comment on what MQA does with its buried data channel. I was showing that when correctly implemented, embedding a buried data channel with real-world audio recordings does not reduce the original file's audio resolution.

Is this part of the "post-Shannon" stuff? You said it yourself: the entropy is increased, and you can't reverse that. What happens to an ordinary user who simply wants to purchase a 44.1/16 CD as it was released by the producer and artist? FLAC in this case averages 700-800 kbps. If you start with 44.1/16, what secrets does this buried data channel hold?
 

Raindog123

Major Contributor
Joined
Oct 23, 2020
Messages
1,599
Likes
3,555
Location
Melbourne, FL, USA
I am not sure where you get the impression that 3 bits of "quality" are lost. The container still has the original bit depth.

My understanding, regarding MQA's "unfold" around the Nyquist frequency: a single bit of information in the file/stream buys you one bit of dynamic range either in a baseband (<24kHz) frequency bin or in an "unfolded" ultrasonic (24kHz+) bin, but not in both bands simultaneously, as the music information the bit carries is supposedly uncorrelated between the bands.

And I think I understand the difference between a sample in the time domain and the dynamic range of Fourier spectrum bins. But the point still stands: one bit cannot carry two bits of independent information, or can it?
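(A toy way to see that pigeonhole point, with nothing MQA-specific assumed: enumerate every possible one-bit encoder and decoder for a pair of independent bits and count how many of the four messages can survive the round trip.)

```python
# Pigeonhole sketch: a 1-bit channel has 2 states, but two independent
# bits form 4 messages, so no encoder/decoder pair can be lossless.
from itertools import product

messages = list(product([0, 1], repeat=2))        # (a, b): four possibilities
best = 0
for enc in product([0, 1], repeat=4):             # every 4-message -> 1-bit encoder
    for dec in product(messages, repeat=2):       # every 1-bit -> message decoder
        correct = sum(dec[enc[i]] == m for i, m in enumerate(messages))
        best = max(best, correct)
print("best possible: %d of 4 messages decoded correctly" % best)  # prints 2
```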
 
Last edited:

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,678
Likes
38,779
Location
Gold Coast, Queensland, Australia
Without specifics, how is it not snake oil or handwaving?

The MQA recipe clearly contains equal parts snake oil and handwaving, with a touch of blind allegiance and celebrity endorsement thrown in for good measure.

Trouble is, it still tastes like mutton dressed up as lamb.
 

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,949
Likes
2,617
Location
Massachusetts
I am not sure where you get the impression that 3 bits of "quality" are lost. The container still has the original bit depth. But there is now a hidden data channel in which data encrypted as pseudorandom noise can be buried without reducing the original resolution of the audio data.

This is not snake oil or handwaving. Without getting into the specifics of what MQA does, creating a buried data channel takes advantage of the spectral nature of the analog noisefloor present on all music recordings.

I have made several choral recordings since the turn of the century. I always try to make the recording in a quiet hall and spend time chasing down and eliminating sources of noise before the sessions start. My microphones have low self-noise and I use low-noise microphone preamplifiers from Millennia Media. Nevertheless, there is always noise present in the recording.

View attachment 126313

As you can see from this graph, made from a 24-bit recording of the “room tone” in the Oregon church where I made some of these recordings, the spectrum of the noise is not flat or “white.” Instead it is closer to pink. The peak level is close to -70dBFS in the low bass (thanks to distant traffic noise); the spectrum slopes down at around 24dB/decade to 1kHz, with a somewhat shallower slope in the treble.

An FFT-derived spectrum of dithered 16-bit silence with the same number of FFT bins would produce a flat spectrum with all the components lying around -130dBFS. As the music is always higher in level than the noise, you can see that the only part of the spectrum that would need to be encoded with >16 bits lies between 2kHz and 30kHz. 13 or even 12 bits would be sufficient in the bass.
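A quick numeric check of that figure (a sketch only; the exact bin level depends on FFT length and window, and an 8192-point rectangular-window FFT is assumed here):

```python
# FFT noise floor of TPDF-dithered 16-bit "digital silence".
import numpy as np

N, frames = 8192, 200
lsb = 2.0 / 65536                  # one 16-bit step on a +/-1.0 full-scale grid
rng = np.random.default_rng(0)

acc = np.zeros(N // 2 + 1)
for _ in range(frames):
    # TPDF dither (sum of two uniform +/-0.5 LSB sources), quantized to 16 bits
    dither = (rng.uniform(-0.5, 0.5, N) + rng.uniform(-0.5, 0.5, N)) * lsb
    silence = np.round(dither / lsb) * lsb
    acc += np.abs(np.fft.rfft(silence)) ** 2

# 0 dBFS = full-scale sine, whose rfft bin magnitude is N/2
bins_dbfs = 10 * np.log10(acc / frames) - 20 * np.log10(N / 2)
print("typical bin level: %.1f dBFS" % np.median(bins_dbfs))  # about -130 dBFS
```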

What this means is that a 24-bit recording of music made in this church that peaks at 0dBFS has spectral space available below the analog noisefloor. If I encode low-bit-depth data of some kind as pseudorandom noise – much easier to write about than to do – and add it to the 6 or 7 least-significant bits of the original 24-bit audio file, I have created a buried data channel. As the spectrum of that buried data channel is identical to the noisefloor of the recording, I haven’t reduced the resolution of the music data and there is a negligible rise in the overall noisefloor. I haven’t truncated the original 24-bit data to 17 bits or 13 bits or whatever, as has been stated elsewhere in the thread.

You don’t get something for nothing, however. As the noise floor now includes real information, albeit in encrypted form, I have increased the entropy of the file. The data can’t, therefore, be compressed as much by FLAC etc. as the original data.
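A minimal numeric sketch of the burying step just described, with made-up parameters (a -80dBFS noise floor, 7 payload bits per sample, no spectral shaping of the payload); this illustrates the principle only, not MQA's actual encoding:

```python
import numpy as np

rng = np.random.default_rng(1)
n, fs = 1 << 18, 96000

# Stand-in "24-bit recording": a -20dBFS tone over an analog-style noise
# floor around -80dBFS, far above the 24-bit quantization floor.
t = np.arange(n) / fs
audio = 0.1 * np.sin(2 * np.pi * 997 * t) + 1e-4 * rng.standard_normal(n)
pcm24 = np.round(audio * (1 << 23)).astype(np.int64)

# Bury 7 bits/sample of pseudorandom payload in the least-significant bits.
payload = rng.integers(0, 1 << 7, n)
buried = (pcm24 & ~np.int64((1 << 7) - 1)) | payload

# The channel is exactly recoverable by anyone who knows where to look...
assert np.array_equal(buried & np.int64((1 << 7) - 1), payload)

# ...and the noise it adds sits well below the recording's own floor,
# so the overall floor barely moves.
def rms_dbfs(x):
    return 20 * np.log10(np.sqrt(np.mean((x / (1 << 23)) ** 2)))

added_db = rms_dbfs(buried - pcm24)                      # ~ -104 dBFS
rise_db = 10 * np.log10(1 + 10 ** ((added_db + 80.0) / 10))
print("payload noise %.1f dBFS, floor rise %.3f dB" % (added_db, rise_db))
```

The flip side is exactly the entropy cost noted above: those seven pseudorandom bits are incompressible, so a lossless packer gains nothing from them.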

This is not a new concept. Alan Turing did something somewhat similar in WWII to allow encrypted communication between Winston Churchill and FDR. Turing encoded the voice message as, IIRC, 8-bit audio data, then buried it in a recording of random noise. When the message was transmitted, anyone listening would just hear noise. Decoding the message depended on the receiving station having exactly the same recording of random noise: subtracting the noise signal from the received transmission reconstructed the original voice recording. (For transmission the noise was played from a 78rpm disc, and for the system to work, a copy of that disc first had to be flown across the Atlantic.)

In the 1990s, the late Michael Gerzon worked with Peter Craven (now with MQA) on a similar subtractive dither scheme with a buried data channel intended to increase the resolution of digital audio recordings – see https://www.aes.org/e-lib/browse.cfm?elib=7964

John Atkinson
Technical Editor, Stereophile

The CD was designed to provide for 16 bits of dynamic range.
Those who don't upgrade and pay MQA royalties are presented with a lower-quality file.
Those who want to process the music can be restricted by MQA.
This restrictive trend will continue, it seems, on TIDAL.

Your recording is interesting but not particularly pertinent, as it is not representative of all recordings.

Unless I am missing something, no benefit of MQA is discussed here.
It is not much of a selling point that the added noise is inaudible; it did not need to be there at all.
Why should an open system be replaced with a closed system and restrictions?

We know that a 96kHz/18-bit file can be smaller than MQA while having more dynamic range, allowing a choice of filters, and adding no noise.
It seems to come down to whether you believe that MQA created the best reconstruction filter and that it is the best there will ever be.
Why cripple recordings, incur royalties, and add restrictions based on hearsay?
Actual measurements are not beneficial to the MQA pitch.

- Rich
 
Last edited:

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,949
Likes
2,617
Location
Massachusetts
The MQA recipe clearly contains equal parts snake oil and handwaving, with a touch of blind allegiance and celebrity endorsement thrown in for good measure.

Trouble is, it still tastes like mutton dressed up as lamb.

I'll take your word for it, having never had mutton, but I do like New Zealand lamb. Yummy.

- Rich
 

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,949
Likes
2,617
Location
Massachusetts
I made no comment on what MQA does with its buried data channel. I was showing that when correctly implemented, embedding a buried data channel with real-world audio recordings does not reduce the original file's audio resolution.

John Atkinson
Technical Editor, Stereophile

At approximately 6dB per bit, losing 3 bits drops 18dB of usable dynamic range for those not paying MQA royalties.
I believe those bits cannot provide both amplitude and encoded/folded ultrasonic frequency data.
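For reference, the standard quantization-noise relation behind the 6dB-per-bit figure (ideal N-bit quantizer, full-scale sine reference):

$$\mathrm{SNR} \approx 6.02\,N + 1.76\ \mathrm{dB}, \qquad \Delta\mathrm{SNR} \approx 6.02 \times 3 \approx 18\ \mathrm{dB}$$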

Here are my conclusions after reading many sources, including those provided by Stereophile.
  1. MQA does not reduce size beyond what could be obtained with an open 96kHz/18-bit file.
  2. MQA does not increase dynamic range; it consumes it to preserve provably inaudible frequencies.
  3. MQA incurs royalties.
  4. MQA reduces end-user choice of reconstruction filter.
  5. MQA can block additional/unsanctioned PEQ/REQ processing when decoded.
  6. MQA may have a reconstruction filter that folks like, but there are others out there, especially when computers are used, that can surpass it.
  7. MQA's proclaimed timing benefits have not been proven audible.
Which of the above do you agree or disagree with?

This discussion feels like tennis where the ball is served, hit into another court, and proclaimed in bounds ;)

- Rich
 
Last edited:

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,678
Likes
38,779
Location
Gold Coast, Queensland, Australia
I rather like a mutton shank.

Slow-cooked lamb shanks are one of our favourite winter meals. Yum. I don't imagine mutton shanks would be much different; maybe a lot bigger?
 

ebslo

Senior Member
Forum Donor
Joined
Jan 27, 2021
Messages
324
Likes
413
I am not sure where you get the impression that 3 bits of "quality" are lost. The container still has the original bit depth. But there is now a hidden data channel in which data encrypted as pseudorandom noise can be buried without reducing the original resolution of the audio data.

This is not snake oil or handwaving. Without getting into the specifics of what MQA does, creating a buried data channel takes advantage of the spectral nature of the analog noisefloor present on all music recordings.

I have made several choral recordings since the turn of the century. I always try to make the recording in a quiet hall and spend time chasing down and eliminating sources of noise before the sessions start. My microphones have low self-noise and I use low-noise microphone preamplifiers from Millennia Media. Nevertheless, there is always noise present in the recording.

View attachment 126313

As you can see from this graph, made from a 24-bit recording of the “room tone” in the Oregon church where I made some of these recordings, the spectrum of the noise is not flat or “white.” Instead it is closer to pink. The peak level is close to -70dBFS in the low bass (thanks to distant traffic noise); the spectrum slopes down at around 24dB/decade to 1kHz, with a somewhat shallower slope in the treble.

An FFT-derived spectrum of dithered 16-bit silence with the same number of FFT bins would produce a flat spectrum with all the components lying around -130dBFS. As the music is always higher in level than the noise, you can see that the only part of the spectrum that would need to be encoded with >16 bits lies between 2kHz and 30kHz. 13 or even 12 bits would be sufficient in the bass.

What this means is that a 24-bit recording of music made in this church that peaks at 0dBFS has spectral space available below the analog noisefloor. If I encode low-bit-depth data of some kind as pseudorandom noise – much easier to write about than to do – and add it to the 6 or 7 least-significant bits of the original 24-bit audio file, I have created a buried data channel. As the spectrum of that buried data channel is identical to the noisefloor of the recording, I haven’t reduced the resolution of the music data and there is a negligible rise in the overall noisefloor. I haven’t truncated the original 24-bit data to 17 bits or 13 bits or whatever, as has been stated elsewhere in the thread.

You don’t get something for nothing, however. As the noise floor now includes real information, albeit in encrypted form, I have increased the entropy of the file. The data can’t, therefore, be compressed as much by FLAC etc. as the original data.
Is there any way you could get that "room tone" recording MQA-encoded and post the DFT of the encoded file so we can see what it actually does?
 

mieswall

Member
Joined
Nov 10, 2019
Messages
65
Likes
112
I've been in this hobby for 40 years and a professional engineer for 30, including 15 in audio (highly complex aspects including codecs), and have always left room for the audio press's indulgences. This is the final straw. While there are flaws in goldenboy's test methods, the overall conclusions (that MQA reduces bit depth, fabricates ultrasonics, and reduces the fidelity of unwrapped files) seem beyond dispute. Like so much of the press today, the audio press thinks its job is to offer opinion without the constraint of investigation, skepticism, or even rudimentary validation. Anyone with even a modest engineering background could have run these tests years ago. I'm not ready to call the audio press deceitful; I truly think that's unfair. Irresponsible seems fair and accurate. As gatekeepers of audio quality they sat mute for years while, quietly behind the scenes, our long-awaited lossless streaming files were butchered with no visibility. If they had any repute they wouldn't be doing damage control, but instead saying "thank you" and running with this to help get to the bottom of it. Shameful.
If you are an engineer working in audio for 15 years, I'm surprised you can't grasp what a mockery of science these tests are.

IMHO, if you are planning to test the behavior of something, the first thing you should do is... well, understand what that something does. Even a quick glance at the most basic of the dozens of papers or articles about MQA will tell you what this graph shows:

1- That the original noise in the red area below 24kHz is completely replaced with information. If your test is trying to evaluate a bit-perfect match with that original noise (which is not there anymore), it is flawed by definition.

2- A quick read will also tell you that the algorithm requires that red area (now filled with real information brought down from the ultrasonics) to be dithered, so it can still be read as noise by a non-MQA DAC. This test omits that critical, basic procedure... as noted in MQA's reply, which of course nobody read or, worse, understood. Then... garbage in -> garbage out.

3- That the algorithm is designed to process the signals found in the orange triangle, because that is the space music really occupies, as the red plot shows (before anyone adds more mockery on this subject: this example of the maximum amplitudes of a string quartet doesn't differ significantly from any other music, as a Fourier analysis of ANY instrument's harmonics would tell you). So if your test is full of signals (not music) above that area, both in the sonic bandwidth and in the ultrasonic region (which is exactly what happens when you use square waves or white noise), either you don't have a clue what you are measuring, or you did it on purpose to deceive your audience.

4- I could go on and on with several other facts about these "tests", but people here have already made up their minds. Obviously impervious to any argument.

5- So, instead of trashing MQA because these "impartial" reviewers told you to "vote with your wallet and cancel your Tidal subscription", we could be asking why the area above the triangle is not captured by MQA, and whether, because of compatibility issues with Redbook, that recovered space can't be used anyway. THEN we would be doing something useful here, and we would be discussing exactly how that upper side of the triangle is built. Even the same test could probably have discovered it. My hunch: a Bezier or B-spline filter allowing anti-aliasing filters (you would even see the curvature if it were plotted logarithmically and not flat like in this graph) so as not to smear the phases of the quantized signals, which is the whole purpose of MQA. And the filter starts deep in the audible region so as to give one or two more octaves for the slope before reaching the displaced Nyquist frequency, now at 48 or 96kHz depending on the sampling, because there is nothing lossy about *music* done this way.

But... it is much more fun to make a scandal of all this, while the real scandal is the very tests used here.

A final comment: if you are interested in truth instead of sensationalism, and you find such a degree of anomalies in your experiment, my guess is that any scientist would at least ask himself whether the experiment is correct or something is missing. But GoldenEar and Archimago chose the easier path...
And so, here we are: MQA was designed by some of the luminaries of the audio industry. Peter Craven, creator of some of the milestones in audio technology that every audio engineer knows, was working on noise shaping, time-coherence issues, and even lossless algorithms before most people here were born. And then, all of a sudden, they all forgot a lifetime of work and designed an algorithm that any engineering student could best in days... that, or these tests were made by amateurs who hardly knew what they were doing.
The amazing thing is that professionals in audio, like Paul McGowan of PS Audio, even cite this absurdity to criticize MQA. If I needed one more argument to advise my friends not to buy PS Audio products, this is it.


espacios MQA-1.jpg
 
Last edited:

Raindog123

Major Contributor
Joined
Oct 23, 2020
Messages
1,599
Likes
3,555
Location
Melbourne, FL, USA
If you are an engineer working in audio for 15 years, I'm surprised you can't grasp what a mockery of science these tests are.

IMHO, if you are planning to test the behavior of something, the first thing you should do is... well, understand what that something does. Even a quick glance at the most basic of the dozens of papers or articles about MQA will tell you what this graph shows:

1- That the original noise in the red area below 24kHz is completely replaced with information. If your test is trying to evaluate a bit-perfect match with that original noise (which is not there anymore), it is flawed by definition.

2- A quick read will also tell you that the algorithm requires that red area (now filled with real information brought down from the ultrasonics) to be dithered, so it can still be read as noise by a non-MQA DAC. This test omits that critical, basic procedure... as noted in MQA's reply, which of course nobody read or, worse, understood. Then... garbage in -> garbage out.

3- That the algorithm is designed to process the signals found in the orange triangle, because that is the space music really occupies, as the red plot shows (before anyone adds more mockery on this subject: this example of the maximum amplitudes of a string quartet doesn't differ significantly from any other music, as a Fourier analysis of ANY instrument's harmonics would tell you). So if your test is full of signals (not music) above that area, both in the sonic bandwidth and in the ultrasonic region (which is exactly what happens when you use square waves or white noise), either you don't have a clue what you are measuring, or you did it on purpose to deceive your audience.

4- I could go on and on with several other facts about these "tests", but people here have already made up their minds. Obviously impervious to any argument.

5- So, instead of trashing MQA because these "impartial" reviewers told you to "vote with your wallet and cancel your Tidal subscription", we could be asking why the area above the triangle is not captured by MQA, and whether, because of compatibility issues with Redbook, that recovered space can't be used anyway. THEN we would be doing something useful here, and we would be discussing exactly how that upper side of the triangle is built. Even the same test could probably have discovered it. My hunch: a Bezier or B-spline filter allowing anti-aliasing filters (you would even see the curvature if it were plotted logarithmically and not flat like in this graph) so as not to smear the phases of the quantized signals, which is the whole purpose of MQA. And the filter starts deep in the audible region so as to give one or two more octaves for the slope before reaching the displaced Nyquist frequency, now at 48 or 96kHz depending on the sampling, because there is nothing lossy about *music* done this way.

But... it is much more fun to make a scandal of all this, while the real scandal is the very tests used here.

A final comment: if you are interested in truth instead of sensationalism, and you find such a degree of anomalies in your experiment, my guess is that any scientist would at least ask himself whether the experiment is correct or something is missing. But GoldenEar and Archimago chose the easier path...
And so, here we are: MQA was designed by some of the luminaries of the audio industry. Peter Craven, creator of some of the milestones in audio technology that every audio engineer knows, was working on noise shaping, time-coherence issues, and even lossless algorithms before most people here were born. And then, all of a sudden, they all forgot a lifetime of work and designed an algorithm that any engineering student could best in days... that, or these tests were made by amateurs who hardly knew what they were doing.
The amazing thing is that professionals in audio, like Paul McGowan of PS Audio, even cite this absurdity to criticize MQA. If I needed one more argument to advise my friends not to buy PS Audio products, this is it.


View attachment 126350


So many passionate, even angry words. And none of them, none can address this. Where I come from, they call it hokum.
 
Last edited:

Jimbob54

Grand Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
11,098
Likes
14,755
And so, here we are: MQA was designed by some of the luminaries of the audio industry. Peter Craven, creator of some of the milestones in audio technology that every audio engineer knows, was working on noise shaping, time-coherence issues, and even lossless algorithms before most people here were born. And then, all of a sudden, they decided to sack all that off and design an unnecessary format that leeches from every aspect of the industry, and laughed all the way to the bank.

FIFY
 

scott wurcer

Major Contributor
Audio Luminary
Technical Expert
Joined
Apr 24, 2019
Messages
1,501
Likes
2,822
And so, here we are: MQA was designed by some of the luminaries of the audio industry. Peter Craven, creator of some of the milestones in audio technology that every audio engineer knows, was working on noise shaping, time-coherence issues, and even lossless algorithms before most people here were born.

Appeal to authority has little weight here; please show me an MQA CD that "unfolds" into a bit-perfect copy of a 44.1/16 original (forget all the 96k-or-better stuff) and I will stand aside.

I still run my original shareware CoolEdit, which IIRC has an original Fraunhofer MP3 encoder (c. 1998), and it does fairly well with "test" waveforms. Test waveforms in general have very low entropy and contain almost no information. Shannon was before I was born; the rest, not so much.
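That low-entropy point is easy to show with any general-purpose compressor; the rough sketch below uses zlib as a stand-in for a lossless audio codec (an assumption for illustration; FLAC's predictors behave differently, but the ordering is the same):

```python
# Compare compressibility of a "test" waveform against white noise.
import zlib
import numpy as np

rng = np.random.default_rng(0)
n = 1 << 16

# A 1kHz tone at 44.1kHz repeats exactly every 441 samples, so a
# dictionary coder finds it trivially; white noise has maximal entropy.
tone = np.round(32767 * np.sin(2 * np.pi * 1000 * np.arange(n) / 44100)).astype(np.int16)
noise = rng.integers(-32768, 32768, n).astype(np.int16)

for name, sig in [("1kHz tone", tone), ("white noise", noise)]:
    ratio = len(zlib.compress(sig.tobytes(), 9)) / sig.nbytes
    print("%-11s -> %5.1f%% of original size" % (name, 100 * ratio))
# The tone collapses to a few percent; the noise does not compress at all.
```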
 

mieswall

Member
Joined
Nov 10, 2019
Messages
65
Likes
112
So many passionate, even angry words. And none of them, none can address this. Where I come from, they call it hokum.

Well, that's what a true researcher would try to discover. My guess: as the noise floor in those test files was not dithered and couldn't be read as noise at some steps of the process, the algorithm enters a situation it is not programmed for. As I said in the previous post: garbage in -> garbage out.

What is the alternative you suggest? That the same prestigious engineers who made possible things like Ambisonics, the same guys Thomas Dolby routinely asked for advice almost 50 years ago when designing his sound systems for theaters, just got up on the wrong foot (not one day, but every single one of the 6 years they worked on this), and then just copied some information and put it there to deceive the world? That they would allow this kind of grotesque aliasing, not of some frequencies but of a whole bandwidth, as somebody suggested in a previous post? And then they all sat waiting... 3... 2... 1... for an amateur to discover the farce? Gosh....
 