
MQA Deep Dive - I published music on tidal to test MQA

Status
Not open for further replies.

HoweSound

Active Member
Joined
Apr 22, 2021
Messages
156
Likes
190
Location
BC, Canada

This is not snake oil or handwaving. Without getting into the specifics of what MQA does, creating a buried data channel takes advantage of the spectral nature of the analog noisefloor present on all music recordings.


John Atkinson
Technical Editor, Stereophile

Although my hearing acuity has diminished with age, my olfactory receptors are highly sensitive to the smell of bullsh*t...
 

Raindog123

Major Contributor
Joined
Oct 23, 2020
Messages
1,599
Likes
3,555
Location
Melbourne, FL, USA
What is the alternative that you suggest? That the same prestigious engineers that made possible things like ambisonics...

I've answered this very question before, here. :)

Seriously, I do not know - I was not there. And I have not seen the MQA decoder SDD, or source, not yet... But what I've seen, multiple times through my career, is that someone has a 'cool idea'... that does sound cool as a concept, is even patentable, and gets patented... But then it has to go through the hard labor of actually being 'reduced to practice' - of tying all knots and plugging all holes. And that's where some cool concepts become 'real products', while some get reasonably and gracefully abandoned... Yet there are also those dragged into their sunset forever, for fiscal, ego or other reasons.

I guess time will tell which of those MQA is. Both to us users and, equally important, to investors. But the way MQA is currently handling the situation - through unwarranted secrecy [of patented, reverse-engineerable software], lack of proof of claims through test results, and everything e.g. here - is not helping either of the two.
 
Last edited:

samsa

Addicted to Fun and Learning
Joined
Mar 31, 2020
Messages
506
Likes
589
Although my hearing acuity has diminished with age, my olfactory receptors are highly sensitive to the smell of bullsh*t...

Personal insults aside (you're new here, so I'll cut you some slack), you picked the one statement which is incontrovertibly true: using the 8 LSBs of a 24-bit PCM file for steganography is a perfectly valid, audibly undetectable thing to do.

Almost everything else about MQA is bullsh*t, but not that.
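To make the steganography point above concrete: burying data in the least-significant bits of PCM is trivial. The sketch below is generic LSB steganography in NumPy, purely illustrative and not MQA's actual encoding; the function name and parameters are my own:

```python
import numpy as np

def embed_lsb(samples_24bit, payload, n_bits=8):
    """Bury a byte payload in the n_bits least-significant bits of 24-bit PCM
    samples (stored in int32), leaving the upper audio bits untouched."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    pad = (-len(bits)) % n_bits
    bits = np.concatenate([bits, np.zeros(pad, dtype=np.uint8)])
    chunks = bits.reshape(-1, n_bits)                # n_bits of payload per sample
    values = chunks @ (1 << np.arange(n_bits - 1, -1, -1))
    if len(values) > len(samples_24bit):
        raise ValueError("payload does not fit")
    out = samples_24bit.copy()
    out[: len(values)] = (out[: len(values)] & ~((1 << n_bits) - 1)) | values
    return out
```

With n_bits = 8 on 24-bit material, the buried channel sits more than 90 dB below full scale, which is why it is audibly undetectable.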
 

scott wurcer

Major Contributor
Audio Luminary
Technical Expert
Joined
Apr 24, 2019
Messages
1,501
Likes
2,822
Then, if your test is full of signals (not music) above that area, both in sonic bandwidth and in the ultrasonic region, which is exactly what happens when you use square waves or white noise, either you don't have a clue about what you are measuring, or you did it on purpose to deceive your audience.

What about the -60dB 1kHz sine wave? I found even the 20+yr. old MP3 encoders pretty good for sine waves.
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,195
Likes
1,545
Location
USA
If you are an engineer working in audio for 15 years, I'm surprised you can't grasp what a mockery of science these tests are.

IMHO, if you are planning to test the behavior of something, the first thing you should do is... well, understand what that something does. Even a quick glance at the most basic of the dozens of papers or articles about MQA will tell you what this graph shows:

1- that the original noise in the red area below 24 kHz is completely replaced with information. If your test is trying to evaluate a bit-perfect match with that original noise (which is not there anymore), it is flawed by definition.

2- The quick read will also tell you that the algorithm requires that red area (now filled with real information brought down from the ultrasonics) to be dithered, so it can still be read as noise by a non-MQA DAC. And this test omits that critical, basic procedure... as noted in MQA's reply, which of course nobody read, or worse, understood. Then... garbage in -> garbage out.

3- That the algorithm is designed to process the signals found in the orange triangle, because that is the space music really occupies, as the red plot shows (before anyone adds more mockery on this subject: this example of the maximum amplitudes of a string quartet doesn't differ significantly from any other music, as a Fourier analysis of ANY instrument's harmonics would tell you). Then, if your test is full of signals (not music) above that area, both in sonic bandwidth and in the ultrasonic region, which is exactly what happens when you use square waves or white noise, either you don't have a clue about what you are measuring, or you did it on purpose to deceive your audience.

4- I could go on and on with several other facts about these "tests", but people here have already made up their minds. Obviously impervious to any argument.

5- So, if instead of trashing MQA because these "impartial" reviewers told you to "vote with your wallet and cancel your Tidal subscription", we were arguing why the area above the triangle is not captured by MQA, or whether, because of compatibility issues with Redbook, that recovered space can't be used anyway, THEN we would be doing something useful here, and we would be discussing exactly how that upper side of the triangle is built. Even the same test could probably have discovered it. My hunch: a Bézier or B-spline filter enabling anti-aliasing filters (you would even see the curvature if plotted logarithmically and not flat like in this graph) so as not to smear the phases of the quantized signals, which is the whole purpose of MQA. And the filter starts deep in the audible region so as to give one or two more octaves for the slope before reaching the displaced Nyquist frequency, now at 48 or 96 kHz depending on the sampling, because there is nothing lossy about *music* if done this way.

But... it is much more fun to make a scandal of all this, when the real scandal is the very tests used here.

A final comment: if you are interested in truth instead of sensationalism, and you then find such a degree of anomalies in your experiment, my guess is that any scientist would at least ask himself whether the experiment is correct or something is missing. But GoldenOne and Archimago chose the easier path...
And so, here we are: MQA was designed by some of the luminaries of the audio industry. Peter Craven, creator of some of the milestones in audio technology that every audio engineer knows, was working on noise shaping, time-coherence issues and even lossless algorithms before most people here were even born. And then, all of a sudden, they all forgot a lifetime of work and designed an algorithm that any engineering student could beat in days... that... or these tests were made by amateurs who hardly knew what they were doing.
The amazing thing is that professionals in audio, like Paul McGowan of PS Audio, even cite this absurdity to criticize MQA. If I needed one more argument to advise my friends not to buy PS Audio products, this would be it.


View attachment 126350
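For readers unfamiliar with the dithering step the quoted post keeps returning to: adding TPDF (triangular) dither before truncating word length is standard mastering practice, turning correlated truncation distortion into benign, signal-independent noise. A minimal generic sketch with assumed parameters (MQA's actual dither, if any, is not public):

```python
import numpy as np

def truncate_with_tpdf_dither(x, keep_bits, total_bits=24, rng=None):
    """Truncate integer PCM from total_bits down to keep_bits, adding +/-1 LSB
    TPDF dither first so the discarded bits decorrelate into benign noise."""
    rng = rng or np.random.default_rng(0)
    step = 1 << (total_bits - keep_bits)        # size of the new LSB
    # Sum of two uniforms -> triangular PDF over (-step, step).
    dither = rng.integers(0, step, x.shape) + rng.integers(0, step, x.shape) - step
    return ((x + dither) // step) * step
```

Omit this step and the bottom bits carry signal-correlated garbage, which is exactly the "garbage in -> garbage out" failure mode the post describes.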

Let’s assume for the moment that MQA works exactly as advertised, what do you believe the audible benefits would be as compared to a 16/44.1 CD?
 

JSmith

Master Contributor
Joined
Feb 8, 2021
Messages
5,224
Likes
13,479
Location
Algol Perseus
I don't imagine mutton shanks would be much different- maybe a lot bigger?
The Govt changed the definition of "lamb" a few years back... much of what you get now would be considered mutton under the old definition. Have you not noticed the huge legs of "lamb" in the supermarket now? ;)



JSmith
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,405
Likes
18,364
Location
Netherlands
Well, that's what a true researcher would try to discover. My guess: As the noise floor in those test files was not dithered and couldn't be read as noise at some steps of the process, the algorithm enters a situation that it is not programmed for. As I said in the previous post: garbage in -> garbage out.

What is the alternative that you suggest?

Well, you brought it up several times already:

My hunch: a bezier or b-spline filter allowing antialiasing filters (you would even see the curvature if plotted logarithmically and not flat like in this graph) so as not to smear phases of the signals quantized, which is the whole purpose of MQA

You're surely guessing and hunching a lot for somebody so convinced of a product's prowess. I've said it several times already, but will repeat once more: b-spline interpolation does not make for a good filter! It does not adhere to the sampling theorem and will leave you with a shitload of aliasing artefacts:

1619418651471.png


It very much resembles the Windows way of linear interpolation upsampling:

1619418762160.png


And this perfectly correlates to the image mirroring I showed earlier.

My guess about the original example that I showed, if you'll indulge me: the original was only a 44.1 kHz master, so there was nothing to "unfold". So all MQA can do is upsample it to twice the rate. And we know it does that using B-splines. Hence the abominable result we could all see. Now normally, with a higher-rate source, this would happen higher up in the spectrum; it would still be present, though far less problematic. In that case, the second half of the spectrum would be compressed and stored in the lower bits of the file. @mansr can probably shed some more definite light on it.

B-spline interpolation is nothing new BTW; boutique brands like Wadia have had it since the mid-'90s. By now they've abandoned it for a more classical (but no less insane) implementation using classical FIR filters.
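The aliasing claim above is easy to verify numerically: upsampling by interpolation (linear interpolation being the simplest, first-order spline) leaves an image of the signal mirrored above the original Nyquist frequency. A quick NumPy sketch; the 10 kHz tone and the sample rates are arbitrary choices of mine:

```python
import numpy as np

fs = 48_000
n = 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10_000 * t)          # 10 kHz tone sampled at 48 kHz

# 2x upsample by linear interpolation (a 1st-order B-spline).
t2 = np.arange(2 * n) / (2 * fs)
y = np.interp(t2, t, x)

# Windowed spectrum of the upsampled signal, normalized to the tone peak.
spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
spec_db = 20 * np.log10(spec / spec.max() + 1e-12)
freqs = np.fft.rfftfreq(len(y), d=1 / (2 * fs))

# The image appears at fs - 10 kHz = 38 kHz, only partially attenuated.
image_db = spec_db[np.argmin(np.abs(freqs - 38_000))]
print(f"alias image at 38 kHz: {image_db:.1f} dB relative to the tone")
```

A proper sinc-family reconstruction filter would push that image below the noise floor; linear interpolation leaves it only a few tens of dB down, consistent with the mirrored images shown earlier.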
 

sandymc

Member
Joined
Feb 17, 2021
Messages
98
Likes
230
I am not sure where you get the impression that 3 bits of "quality" are lost. The container still has the original bit depth. But there is now a hidden data channel in which data encrypted as pseudorandom noise can be buried without reducing the original resolution of the audio data,

This is not snake oil or handwaving. Without getting into the specifics of what MQA does, creating a buried data channel takes advantage of the spectral nature of the analog noisefloor present on all music recordings.

Sorry, but that is snake oil. Reduced to its essentials, your argument is that all 16-bit audio recordings only have 13 bits of real data, so it's OK to trash the last three bits by injecting pseudorandom noise that contains hidden data. And that in some way improves the audio quality. That's the definition of snake oil.
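For scale: the "13 bits" in this dispute corresponds to a theoretical quantization SNR of about 80 dB, via the standard back-of-envelope formula SNR ≈ 6.02·N + 1.76 dB for a full-scale sine into an ideal N-bit quantizer. Putting the numbers side by side:

```python
def quantization_snr_db(bits: int) -> float:
    """Theoretical SNR (dB) of an ideal N-bit quantizer for a full-scale sine."""
    return 6.02 * bits + 1.76

for n in (13, 16, 24):
    print(f"{n:2d} bits: {quantization_snr_db(n):6.2f} dB")
```

So the question underneath the argument is whether recordings carry meaningful program content below roughly -80 dBFS; if they do, replacing the bottom bits with pseudorandom data is not "free".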
 

Hrodulf

Member
Joined
May 17, 2018
Messages
64
Likes
135
Location
Latvia
Let me pop in a bit regarding the usefulness of MQA.

It has never been about the listener. It doesn't deliver any audible benefit, and the bandwidth-shrinking "benefit" is negated by improving delivery technology, both in mobile and landline internet. Heck, as many have pointed out - if you can do YouTube, you will be able to do 24/192 audio.

Where MQA provides a benefit is in decreasing the running costs for streaming platforms. Server bandwidth is a major cost for Tidal, as it doesn't have its own datacenters, and as its user base grows the costs grow as well (and its users need more bandwidth than those of its lossy streaming competitors). Amazon and Apple have unlimited bandwidth since they have their own datacenters; Spotify is probably big enough not to care too much either. Tidal is minuscule compared to them, so they buy servers from Amazon. In theory Qobuz and Deezer have the same problem, but I don't think they want to be Tidal lookalikes.

Maybe the above has been mentioned here, in that case - carry on!
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,405
Likes
18,364
Location
Netherlands
Sorry, but that is snake oil. Reduced to its essentials, your argument is that all 16-bit audio recordings only have 13 bits of real data, so it's OK to trash the last three bits by injecting pseudorandom noise that contains hidden data. And that in some way improves the audio quality. That's the definition of snake oil.

It's also funny how they trade one inaudible trade-off for another... more noise to get more HF extension.

BTW, a question for the people that have actually worked in a studio mixing or mastering music: what is the process to mix or master sound above 20 kHz? How do you get it to sound just right? I can imagine this must be one of the hardest things to do ;)
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,449
Likes
4,818
Sorry, but that is snake oil. Reduced to its essentials, your argument is that all 16-bit audio recordings only have 13 bits of real data, so it's OK to trash the last three bits by injecting pseudorandom noise that contains hidden data. And that in some way improves the audio quality. That's the definition of snake oil.

JA is probably correct, in terms of the noise floor, as far as live recordings of things such as choirs in churches are concerned. However, he kind of skips the fact that mixing several such inputs can definitely end up using 16 bits or more. Not to mention studio recordings or synthesized music. So, yes, it obfuscates the potential (let's be generous here) MQA loss by using what seems to be an irrelevant argument in this context.

But the funny thing is that, if what he claims applies generally, this undermines the whole audiophile/measurement cathedral. If 13-bit audio is all we need anyway, he and many others (including this site) spent their lives chasing angels when measuring/comparing/listening to higher than 13-bit range stuff.

And the worst for me is that, beyond the utterly dishonest money grab that MQA is, JA might be right generally speaking, if only because the average domestic listening environment has such a high noise floor and the adaptive capability of our auditory system to rapid changes isn't stellar.

We don't have conclusive blind tests that MQA is better (obviously, there are no rational reasons it would be), but we don't have blind tests showing it is worse either, which would indicate it essentially doesn't matter. Oh, and when we have blind tests showing MP3 is preferred to CDs, we sweep it under the carpet on the grounds that the audience isn't educated or their hearing has been "polluted" by long-term MP3 listening.

Apologies: I must be in a bad mood, because both the "lifted veils" and "audibility of 0.3 dB" thingies rub me the wrong way today. :mad:
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,727
Likes
38,928
Location
Gold Coast, Queensland, Australia
If 13-bit audio is all we need anyway, he and many others (including this site) spent their lives chasing angels when measuring/comparing/listening to higher than 13-bit range stuff.

The 12-bit non-linear quantization used by DAT manufacturers (Sony) back in the day for LP (Long Play, 2x) mode was pretty decent. The difference between SP and LP was audible only on content with a lot of HF (and my ears were younger then).

1619426987302.png
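To illustrate why non-linear quantization stretches a 12-bit budget: companding allocates smaller steps to quiet signals at the cost of louder ones. The sketch below uses the textbook μ-law curve as the example; it is an assumption for illustration, not Sony's proprietary LP-mode scheme:

```python
import numpy as np

MU = 255.0  # textbook mu-law constant (illustrative; not Sony's actual curve)

def mu_law_encode(x):
    """Compand x in [-1, 1]: small amplitudes get finer quantization steps."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_law_decode(y):
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

# Quantize the companded signal to 12 bits, then expand back.
x = np.linspace(-1.0, 1.0, 10_001)
q = np.round(mu_law_encode(x) * 2047) / 2047
x_hat = mu_law_decode(q)

err_small = np.abs(x_hat - x)[np.abs(x) < 0.01].max()  # quiet passages
err_large = np.abs(x_hat - x)[np.abs(x) > 0.9].max()   # loud passages
print(f"max error near silence: {err_small:.2e}, near full scale: {err_large:.2e}")
```

Quiet material ends up quantized far more finely than a linear 12-bit grid would allow, which is the effect being described: the error piles up where the signal is loud enough to mask it.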
 

sandymc

Member
Joined
Feb 17, 2021
Messages
98
Likes
230
JA is probably correct, in terms of the noise floor, as far as live recordings of things such as choirs in churches are concerned. However, he kind of skips the fact that mixing several such inputs can definitely end up using 16 bits or more. Not to mention studio recordings or synthesized music. So, yes, it obfuscates the potential (let's be generous here) MQA loss by using what seems to be an irrelevant argument in this context.

But the funny thing is that, if what he claims applies generally, this undermines the whole audiophile/measurement cathedral. If 13-bit audio is all we need anyway, he and many others (including this site) spent their lives chasing angels when measuring/comparing/listening to higher than 13-bit range stuff.

And the worst for me is that, beyond the utterly dishonest money grab that MQA is, JA might be right generally speaking, if only because the average domestic listening environment has such a high noise floor and the adaptive capability of our auditory system to rapid changes isn't stellar.

We don't have conclusive blind tests that MQA is better (obviously, there are no rational reasons it would be), but we don't have blind tests showing it is worse either, which would indicate it essentially doesn't matter. Oh, and when we have blind tests showing MP3 is preferred to CDs, we sweep it under the carpet on the grounds that the audience isn't educated or their hearing has been "polluted" by long-term MP3 listening.

Apologies: I must be in a bad mood, because both the "lifted veils" and "audibility of 0.3 dB" thingies rub me the wrong way today. :mad:

Yes, I mostly agree with that. What annoys me about the JA/MQA argument is that either:
  1. JA/MQA are correct that there is nothing useful musically below 13 bits. But if that's true, then all the hidden-channel etc., etc. that MQA claims is the "secret sauce" is useless. And so most of MQA is snake oil.
  2. They are wrong that there is nothing useful musically below 13 bits. In which case MQA is entirely snake oil.
The whole of their argument seems to be an attempt to sow confusion around (1) and (2).
 

sandymc

Member
Joined
Feb 17, 2021
Messages
98
Likes
230
The 12-bit non-linear quantization used by DAT manufacturers (Sony) back in the day for LP (Long Play, 2x) mode was pretty decent. The difference between SP and LP was audible only on content with a lot of HF (and my ears were younger then).

Yes - you can get pretty decent quality at 12-13 bits, if you use non-linear compression, as Sony did.

The thing is, MQA may possibly be a really good form of perceptual compression. Without openness from MQA, no one can really tell. But MQA is not what has been claimed.
 

mansr

Major Contributor
Joined
Oct 5, 2018
Messages
4,685
Likes
10,705
Location
Hampshire
Well, you brought it up several times already:



You're surely guessing and hunching a lot for somebody so convinced of a product's prowess. I've said it several times already, but will repeat once more: b-spline interpolation does not make for a good filter! It does not adhere to the sampling theorem and will leave you with a shitload of aliasing artefacts:

View attachment 126401

It very much resembles the Windows way of linear interpolation upsampling:

View attachment 126403

And this perfectly correlates to the image mirroring I showed earlier.

My guess about the original example that I showed, if you'll indulge me: the original was only a 44.1 kHz master, so there was nothing to "unfold". So all MQA can do is upsample it to twice the rate. And we know it does that using B-splines. Hence the abominable result we could all see. Now normally, with a higher-rate source, this would happen higher up in the spectrum; it would still be present, though far less problematic. In that case, the second half of the spectrum would be compressed and stored in the lower bits of the file. @mansr can probably shed some more definite light on it.

B-spline interpolation is nothing new BTW; boutique brands like Wadia have had it since the mid-'90s. By now they've abandoned it for a more classical (but no less insane) implementation using classical FIR filters.
B-spline interpolation is a perfectly valid tool to use in some applications. Reconstruction filters for audio are not one of them.
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,405
Likes
18,364
Location
Netherlands
B-spline interpolation is a perfectly valid tool to use in some applications. Reconstruction filters for audio are not one of them.

Correct. I was, however, referring to how the specific example came to be. Because it only seems to contain a badly upsampled version of the original (if we can call it that) and shows no signs of any additional compressed information being present.
 

John Atkinson

Active Member
Industry Insider
Reviewer
Joined
Mar 20, 2020
Messages
168
Likes
1,089
Reduced to its essentials, your argument is that all 16-bit audio recordings only have 13 bits of real data, so it's OK to trash the last three bits by injecting pseudorandom noise that contains hidden data.

Unfortunately, you are misreading what I wrote. I have never written that "all 16-bit recordings have only 13 bits of real data," nor would I.

In the example I gave of a real-world recording, the noise level was sufficiently high in the very low bass that the music above that noise would not require all 16 bits. However, as I also wrote, the noise level decreased with increasing frequency, and at higher frequencies 16 bits would not be sufficient to capture the music signal.

John Atkinson
Technical Editor, Stereophile
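The spectrally shaped noise floor described above is straightforward to examine: estimate the noise power per octave band and, on typical venue recordings, it falls with increasing frequency. A toy sketch using synthetic 1/f-shaped noise as a stand-in for a real recording (all parameters here are arbitrary assumptions):

```python
import numpy as np

fs = 44_100
n = 1 << 16
rng = np.random.default_rng(0)

# Synthesize 1/f-amplitude ("pink-ish") noise: louder at low frequencies,
# a stand-in for the venue noise on a live recording.
freqs = np.fft.rfftfreq(n, d=1 / fs)
spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
noise = np.fft.irfft(spectrum / np.maximum(freqs, 20.0))
noise /= np.abs(noise).max()

# Noise power per octave band, relative to total power.
power = np.abs(np.fft.rfft(noise)) ** 2
edges = [20, 40, 80, 160, 320, 640, 1280, 2560, 5120, 10240, 20480]
band_db = []
for lo, hi in zip(edges[:-1], edges[1:]):
    frac = power[(freqs >= lo) & (freqs < hi)].sum() / power.sum()
    band_db.append(10 * np.log10(frac))
    print(f"{lo:5d}-{hi:5d} Hz: {band_db[-1]:6.1f} dB re total")
```

On such material the bass octaves dominate the noise budget while the top octaves sit tens of dB lower, which is the point being made: how many bits the music "needs" depends on where in the spectrum you look.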
 