
MQA Deep Dive - I published music on Tidal to test MQA

Status
Not open for further replies.

tmtomh

Major Contributor
Joined
Aug 14, 2018
Messages
1,323
Likes
3,865
Isn't this a good example of the exact issue we are having in using the word "lossy"? Downsampling is mathematically lossy with reference to the available information. What you wrote implies that downsampling, despite discarding information, is not lossy because it satisfies a perceptual criterion.

It's mixing domains, goals, purposes. MQA did the same when they called their ability to deliver a digital file with known provenance to a streaming service or download site, and then through known hardware, "lossless".

Put it this way: In another, past thread here on another topic, I stated that the resampling process of taking a 96k source down to 44.1k changed the audio data, and therefore a 96k PCM file and a downsampled 44.1k PCM file could sound different because of the non-integer sample-rate conversion, and that the sonic difference could be seen in a difference file from trying to null-compare the two.

In response, our knowledgeable and currently thread-banned friend @mansr [edit: it was @danadam actually] told me I was mistaken because my method of trying to null-compare the 96k original with the 44.1k downsampled version couldn't work. Instead, he explained, I should downsample the 96k to 44.1k, then resample the 44.1k back to 96k and compare the two 96k files. When I did so, they nulled out 100% for all frequencies up to 22.05kHz, indicating that the audible-range information from the original 96k file could be perfectly reconstructed. So I had to admit my initial claim was mistaken, which I was happy to do since I had learned something: I hadn't realized that non-integer resampling still preserved the in-band content losslessly. The different Nyquist limits of course made a difference in the ultrasonics, but within the audible range the non-integer conversion was a non-issue in terms of the ability to perfectly reconstruct the content of the higher-rate original.
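For anyone who wants to try the round-trip null test described above, here is a minimal sketch (my own illustration, not anyone's actual script; the 1-second 10 kHz test tone and the use of scipy's polyphase resampler are my assumptions):

```python
import numpy as np
from scipy.signal import resample_poly

fs_hi = 96_000
t = np.arange(fs_hi) / fs_hi                      # 1 second at 96 kHz
x = np.sin(2 * np.pi * 10_000 * t)                # 10 kHz tone, well below 22.05 kHz

# 96k -> 44.1k -> 96k; the ratio 44100/96000 reduces to 147/320
down = resample_poly(x, 147, 320)                 # now at 44.1 kHz
back = resample_poly(down, 320, 147)              # back at 96 kHz

# Compare the middle of the files, away from filter edge transients
n = min(len(x), len(back))
mid = slice(n // 4, 3 * n // 4)
residual = np.max(np.abs(x[mid] - back[mid])) / np.max(np.abs(x[mid]))
residual_db = 20 * np.log10(residual)
print(f"peak residual: {residual_db:.1f} dB")     # a deep null: in-band content survives
```

Any residual is set by the resampler's passband ripple, not by the rate conversion itself; a sharper filter drives the null deeper.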

My point is that I think it obscures more than it reveals to call the well-documented limits of human hearing "perceptual" in the same way that lossy codecs' compression algorithms are perceptual - and therefore it also obscures more than it reveals to call downsampling that perfectly preserves, bit for bit, the audible-range musical data, "lossy." By that logic, a 176.4k file created from a 352.8k original is lossy. Sure, there is a clear logic by which that claim can be made - but in order to employ that logic you have to stretch the term "lossy," in the context of sound reproduction for humans, to the point where it becomes meaningless (which is not what Amir is trying to do, but which is most certainly what many promoters of MQA have attempted and are continuing to attempt to do).

We can certainly debate the relative merits of various encoding and compression algorithms independently of questions of mathematical lossiness/losslessness, and I have absolutely no problem with doing so.

But to lump something as fundamental as frequency and sample rate into the same lossy bucket as perceptual encoding - to me that is a mixing of domains, and when it comes to the discussion of MQA, a mixing of goals and purposes as well. Amir says he can pass a blind test distinguishing 320k mp3 from lossless. He would never make any parallel claim that he could do so with two files that were bit-identical except for frequencies above 22kHz - nor, I think, would he or most others here be inclined to believe such a claim made by anyone else. Returning to my prior example, the difference file between the musical data in a PCM file and an mp3 file made from that PCM file will be audible. By contrast, the difference file between the data in a 96k PCM file and a 44.1k PCM file made from that 96k original will not be audible. That's a meaningful difference.

I think at some point this becomes a philosophical, perhaps even semantic, debate. But I think it is both practically and epistemologically improper to equate (implicitly or explicitly) downsampling and perceptual encoding under a simple heading of "lossy."
 
Last edited:

earlevel

Active Member
Joined
Nov 18, 2020
Messages
277
Likes
332
Yes, I have looked at this, but as discussed it seems like the triangle excludes some seemingly normal music, and as Amir pointed out, "may just be a marketing thing". In any case, it doesn't seem to be a warning about what won't work; it's more like a rough description of expectations for the following explanation of how MQA encodes musical material.

In any case, I'll try to find more time to understand it better, just because it's interesting. Though I'm really more interested in the validity of "deblurring" and claims of improved sound over lossless hi-res. Because, for me, the compression isn't enough to be compelling, and the space savings mean less and less over time. If I really need space and cost savings, AAC does more of it, and probably the only times that would matter at all is possibly in mobile phone and car applications, where it's damn hard to hear the difference. And at home I might as well stream lossless. I'm skeptical on the time resolution thing, but 100% willing to entertain it.

(you are ninja?)
 

JSmith

Major Contributor
Joined
Feb 8, 2021
Messages
1,438
Likes
2,616
Location
Algol Perseus
you are ninja?
Yep... I work for a company called Shinobi and my role is Ninja Rep. :cool:
But I think it is both practically and epistemologically improper to equate (implicitly or explicitly) downsampling and perceptual encoding under a simple heading of "lossy."
Agree.
Really making some progress here guys! Good to see!
Yes now the bickering and trolling has stopped, it is good to see discussion come back to a more technical level.



JSmith
 

voodooless

Major Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
2,933
Likes
4,477
Location
Netherlands
I'm skeptical on the time resolution thing, but 100% willing to entertain it.

That one is very obviously just marketing. Time resolution is not bound to sample rate, only bit depth, at least when looking at the same band-limited signal. So if they are reducing the effective bit depth, they are in fact reducing timing resolution. But yes, a higher sample rate can record a faster transient, however that also means it occupies a larger bandwidth, so will not be audible. But the time resolution between the samples is dependent on frequency and bit depth alone. I'm guessing as with other things, they've redefined the term to suit their agenda.
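FWIW, the "timing between the samples" point is easy to demonstrate. A toy sketch (my own, with arbitrary values: a 1 kHz tone at 48 kHz shifted by one tenth of a sample period; the shift is fully recoverable from the sample values):

```python
import numpy as np

fs, f, n = 48_000, 1_000, 4_800                 # 1 kHz tone, exactly 100 cycles
true_delay = 0.1 / fs                           # one tenth of a sample period
t = np.arange(n) / fs
a = np.sin(2 * np.pi * f * t)
b = np.sin(2 * np.pi * f * (t - true_delay))    # same tone, shifted between samples

# Recover the shift from the phase difference at the tone's FFT bin
k = round(f * n / fs)                           # bin 100, so no spectral leakage
phase = np.angle(np.fft.rfft(b)[k]) - np.angle(np.fft.rfft(a)[k])
est_delay = -phase / (2 * np.pi * f)
print(est_delay * fs)                           # ~0.1 samples
```

With quantization and dither the estimate gets noisier, but it is never limited to whole sample intervals.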
 

BrEpBrEpBrEpBrEp

Active Member
Joined
May 3, 2021
Messages
201
Likes
241
Haha, sorry about that. Same as the other one, but anyway just for fun, a couple of spectrograms with the frequency scale showing, and from Apple Music for a little higher top end. The opening ~10 seconds of the song (top frequency line is 20k):

View attachment 133229

At about 1:30 into the song, that basically-noise-plus-squeal sound:

View attachment 133231

Like I said, just a comment on musical expectations, I think we've killed it. (PS—I have to listen to this at so much lower a volume setting on my DAC than for typical tunes that it's ludicrous.)


Fair enough, thanks.

Now those are some killer samples, damn.
 

bennetng

Major Contributor
Joined
Nov 15, 2017
Messages
1,386
Likes
1,334
Just want to say I found one of the 2L DXD tracks highly fishy (2L-053):
2L-053_04_stereo-DXD.png

Note that red and cyan are only 60 dB apart. This track is one of the demo tracks with musical instruments generating harmonics beyond 30 kHz, and therefore the aliasing images can be clearly identified.

If you compare the spectrogram above with my illustrations in this thread:
https://www.audiosciencereview.com/forum/index.php?threads/digital-filter-game.23795/post-800873

The short filter has 80 dB suppression and the long one has 100 dB. It looks like this DXD track originated from an 88.2k source, was played through a DAC with a weak filter, and was recorded by another ADC at 352.8 kHz.
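To illustrate why images like that act as a fingerprint of the source rate, here's a toy sketch (my own; the 30 kHz harmonic and the unfiltered 4x zero-stuff are assumptions standing in for "an 88.2k source through a weak filter"):

```python
import numpy as np

fs_src, up = 88_200, 4                     # 88.2k source, taken up 4x to 352.8k
f0 = 30_000                                # a harmonic near 30 kHz, as in the track
n = np.arange(8192)
x = np.sin(2 * np.pi * f0 * n / fs_src)

y = np.zeros(len(x) * up)                  # zero-stuff with no anti-image filter:
y[::up] = x                                # the baseband spectrum repeats around 88.2k

spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), d=1 / (fs_src * up))
peaks = freqs[spec > 0.5 * spec.max()]     # bins around each spectral peak
print(np.round(peaks / 1000, 1))           # clusters near 30, 58.2, 118.2, 146.4 kHz
```

The images land at k·88.2 kHz ± 30 kHz; seeing energy mirrored around 44.1 kHz in a "352.8k" file is exactly what betrays the lower-rate ancestor.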
 

voodooless

Major Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
2,933
Likes
4,477
Location
Netherlands
The short filter has 80dB suppression and the long one has 100dB. Looks like this DXD track is originated from an 88.2k source, played through a DAC with a weak filter, and recorded by another ADC at 352.8kHz.

Good luck deblurring that :facepalm: Obviously, all of this could have been done in the digital domain as well.

There is actually a second aliasing band just above 70 kHz as well. I guess we should call this artistic freedom?
 

earlevel

Active Member
Joined
Nov 18, 2020
Messages
277
Likes
332
That one is very obviously just marketing. Time resolution is not bound to sample rate, only bit depth, at least when looking at the same band-limited signal. So if they are reducing the effective bit depth, they are in fact reducing timing resolution. But yes, a higher sample rate can record a faster transient, however that also means it occupies a larger bandwidth, so will not be audible. But the time resolution between the samples is dependent on frequency and bit depth alone. I'm guessing as with other things, they've redefined the term to suit their agenda.
Well, I have to agree with you. I even wrote something about it a couple of months ago (I contend that it's not even limited by bit depth, when dithered). But I'll grant that they might have a different definition (probably also marketing-speak), so I'm happy to see exactly how they address the claim. I've seen others defining it on their behalf, where it's clearly bogus, though.
 

voodooless

Major Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
2,933
Likes
4,477
Location
Netherlands
Yes, I have looked at this, but as discussed it seems like the triangle excludes some seemingly normal music, and as Amir pointed out, "may just be a marketing thing".

Ahem.. credit where credit is due, please ;) I think I pointed this out first (at least in this thread).. took you guys long enough to figure this one out.;). And if I wasn't even the first, then Amir definitely wasn't. In that case please still credit the correct person (and I will retract my "claim") :cool:
 

adamd

Member
Joined
Sep 24, 2018
Messages
35
Likes
39
Just want to say I found one of the 2L DXD tracks highly fishy (2L-053):

Thanks. By way of parallel, I would recommend that interested parties read Jim Lesurf's recent article on undecoded MQA carefully (bearing in mind a degree of ironic understatement and a working knowledge of The Hitchhiker's Guide to the Galaxy). A concern has been expressed about the effect of filtering down to the Red Book-compatible file, having regard to the need to make the process reversible. Do the 2L examples have something about them which might not make them a good test of this?

I’m not sure that a quick reading of the article is likely to allow a reader to see what the author’s eyebrows are doing.
 

voodooless

Major Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
2,933
Likes
4,477
Location
Netherlands
Thanks, by way of parallel I would recommend that interested parties read Jim Lesurf's recent article on undecoded MQA carefully.

Have my towel and bathrobe ready.. but can't find the article? Google search only showed a reference to this thread.
 

pozz

Data Ordinator
Forum Donor
Editor
Joined
May 21, 2019
Messages
3,993
Likes
6,405
@Raindog123 @tmtomh I had the use of language in mind. AFAIK, MQA claims that the unfolding process makes their file an efficient container for several sample rates and bit depths. So the comparison has to be done across the folded and unfolded versions with several PCM files, which means it will always show a difference in the MQA file due to the changes in the sample structure. Characterizing that difference requires assigning some kind of framework, defined either as preserving information or by some other factor.

MQA understood this very well. They moved the criterion from preserving all the information to preserving only the musically useful information, from just giving you a file in their container to certifying its contents, from creating a decoder for playback to defining how the hardware should function. All to ensure as little variation in playback as possible and to justify use of the word lossless. Or to claim that the goods are better than lossless.

It repeats the old topic of fidelity, but instead of discussing an event, its recording, and its reproduction, they focus on the file, its transmission, and its playback. Instead of using the positive term fidelity they use the negative term loss. So instead of pursuing the highest fidelity they are attempting to assure the least loss. It's like an auditing exercise. Kind of a commercialization of the traditional audiophile mindset.

I think there's a strong risk, looking at how this thread's been going, that we'll end up missing the assumptions and working principles of MQA (edit: regardless of what they are) and be unable to adequately characterize their product. It's pretty clear that the video that started this thread is lacking in that regard. With gear testing you often know the function of the DUT ahead of time (unless we're dealing with tweaks), but MQA is new territory for us.
 
Last edited:

Mulder

Active Member
Joined
Sep 2, 2020
Messages
261
Likes
316
Location
Gothenburg, Sweden
Highresaudio is a German online store where you can buy, among other things, MQA files for download. I found the following text on their FAQ which I found interesting.

"First of all, MQA is a lossy codec! The MQA albums that we currently offer are all „MQA-Authenticated“. That means, that we know the origin of the studio master and all files have passed our strict and professional quality control process in order to guaranteed YOU genuine and native Studio Masters.

These MQA albums will remain in our online store. All others that we can’t check and verify its source, we will not offer.

As soon as we have an MQA encoder and quality control software to analysis the MQA encodes, we will offer MQA again. This is something that we are very peculiar and exceptional about - in your interest. Especially if you pay hard earned money.

We are in a very sensible and delicate niche music market. Over the past seven years we have established a very good market position, created a new business for the music industry and artists and customers that cherish the best audible sound reproduction. We moved the music and HiFi-industry into a new business domain, with very little support from anyone. Our USP is that we guarantee (and this is not just said and done) your customers, nothing but the true, native and original source. We can analysis and verify any other audio codec (with MusicScope even DSD and DXD). For MQA is nothing available to assure that the customer is getting our „promise“. We are in the first and front row, selling music and technology to a new and established customer, that truly expects nothing but the real thing!"
 

rkbates

Active Member
Forum Donor
Joined
Jul 24, 2020
Messages
116
Likes
125
Location
Down Under
Generally I’m in full agreement with you and have defended most of your points (until bailing out due to thread SNR dropping through the floor), but I have a very small quibble here.

When I was a Tidal customer, I used my DAC’s streaming app (my DAC is MQA enabled and its streaming app can be given your login details, and then you can instruct it to pull an audio stream directly from Tidal or Qobuz) and I got the ”total solution”, as you call it, for MQA decoding without Roon or the Tidal app.

I’m flabbergasted that MQA is called “broken” (or worse) in this thread. MQA sounds just fine. Anyone thinking that MQA sounds bad should listen to Blue Maqams by Anouar Brahem/Dave Holland/DeJohnette/Django Bates in MQA, on Tidal, and explain exactly how it’s “broken”. It’s not. It sounds stunning.
At last - I knew if I stuck with this thread something useful would fall out - Blue Maqams is definitely worth a listen
 

danadam

Addicted to Fun and Learning
Joined
Jan 20, 2017
Messages
525
Likes
750
In response, our knowledgeable and currently thread-banned friend @mansr told me I was mistaken because my method of trying to null-compare the 96k original with the 44.1k downsampled version couldn't work. Instead, he explained, I should downsample the 96k to 44.1k, then resample the 44.1k back to 96k and compare the two 96k files. When I did so, they nulled out 100% for all frequencies up to 22.05kHz, indicating that the audible-range information from the original 96k file could be perfectly reconstructed.
Umm... it might have been me :) https://www.audiosciencereview.com/...ersies-concerns-and-cautions.2407/post-456146
 

awdeeoh

Member
Joined
Feb 24, 2020
Messages
68
Likes
28
2. Just out of technical interest, is there any way to tell if, let's say, Qobuz started to stream raw MQA in place of CD quality if you don't have an MQA DAC or Roon?

You can tell by just how it sounds.
 