
MQA Deep Dive - I published music on Tidal to test MQA


tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,763
Likes
8,122
I could see some clipping in the Eagles' Hotel California album -- MQA 192. I didn't unpack it, but I doubt that would change anything. Either way, I don't believe the MQA encoder avoids clipping.

If memory serves, one of @GoldenOne's findings in the video was that MQA does not avoid clipping.
 

GaryH

Major Contributor
Joined
May 12, 2021
Messages
1,351
Likes
1,857
Bob Stuart's "Musical Triangle" page says:

[Attachment 133112: the MQA 'musical triangle' spectral diagram]

It is clearly stated: all the salient music-related information exists inside the orange triangle, which MQA protects very strictly.
The upper orange line has a falling slope of -2 dB/kHz.
So, for example, the peak level at 24 kHz must not exceed -48 dB.

The paper "A Hierarchical Approach for Audio Capture, Archive, and Distribution" by Bob Stuart shows a similar picture:

[Attachment 133113: peak-level plot from the paper]

Here the peak levels are represented by an orange line with a slope of -1 dB/kHz. According to it, the maximum level at 24 kHz would be -30 dB, which clearly exceeds the allowed MQA level.

So does natural music already violate the MQA conditions?

Ah, yet more BS, by BS, for BSers, coming from another of his award-winning peer-reviewed papers. He is truly s(p)oiling us.
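For what it's worth, the quoted numbers are easy to sanity-check. A minimal Python sketch, assuming the limit line passes through 0 dB at 0 kHz (which the quoted -48 dB at 24 kHz implies):

[code]
# Sketch: the MQA 'musical triangle' peak limit under the quoted -2 dB/kHz
# slope, assuming the line passes through 0 dB at 0 kHz.
def mqa_limit_db(f_khz):
    """Maximum allowed peak level in dB at f_khz under the -2 dB/kHz slope."""
    return -2.0 * f_khz

for f_khz in (12, 18, 24):
    print(f"{f_khz} kHz: limit {mqa_limit_db(f_khz):.0f} dB")

# Peak music level at 24 kHz read off the Stuart paper figure: -30 dB,
# i.e. 18 dB above what the triangle allows.
print(f"excess at 24 kHz: {-30 - mqa_limit_db(24):.0f} dB")
[/code]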
 

pjug

Major Contributor
Forum Donor
Joined
Feb 2, 2019
Messages
1,776
Likes
1,562
This actually looks pretty good between DXD and MQA. The average spectrum isn't bad at all, and the two seem to start to diverge around 24 kHz or so.

For fun and amusement, try the FFT Scrubber plot in DeltaWave after you match the MQA and DXD/17 files. Scrubber uses short-span (400 ms) FFTs and lets you move around the file sequentially, as fast as you want, or jump to random points in it, so you are more likely to catch short-lived or occasional differences that might be hidden by the average spectrum view.

Also, the spectrum-of-delta and delta-of-spectra plots make it easier to see the differences between the two files.
Yes, I agree the MQA looks good. I'm also surprised that the 17/96 gave this result, after downsampling the DXD from 352.8 kHz to 96 kHz and aligning in time. The MQA had to be time-aligned too, so the software is really good.
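For anyone without DeltaWave handy, the scrubbing idea is easy to approximate. A minimal sketch, assuming two already time-aligned 96 kHz files (the file names and the mono mixdown are simplifications):

[code]
# Minimal sketch of the 'FFT scrubber' idea: short-span FFTs on two
# already time-aligned files, so brief differences aren't averaged away.
import numpy as np
import soundfile as sf

a, rate = sf.read("dxd_downsampled_96k.flac")   # reference (names are examples)
b, _    = sf.read("mqa_decoded_96k.flac")       # comparison
if a.ndim > 1:                                   # mix to mono for simplicity
    a, b = a.mean(axis=1), b.mean(axis=1)

span = int(0.4 * rate)                           # 400 ms window, as in DeltaWave
win = np.hanning(span)
for start in range(0, min(len(a), len(b)) - span, span):
    fa = np.fft.rfft(a[start:start + span] * win)
    fb = np.fft.rfft(b[start:start + span] * win)
    # dB difference of the magnitude spectra for this slice
    diff = 20 * np.log10((np.abs(fa) + 1e-12) / (np.abs(fb) + 1e-12))
    print(f"t={start/rate:6.1f}s  max |delta| = {np.max(np.abs(diff)):5.1f} dB")
[/code]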
 

pjug

Major Contributor
Forum Donor
Joined
Feb 2, 2019
Messages
1,776
Likes
1,562
While fully acknowledging the authority of @amirm and the mods to make such decisions, I would respectfully echo this request.
He would be an asset in this discussion. I would really like to see his MQA files run through DeltaWave. If you look at the curves he showed, the MQA and DXD spectra look like they might track a little better than mine do. MansR might be able to show I gave MQA a bad shake!
 

DimitryZ

Addicted to Fun and Learning
Forum Donor
Joined
May 30, 2021
Messages
667
Likes
342
Location
Waltham, MA, USA
He would be an asset in this discussion. I would really like to see his MQA files run through DeltaWave. If you look at the curves he showed, the MQA and DXD spectra look like they might track a little better than mine do. MansR might be able to show I gave MQA a bad shake!
Definitely. However, he has an unhelpful habit of accusing people of being paid MQA shills. Other than that, he is great - a very knowledgeable and experienced engineer.
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
we should all remain clear that downsampling to CD resolution is not lossy, because the 44.1kHz sample rate has a Nyquist frequency that's above the limit of human hearing.
Isn't this a good example of the exact issue we're having with the word "lossy"? Downsampling is mathematically lossy with reference to the available information. What you wrote implies that downsampling, despite discarding information, is not lossy because it satisfies a perceptual criterion.

It's mixing domains, goals, and purposes. MQA did the same when they called "lossless" their ability to deliver a digital file with known provenance to a streaming service or download site, and then through known hardware.
 

samsa

Addicted to Fun and Learning
Joined
Mar 31, 2020
Messages
506
Likes
589
I have always considered the PCM format to be highly wasteful. As a person who has spent decades optimizing technology, it seems like such a poor solution. Going from 44.1 kHz to 88.2 kHz doubles the data rate, yet there is hardly any musical information to be gained from that doubling.

PCM is wasteful. That's what lossless (e.g. FLAC) compression is for. Doubling the sample rate doesn't double the size of a FLAC-compressed file.
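A minimal sketch of that point, assuming python-soundfile with FLAC support: the same 20 kHz band-limited content is written at 44.1 and 88.2 kHz, and the file sizes compared.

[code]
# Sketch: doubling the sample rate of the *same* band-limited signal does not
# double the FLAC file size, since the added octave carries no new information.
import os
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt, resample_poly

rng = np.random.default_rng(0)
rate_hi = 88200
x_hi = rng.standard_normal(10 * rate_hi)         # 10 s of noise as 'music'
sos = butter(8, 20000, btype="low", fs=rate_hi, output="sos")
x_hi = sosfiltfilt(sos, x_hi)                    # band-limit to 20 kHz
x_hi *= 0.5 / np.max(np.abs(x_hi))               # normalize to -6 dBFS peak
x_lo = resample_poly(x_hi, 1, 2)                 # identical content at 44.1 kHz

for name, x, rate in (("44.1 kHz", x_lo, 44100), ("88.2 kHz", x_hi, 88200)):
    fn = f"band_limited_{rate}.flac"
    sf.write(fn, x, rate, subtype="PCM_16")
    print(name, os.path.getsize(fn), "bytes")
[/code]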
 

DimitryZ

Addicted to Fun and Learning
Forum Donor
Joined
May 30, 2021
Messages
667
Likes
342
Location
Waltham, MA, USA
PCM is wasteful. That's what lossless (e.g. FLAC) compression is for. Doubling the sample rate doesn't double the size of a FLAC-compressed file.
FLAC is excellent at moderate resolutions. However, a modern DXD master (which some audiophiles increasingly demand) is heavily noise-shaped, with a very high, near-full-level noise mountain from 50 kHz to 150 kHz (@amirm showed this in his hi-res explainer video). And FLAC can't compress it much, because it's random noise.

MQA is certainly not perfect, but it makes some reasonable choices about what to encode and what to leave behind. As higher-resolution files are both ultrasonically noisy and poorly compressible, I think discarding some data is not an unreasonable approach.
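To illustrate the compressibility point, a minimal sketch comparing FLAC's handling of a clean tone vs. the same tone with added broadband noise (the ~-60 dBFS white noise is just a stand-in for real shaping noise):

[code]
# Sketch of why noise-shaped ultrasonic content hurts FLAC: uncorrelated noise
# is unpredictable, so the LPC stage can't model it away.
import os
import numpy as np
import soundfile as sf

rate = 96000
t = np.arange(10 * rate) / rate
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)            # 1 kHz tone, 10 s
rng = np.random.default_rng(0)
noisy = tone + 1e-3 * rng.standard_normal(len(t))    # white noise at ~ -60 dBFS

for name, x in (("clean", tone), ("noisy", noisy)):
    fn = f"{name}.flac"
    sf.write(fn, x, rate, subtype="PCM_24")
    print(name, os.path.getsize(fn), "bytes")
[/code]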
 

tomchris

Active Member
Joined
May 14, 2021
Messages
210
Likes
415
Location
Denmark
As long as we are being pedantic :), macOS was built on the Mach OS variant of Unix, not BSD.
macOS certainly has BSD components - the XNU kernel is an odd composition of the Mach/NeXTSTEP micro-kernel and the FreeBSD/NetBSD network stack, filesystems, user-space utilities, etc. You can't do very much with just a (micro-)kernel...

Well, as long as we are being pedantic: macOS, previously named Mac OS X, is based on OPENSTEP, which is based on NextStep (later stylized NeXTstep, NeXTStep, NeXTSTEP, and ending with NEXTSTEP), which in turn is based on a UNIX-derived BSD using the monolithic version of the Mach kernel (2.5). :p

I miss my NeXTcube system.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,639
Likes
240,739
Location
Seattle Area
A note on MP3 and other lossy codecs: nulling techniques are not a proper way to evaluate them, and certainly not across the whole file. Artifacts with lossy codecs vary hugely from frame to frame. And what is taken out is immaterial if it is psychoacoustically correct.
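For illustration, a minimal sketch of a per-frame comparison (file names are placeholders; the decoded file is assumed time-aligned), which shows the frame-to-frame swing that a whole-file null hides, while still saying nothing about audibility:

[code]
# Sketch: per-frame residual statistics between an original and a lossy decode.
# A whole-file null is one number; per-frame stats show how much the residual
# swings. Neither says anything about audibility, which is masking-dependent.
import numpy as np
import soundfile as sf

x, rate = sf.read("original.wav")              # example names, time-aligned
y, _    = sf.read("mp3_decoded.wav")
if x.ndim > 1:
    x, y = x.mean(axis=1), y.mean(axis=1)
frame = 1152                                   # MP3 Layer III frame length
n = (min(len(x), len(y)) // frame) * frame
resid = (x[:n] - y[:n]).reshape(-1, frame)
rms_db = 20 * np.log10(np.sqrt((resid ** 2).mean(axis=1)) + 1e-12)
print(f"per-frame residual RMS: min {rms_db.min():.1f} dBFS, "
      f"median {np.median(rms_db):.1f} dBFS, max {rms_db.max():.1f} dBFS")
[/code]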
 

Music1969

Major Contributor
Joined
Feb 19, 2018
Messages
4,673
Likes
2,848
MQA is THE solution to the loudness war ;).

No, THE solution to the loudness war is ALL streaming services enabling loudness normalization AND not letting users change or disable it...

No need for MQA at all ;-)
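And the mechanism is simple. A minimal sketch using the pyloudnorm library (the -14 LUFS target and file names are just examples):

[code]
# Sketch: ITU-R BS.1770 loudness normalization, the mechanism streaming
# services use to defuse the loudness war. -14 LUFS is an example target.
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("track.wav")
meter = pyln.Meter(rate)                          # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)        # measured integrated LUFS
normalized = pyln.normalize.loudness(data, loudness, -14.0)
# (a real service would also guard against clipping when gain is positive)
sf.write("track_normalized.wav", normalized, rate)
print(f"measured {loudness:.1f} LUFS, normalized to -14.0 LUFS")
[/code]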
 

ebslo

Senior Member
Forum Donor
Joined
Jan 27, 2021
Messages
324
Likes
413
PCM is wasteful. That's what lossless (e.g. FLAC) compression is for. Doubling the sample rate doesn't double the size of a FLAC-compressed file.
It doesn't double, no, but the increase in size is disproportionate to the extra signal. Some results were posted earlier that showed FLAC compressing the extra bandwidth of extended-sample-rate 24-bit content more poorly than I expected, because of the relatively high number of low-order bits containing uncorrelated noise. A suggestion was made to simply reduce the bit depth to something more representative of the noise floor and compare the resultant quality/size with MQA. That has been explored a bit further in the last page or two, with interesting and encouraging results.

edit: clarified wording
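For anyone who wants to try that bit-depth experiment, a minimal sketch (TPDF dither at +-1 LSB; the file name and the bit depths are placeholders - 'keep_bits' should sit near the recording's real noise floor):

[code]
# Sketch of the bit-depth-reduction idea: requantize 24-bit content to fewer
# bits with TPDF dither, then compare FLAC sizes. FLAC's wasted-bits detection
# exploits the zeroed low-order bits.
import os
import numpy as np
import soundfile as sf

x, rate = sf.read("hires_track.flac")          # float samples in [-1, 1)
rng = np.random.default_rng(0)

for keep_bits in (24, 20, 16):
    q = 2.0 ** (1 - keep_bits)                 # LSB step at this bit depth
    tpdf = (rng.random(x.shape) - rng.random(x.shape)) * q   # +-1 LSB dither
    y = np.clip(np.round((x + tpdf) / q) * q, -1.0, 1.0 - q)
    fn = f"requant_{keep_bits}bit.flac"
    sf.write(fn, y, rate, subtype="PCM_24")
    print(keep_bits, "bits:", os.path.getsize(fn), "bytes")
[/code]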
 

Raindog123

Major Contributor
Joined
Oct 23, 2020
Messages
1,599
Likes
3,555
Location
Melbourne, FL, USA
Isn't this a good example of the exact issue we're having with the word "lossy"? Downsampling is mathematically lossy with reference to the available information. What you wrote implies that downsampling, despite discarding information, is not lossy because it satisfies a perceptual criterion.

It's mixing domains, goals, and purposes. MQA did the same when they called "lossless" their ability to deliver a digital file with known provenance to a streaming service or download site, and then through known hardware.


It is quite simple, actually… In communication theory, the lossiness of a codec is defined by comparing the data at the input of the codec's encoder with the data at the output of its decoder. Additional up- or downsampling, while entirely possible, has little to do with the codec itself and its lossiness; it is simply a separate operation within the end-to-end data-processing flow (one that can itself be either lossy or lossless)…

So, to assess the MQA codec (the algorithm design and implementation), one has to look at the data entering the MQA encoder: if it is 24/96, it has to be compared with a produced output at the same 24/96; if the input to the encoder was 24/192, the output to compare against has to be 24/192…. The same goes for CD or hi-res PCM waveforms. Again, the "codec function" is responsible for encoding and decoding data, not for changing its format! And if there are additional up- or downsampling steps - either to convert the master source, or to adjust the format to the output DAC's operational bit depth or sample rate - so be it, but they do not count toward the "codec lossiness" analysis.
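That definition is easy to turn into a test. A minimal sketch using FLAC as the stand-in codec (MQA's encoder isn't publicly available, so this only illustrates the round-trip test itself):

[code]
# Sketch: lossless means decode(encode(x)) == x bit-exactly at the same format.
# FLAC round-trip shown; any resampling before or after is a separate step and
# is not counted toward the codec's lossiness.
import numpy as np
import soundfile as sf

rng = np.random.default_rng(1)
x = rng.integers(-2**23, 2**23, size=96000, dtype=np.int32) * 256  # 24-bit data
sf.write("roundtrip.flac", x, 96000, subtype="PCM_24")
y, _ = sf.read("roundtrip.flac", dtype="int32")
print("bit-exact:", np.array_equal(x, y))                          # True for FLAC
[/code]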
 

DimitryZ

Addicted to Fun and Learning
Forum Donor
Joined
May 30, 2021
Messages
667
Likes
342
Location
Waltham, MA, USA
It is quite simple, actually… In communication theory, the lossiness of a codec is defined by comparing the data at the input of the codec's encoder with the data at the output of its decoder. Additional up- or downsampling, while entirely possible, has little to do with the codec itself and its lossiness; it is simply a separate operation within the end-to-end data-processing flow (one that can itself be either lossy or lossless)…

So, to assess the MQA codec (the algorithm design and implementation), one has to look at the data entering the MQA encoder: if it is 24/96, it has to be compared with a produced output at the same 24/96; if the input to the encoder was 24/192, the output to compare against has to be 24/192…. The same goes for CD or hi-res PCM waveforms. Again, the "codec function" is responsible for encoding and decoding data, not for changing its format! And if there are additional up- or downsampling steps - either to convert the master source, or to adjust the format to the output DAC's operational bit depth or sample rate - so be it, but they do not count toward the "codec lossiness" analysis.
I think one needs to keep in mind that MQA changes the input subtly by design, so even if its encode/fold/transmit/unfold/decode/render pipeline were perfect, one would still detect changes. It should never null out perfectly against a DXD master.

MQA defies easy characterization.... neither fish nor fowl... Their paradigm, lofty as it sounds, is analogue to analogue.

Perhaps, if the MQA encoding/testing project becomes a reality, we should ask the mastering engineer to make two versions - one with "deblurring" and one without (assuming this is under the operator's control). The latter should be a good vehicle for learning about the codec's properties.

That version *should* null out well against the DXD master up to ~40 kHz or so. We will likely find that it doesn't do so perfectly (it is not mathematically lossless), but beyond 120 dB or so it just doesn't matter - our equipment is not good enough to reproduce that minuscule a difference, so the process becomes definitionally lossless to the listener.

The practical threshold for this effectively lossless behavior will vary from system to system, but in most cases it will be substantially lower, as our systems' SNRs are often not that great. The number above is for a system like mine, with just about the lowest-distortion, lowest-noise amplification (Emotiva dual stacked monos) and lowest-distortion speakers (Eminent Technology LFT-8Bs).
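If we get there, a band-limited null is easy to compute. A minimal sketch, assuming time- and level-aligned 192 kHz files (the names are placeholders):

[code]
# Sketch: null depth between two aligned files, band-limited to 40 kHz so the
# comparison ignores content above the band of interest.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

x, rate = sf.read("dxd_master_192k.flac")     # example names
y, _    = sf.read("mqa_decoded_192k.flac")
if x.ndim > 1:
    x, y = x.mean(axis=1), y.mean(axis=1)
n = min(len(x), len(y))
sos = butter(8, 40000, btype="low", fs=rate, output="sos")
xf, yf = sosfiltfilt(sos, x[:n]), sosfiltfilt(sos, y[:n])
# residual power relative to the reference, in dB (more negative = deeper null)
null_db = 10 * np.log10(np.mean((xf - yf) ** 2) / np.mean(xf ** 2))
print(f"null depth below 40 kHz: {null_db:.1f} dB")
[/code]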
 

Newman

Major Contributor
Joined
Jan 6, 2017
Messages
3,520
Likes
4,358
A note on MP3 and other lossy codecs: nulling techniques are not a proper way to evaluate them, and certainly not across the whole file. Artifacts with lossy codecs vary hugely from frame to frame. And what is taken out is immaterial if it is psychoacoustically correct.

And THAT is what I mean by a “lossy masking algorithm”. Cheers
 

Newman

Major Contributor
Joined
Jan 6, 2017
Messages
3,520
Likes
4,358
It's a matter of context. If MQA is marketed as "lossless" then it is fair to evaluate it as such; since it obviously fails these tests, the term "broken" is accurate within that context. They have since backed off from that claim somewhat, at least to the point of plausible deniability, so I will, going forward, hold it to standards appropriate for a lossy codec.

False marketing breaks nothing except promises. The actual products may work to a high standard; they do not break when the marketing department lies.

It is disingenuous to suggest otherwise.
 

Newman

Major Contributor
Joined
Jan 6, 2017
Messages
3,520
Likes
4,358
This is an interesting point.
How do we guard against false negatives that might come about because the person being tested has an expectation bias against hearing a difference?

Avoid audiophiles. :)
 

Raindog123

Major Contributor
Joined
Oct 23, 2020
Messages
1,599
Likes
3,555
Location
Melbourne, FL, USA
All, for what it's worth... Here are a couple of messages I sent to Amir and the mods last Friday. @amirm responded positively, but I'll let him speak for himself here and now... Even if we don't go with a new, 'refreshed' thread, I think answering the questions at the very bottom (after some constructive discussion, and after identifying additional ones), e.g. in the form of an FAQ, and generating our - ASR's - 'position statement' (based on the proposed quantitative, points-based pros-and-cons trade-off) would be very helpful...

Yet another MQA thread...
"Morning Gents,
As we all seem to be rather [beyond-]tired of all this MQA-arguing crap... yet do want to get to the bottom truth of it (whatever that is)... How about we set up a "scientific" MQA thread. That will be announced up-front to be 'free from non-technical' context - personal attacks, emotions, car analogies, repetitions... - that would be removed without justification or discussion.

The thread will be curated by an open-minded technical person. Who can have a pre-formed opinion, yet above all is (a) open-minded and (b) familiar with and loyal to the scientific/engineering 'truth finding' process/methodologies.

The goal will be define a clear [living?] list of 'burning questions' and attempt to objectively answer those. With the ultimate goal being to quantitatively formulate [an opinion on] whether/to what degree the MQA is of use to end-users. Both technically and socio-economically.

You might say that we do not need yet another MQA thread, and it might be true. But it also can be a chance to put the discussion on the right 'scientific' tracks, and to manage it that way (something as I understand you too have tried to achieve recently) What say you?"

...

"...we could start by discussing and/or finalizing positions on (1) what is bit perfect (an easy one), (2) what is lossless (mathematical vs practical), (3) ultrasonics and 96/192’s square vs MQA’s triangle (vs 44/48’s no ultrasonics) spectral profile, (4) filters and temporal deblurring (and how MQA-exclusive are those), (5) comparison tests to date (measurements and controlled listening). Plus whatever the community would recommend to add to the list.

At the end, we can even try to assign relative importance weights and score MQA vs PCM (eg 16/44 and 24/96) in each category, and get aggregate scores - in attempt to answer the ‘who is better’ question…. Who knows, it might even turn into a neat AES paper. :)

Just need to figure out logistics and commitments…"
 

JSmith

Master Contributor
Joined
Feb 8, 2021
Messages
5,217
Likes
13,451
Location
Algol Perseus
Surely there is some objective way to say at what point MQA fails.
https://bobtalks.co.uk/a-deeper-look/appendix-3-the-musical-triangle/#

[Image: the 'musical triangle' figure from the linked page]



JSmith
 