
MQA Deep Dive - I published music on Tidal to test MQA

Status: Not open for further replies.

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,962
Likes
2,629
Location
Massachusetts
Not sure why you expected that particular MQA track, when streamed via the TIDAL HiFi quality connection (16-bit/44.1kHz only), to be bit-perfect with the original MQA track (as streamed via the Masters quality connection). The original MQA track (undecoded) is hi-res 24-bit/44.1kHz, with an original MQA sample rate of 88.2kHz!

The TIDAL HiFi quality version could still be sourced from the original MQA track, though not bit-perfect but truncated to 16 bits, and very likely still recognised as MQA when played to a full decoding MQA DAC.
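For a sense of scale, a rough Python sketch with synthetic samples (not the actual track data) of what plain 24-to-16-bit truncation leaves behind in a null test:

Code:
# Sketch: truncating 24-bit samples to 16 bits leaves a small residual, so a
# null test between the two versions cannot come out perfectly silent.
import numpy as np

rng = np.random.default_rng(1)
x24 = rng.integers(-2**23, 2**23, size=1000, dtype=np.int64)  # fake 24-bit PCM
x16 = (x24 >> 8) << 8                 # keep only the top 16 bits (truncation)
residual = x24 - x16
peak_db = 20 * np.log10(residual.max() / 2**23)
print(f"peak truncation residual: {peak_db:.1f} dBFS")        # around -90 dBFS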

Agreed, access to the original HD LPCM track is required to be certain.
If there is no hi-res audio master, then it is reasonable to assume that the differences are artificial; still, you need the Redbook master to know for sure.

Will MQA be cooperating in such analysis?

- Rich
 

Raindog123

Major Contributor
Joined
Oct 23, 2020
Messages
1,599
Likes
3,555
Location
Melbourne, FL, USA
The resolution of each point is just dependent on the FFT bin size I'd set it to, which I've currently got at 65536.

Thanks. So, to clarify, it first calculates two individual FFT spectra with a sliding window of 65536 samples (just as an FFT would) of the two compared files (in dBFS, for each respective file), and then calculates & plots the delta for each point... Correct? And does it apply any decoding to the input file (or does it rely on some pre-processing to do it - e.g., the core unfold for MQA)?
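For reference, a rough Python sketch of that kind of delta-spectrum comparison (illustrative file names; assumes two time-aligned mono integer-PCM WAV files of equal length, and is not the exact tool used for the plots in this thread):

Code:
# Sketch: average spectrum of each file over consecutive 65536-sample Hann
# windows, in approximate dBFS, then the per-bin delta between the two.
import numpy as np
from scipy.io import wavfile

N = 65536  # FFT window length, as discussed above

def to_float(x):
    # Scale integer PCM to -1.0..1.0 so the levels come out near dBFS
    return x.astype(np.float64) / np.iinfo(x.dtype).max

def avg_spectrum_db(x, n=N):
    win = np.hanning(n)
    frames = [x[i:i + n] * win for i in range(0, len(x) - n + 1, n)]
    mag = np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)
    return 20 * np.log10(mag / (np.sum(win) / 2) + 1e-12)

fs_a, a = wavfile.read("tidal_hifi.wav")     # illustrative file names
fs_b, b = wavfile.read("tidal_master.wav")
assert fs_a == fs_b and len(a) == len(b)

delta = avg_spectrum_db(to_float(a)) - avg_spectrum_db(to_float(b))
freqs = np.fft.rfftfreq(N, d=1.0 / fs_a)
for f, d in zip(freqs[::1024], delta[::1024]):
    print(f"{f:9.1f} Hz  {d:+7.2f} dB")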
 
Last edited:

muslhead

Major Contributor
Forum Donor
Joined
May 28, 2020
Messages
1,572
Likes
1,787
Come on folks, it's funny right?
If you don't think so, then you might need a mirth booster shot :p:)

There have been some great laughs on this thread.

- Rich
do you include mieswall's responses in this list?
 

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,962
Likes
2,629
Location
Massachusetts
do you include mieswall's responses in this list?

Link please?
Edit: never mind, I get it.

I think it was @mansr who posted this one, which was also hilarious (though not directly OT):
Austrailian.jpg


- Rich
 
Last edited:

ebslo

Senior Member
Forum Donor
Joined
Jan 27, 2021
Messages
324
Likes
413
Thanks, I think that is what @GoldenOne has done.
Clearly, this is not representative of music, but DACs, for example, must be tested with such files to make sure they are handled without incident.
Is this reasonable?

- Rich
TLDR - The test is valid, but the correct result may be unexpected without some understanding of the mathematical model behind PCM. A PCM "impulse" represents a normalized sinc function, not an impulse. Similar math applies to the edges of a square wave.

Impulses and square waves are mathematical constructs that exist in neither the analog domain nor in PCM encoding. They can be approximated by both up to the limits of the hardware (analog) or sample rate (PCM).

The introduction to the Wikipedia article on the Nyquist-Shannon sampling theorem gives a great explanation of the true meaning of PCM samples. The sampling theorem is the mathematical basis for PCM - basically the mathematical model showing that sampling works, and the constraints under which it works. In this model, each sample represents a normalized sinc function, and the sum of the normalized sinc functions represented by all the samples gives the reconstruction. So when we see "pre-ringing" or "time smearing" in an impulse (or square wave) response of a reconstruction filter, it is not a deficiency or artifact of the filter; it is the correct reconstruction according to the mathematical model that defines PCM encoding.
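A small numerical illustration of that model (my own sketch, not tied to any particular DAC or filter): reconstructing a single-sample PCM "impulse" as a sum of normalized sinc functions produces exactly the symmetric ringing seen in filter plots.

Code:
# Sketch: Whittaker-Shannon reconstruction of a one-sample PCM "impulse".
# The symmetric ringing either side of the peak is the sinc function itself,
# not a defect added by the reconstruction filter.
import numpy as np

fs = 44100.0                         # sample rate
n = np.arange(-16, 17)               # sample indices around the impulse
x = np.zeros_like(n, dtype=float)
x[n == 0] = 1.0                      # PCM "impulse": a single full-scale sample

t = np.linspace(-4 / fs, 4 / fs, 81)     # fine time grid between the samples
# y(t) = sum_k x[k] * sinc((t - k/fs) * fs); np.sinc is the normalized sinc
y = np.array([np.sum(x * np.sinc((ti - n / fs) * fs)) for ti in t])

for ti, yi in zip(t[::4], y[::4]):
    print(f"{ti * 1e6:+8.2f} us  {yi:+.4f}")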

edit: spelling
 
Last edited:

sandymc

Member
Joined
Feb 17, 2021
Messages
98
Likes
230
So I really do not know what Tidal is doing at the moment.

After their recent announcement that they'd be moving MQA to a new tier, and offering an actual price tier for just hifi/lossless, I decided to check if the files being served had changed at all. Or if it was still just the same file with MQA flagging removed.

Firstly I checked Sam Smith's "Too Good at Goodbyes", and was pleasantly surprised to see that there is no longer a bit-perfect match between the Master and HiFi tiers - a 100dB null now.

However, the fact that even the very upper frequency parts are a VERY close match to the MQA version had me feeling suspicious.
I then compared the 'hifi' version of the track to the qobuz redbook version and....not a match. And in fact we see the same high frequency difference as we'd seen in previous comparisons of non-MQA vs MQA.


Ok, this doesn't make much sense. Let's pick something that there wasn't ever a hires master for, just to be sure it's not something to do with that.
Biffy Clyro - Many of Horror

This was only ever a 16-bit/44.1kHz master, and the MQA file is, as expected, sourced from 44.1kHz.
So let's compare the "hifi" and "Master" versions

Absolutely bit-perfect. Same file. Again, the "HiFi" version is just the Master file with the MQA flagging removed.

So why don't we compare to the Qobuz version:

Nope, not the same.

I thought it'd be interesting to mention this. I don't know what on earth has happened to the Sam Smith release on Tidal, but it seems it has been changed so that it no longer has a bit-perfect match to the MQA file, yet it is also not the same as the lossless version and remains INCREDIBLY similar to the MQA one.
I've not tested other tracks yet, so I don't know if it was just that track/album or if this is happening to other tracks, but regardless, it doesn't make much sense. I don't wish to come across as arrogant, but it does seem odd that the specific album I'd used in my video has had a source-file change/alteration just a week or so after the video came out - and to one which does not match other services/sites.
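For anyone who wants to repeat this kind of bit-compare/null test, a rough Python sketch (illustrative file names; assumes two decoded, time-aligned integer-PCM WAVs of the same format and length):

Code:
# Sketch: bit-compare two decoded tracks and report the null (residual) level.
import numpy as np
from scipy.io import wavfile

fs1, a = wavfile.read("tidal_hifi.wav")       # illustrative file names
fs2, b = wavfile.read("tidal_master.wav")
assert fs1 == fs2 and a.shape == b.shape

if np.array_equal(a, b):
    print("Bit-perfect: identical samples.")
else:
    a_f = a.astype(np.float64) / np.iinfo(a.dtype).max
    b_f = b.astype(np.float64) / np.iinfo(b.dtype).max
    rms = np.sqrt(np.mean((a_f - b_f) ** 2))
    print(f"Not bit-perfect; residual RMS = {20 * np.log10(rms + 1e-12):.1f} dBFS")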

Do you have any way to check whether unflagged files have the watermark? That is, will an MQA decoder recognize them? Because if it doesn't, then even if you have an MQA decoder, you'd be getting a 13-bit file. Which would be really, really bad. And certainly not hi-fi.
 

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,962
Likes
2,629
Location
Massachusetts
As I wrote a couple of days ago, noise is noise (if the spectral statistics are identical).

John Atkinson
Technical Editor, Stereophile

What does the statement "if the spectral statistics are identical" mean?
If you are implying that there is a case where noise matters or cannot be determined, then why is it parenthetical?

Lastly, doesn't the belief that MQA preserves music and not noise require some testing, or is this an article of faith?

- Rich
 

UKPI

Guest
TLDR - The test is valid, but the correct result may be unexpected without some understanding of the mathematical model behind PCM. A PCM "impulse" represents a normalized sinc function, not an impulse. Similar math applies to the edges of a square wave.

So when we see "pre-ringing" or "time smearing" in an impulse (or square wave) response of a reconstruction filter, it is not a deficiency or artifact of the filter; it is the correct reconstruction according to the mathematical model that defines PCM encoding.

edit: spelling

You explained it well.

By the way, I created a 16-bit 44.1kHz stereo PCM signal, both of whose channels contain a representation of an impulse signal after going through a linear-phase brickwall reconstruction filter. One channel has a delay of approximately 1.031 microseconds compared to the other. One will easily find this difference by upsampling the file to a 44100*22 = 970200Hz sampling rate and comparing the positions of the resulting samples. This difference can be made much smaller (down to the picosecond range), but I stopped at this value because Audacity (a free audio editing program) doesn't support resampling a track to over a 1MHz sampling rate, and that would make it difficult for people with less technical knowledge to check this out for themselves.
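Not claiming this is exactly how the attached file was built, but a sketch along the same lines: two band-limited (sinc) impulses, one shifted by 1/22 of a sample period.

Code:
# Sketch: 16-bit/44.1kHz stereo file whose right channel is delayed by 1/22 of
# a sample period (~1.031 us) relative to the left. Band-limited (sinc)
# impulses are used so the sub-sample offset survives sampling.
import numpy as np
from scipy.io import wavfile

fs = 44100
n = np.arange(-2048, 2048)            # sample indices around the impulse
delay = 1.0 / 22.0                    # delay in samples (~1.031 us at 44.1kHz)

left = np.sinc(n)                     # band-limited impulse centred on sample 0
right = np.sinc(n - delay)            # same impulse shifted by 1/22 of a sample

stereo = np.round(0.5 * 32767 * np.stack([left, right], axis=1)).astype(np.int16)
wavfile.write("impulse_microdelay_sketch.wav", fs, stereo)
print("Upsample 22x (to 970200 Hz) and the two peaks land one sample apart.")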

Obviously, this difference is smaller than the necessary time resolution mentioned in the concluding remarks of that MQA convention paper (10 microseconds).

EDIT: Fixed typo error, added info about the filter.
 

Attachments

  • Impulse_microdelay.zip
    13.9 KB · Views: 100
Last edited by a moderator:

AudioExplorer

Member
Joined
Sep 13, 2020
Messages
33
Likes
13
Sometime back, I tried to do a blind test on myself with MQA as follows. I created a series of playlists on Tidal with two versions of the same song: one version was MQA and the other was the non-MQA one. The way a playlist is displayed on my phone, it didn't show which entry was MQA and which wasn't. Then, to ensure that I didn't know which one was the non-MQA one (since I created the playlists), I did one of two things. When I could get my son to find some time, I asked him to change the order of the two entries in each playlist. Alternatively, I would just keep swapping the order on my phone while I was doing something else (e.g., watching TV) so that I couldn't remember how many times I had done the swap. This way I could create playlists with two versions of the same track where I didn't know which one was MQA.

With this setup, each test would be to play the two tracks and make sure I didn't look at the phone before making my identification. It turned out that for some tracks, I really couldn't tell a difference. On one or two tracks I did manage to identify MQA 8 out of 10 times, if I recall correctly. But I would not claim for sure that I could provably tell the difference; I would need to do a lot more testing for that than I have time for now. I just thought I would share how I tried to do the test, in case others have tried something similar or probably much better. I am aware of Archimago's test files, but in my case I was looking specifically for a way to test what was on Tidal.
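One way to generate the random ordering up front and keep it sealed until after listening (a small sketch; playlist names are illustrative, and a helper would still arrange the playlists from the key file):

Code:
# Sketch: generate a hidden random ordering so the playlists can be arranged
# without the listener knowing which slot holds the MQA version.
import json
import secrets

playlists = ["Playlist 1", "Playlist 2", "Playlist 3"]   # illustrative names
key = {p: secrets.choice(["MQA first", "MQA second"]) for p in playlists}

with open("answer_key.json", "w") as f:    # hand this to a helper; don't peek
    json.dump(key, f, indent=2)
print(f"Wrote hidden ordering for {len(key)} playlists to answer_key.json")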
 

John Atkinson

Active Member
Industry Insider
Reviewer
Joined
Mar 20, 2020
Messages
168
Likes
1,089
So why go to so much effort to replace noise in the audible FR with noise from the higher, non-audible FR?

Because, as I wrote in postings to this forum, in an audio recording with a noise floor higher than the quantization floor, it is possible to create a hidden data channel without degrading the resolution of the original data. This is called "steganography" - see https://www.researchgate.net/publication/45949372_Steganography-The_Art_of_Hiding_Data

John Atkinson
Technical Editor, Stereophile
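As a toy illustration of the basic idea only (not MQA's actual, far more elaborate scheme): a data stream can be hidden in the least significant bit of each sample, and it stays buried as long as the recording's own noise floor sits well above that level.

Code:
# Toy sketch of audio steganography: hide data bits in the LSB of 24-bit
# samples. If the recording's own noise floor sits well above the LSB, the
# change is buried in that noise. An illustration only, not MQA's scheme.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.integers(-2**23, 2**23, size=64, dtype=np.int64)  # fake 24-bit PCM
payload = rng.integers(0, 2, size=64)                           # hidden data bits

stego = (samples & ~1) | payload     # overwrite each sample's LSB with one bit
recovered = stego & 1                # the decoder just reads the LSBs back

print("payload intact:", np.array_equal(recovered, payload))
print("max sample change:", np.max(np.abs(stego - samples)), "LSB")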
 

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,962
Likes
2,629
Location
Massachusetts
Because, as I wrote in postings to this forum, in an audio recording with a noise floor higher than the quantization floor, it is possible to create a hidden data channel without degrading the resolution of the original data. This is called "steganography" - see https://www.researchgate.net/publication/45949372_Steganography-The_Art_of_Hiding_Data

John Atkinson
Technical Editor, Stereophile

I get that, but this measurement from Stereophile seems to show that the hiding in the noise introduced noise.
I am not saying it was inaudible, but the recording was degraded in the audible range, was it not?

MQANoise.jpg


I am not technical in this area, but how can hiding data in the noise increase noise in the audible range and still be claimed to be hidden?

By ASR standards, a DAC that did this to a DXD file would be rated lower than one that did not. It certainly would not get a swinging panther.

- Rich
 

Raindog123

Major Contributor
Joined
Oct 23, 2020
Messages
1,599
Likes
3,555
Location
Melbourne, FL, USA
But after 1800 comments people keep saying MQA is garbage... because that useless noise is not the same anymore... This would be understood even by a K-12 kid who was not as biased as almost everybody seems to be in this thread.

I can't speak for all 'people' here, so speaking for myself... I find the MQA technology a lie and garbage because of >this<. Namely, again and again I keep asking three straight, logical, relevant questions. And in response, I get numerous circular piles of either "let me tell you how great the MQA idea is" or "how dare y'all test it so incorrectly!" - without any 'correct' tests offered as an alternative...

For the record, I think I understand the concept behind the MQA idea - I believe it was a neat idea conceived by audio's clever minds to solve the then-bandwidth issue. However, (a) they never succeeded in properly implementing the idea, and (b) the problem itself went away (e.g., as I've stated >here<).

So, based on the above, in my eyes I declare that MQA is a scam. This is based on (1) the existing body of data comparing the performance of MQA and other hi-res open formats (e.g., 24/48 PCM); and (2) MQA's and Tidal's marketing - continuously misleading us consumers (e.g., about losslessness, superior quality, artists' intentions). Now, I will gladly retract my statement if/when proof - of losslessness, superiority, etc. - is offered.

Here, I put my good name to it - Raindog.
 

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,962
Likes
2,629
Location
Massachusetts
Have they ever said this? Either way, it is not correct.

That may be an incorrect interpretation, but here is the article:
MQA Tested Part 2: Into the Fold | Stereophile.com

Quoted from MQA in the article
"[T]he MQA encoder estimates the 'triangle' and a suitable guard band and can encode that in an L fold. In an L fold, the upper octave uses a specific 2-band predictor which gives a very good waveform estimate. That is then losslessly buried according to mastering choice. . . . The remaining 'touchup' signal is packed below the noise-floor."
It's a weird statement: the lossy encoding is losslessly encoded, and then that lossless encoding is a mastering choice.
Again, I could just be too dumb, but it sounds like word salad.

The L-fold is losslessly encoded; should we be comforted that MQA data is losslessly stored on a computer? :facepalm:

And from the author
To sum up the losslessness issue: In its folding and unfolding, MQA distinguishes between music-correlated data and noise, tries hard to retain the music-correlated data, but sensibly worries much less about preserving the noise bit by bit. This allows MQA to achieve their goal of preserving the benefits of high-resolution data without the burden of large, weighty swaths of pointless noise.

Good boy MQA, you tried hard to "retain the music-correlated data" - here's a trophy! :p
If it fails, the pointless noise was terrible music anyway.

Gone are the days of Redbook, which reliably stored the music and gave it back to you without all this dataism and artistic judgement. :)

- Rich
 

gatucho

Member
Joined
Dec 16, 2020
Messages
46
Likes
149
As I wrote a couple of days ago, noise is noise (if the spectral statistics are identical).

John Atkinson
Technical Editor, Stereophile

Here, however, the problem is the "identical spectral statistics" issue. Has it been demonstrated that MQA's 3-bit noise has this property?

On the other hand, statistics can in fact be misleading. If you take 10 seconds of slightly modulated noise, add it to 5 minutes of white-ish noise, and calculate the "spectral statistics" of the whole thing, they will mostly ignore the 10-second section. And 10 seconds of a track is a lot to be wrong.

On the other hand, has it been demonstrated that 3 bits of noise or more is the rule in well-recorded music (with modern studio treatment)? (I'm really asking; it may have been mentioned already, but I missed it.)
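The 10-second example is easy to check numerically; a quick sketch with synthetic noise (durations and levels are illustrative):

Code:
# Sketch: a whole-file average spectrum mostly ignores a short anomalous part.
# 5 minutes of white noise plus 10 seconds boosted by 12 dB: the whole-file
# average level moves by well under 1 dB.
import numpy as np

fs = 44100
long_part = np.random.randn(5 * 60 * fs)      # ~5 minutes of white noise
odd_part = 4.0 * np.random.randn(10 * fs)     # 10 s of noise boosted by 12 dB
whole = np.concatenate([long_part, odd_part])

def avg_level_db(x, n=65536):
    frames = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
    mag = np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)
    return 20 * np.log10(np.mean(mag))

print(f"long section alone : {avg_level_db(long_part):7.2f} dB")
print(f"whole file average : {avg_level_db(whole):7.2f} dB")   # barely changes
print(f"odd 10 s alone     : {avg_level_db(odd_part):7.2f} dB")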
 

RichB

Major Contributor
Forum Donor
Joined
May 24, 2019
Messages
1,962
Likes
2,629
Location
Massachusetts
Here, however, the problem is the "identical spectral statistics" issue. Has it been demonstrated that MQA's 3-bit noise has this property?

On the other hand, statistics can in fact be misleading. If you take 10 seconds of slightly modulated noise, add it to 5 minutes of white-ish noise, and calculate the "spectral statistics" of the whole thing, they will mostly ignore the 10-second section. And 10 seconds of a track is a lot to be wrong.

On the other hand, has it been demonstrated that 3 bits of noise or more is the rule in well-recorded music (with modern studio treatment)? (I'm really asking; it may have been mentioned already, but I missed it.)

I patiently wait for responses to the questions about this post and the DXD/MQA music measurements that demonstrate that MQA playback did not hide noise; it created it.

This must be the fourth time I have posted this Stereophile measurement. I don't think this is a trick question if we are looking at the data.
What is going on?

- Rich
 