
MQA Deep Dive - I published music on Tidal to test MQA

Status
Not open for further replies.

AdamG247

MQA Fight Club Mgr. 1st rule of MQA fight club….
Moderator
Forum Donor
Joined
Jan 3, 2021
Messages
1,110
Likes
1,635
@AdamG247 if I were a moderator here, this is who I would have a good long one-on-one conversation with... We all might occasionally hurt someone with our manners of speech, but passive-aggressive instigation in every post is too much, no?
I don’t know whom you are talking about. However, let me suggest you just send a PM or hit the report button and say your piece there rather than holding a public flogging. I’m all out of fresh flogs, BTW!
 

Clavius

Member
Forum Donor
Joined
Jul 28, 2020
Messages
8
Likes
38
Location
Stockholm Sweden
We done,argue with someone else, bye bye
I can access it to,that's why I ask him,wasn't trying to be mean.
I'm sorry, but the lack of basic grammar skills, the complete lack of anything resembling a point, and the panicky posting frequency reek of bottom-sludge shill trolling. Can we please continue the informed conversation without the disruption?
 

Jimbob54

Major Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
5,905
Likes
6,660
Why is everyone so, um, on edge today? We are a family here seeking enlightenment and to drink from the fountain of knowledge. It’s why I’m here. Hey who forgot to wash out the communal Knowledge Mug? ;)
Sunday PM is peak bickering time. I can only imagine hangovers plus the Sunday night / Monday morning work fear kicking in.
 
Joined
Nov 10, 2019
Messages
51
Likes
76
Worst part is, I'm still wondering just what their actual position is. The best summary I can muster: they seem to hold the typical "don't knock it till you try it" line, but I'm not seeing anything they can use as evidence in support of the claims MQA purports it can deliver. The other guy is also a DSD proponent, claiming MQA isn't as good as DSD, but close, and still much better than Redbook.

I'm guessing they also hold that all the demonstrations OP has presented are failures simply because, by assertion, the encoder can detect "non-real music". That begs questions about the threshold at which it can do so, and how: if I were to insert a picosecond's worth of DSP, would it produce the same encoder failure, where the entire track gets polluted, as happened to OP even though his track wasn't simply test tones? Remember, their red-flag mechanism was at least good enough to detect pure test tones when he first tried to pass those off.

It's just a really comedic situation, because I see a few nitpicks about certain here-or-there aspects (like the focus on square waves, while the entire rest of OP's arguments goes unaddressed), but I'm not actually seeing or hearing what their position even is. It simply seems to be an assertion that OP's experiment isn't conclusive of any and all concerns, therefore MQA is great and delivers! Yet the company ducking back and forth in the shadows, for years not providing the simple access required to establish the veracity of its product's claims -- that's somehow deemed the more rational position, or at least the side that rationally deserves defending. Especially hilarious given the usual corporate scandals that eventually lead to things like Tidal removing true lossless files as a result of MQA's slow encroachment. Even if we had no technical evaluations of MQA files ever (imagine it were some magical file format that couldn't even be examined, like some quantum anomaly), MQA's behavior alone is enough not to grant them the benefit of the doubt.

In principle, if you had no idea what MQA does or how, agnosticism is at best the only sensibly rational position you can take. In practical reality, because MQA's behavior is such garbage, you have far more grounds for hard skepticism of their claims -- and actually for dismissing them -- for as long as they keep cowering in the shadows, pontificating like lawyers in a courtroom where evidence isn't disclosed to the other side, the moment anyone like OP raises good reasons to suspect fraud. Also practically concerning: MQA were somehow bothered enough by the ordeal that they decided to act, and were somehow able to pressure the publisher and Tidal into having someone's work removed. Before this was a video (which at first it wasn't), they seemingly didn't care. But all of a sudden, they start responding.

Straight up perplexing at this point trying to surmise what it is these MQA defenders hold to. No idea if they're shills or whatever (personally I doubt it), but they could at least stand not to use the post-hoc rationalization MQA itself is using for its defense. It's hilarious when the hypocrisy kicks in: claiming OP is trying to wrongfully turn his readers against MQA, while MQA is pulling this takedown bullshit and, as always, making claims that are dubious at best -- claims that, even if sincerely believed, would still be an instance of MQA trying to sway people to its ideas while hiding whenever someone wants to properly test those claims.

I simply cannot understand, for example, how a file with some test tones present (the rest being music) can have the whole file polluted, and how this can serve as an argument that the encoder merely failed because it isn't meant for OP's sort of submission to Tidal -- while the encoder's failure to prevent exactly that problem in the audible band and the music portions of the file is somehow rationalized as a "good" defense of MQA itself.
There are several interesting points in your post.

I focused my criticism of GoldenOne's measurements (which could also be applied to Archimago's) on the issue of square-wave response, because it is a very easy way to understand the discrepancies between the measuring method and what MQA is trying to achieve. Of course there are a lot of other issues. In fact, several other findings are very interesting, like the sine-wave reconstruction that, as I see it, gives clues of an interesting sigma-delta-like process in MQA, instead of necessarily showing "flaws" in the conversion.

I believe MQA really messed things up by using terms like "lossless" loosely, without a clear explanation of the context in which that term applies. And then one of MQA's main contributions (the very conception that compressing and packaging *music* could be better done with a purpose-made algorithm instead of a generic one) is ironically the main source of criticism: that allegedly "lossy" processing.

Unlike any generic compression algorithm, which must assume that the whole input may be significant and so needs to preserve all of that data untouched, one key aspect of what (imho) MQA does is to recognize that the music data in a file is just a "slice" of the incoming information (I can't find a better word than "slice", pardon my limited English). That is the data intended to be preserved in a "lossless" way (and Meridian is a historical expert in lossless compression; probably we all agree on this). For the rest of the "data" (from a musical point of view, just garbage), there is no need for resemblance to the original input, so that space can be put to better use. In MQA and its folded-packaging process, that use is to gain space to store, at the file size of standard Redbook (in fact somewhat larger, because it is 24-bit depth), the relevant information captured from supra-aural frequencies.

So if you are measuring "noise" in the 0-24 kHz space (as GoldenOne is doing), of course the MQA file will be absolutely discrepant from the original source information. It has to be different, because of the very nature of the process. There is simply no chance that real information, brought down from higher frequencies and dithered to be seen as noise by non-MQA DACs, will have any similarity to the noise that was originally in that space. Then what's the point of comparing the two? If GoldenOne's graphs show discrepancies here, all they are showing is that a standard compression algorithm has nothing to do with MQA -- not that MQA is missing something...
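A minimal sketch of the kind of bit-packing being described (my own toy, NOT the real MQA encoder, which uses proprietary folding and dithering): hide a second data stream in the low bits of a wider container, so a legacy decoder sees only a raised noise floor while an aware decoder recovers the stream bit-exactly.

```python
import numpy as np

# Toy "fold": pack an 8-bit side stream into the bottom 8 bits of a
# 24-bit container. A legacy decoder reading the top 16 bits sees the
# low bits as pseudo-random noise near -96 dBFS; an aware decoder
# unpacks them exactly. (Illustration only, not MQA's algorithm.)
rng = np.random.default_rng(0)
baseband = rng.integers(-2**15, 2**15, size=1000)  # 16-bit "music"
hidden   = rng.integers(0, 2**8,  size=1000)       # 8-bit side data

packed = (baseband.astype(np.int64) << 8) | hidden  # 24-bit samples

# Legacy view: top 16 bits only -> baseband untouched
assert np.array_equal(packed >> 8, baseband)
# Aware decoder: bottom 8 bits recovered losslessly
assert np.array_equal(packed & 0xFF, hidden)
print("baseband preserved, hidden stream recovered bit-exactly")
```

The point of the analogy: comparing `packed` against the original 24-bit source in the 0-24 kHz band will of course show large "noise" differences, because those low bits were deliberately repurposed.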

The above was about the noise floor. But there is also an upper threshold on what counts as relevant information to be captured, and here the point about square waves becomes obviously relevant. Music, by one of its most fundamental properties, has diminishing amplitudes the higher you go in frequency (each higher harmonic of an instrument has lower amplitude than the previous one, and the fundamental of even the highest pitch ends at 3 or 4 kHz). You may need, say, 15 bits to capture all the dynamics happening at 30 Hz, but only a fraction of that at 15 kHz (4 or 5 bits, perhaps?). What MQA does is, again, reclaim that wasted bit depth to store relevant information coming from the upper folds. There is simply no way to have an exact copy of a square wave (full of high-amplitude content at higher frequencies, falling outside the envelope of music) in the MQA file, because it is not intended to process that. If GoldenOne or Archimago understood this very basic issue, they wouldn't have designed measurement procedures that would obviously perform very badly. It HAS to behave differently; that's the whole point of MQA! Once again, what this measurement is showing is that MQA is NOT A COMPRESSION ALGORITHM, IT IS A MUSIC-PACKAGING ALGORITHM... What the test should try to prove is whether MQA is lossless in the window of information it is trying to process, not in the whole possible space of a PCM file... And I would bet that in that window -- the window where music is found, and not noise -- it is as lossless as any other known lossless scheme.

That "slice" or "window" of relevant information, btw, is not static, but dependent on the content to be packaged. In the limited functionality (that's specifically stated in MQA's answer) of the process applied to uploaded indie files, one of the options is that the file could be "electronic music". I don't know the internals of the differences, but it is obvious that this option expects a higher amplitude of harmonics in the treble region -- higher, but not the sudden bursts of tones required for a proper reconstruction of a square wave or pulse test (note: I'm not dealing here with the time-domain issues; that's a completely different subject)... Yes, that assumption could be problematic with a very limited kind of music throwing out persistent high-frequency energy (only possible with artificially generated tones), but for 99.99% of the rest of music content, it is perfectly good... and lossless in its content.
You ask why some troublesome tones can mess up the whole file. I think it is obvious: the MQA process probably defines conversion parameters in advance given the statistical content of the file to be converted; it is not a continuously adaptive process, I guess. As in every other subject, there is a balance between flexibility and usability. MQA presumably opted for usability... suitable for 99.99% of the possible content it may be presented with.

Now, you point to a very interesting issue in all this: how does MQA decide what is relevant and what is not? I think it is already explained in the MQA papers (perhaps not as clearly as we may want), but it is not difficult to envisage even if you don't read them: any Fourier decomposition of the music signal gives a clear statistic of the amplitudes you need to capture. And even without it (as in real-time analog-to-digital capture), you know in advance that ALL instruments (with the exception of some synthesizer tones) have lower-amplitude harmonics. That's a fact. The precise slope of the upper side of the capture triangle is presumably adapted based on the analysis of the content to be converted. I think this is the easiest and most unquestionable of all the subjects in this matter -- but one that GoldenOne conveniently "forgets" in his tests.
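As a sketch of the "analyze the spectrum first" idea (my own assumption of such a workflow, not MQA's published method), an FFT of the content directly yields per-octave peak levels from which a capture slope could be fitted:

```python
import numpy as np

# Synthesize a toy "instrument": harmonics of 330 Hz falling at an
# assumed -9 dB/octave, sampled at 96 kHz for one second.
fs = 96_000
t = np.arange(fs) / fs
sig = sum(10**(-9 / 20 * np.log2(n)) * np.sin(2 * np.pi * 330 * n * t)
          for n in range(1, 60))

# Magnitude spectrum, normalized so a unit sine reads as amplitude 1.0
spec = np.abs(np.fft.rfft(sig)) / (len(sig) / 2)
freqs = np.fft.rfftfreq(len(sig), 1 / fs)

# Per-octave peak levels: the raw statistic a content-adaptive
# encoder could fit a "capture triangle" slope to.
for lo in (500, 1000, 2000, 4000, 8000, 16000):
    band = (freqs >= lo) & (freqs < 2 * lo)
    peak = spec[band].max()
    print(f"{lo:>5}-{2*lo:>5} Hz: peak {20*np.log10(peak + 1e-12):6.1f} dBFS")
```

The printed per-octave peaks decline steadily, tracing the diminishing-amplitude statistic the post describes; a square wave or test tone fed through the same analysis would break that slope.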

Now, we "trolls" (as some of you have called us) are not necessarily saying that MQA is perfect; it is perfectly possible that DSD, for example, achieves results as good as or better than MQA (at the cost of a file size much harder to stream, that is). For my part at least, all I'm trying to note is that a test file built to measure how "lossy" or exact a compression algorithm MQA is, is completely wrong in this case. We are not dealing here with generic information, but with a tailor-made system designed to process music. But I DO say this: to my ears, MQA files, when rendered by hardware, sound fantastic, and much, much better than Redbook. I think if you love music (and not only measurements) you should all, even in your own interest, seriously try MQA in proper conditions before throwing stones at it just because you were told it "loses" information... Then, if you do find it bad, I would gladly understand the criticism. But 9 out of 10 people here are basing their opinions not on what they hear, but on their belief that MQA loses information. When the thing is, it transforms noise but leaves music untouched, imho.

As I said in my first post in this thread: if you measure an F1 car on a bumpy off-road track, you will get awful results; if you measure a splendid Hummer for its 0-60 mph acceleration, it will also get awful results, only of a different type. Archimago's and GoldenOne's measurements and conclusions are completely, awfully wrong, seen through this lens.
 
Last edited:

Raindog123

Addicted to Fun and Learning
Joined
Oct 23, 2020
Messages
637
Likes
1,104
Location
Melbourne, FL, USA
if you measure an F1 car on a bumpy off-road track, you will get awful results; if you measure a splendid Hummer for its 0-60 mph acceleration, it will also get awful results, only of a different type. Archimago's and GoldenOne's measurements and conclusions are completely, awfully wrong, seen through this lens.
So, all else aside, where are MQA’s “good” results? Can we see any, in any accepted form - performance measurements, blind comparison listening tests?
 

levimax

Addicted to Fun and Learning
Joined
Dec 28, 2018
Messages
755
Likes
1,077
Location
San Diego
To my ears, MQA files, when rendered by hardware, sound fantastic, and much, much better than Redbook. [...]
Besides the L2 web site, do you have any links to the exact same mastering of music with and without MQA processing? Do you have an example of what you know to be the exact same mastering of a song, with and without MQA processing, that demonstrates "much better than Redbook" sound quality? I can't tell any difference on the L2 tracks.
 

levimax

Addicted to Fun and Learning
Joined
Dec 28, 2018
Messages
755
Likes
1,077
Location
San Diego
Hmmm? AFAIK, Uncle Bob was responsible for the compression scheme used in DVD-A. The original encryption scheme was the work of others.
I was under the impression that the MLP scheme (Bob Stuart's) was both compression and copy protection for hi-res audio, in addition to the standard DVD encryption scheme, and was the reason the record companies agreed to allow hi-res lossless files out of their vaults and onto DVD-A discs. Looking at both links, it is not clear to me whether this is correct. I know the DVD encryption was cracked before DVD-A's was, which is why I think they are separate.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
33,010
Likes
111,510
Location
Seattle Area
I was under the impression that the MLP scheme (Bob Stuart's) was both compression and copy protection for hi-res audio, in addition to the standard DVD encryption scheme, and was the reason the record companies agreed to allow hi-res lossless files out of their vaults and onto DVD-A discs.
Definitely not. The copy protection system actually came from IBM. It is the same scheme used in SD cards by the way (CPRM). A later version is used in Blu-ray.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
33,010
Likes
111,510
Location
Seattle Area
Any system that operates in a backward-compatible way with baseband audio (i.e. 16-bit/44.1 kHz) but wants to encode high-resolution audio MUST, by definition, assume there is not a lot of useful information in the ultrasonic range to represent. There is just no way to pack 2X data in 1X space. HDCD was the first system to try such backward compatibility, by cramming 20 bits into 16 bits. It relied on the fact that the full 20-bit dynamic range was not needed on a sample-by-sample basis. Indeed, it turned out that you needed just a few kilobits per second of data to shift the dynamic range of the 16 bits up and down to match what 20 bits might need. That information was then encoded in the 16-bit data and randomized to act like dither.
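The gain-shifting idea can be sketched as follows (a toy of my own, not HDCD's actual algorithm): per block, choose a right-shift so wider samples fit in 16 bits, and carry the shift as a tiny side channel.

```python
import numpy as np

# Toy per-block gain shifting: fit 20-bit samples into int16 by
# shifting each block just enough, and store the shift separately
# (a few bits per 512 samples ~ a few hundred bits/s of side data).
# Quiet blocks keep full precision; loud blocks lose only low bits.
BLOCK = 512

def encode(x20):
    blocks, shifts = [], []
    for i in range(0, len(x20), BLOCK):
        b = x20[i:i + BLOCK]
        peak = int(np.max(np.abs(b))) if len(b) else 0
        shift = max(0, peak.bit_length() - 15)  # fit signed 16-bit
        blocks.append((b >> shift).astype(np.int16))
        shifts.append(shift)
    return np.concatenate(blocks), shifts

def decode(x16, shifts):
    out = [x16[i:i + BLOCK].astype(np.int32) << s
           for i, s in zip(range(0, len(x16), BLOCK), shifts)]
    return np.concatenate(out)

rng = np.random.default_rng(1)
quiet = rng.integers(-2**12, 2**12, size=BLOCK)   # fits unshifted
loud  = rng.integers(-2**19, 2**19, size=BLOCK)   # needs a shift
x = np.concatenate([quiet, loud]).astype(np.int32)

x16, shifts = encode(x)
y = decode(x16, shifts)
print("shifts per block:", shifts)
assert np.array_equal(y[:BLOCK], x[:BLOCK])       # quiet block exact
```

The roundtrip keeps the quiet block bit-exact, while the loud block's error stays below one pre-shift quantization step; HDCD additionally hid the side channel inside the 16-bit data itself, which this sketch does not attempt.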

MQA follows the same scheme, assuming that there is not much useful information above the audible band. As noted by @mieswall, artificial signals that violate this practical assumption cannot be encoded by MQA. It likely throws its hands up, as the OP found in one case, and encodes something screwy otherwise.

For this reason, such test signals are never used with lossy codecs like MP3. Sine waves would be the limit of what I would throw at them.

BTW, I'd like to compliment the OP on the massive effort he put in to get the data he got. It is a very good effort. It's just that you can't test lossy codecs, especially layered ones like MQA, in this manner.
 