This is not snake oil or handwaving. Without getting into the specifics of what MQA does, creating a buried data channel takes advantage of the spectral nature of the analog noise floor present on all music recordings.
John Atkinson
Technical Editor, Stereophile
What is the alternative that you suggest? That the same prestigious engineers that made possible things like ambisonics...
Although my hearing acuity has diminished with age, my olfactory receptors are highly sensitive to the smell of bullsh*t...
If you are an engineer working in audio for 15 years, I'm surprised you can't grasp what a mockery of science these tests are.
IMHO, if you are planning to test the behavior of something, the first thing you should do is... well, understand what that something does. Even a quick glance at the most basic of the dozens of papers or articles about MQA will tell you what this graph shows:
1- that the original noise in the red area below 24 kHz is completely replaced with information. If your test is trying to evaluate a bit-perfect match against that original noise (which is no longer there), it is flawed by definition.
2- The quick read will also tell you that the algorithm requires that red area (now filled with real information brought down from the ultrasonics) to be dithered, so it can still be read as noise by a non-MQA DAC. And this test omits that critical, basic procedure... as noted in MQA's reply, which of course nobody read or, worse, understood. Then: garbage in -> garbage out.
3- That the algorithm is designed to process the signals found in the orange triangle, because that is the space music really occupies, as the red plot shows (before anyone adds more mockery on this subject: this example of the maximum amplitudes of a string quartet doesn't differ significantly from any other music, as a Fourier analysis of ANY instrument's harmonics would tell you). Then, if your test is full of signals (not music) above that area, both in the sonic bandwidth and in the ultrasonic region, which is exactly what happens when you use square waves or white noise, either you don't have a clue what you are measuring, or you did it on purpose to deceive your audience.
4- I could go on and on with several other facts about these "tests", but people here have already made up their minds. Obviously impervious to any argument.
5- So, if instead of trashing MQA because these "impartial" reviewers told you to "vote with your wallet and cancel your Tidal subscription", we could be arguing about why the area above the triangle is not captured by MQA, or whether, because of compatibility issues with Redbook, that recovered space can't be used anyway. THEN we would be doing something useful here, and we would be discussing exactly how that upper side of the triangle is built. Even the same test could probably have discovered it. My hunch: a Bézier or B-spline filter allowing anti-aliasing filters (you would even see the curvature if plotted logarithmically and not flat like in this graph) so as not to smear the phases of the quantized signals, which is the whole purpose of MQA. And the filter starting deep in the audible region, so as to give one or two more octaves for the slope before reaching the displaced Nyquist frequency, now at 48 or 96 kHz depending on the sampling rate, because there is nothing lossy about *music* if done this way.
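Point 2's dithering requirement above — requantizing so that the discarded bits still read as benign noise — can be sketched in a few lines of numpy. This is a generic TPDF dither, my own illustration, not MQA's actual encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

def tpdf_dither_to_16bit(samples_24bit):
    """Requantize 24-bit integer samples to 16 bits with TPDF dither.

    Adding triangular-PDF dither of +/-1 LSB (at the 16-bit level) before
    rounding decorrelates the quantization error from the signal, so the
    discarded low bits behave like a flat noise floor instead of
    signal-correlated distortion.
    """
    scale = 2 ** 8  # one 16-bit LSB spans 256 24-bit steps
    # Sum of two independent uniforms -> triangular distribution on (-1, 1).
    d = rng.uniform(-0.5, 0.5, len(samples_24bit))
    d += rng.uniform(-0.5, 0.5, len(samples_24bit))
    q = np.round(samples_24bit / scale + d).astype(np.int32)
    return np.clip(q, -32768, 32767)

# A quiet 1 kHz tone at 24-bit resolution, 48 kHz sample rate.
n = np.arange(48000)
tone24 = np.round(500000 * np.sin(2 * np.pi * 1000 * n / 48000)).astype(np.int32)
tone16 = tpdf_dither_to_16bit(tone24)
```

Skipping this step on a test file is exactly the "garbage in" situation described: the decoder sees low bits that are neither valid payload nor properly noise-shaped.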
But... it is much more fun to make a scandal out of all this, when the real scandal is the very tests used here.
A final comment: if you are interested in truth instead of sensationalism, and you find such a degree of anomalies in your experiment, my guess is that any scientist would at least question whether the experiment is correct or whether something is missing. But GoldenEar and Archimago chose the easier path...
And so, here we are: MQA was designed by some of the luminaries of the audio industry. Peter Craven, creator of some of the milestones in audio technology that every audio engineer knows, was working on noise shaping, time-coherence issues and even lossless algorithms before most people here were even born. And then, all of a sudden, they all forgot a lifetime of work and designed an algorithm that any engineering student could best in days... that, or these tests were made by amateurs who hardly knew what they were doing.
The amazing thing is that professionals in audio, like Paul McGowan of PS Audio, even cite this absurdity to criticize MQA. If I needed one more argument to advise my friends not to buy PS Audio products, this would be it.
View attachment 126350
Well, that's what a true researcher would try to discover. My guess: as the noise floor in those test files was not dithered and couldn't be read as noise at some steps of the process, the algorithm enters a situation it is not programmed for. As I said in the previous post: garbage in -> garbage out.
I am not sure where you get the impression that 3 bits of "quality" are lost. The container still has the original bit depth. But there is now a hidden data channel in which data encrypted as pseudorandom noise can be buried without reducing the original resolution of the audio data.
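The hidden-channel idea JA describes can be sketched generically. This is my own toy illustration of LSB steganography, not MQA's actual scheme (which scrambles the payload so that it is spectrally noise-like):

```python
import numpy as np

def bury_bits(samples, payload_bits, n_lsb=3):
    """Hide payload bits in the n_lsb least-significant bits of each sample.

    The perturbation per sample is at most 2**n_lsb - 1 steps, i.e. below
    the analog noise floor if the recording truly has fewer "real" bits
    than the container provides.
    """
    mask = (1 << n_lsb) - 1
    out = samples.copy()
    groups = np.reshape(payload_bits, (-1, n_lsb))
    for i, bits in enumerate(groups):
        value = int("".join(str(int(b)) for b in bits), 2)
        out[i] = (out[i] & ~mask) | value
    return out

def recover_bits(samples, n_bits, n_lsb=3):
    """Read the buried bitstream back out of the low bits."""
    mask = (1 << n_lsb) - 1
    bits = []
    for s in samples:
        bits.extend(int(b) for b in format(int(s) & mask, f"0{n_lsb}b"))
    return np.array(bits[:n_bits])

rng = np.random.default_rng(1)
samples = rng.integers(-2**23, 2**23, size=12, dtype=np.int64)  # fake 24-bit audio
payload = rng.integers(0, 2, size=12 * 3)                       # 3 bits per sample
stego = bury_bits(samples, payload)
```

The round trip is lossless for the payload, and the audio samples are perturbed by less than 8 quantization steps out of 2^24.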
This is not snake oil or handwaving. Without getting into the specifics of what MQA does, creating a buried data channel takes advantage of the spectral nature of the analog noisefloor present on all music recordings.
Sorry, but that is snake oil. Reduced to its essentials, your argument is that all 16-bit audio recordings only have 13 bits of real data, so it's OK to trash the last three bits by injecting pseudorandom noise that contains hidden data. And that in some way improves the audio quality. That's the definition of snake oil.
JA is probably correct, in terms of the noise floor, as far as live recordings of things such as choirs in churches are concerned. However, he kind of skips the fact that mixing several such inputs can definitely end up using 16 bits or more, not to mention studio recordings or synthesized music. So yes, it obfuscates the potential (let's be generous here) MQA loss by using what seems to be an irrelevant argument in this context.
But the funny thing is that, if what he claims applies generally, this undermines the whole audiophile/measurement cathedral. If 13-bit audio is all we need anyway, he and many others (including this site) spent their lives chasing angels when measuring/comparing/listening to higher than 13-bit range stuff.
And the worst for me is that, beyond the utterly dishonest money grab that MQA is, JA might be right generally speaking, if only because the average domestic listening environment has such a high noise floor and the adaptive capability of our auditory system to rapid changes isn't stellar.
We don't have conclusive blind tests that MQA is better (obviously, there are no rational reasons it would be), but we don't have blind tests showing it is worse either, which would indicate it essentially doesn't matter. Oh, and when we have blind tests showing MP3 is preferred to CD, we sweep it under the carpet on the grounds that the audience isn't educated or their hearing has been "polluted" by long-term MP3 listening.
Apologies: I must be in a bad mood, because both the "lifted veils" and the "audibility of 0.3 dB" thingies rub me the wrong way today.
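The "13 bits is all you need" back-and-forth above comes down to simple arithmetic: an ideal N-bit quantizer yields roughly 6.02·N + 1.76 dB of SNR for a full-scale sine.

```python
def quantization_snr_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer for a full-scale sine."""
    return 6.02 * bits + 1.76

for n in (13, 16, 24):
    print(f"{n:>2}-bit: {quantization_snr_db(n):6.2f} dB")
# 13-bit ~ 80 dB: roughly the analog noise floor JA invokes.
# 16-bit ~ 98 dB; 24-bit ~ 146 dB.
```

So the debate is really about whether the ~80 dB mark of analog tape and microphone noise makes the extra bits of the container redundant, or merely free real estate.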
The 12-bit non-linear quantization used by DAT manufacturers (Sony) back in the day for LP (Long Play, 2x) mode was pretty decent. The difference between SP and LP was audible only on content with a lot of HF (and my ears were younger then).
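Non-linear quantization trades SNR on loud passages for resolution on quiet ones. Here is a µ-law-style companding sketch of the idea (illustrative only; Sony's actual DAT LP law differs):

```python
import numpy as np

MU = 255.0  # mu-law constant; illustrative, not Sony's actual LP-mode curve

def compress(x):
    """Companding curve: expands small amplitudes before uniform quantization."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def expand(y):
    """Exact inverse of compress()."""
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

def quantize_nonlinear(x, bits=12):
    """Quantize on a companded grid: fine steps near zero, coarse near +/-1."""
    levels = 2 ** (bits - 1)
    return expand(np.round(compress(x) * levels) / levels)

quiet = np.float64(0.001)                     # a -60 dBFS signal
uniform_step = 1.0 / 2 ** 11                  # step of a uniform 12-bit grid
err = abs(quantize_nonlinear(quiet) - quiet)  # far smaller than uniform_step
```

Near full scale the companded steps are coarser than uniform ones, which is why the difference shows up mostly on hot HF content.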
B-spline interpolation is a perfectly valid tool to use in some applications. Reconstruction filters for audio are not one of them.
Well, you brought it up several times already:
You're surely guessing and hunching a lot for somebody so convinced of a product's prowess. I've said it several times already, but will repeat once more: b-spline interpolation does not make for a good filter! It does not adhere to the sampling theorem and will leave you with a shitload of aliasing artefacts:
View attachment 126401
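The aliasing claim is easy to reproduce. A minimal numpy sketch (my own construction, not MQA's code): 2x-upsample a 10 kHz tone by linear interpolation, which is exactly a first-order B-spline kernel, and inspect the spectrum.

```python
import numpy as np

fs = 48000          # original sample rate
f0 = 10000.0        # test tone frequency
n = np.arange(4096)
x = np.sin(2 * np.pi * f0 * n / fs)

# 2x upsample via linear interpolation = zero-stuff, then convolve with
# the triangle kernel [0.5, 1, 0.5] (a first-order B-spline).
up = np.zeros(2 * len(x))
up[::2] = x
y = np.convolve(up, [0.5, 1.0, 0.5], mode="same")

# At the new 96 kHz rate, zero-stuffing mirrors the tone to fs - f0 = 38 kHz.
# The triangle kernel attenuates that image by only ~19 dB; a proper
# sinc-like reconstruction filter would bury it completely.
spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
freqs = np.fft.rfftfreq(len(y), d=1.0 / (2 * fs))
tone_amp = spec[np.argmin(np.abs(freqs - f0))]
image_amp = spec[np.argmin(np.abs(freqs - (fs - f0)))]
```

The image at 38 kHz is plainly visible in the spectrum, which is exactly the mirroring shown in the attachment above.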
It very much resembles the Windows way of linear interpolation upsampling:
View attachment 126403
And this perfectly correlates to the image mirroring I showed earlier.
My guess about the original example I showed, if you'll indulge me: the original was only a 44.1 kHz master, so there was nothing to "unfold". So all MQA can do is upsample it to twice the rate. And we know it does that using B-splines. Hence the abominable result we could all see. Now, with a higher-sample-rate source this would normally still happen, but higher up in the spectrum, where it is far less problematic. In that case, the second half of the spectrum would be compressed and stored in the lower bits of the file. @mansr can probably shed some more definite light on it.
B-spline interpolation is nothing new, BTW; boutique brands like Wadia have had it since the mid-'90s. By now they've abandoned it for a more classical (but no less insane) implementation using classical FIR filters.
As I said in the previous post: garbage in -> garbage out.
That seems to be an accurate summary of your writing process.
B-spline interpolation is a perfectly valid tool to use in some applications. Reconstruction filters for audio are not one of them.