
MQA creator Bob Stuart answers questions.

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,493

Anyone here a statistician capable of doing some probability work on whether he truly believes what he's saying, or is just paying lip service to his product offering?
 
OP
Tks

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,493
Can you blame him? These are the sorts of folks that, I would imagine, you need to schedule time for, and tell exactly what the questions will be so they can be somewhat vetted.

The man is talking about how camera lens resolution can't be extrapolated from the output pixel count (which is like saying you can't tell if something is sweet using your tongue), and then relating it to how we don't have a definition for "high resolution" in audio (because if you ask people, each person will give you their own answer), so they are at the cutting edge of determining what this ought to be or something... Then he goes on with "woke" talk about how high resolution has various standards (as if normal people don't understand this).

And it just devolves from there.
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,437
Likes
4,686
On the first point, photographic analogy

The resolution of a 'perfect' lens, in general, depends on its physical aperture and of course the wavelength being imaged. "Resolution" is actually just another word for the size of the diffraction pattern of a point source. There are, obviously, constraints on what the quality of a lens or mirror should be, in terms of surface and curve inaccuracies, to avoid degrading the diffraction-limited performance (usually expressed as a fraction of the wavelength being observed).

So if one takes a 200 mm (physical aperture) lens observing at 550 nm, its maximum resolution is given by the following formula (based on the Airy disk radius):

angular resolution in radians = 1.22 × (0.00000055/0.200) = 0.00000336 radians
converting to arc seconds we get approximately 0.69 arc seconds.

Alternatively, we can use the more practical Dawes limit formula which yields

115.8/200 = 0.579 arc seconds.

Wavelength and physical lens diameter are the only fundamental factors determining resolution, which is why we always try to increase telescope diameters, why radio telescopes are so large, why large physical aperture photo lenses are more resolving, etc. In fact, we don't even have to use the whole surface of the lens at all (unless gathering power is an issue) and can use two small lenses separated by a large baseline; that's what makes interferometry work and, incidentally, gives us black-hole images.

Ideally, you want to oversample that 0.58 arc sec per Nyquist, which tells us the ideal sensor pixel should subtend an arc of 0.29 arc sec. We achieve that by adjusting focal length. So let's say we have a 7.5 µm pixel; using 206265 as the number of arc seconds in a radian, what is the ideal focal length to capture the full resolution of the 200 mm physical-aperture lens? It is given by the following formula:

(206265*0.0000075)/0.29 = 5.33 meters or 5330 mm or in more photographic terms a F/D of 26.65
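The arithmetic above is easy to sanity-check. Here is a minimal Python sketch using the same numbers as the worked example (200 mm aperture, 550 nm light, 7.5 µm pixels); the function names are mine, purely for illustration:

```python
# Diffraction-limited resolution and ideal focal length, following the
# worked example above: 200 mm aperture, 550 nm light, 7.5 um pixels.

ARCSEC_PER_RAD = 206265  # arc seconds per radian

def rayleigh_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Angular resolution (arc sec) from the Airy disk radius, 1.22*lambda/D."""
    return 1.22 * wavelength_m / aperture_m * ARCSEC_PER_RAD

def dawes_arcsec(aperture_mm: float) -> float:
    """Empirical Dawes limit (arc sec) for an aperture given in mm."""
    return 115.8 / aperture_mm

def ideal_focal_length_m(pixel_m: float, target_arcsec: float) -> float:
    """Focal length at which one pixel subtends target_arcsec."""
    return ARCSEC_PER_RAD * pixel_m / target_arcsec

print(rayleigh_arcsec(550e-9, 0.200))      # ~0.69 arc sec (Rayleigh)
print(dawes_arcsec(200))                   # ~0.579 arc sec (Dawes)
# Nyquist: sample at half the Dawes limit, ~0.29 arc sec per pixel
print(ideal_focal_length_m(7.5e-6, 0.29))  # ~5.33 m, i.e. roughly f/26.7
```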

In a telescope, that is quite easy to achieve: you simply design or adjust the telescope so it has the ideal focal length for the camera you wish to use (unless the goal is not resolution, such as when wide fields are needed).

In terms of photography, this is always a matter of compromise. Fixed pixel size, varying physical aperture and focal length, gathering power a big issue...

So, yes, the MQA guy is correct in his analogy that the number of pixels doesn't tell you anything about ultimate image resolution. Think what you want of them, they are competent people and won't be caught saying stupid things from an engineering point of view.

And, yes, in practice there are a lot of other things to consider, such as the ability to gather light (also dependent on the size of the photon bucket, i.e. the lens; diffraction introduced by the eventual diaphragm; read noise for small pixels; random noise from the source; thermal noise in the sensor; etc.). For telescopes, the main problem is our atmosphere, btw; very few places on earth have an atmosphere stable enough to even reach the limit of our small 200 mm telescope (hence adaptive optics/artificial stars/telescopes in space).

How well this translates to audio is another matter ofc...

On the second point, "accidental processing"

The guy is also correct, as we all know or have experienced. That definitely means an integrity mechanism is welcome; it's a side benefit of the verification/encapsulation mechanism. One could say something like "TCP finally comes to audio". Of course, a simpler, free and open standard could have been designed for that purpose... Maybe it wasn't because most of the underlying technology already results in an error-free upper layer though ;)

Lastly, the guy doesn't talk at all about the most interesting aspects of MQA (controversial or not). That interview really has a low SNR. Darko is so much out of his depth there, unable to even formulate a question, it is painful (or entertaining) to watch :)
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,451
Likes
36,880
On the first point, photographic analogy

The resolution of a 'perfect' lens, in general, depends on its physical aperture and of course the wavelength being imaged. [...]

So, yes, the MQA guy is correct in his analogy that the number of pixels doesn't tell you anything about ultimate image resolution. Think what you want of them, they are competent people and won't be caught saying stupid things from an engineering point of view. [...]
So what's the relationship between angular optical resolution and high sample rate audio? Oh and should I mention MQA is faux high sample rate audio?
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,437
Likes
4,686
So what's the relationship between angular optical resolution and high sample rate audio?

Neither is well understood by the average customer, and both can be used for flaky analogies. Draw well-crafted parallels between two poorly understood things and the consumer thinks he understands the issue better when, in fact, he is more confused than ever, and you have avoided the hard question ;)

Oh and should I mention MQA is faux high sample rate audio?

The guy could blame it on labels ("hey, we only provide the tools") and be done with it. But yes, that was a question worth asking. Don't expect it from Darko though, since his business is (any) product placement and advertorials.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Annoying though the video is, am I the only person who is glad that he's still there as a line of continuity between the heyday of hi-fi and the present day?

He's not your average garage-based audio enthusiast: he links back to when hi-fi was something special and a real business; he's been associated with really nice aesthetic design - I imagine his house is *very* tasteful, and I can just imagine the kinds of swanky offices and desks he's occupied over the years. I'll bet his car cost a lot more than mine.
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,437
Likes
4,686
I imagine his house is *very* tasteful,

Now, don't be jealous. It's basically a small fashion-shoot studio. Photography is usually A-, lighting B+, storytelling depends a bit on the material. The goal is precisely to make the audience react that way. And a magical invisible hand moves the furniture around and selects flowers that, of course, never die. It's basically Vogue for audio components. Or what PewDiePie is to teens, but for middle-aged men with disposable income. ;)

But, true, high production value in the context.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,303
Likes
233,681
Location
Seattle Area
Very annoying though the video is, am I the only person who is glad that he's still there as a line of continuity between the heyday of hi-fi and the present day?
I do. I still remember when I met him at a high-end dealer show back in 1983 or so. He had brought his modified Philips CD player and went through a ton of slides with a bunch of signal processing in them. Quite unusual for a sales presentation.

An anti-CD guy I knew asked him what he liked about CD versus LP. His answer: "when you listen to piano, there is something solid and grounded that is much better than a turntable." To this day, I notice the same.
 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,437
Likes
4,686
I do. I still remember when I met him at a high-end dealer show back in 1983 or so. He had brought his Philips modified CD player and went through a ton of slides which had a bunch of signal processing in it. Quite unusual for a sales/presentation.

In a way, his evolution tells us everything we need to know about the current state of the hifi market. :(
 

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
Anyone here a statisticians capable of doing some probability work on whether he truly believes in what he's saying, or just lip service for his product offering?

I went through three phases regarding MQA. At first, I liked it because the MQA-encoded tracks sounded "truer" to me than the CD. Then I heard and saw well-founded criticisms: basically, MQA is equivalent to 20-bit/120 kHz PCM, so why not just use the PCM? And finally, I dug deeper into it and understood what it is: a modern lossy perceptual codec for 24-bit/192 kHz PCM.

24/192 has been the most common standard for studio masters for about 12 years now. It turns out to be a bit excessive, because the human perception limit is well-approximated by 20/120. Another significant insight of the MQA creators was that the usable, noise-restricted dynamic range of real-life music is very different at different frequencies, which opened another avenue for PCM compression.
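For scale, the raw stereo PCM bit rates of the formats mentioned here take only a few lines of Python to compute (the helper name is mine); this shows just the uncompressed bandwidth the compression scheme targets, and models nothing about MQA itself:

```python
# Raw (uncompressed) stereo PCM bit rate in kbps for a given word length
# and sample rate. Pure arithmetic; no codec behavior is modeled.

def pcm_kbps(bits: int, rate_hz: int, channels: int = 2) -> float:
    return bits * rate_hz * channels / 1000

print(pcm_kbps(16, 44100))   # CD: 1411.2 kbps
print(pcm_kbps(24, 192000))  # 24/192 studio master: 9216.0 kbps
print(pcm_kbps(20, 120000))  # the 20/120 "perceptual limit" figure: 4800.0 kbps
```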

Yet another thing I like about MQA is that they claim to address the time-domain distortions, perceived as blur, introduced by ADCs and recording consoles. They are tight-lipped on specifics, yet I can imagine it could be DSP algorithms removing pre-ringing and post-ringing. That's the part I don't fully understand, as there isn't enough information, yet I like their intent.

In one of his other interviews, Bob Stuart directly answers the question of why they did it: to make sure the analog sound is transmitted in the right way to the end customer, to finish his life's work. He was born in 1948, so MQA may indeed end up being his last significant work. He also explains why they need to charge for MQA: development and certification cost money.

He emphasized that their goal is not to make tons of money, but rather to make enough money to support the MQA ecosystem. I personally believe him. And apparently, so do the peer reviewers of the Audio Engineering Society magazine, the latest issue of which contains very technical articles written by the MQA creators, with tons of references to relevant peer-reviewed research.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,451
Likes
36,880
I went through three phases regarding MQA. [...]

Yet another thing I like about MQA is that they claim to address the time-domain distortions, perceived as blur, introduced by ADCs and recording consoles. They are tight-lipped on specifics, yet I can imagine it could be DSP algorithms removing pre-ringing and post-ringing. That's the part I don't fully understand, as there isn't enough information, yet I like their intent. [...]

There aren't any algorithms for removing pre- and post-ringing, though he wants you to think something like that. MQA uses some rather shallow filters which have less ringing, but they also have rather high aliasing and imaging as a result. Besides, the ringing isn't really ringing; it is an expected result of bandwidth limiting. Ringing only comes from illegal test signals that would be filtered out in an AD-to-DA conversion process.

The ultrasonics, which are usually low in level, are encoded into the noise floor of the lower frequencies, but the encoding is lossy like MP3. MQA can unfold some of the information that was in a 96/24 signal, but unfolds to higher indicated sample rates contain only noise and upsampling; there is none of the original information above what 96 kHz sampling captures. It's all smoke and mirrors and marketing. Oh, and if you don't decode it you only get about 12 or 13 bits of information. It plays and acts like a CD's worth, but it isn't.
 

CRKebschull

Member
Forum Donor
Joined
Feb 7, 2019
Messages
87
Likes
94
Location
Germany
Yet another thing I like about MQA is that they claim to address the time-domain distortions, perceived as blur, introduced by ADCs and recording consoles. They are tight-lipped on specifics, yet I can imagine it could be DSP algorithms removing pre-ringing and post-ringing. That's the part I don't fully understand, as there isn't enough information, yet I like their intent.

This is unfortunately bunk. MQA brings absolutely nothing to the table.
The myth of MQA is thoroughly debunked at a technical level in Archimago's detailed analyses.
The MQA patent diagram from 2013 is revealing of the "MQA process" which reduces resolution while increasing file size.
The deblurring is actually the exact opposite, as further shown in the detailed analysis of the Dragonfly MQA filter impulse response.
 

dmac6419

Major Contributor
Joined
Feb 16, 2019
Messages
1,246
Likes
770
Location
USofA
This is unfortunately bunk. MQA brings absolutely nothing to the table.
The myth of MQA is thoroughly debunked at a technical level in Archimago's detailed analyses.
The MQA patent diagram from 2013 is revealing of the "MQA process" which reduces resolution while increasing file size.
The deblurring is actually the exact opposite, as further shown in the detailed analysis of the Dragonfly MQA filter impulse response.
I don't listen to anything CA ("oh, they changed their name") has to say or recommend.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
They are tight-lipped on specifics
If I had *nothing* of any merit, I too would say "I don't want to go into too much detail..." or "That is proprietary I'm afraid..." :)
 
OP
Tks

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,493
If I had *nothing* of any merit, I too would say "I don't want to go into too much detail..." or "That is proprietary I'm afraid..." :)

It’s amazing how humans share a near-universal baseline instinct for smelling when something is off.

When people talk that way, where the most interesting details about something are simply left out with lines like “don’t want to talk much about that”, it instantly becomes a stench many of us are familiar with: the stench of someone trying to pull some bullshit over you.

The whole topic of contention is contingent on those “details” you brush away as insignificant, when in fact they are integral to the selling point of the product you’re peddling.

Let’s say MQA is literally everything they bill it to be. Its proponents, from the folks developing it to the companies supporting it, AUTOMATICALLY start off on the wrong foot with the secrecy around some questions about it. When you start off like that, it’s like coming to a job interview in pajamas: you’ve simply left a horrible impression, and now I don’t want to bother, because I have to waste my time seeking out information that was kept privy to the inner ring of the company.

When I have to do that, I’m instantly turned off and biased against you from the start.

There are only two sensible explanations for why companies do this:

Morons, or charlatans.
 

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
This is unfortunately bunk. MQA brings absolutely nothing to the table.
The myth of MQA is well debunked at all levels on a technical level in Archimago's Detailed Analyses.
The MQA patent diagram from 2013 is revealing of the "MQA process" which reduces resolution while increasing file size.
The deblurring is actually the exact opposite, as further shown in the detailed analysis of the Dragonfly MQA filter impulse response.

I find Archimago's analysis to be far from detailed. All the "terrible" artifacts he mentions are plainly explained as engineering tradeoffs in the AES magazine article, which is detailed. They are similar in nature to DSD artifacts, yet are less pronounced. Testing with a 20 kHz sine wave at 0 dBFS is not convincing: I think most of us would find it challenging to point to a piece of real-life music containing such a signal component.

There is confusion going on regarding where the de-blurring happens. According to the creators of MQA, it happens before the encoding, not in the decoding-stage filter evaluated by the critics. Apologies for sounding harsh, yet this confusion demonstrates the level of misunderstanding by some of the MQA critics.

What MQA creators claim is this: when they have access to a high-quality master, such as 192/24 digital, or a studio analog tape, and they know what equipment was used to record that master, they can reconstruct the original analog signal with a level of precision higher than the master itself, and then encode it in MQA, which in this case is supposed to be sonically more faithful to the original analog sound than the master.

As I already mentioned, the details of the de-blurring process are proprietary know-how, so I have to be agnostic about its merits. From general principles, I believe it can be beneficial for de-blurring microphone-straight-to-tape recordings made long ago. From the same principles, it has to be useless for most modern music, a significant part of which doesn't even have an analog source.

Regarding Archimago's claim that MQA introduces a group delay of 40 microseconds at 18 kHz on a 44.1 kHz sample: did he measure it on a 192 kHz sample? According to the MQA authors, it is supposed to be double the sampling-rate reciprocal. At 44.1 kHz, that is 22.7 * 2 = 45.4 microseconds: indeed close to what Archimago mentions. At 192 kHz, it is 5.2 * 2 = 10.4, which is close to the design goal of 10 microseconds for the first version of MQA.

Interestingly, in the AES magazine article the creators present 384 kHz as the desired sampling rate for masters. At this rate, the group delay would be 2.6 * 2 = 5.2 microseconds, which is in line with neurophysiological measurements. They also mention, and I haven't seen critics discuss this yet, that the testers of MQA versions were mastering engineers, including some "living legends".
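These delay figures all follow from the single claimed relation, delay = 2 × (1/fs). A few lines of Python reproduce the arithmetic; note the relation itself is this thread's claim about MQA, not something verified against the codec:

```python
# Sanity check of the group-delay arithmetic above, assuming the claimed
# relation: delay = 2 / sample_rate (twice the sampling-rate reciprocal).
# This only reproduces the quoted numbers; it models nothing MQA does
# internally.

def claimed_group_delay_us(sample_rate_hz: int) -> float:
    """Claimed group delay in microseconds: 2 x (1/fs)."""
    return 2.0 / sample_rate_hz * 1e6

for fs in (44100, 192000, 384000):
    print(fs, round(claimed_group_delay_us(fs), 1))
# 44100  -> ~45.4 us (close to the ~40 us Archimago reports at 18 kHz)
# 192000 -> ~10.4 us (near the stated first-version design goal of 10 us)
# 384000 -> ~5.2 us
```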

So, in many regards MQA is just like MP3 and AAC, a lossy perceptual codec, with a DRM layer on top. The biggest difference is that the compression scheme is optimized for 192/24 masters (384/24 in the second version), instead of the 44.1/16 that MP3 and the closely related AAC targeted. It is definitely not the "best" and "end-all" format, yet not part of an evil money-grabbing scheme either.
 

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
Let’s say MQA is literally everything they bill it to be. The proponents of it from the folks developing it, and companies supporting it AUTOMATICALLY start off on the wrong foot with the secrecy around some questions about it.
...
Why companies do this can only be two sensible explanations:

Morons, or charlatans.

Or a trade secret, or know-how. Those things are not patentable, for instance because they were publicly disclosed in research articles or are based on expired patents. Yet publicly disclosing that they are used in a specific product would be unwise.
 
OP
Tks

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,493
Or a trade secret, or know-how. Those things are not patentable, for instance because they were publicly disclosed in research articles or are based on expired patents. Yet publicly disclosing that they are used in a specific product would be unwise.

Sure, it would be unwise for a criminal enterprise. I agree.

In the same way, many companies now leech off the backs of the open-source community: taking software, adding the rest of what they had in mind in-house, compiling it into binaries, and never suffering scrutiny. These companies aren’t actually making their own architectures with every single piece of system-running software.

They know they can get away with this most of the time because the open-source community has no serious financial power to defend its licenses in the legal realm.

But going back to the MQA topic: this whole “unwise to disclose” line is claimed to be justified only by the most staunch supporters of the free-market system. The justification being, to put it briefly: “Come on dude, you’re really gonna hate on a company trying to make a living for itself and its employees? Do you want them to starve, really??”

The logic doesn’t hold up, because it can then be used to defend any sort of off-putting practice or behavior. And because of that, it can’t be used to defend any.

If details about MQA won’t be revealed, then they need to be put to the fire: the fire of benchmarks and measurements. If they can pass that, then we could excuse some of the cat-piss stench of secrecy.

It’s the same test we give to people who claim “I don’t listen to graphs, I listen to music”.

Okay then, pal: let’s put your ears to the test as we hand you the blindfold, so you can prove to us, without graphs, just how well your ears can differentiate between this music in this format versus the other.
 

LTig

Master Contributor
Forum Donor
Joined
Feb 27, 2019
Messages
5,759
Likes
9,433
Location
Europe
[..] The ultrasonics, which are usually low in level, are encoded into the noise floor of the lower frequencies, but the encoding is lossy like MP3. MQA can unfold some of the information that was in a 96/24 signal, but unfolds to higher indicated sample rates contain only noise and upsampling; there is none of the original information above what 96 kHz sampling captures. It's all smoke and mirrors and marketing. Oh, and if you don't decode it you only get about 12 or 13 bits of information. It plays and acts like a CD's worth, but it isn't.
If I'm not wrong, the MQA supporters tell us that playing an MQA-coded disc without MQA decoding does not make the sound worse compared to a standard 44/16 CD. Since this means that 13 bits of real information sound as good as 16 bits, there clearly is no need for 24 bits to get better SQ.
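The bit-depth comparison can be made concrete with the textbook dynamic-range formula for an ideal quantizer, 6.02·N + 1.76 dB. Note the "12 or 13 bits undecoded" figure is the claim made earlier in the thread, assumed here rather than measured:

```python
# Theoretical dynamic range per bit depth (quantization noise only,
# full-scale sine), using the standard 6.02*N + 1.76 dB formula.
# The 13-bit row corresponds to the thread's claim about undecoded MQA.

def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR (dB) of an ideal N-bit quantizer."""
    return 6.02 * bits + 1.76

for bits in (13, 16, 20, 24):
    print(bits, round(dynamic_range_db(bits), 1))
# 13 -> 80.0 dB, 16 -> 98.1 dB, 20 -> 122.2 dB, 24 -> 146.2 dB
```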
 