
MQA creator Bob Stuart answers questions.

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
Okay then, pal: let's put your ears to the test. We'll give you the blindfold so you can prove to us, without graphs, just how well your ears can differentiate between this music in one format vs. the other.

Sure. That's the right way to go. I didn't do an MQA vs. 192/24 comparison, as I couldn't readily find records encoded in both formats that were surely made from the same master. The MQA creators claim that they did such comparisons, recruiting well-known mastering engineers. Putting the names of real people in a peer-reviewed article in a major publication strongly suggests this is not an empty claim.

I did do MQA vs. 44.1/16 comparisons of 1970s-era records which I know by heart, and found MQA more pleasant to listen to, in a way very similar to how listening to the 192/24 master of a recording I captured and mixed myself is more pleasant than listening to a downsampled 44.1/16 or 48/16 rendering of it.
 

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
If I'm not wrong, the MQA supporters tell us that playing an MQA-encoded disc without MQA decoding does not make the sound worse compared to a standard 44/16 CD. Since this means that 13 bits of real information sound as good as 16 bits, there is clearly no need for 24 bits to get better SQ.

This is bunk. Some MQA supporters are too optimistic: three bits do make a difference for certain types of music. 24 bits are not really necessary, though. The research puts the required number between 20 and 21 bits; 24 is just a convenience, being three 8-bit bytes.

I absolutely agree with the notion that many MQA promoters bend reality, some of them quite a bit. The same thing happens with any new technology: think of the CD, or MP3. We now know that they are not the "best" and "end-all" formats, yet at the time of their introduction they were billed as such.
 

LTig

Master Contributor
Forum Donor
Joined
Feb 27, 2019
Messages
5,833
Likes
9,573
Location
Europe
I did do MQA vs. 44.1/16 comparisons of 1970s-era records which I know by heart, and found MQA more pleasant to listen to, in a way very similar to how listening to the 192/24 master of a recording I captured and mixed myself is more pleasant than listening to a downsampled 44.1/16 or 48/16 rendering of it.
It is very unlikely that a recording made in the 1970s contains any content above 20 kHz at all (except tape hiss), and using MQA to store the tape hiss doesn't make sense to me. Of course I accept that you find this recording more pleasant to listen to, but I don't think any ultrasonic audio signals are the cause. Maybe the tape hiss works like dither, or the original recording was not properly mastered to CD.
 

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
It is very unlikely that a recording made in the 1970s contains any content above 20 kHz at all (except tape hiss), and using MQA to store the tape hiss doesn't make sense to me. Of course I accept that you find this recording more pleasant to listen to, but I don't think any ultrasonic audio signals are the cause. Maybe the tape hiss works like dither, or the original recording was not properly mastered to CD.

This is not about the frequency range. It is about accurate timing.

At 44.1 kHz, the sampling step is 22.7 microseconds; at 192 kHz, it is 5.2 microseconds. The time resolution of the human hearing system, as reported by different authors, is within 3-8 microseconds, with the average reported value close to 5 microseconds.

The human hearing system determines the location of a sound source using several mechanisms. If timing is not accurate, the location estimated from steady frequencies differs from the location estimated from short transients, resulting in perceptual blurring of the sound source.

This effect makes no noticeable difference if the structure of the recorded music is simple: pop idols need not worry. I would even go as far as to say that the characteristics of pop music evolved in part to match the demand for maintaining decent quality on inexpensive audio gear.

If, however, we have something like a large symphonic orchestra with 135 instruments, accurate capture of timing makes the difference between hearing a recording of a masterpiece and suffering an onslaught of seemingly uncorrelated noise.
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,344
Location
Alfred, NY
This is not about the frequency range. It is about accurate timing.

At 44.1 kHz, the sampling step is 22.7 microseconds; at 192 kHz, it is 5.2 microseconds. The time resolution of the human hearing system, as reported by different authors, is within 3-8 microseconds, with the average reported value close to 5 microseconds.

One has zero to do with the other. Timing precision is NOT a function of sample rate. This is a very common misunderstanding.
 

LTig

Master Contributor
Forum Donor
Joined
Feb 27, 2019
Messages
5,833
Likes
9,573
Location
Europe
This is not about the frequency range. It is about accurate timing.

At 44.1 kHz, the sampling step is 22.7 microseconds; at 192 kHz, it is 5.2 microseconds. The time resolution of the human hearing system, as reported by different authors, is within 3-8 microseconds, with the average reported value close to 5 microseconds.
I'm sorry to say, but the sampling step is not the limit on timing accuracy; this is just plain wrong. As proof, one can create a 44/16 WAV file with two channels, where one channel contains a 20 kHz sine burst and the other contains the same signal shifted in time by 1 microsecond. Play the file and look at it with a scope: you will see both bursts with a difference of 1 microsecond between the channels.
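The experiment above can be sketched in a few lines of Python (a sketch, assuming numpy and scipy are available; the 1 µs shift is applied as a frequency-domain phase ramp, which is an exact delay for a band-limited signal, and the "scope" is simulated by upsampling and cross-correlating):

```python
import numpy as np
from scipy.signal import resample_poly, correlate, correlation_lags

fs, f, n = 44100, 20000.0, 128          # CD rate, 20 kHz burst, ~2.9 ms long
t = np.arange(n) / fs
burst = np.hanning(n) * np.sin(2 * np.pi * f * t)   # windowed sine burst

# Delay the band-limited signal by 1 microsecond via a frequency-domain
# phase ramp (an exact sub-sample delay for a band-limited signal).
tau = 1e-6
freqs = np.fft.rfftfreq(n, d=1 / fs)
delayed = np.fft.irfft(np.fft.rfft(burst) * np.exp(-2j * np.pi * freqs * tau), n=n)

# Quantize both channels to 16 bits, as a 44/16 WAV file would store them.
q = lambda x: np.round(x * 32767) / 32767
left, right = q(burst), q(delayed)

# "Scope": upsample 100x (to 4.41 MHz) and locate the cross-correlation peak.
up = 100
l_hi = resample_poly(left, up, 1)
r_hi = resample_poly(right, up, 1)
xc = correlate(l_hi, r_hi, mode="full")
lags = correlation_lags(len(l_hi), len(r_hi), mode="full")
est_us = abs(lags[np.argmax(xc)]) / (fs * up) * 1e6
print(f"estimated inter-channel delay: {est_us:.2f} microseconds")
```

Despite the 22.7 µs sample period and 16-bit quantization, the recovered delay comes out close to 1 µs; the residual error is set by the 100x upsampling grid used here for measurement, not by the 44.1 kHz sample rate.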
 

Sal1950

Grand Contributor
The Chicago Crusher
Forum Donor
Joined
Mar 1, 2016
Messages
14,195
Likes
16,920
Location
Central Fl
The biggest difference is that the compression scheme is optimized for 192/24 masters (384/24 in the second version), instead of the 44.1/16 that MP3 and the closely related AAC targeted. It is definitely not the "best" and "end-all" format, yet not a part of an evil money-grabbing scheme either.
If it's not about the money, then what is it about?
De-blurring was a process that at one time was claimed to be separate from the compression scheme. OK, so offer de-blurring by itself, and if people find it of value they'll buy it. The compression is an answer to a non-existent problem: the small amount of bandwidth it saves, balanced against the data losses relative to a PCM 24/96 stream, doesn't compute. It's not worth it. Most people say it sounds different, but is that better or worse? Maybe it's the compression losses being heard.
The record companies have jumped on board because it gives them what they wanted from DRM all along: no public access to an uncompromised digital copy of the original master tape. MQA is not Master Quality Authenticated; it's something else, a bastardized version of the original master, done with the excuse of bandwidth savings for streaming, a savings not needed in 2014 and all but laughable today compared against the video streaming going on.
It's all about the money, and cutting off public access to bit-perfect copies of the original masters.
Just say NO. :)


The man is talking about how camera lens resolution can't be extrapolated from the output pixel count (which is like saying you can't tell if something is sweet using your tongue), then relates it to how we don't have a definition for "high resolution" in audio (because if you ask people, each person will give their own answer), and concludes that they are at the cutting edge of determining what it ought to be, or something. Then he goes on about how high resolution has various standards, as if normal people don't understand this.
This has been a sad corruption of the truth, again done in the name of the almighty dollar.
If CD 16/44 is standard res, then high-res has to be something greater, and not just on the distribution end but from the mic forward. In short, a high-res offering should have started as a digital recording at maybe 24/96 or better and never been sampled to less than that. You can't take a master analog tape made in 1960, transfer it to 24/192, and call it a "high resolution recording". That's a scam put forth simply in the name of money, once again reselling all the old catalogs of music with a promise of better SQ. There's nothing on those tapes that can't be captured at Red Book.
Once again the public gets screwed in the name of record labels profits. :mad:
 

Daverz

Major Contributor
Joined
Mar 17, 2019
Messages
1,309
Likes
1,475
I'm sorry to say, but the sampling step is not the limit on timing accuracy; this is just plain wrong. As proof, one can create a 44/16 WAV file with two channels, where one channel contains a 20 kHz sine burst and the other contains the same signal shifted in time by 1 microsecond. Play the file and look at it with a scope: you will see both bursts with a difference of 1 microsecond between the channels.

Google found a nice short article on this:

https://science-of-sound.net/2016/02/time-resolution-in-digital-audio/

"So for a worst case of a 10 Hz sine with -60dBFS magnitude quantized at 24 Bits resolution, we get into a range of around 4 microseconds resolution. Which is already pretty coarse, but you wouldn’t hear that sound anyway. For a more realistic 100 Hz sine at -20 dBFS we are in the range of 4 nanoseconds. By the way, the samplerate doesn’t even show up in the equations!"

It's useful to emphasize that last point: the error in reconstructing a band-limited analog input is due to quantization, not the sample rate. The timing error (I surmise) for a particular input signal would be the maximum amount you can shift the signal in time before its quantized representation changes.

The trouble with PCM is that it seems simple, but our intuitions about it (at least for those not well versed in DSP) can be very misleading. And there is double trouble when marketers are happy to exploit those incorrect intuitions and engineers who know better look the other way.
 

Sergei

Senior Member
Forum Donor
Joined
Nov 20, 2018
Messages
361
Likes
272
Location
Palo Alto, CA, USA
One has zero to do with the other. Timing precision is NOT a function of sample rate. This is a very common misunderstanding.

Please clarify. Are you saying that sampling at 1 Hz can provide the same timing precision as sampling at 1 MHz?

I'm sorry to say, but the sampling step is not the limit on timing accuracy; this is just plain wrong. As proof, one can create a 44/16 WAV file with two channels, where one channel contains a 20 kHz sine burst and the other contains the same signal shifted in time by 1 microsecond. Play the file and look at it with a scope: you will see both bursts with a difference of 1 microsecond between the channels.

You are using a scope and a frozen graph of the signal amplitude. The hearing system doesn't have such a luxury: it can't "see" even the instantaneous value of a signal, let alone a high-resolution recording of the signal over time. Apologies in advance if you don't need a nano-lecture to follow; in any case, it may be educational for other readers.

Right in the inner hair cell, we have a rather crude rectifier chained with a non-linear integrator, with the result of the integration decaying non-linearly over time. Once the result of the integration exceeds the threshold of one of the synapses connecting the inner hair cell body to an auditory nerve fiber, the fiber spikes. Then the fiber has to rest for about 2 ms, and won't spike again until its readiness is restored, even if the result of the integration grossly exceeds the threshold. Several fibers with varying thresholds are attached to each inner hair cell. They work together in a manner described by volley theory, successfully encoding signals with higher frequencies, and a wider intensity range, than a single fiber could.

For signals of low frequency (definitely those below 1 kHz) and constant or very slowly changing amplitude, the hearing system can figure out the pitch and intensity of the signal by correlating the timing of consecutive spikes in fibers with varying thresholds, which predominantly occur at certain phases of the sinusoid. This mechanism is called phase locking.

For signals above 5 kHz, this scheme falls apart, because the fibers can no longer track the sinusoid with sufficient time resolution. Instead, the hearing system has to determine the pitch using place coding ("place" refers to the most excited location on the basilar membrane). Between 1 kHz and 5 kHz, the hearing system employs both mechanisms, correlating their outputs.

So, for frequencies below 1 kHz, I accept your argument. The hearing system can indeed detect the phase shift and, in a sense, "see" the phase difference between the signals arriving at the right and left ears. For frequencies above 5 kHz, this doesn't work: both ears "tell" the brain that they detect a signal of that frequency and report the intensity difference, yet this information is too crude to determine the direction to the sound source with high accuracy.

Evolutionarily, detecting the direction to a sound source emitting a quickly decaying burst of high frequency is very important: a common use case involves a predator or an enemy stepping on a dry tree branch. When this happens, the hearing system processes it as a transient rather than a steady tone. As we know from Fourier theory, a transient transforms into a wide range of frequencies. Many auditory fibers spike at about the same time, and the brain gets a different kind of signal, with the left and right ears now reporting the onset times of the signal. From this inter-aural time difference, which can be resolved with a precision of about 5 microseconds, the hearing system determines the direction to the sound source with significantly higher precision.

Now imagine that instead of hearing the sound directly, you have headphones on, fed by microphones attached to the headphones through a chain of AD-to-DA conversions at a 44.1 kHz sampling rate. Depending on the initial amplitude and decay function of the burst, the direction to the sound source, and the specifics of the AD and DA conversions, you may either (A) not hear the burst at all, (B) hear it only in the left ear, (C) hear it only in the right ear, (D) hear it in both ears with the same onset time, or (E) hear it in both ears with onset times quantized by the sampling rate.

The higher the sampling rate, up to the neurophysiological limit, the more probable outcome (E) becomes. So, using a sampling rate in excess of what the Nyquist theorem would suggest helps with resolving the direction to the sound source, allowing a higher number of distinguishable sound sources to be placed within the angular extent of a mix. Once again, this doesn't matter much for pop music or typical audiophile benchmarking fragments. It matters a lot for the enjoyment of music with finer spatial and temporal structure.
 

Daverz

Major Contributor
Joined
Mar 17, 2019
Messages
1,309
Likes
1,475
Please clarify. Are you saying that sampling at 1 Hz can provide the same timing precision as sampling at 1 MHz?

For band-limited signals, yes. Bit depth being equal, and for a signal band-limited to 0.5 Hz (your example, not mine!), a 1 MHz sample rate provides no more timing precision than a 1 Hz sample rate. The same is true for a signal band-limited to 22050 Hz: a 768 kHz sample rate provides no more timing precision than a 44.1 kHz sample rate.
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,511
Likes
25,344
Location
Alfred, NY
Please clarify. Are you saying that sampling at 1 Hz can provide the same timing precision as sampling at 1 MHz?

I don't say it, the basic math says it. And it's easy for you to demonstrate to yourself.

Or you can watch Monty Montgomery explain and experimentally demonstrate it.

Your particular misconception is dealt with starting at about 20:45, but the entire video is worth watching.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,658
Likes
240,918
Location
Seattle Area
Please clarify. Are you saying that sampling at 1 Hz can provide the same timing precision as sampling at 1 MHz?
That's right. The sample rate does not determine the timing resolution. However, higher sample rates allow wider bandwidth, and if you take advantage of that, the higher frequencies will improve timing.

Your hearing system does not perceive digital samples one by one. It perceives a continuous waveform after the reconstruction filter. In that sense, the sample rate is immaterial.

The timing resolution depends on how much resolution (bit depth) you have relative to the frequency of interest. The higher the frequency, the better precision you need to determine it accurately. The minimum resolvable timing is 1 / (2·pi·f·A·(2^b − 1)), where f is the frequency of interest, A is the amplitude relative to full scale, and b is the number of bits. Note the absence of the sampling rate in the formula.

For CD at 22.05 kHz and 16 bits this comes to about 110 picoseconds, far smaller than the 22.7-microsecond sample period at 44.1 kHz.
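Plugging the numbers into that formula (a quick check in Python, with A taken as full scale, i.e. 1):

```python
import math

f = 22050   # highest frequency of interest for CD (Hz)
b = 16      # bit depth
A = 1.0     # amplitude relative to full scale

# Minimum resolvable time shift: roughly, the smallest delay that moves a
# full-scale sine of frequency f by one quantization step at its steepest
# point (slew rate 2*pi*f*A, one step = 1/(2^b - 1) of full scale).
dt = 1 / (2 * math.pi * f * A * (2**b - 1))
print(f"{dt * 1e12:.0f} ps")   # ~110 ps
```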
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,759
Likes
37,612
The very first thing that suggested MQA was likely a farce was how they demoed it live: they compared it to MP3 files, which is ridiculous. Obviously, there were two comparisons interested parties wanted to hear: MQA vs. CD and MQA vs. 192/24. If it was as good as claimed, you'd be dying for the opportunity to present those comparisons to audiences. Instead, they always obfuscated by not letting that happen. The only reason to do that is if you are trying to pull a fast one.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,759
Likes
37,612
The next thing about MQA is the business of "perceptually lossless". Read: it isn't lossless, it is lossy. By the way, to correct an earlier statement: the people who created MP3 never said it was transparent. They worked to get pretty good quality at very low bit rates, but they never claimed transparency, especially at those low rates. No one knowledgeable claims that now.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,759
Likes
37,612
The next thing is the business of provenance: claiming they could retroactively go back and correct for errors in old gear. Theoretically, perhaps possible. Yet all they've done is use, if I remember correctly, 3 or 4 filter types. Despite questions, they assured us they could do it. They haven't. It was all a hocus-pocus, don't-look-over-there-look-over-here con job. And that is without the question of masters that have had multiple levels of non-linear processing applied to them: there is just no way to recover such files from that, nor have they taken any steps to do such a thing at all. More hocus pocus.

One could go on: the folding, the "origami", and all that. Every piece of it falls apart upon examination.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,759
Likes
37,612
Sure. That's the right way to go. I didn't do an MQA vs. 192/24 comparison, as I couldn't readily find records encoded in both formats that were surely made from the same master. The MQA creators claim that they did such comparisons, recruiting well-known mastering engineers. Putting the names of real people in a peer-reviewed article in a major publication strongly suggests this is not an empty claim.

I did do MQA vs. 44.1/16 comparisons of 1970s-era records which I know by heart, and found MQA more pleasant to listen to, in a way very similar to how listening to the 192/24 master of a recording I captured and mixed myself is more pleasant than listening to a downsampled 44.1/16 or 48/16 rendering of it.
You can do some comparisons from 2L.

http://www.2l.no/hires/

BTW, I've missed where they had an article with mastering engineers doing such comparisons. Can you show where it is?
 

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,321
Location
Albany Western Australia
It is definitely not the "best" and "end-all" format, yet not a part of an evil money-grabbing scheme either.
I think I would disagree with you there. This is clearly a land grab: clearly the intent was to make the format as ubiquitous as possible. Also remember Meridian have form here: MLP. Luckily that failed to gain any traction beyond DVD, and we have free FLAC, which makes it redundant.

Licensing proprietary formats is nothing new or wrong in principle. However, we have a format here that is lossy and is not required for bandwidth or sound-quality reasons. To use it you need new hardware. It has the potential to restrict access to "proper" master-quality recordings and the potential for DRM. And it provides sub-CD quality for those who haven't bought into the hardware.

It is of no benefit to anyone except of course MQA and maybe the record companies.
 

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,321
Location
Albany Western Australia
Just a note regarding the alleged benefits of reducing bandwidth requirements and cost for streaming providers. Most people are happy with MP3-quality streaming. Those who aren't (us) are a small subset of the market, and that subset is willing to pay a premium for access to uncompressed streams. As such, the cost-reduction argument holds no water: the customer will pay.
 