
Bruno on MQA

Purité Audio

Master Contributor
Industry Insider
Barrowmaster
Forum Donor
Joined
Feb 29, 2016
Messages
9,154
Likes
12,404
Location
London
18 hrs ·
This isn't a prelude to suddenly becoming active on FB but I felt I had to share this.

Yesterday there was an AES session on mastering for high resolution (whatever that is) whose highlight was a talk about the state of the loudness war, why we're still fighting it and what the final arrival of on-by-default loudness normalisation on streaming services means for mastering. It also contained a two-pronged campaign piece for MQA. During it, every classical misconception and canard about digital audio was trotted out in an amazingly short time. Interaural timing resolution, check. Pictures showing staircase waveforms, check. That old chestnut about the ear beating the Fourier uncertainty (the acoustical equivalent of saying that human observers are able to beat Heisenberg's uncertainty principle), right there.

At the end of the talk I got up to ask a scathing question and spectacularly fumbled my attack*. So for those who were wondering what I was on about, here goes. A filtering operation is a convolution of two waveforms. One is the impulse response of the filter (aka the "kernel"), the other is the signal.
A word that high res proponents of any stripe love is "blurring". The convolution point of view shows that as the "kernel" blurs the signal, so the signal blurs the kernel. As Stuart's spectral plots showed, an audio signal is a much smoother waveform than the kernel so in reality guess who's really blurring whom. And if there's no spectral energy left above the noise floor at the frequency where the filter has ring tails, the ring tails are below the noise floor too.
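That "who blurs whom" point falls straight out of the commutativity of convolution, which a couple of lines of numpy can verify (the signal and kernel here are made-up stand-ins, nothing to do with MQA's actual filters):

```python
import numpy as np

# Convolution is commutative: the "kernel" filters the signal exactly as
# much as the signal filters the kernel. Neither side is privileged.
rng = np.random.default_rng(0)
signal = rng.standard_normal(64)          # stand-in for an audio signal
kernel = np.sinc(np.arange(-8, 9) / 2.0)  # a ringing, sinc-like "kernel"

a = np.convolve(signal, kernel)
b = np.convolve(kernel, signal)

print(np.allclose(a, b))  # True
```

So any "blurring" argument cuts both ways: if the signal is smooth (band-limited) where the kernel rings, the ringing has nothing to excite it.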

A second question, which I didn't even get to ask, was about the impulse response of MQA's decimation and upsampling chain as it is shown in the slide presentation. MQA's take on those filters famously allows for aliasing, so how does one even define "the" impulse response of that signal chain when its actual shape depends on when exactly it happens relative to the sampling clock (it's not time invariant). I mentioned this to my friend Bob Katz who countered "but what if there isn't any aliasing" (meaning what if no signal is present in the region that folds down). Well yes, that's the saving grace. The signal filters the kernel rather than vice versa and the shape of the transition band doesn't matter if it is in a region where there is no signal.
These folk are trying to have their cake and eat it. Either aliasing doesn't matter because there is no signal in the transition band and then the precise shape of the transition band doesn't matter either (ie the ring tails have no conceivable manifestation) or the absence of ring tails is critical because there is signal in that region and then the aliasing will result in audible components that fly in the face of MQA's transparency claims.
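To see why "the" impulse response is ill-defined for a chain that allows aliasing, here is a deliberately naive toy (a 2x decimator with no anti-alias filter at all; purely illustrative, not MQA's actual filters):

```python
import numpy as np

def decimate_then_upsample(x):
    # Naive 2x decimation (aliasing allowed) followed by zero-stuffing
    # back to the original rate. A toy stand-in for an aliasing chain.
    down = x[::2]                  # keep even-indexed samples only
    up = np.zeros_like(x)
    up[::2] = down
    return up

n = 16
even_impulse = np.zeros(n); even_impulse[4] = 1.0  # lands on a kept sample
odd_impulse = np.zeros(n);  odd_impulse[5] = 1.0   # lands on a dropped sample

print(decimate_then_upsample(even_impulse).sum())  # 1.0: impulse survives
print(decimate_then_upsample(odd_impulse).sum())   # 0.0: impulse vanishes
```

Shift the input by one sample and the output changes shape entirely: the system is not time invariant, so quoting a single impulse response plot for it is meaningless.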

Doesn't that just sound like the arguments DSD folks used to make? The requirement for 100kHz bandwidth was made based on the assumption that content above 20k had an audible impact whereas the supersonic noise was excused on the grounds that it wasn't audible. What gives?

Meanwhile I'm happy to do speakers. You wouldn't believe how much impact speakers have on replay fidelity.

________
* Oh hang on, actually I started by asking if besides speculations about neuroscience and physics they had actual controlled listening trials to back their story up. Bob Stuart replied that all listening tests so far were working experiences with engineers in their studios but that no scientific listening tests have been done so far. That doesn't surprise any of us cynics but it is an astonishing admission from the man himself. Mhm, I can just see the headlines. "No Scientific Tests Were Done, Says MQA Founder".
 

Soniclife

Major Contributor
Forum Donor
Joined
Apr 13, 2017
Messages
4,510
Likes
5,437
Location
UK
what the final arrival of on-by-default loudness normalisation on streaming services means for mastering
I don't care about MQA, but I'd like to know what this bit means, it sort of sounds like the beginning of the end for the loudness war.
 


Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
I do like the way BP takes no prisoners. I naturally feel inclined to believe his side of things on this subject. It's interesting that he feels able to speak out on these things, because being in the biz, you could imagine that he might end up having to implement MQA on his gear anyway!
 
D

Deleted member 65

Guest
18 hrs ·
This isn't a prelude to suddenly becoming active on FB but I felt I had to share this. [...]

A bit confused, is the above your words or actual quotes from someone named Bruno?
 


Don Hills

Addicted to Fun and Learning
Joined
Mar 1, 2016
Messages
708
Likes
464
Location
Wellington, New Zealand
Not all people are "as accustomed to public speaking" as you are. Hopefully a video of the session will appear in due course.

... Wow. Reading the comments and replies to that post, it's a who's who of heavyweights.
... That is, people with enormous depth of technical knowledge that I respect.
 
Last edited:

NorthSky

Major Contributor
Joined
Feb 28, 2016
Messages
4,998
Likes
945
Location
Canada West Coast/Vancouver Island/Victoria area
I don't know, all this MQA stuff gives me the vertigo domino effect.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
I don't care about MQA, but I'd like to know what this bit means, it sort of sounds like the beginning of the end for the loudness war.
I always get a bit nervous when they start talking about such things. Spotify's version of 'normalisation' is, in fact, dynamic range compression - so not what you might be thinking of as an end to the loudness war! No problem if it can be turned off of course - and it can be at the moment - but I worry that if 99% of users are happy with it, and the toggle makes the UI too complicated, they might remove it from the UI. I read somewhere that it can't be turned off in the browser version.
 
Last edited:

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
The backstory is if the players have loudness normalization then there would be no need for loudness compression in the content. Yeh right....
Well I can see that the incentive for 'loudness wars' style compression/clipping/distortion would be reduced because smart algorithms would detect that the recording would be subjectively 'loud' and turn it down appropriately, regardless of the absolute peak or average signal level.

But I am not sure that streaming 'normalization' quite meets the definition of the term. In Spotify's case it doesn't: it is applying a form of dynamic range compression that is clearly audible. Fine for many situations, no doubt, but not something that a purist like me could ever listen to..!

I can imagine why they do it that way. The demand for normalization is probably really a demand for dynamic range reduction, even within tracks themselves. e.g. the listener in a car or on the bus who finds they can't hear a quiet classical passage and turns the volume up, only to be blasted out of their skin either when the loud bit starts, or when the next track starts - particularly if they are on shuffle play. In effect, it is the same 'problem' (a first world one) and DR compression solves it satisfactorily for the majority of listeners.
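The distinction is easy to show in code. A toy sketch (made-up gains and thresholds, not anything Spotify actually uses): true normalisation applies one gain to the whole track, whereas compression applies a level-dependent gain:

```python
import numpy as np

def normalize(x, target_rms=0.1):
    # Loudness normalisation proper: ONE static gain for the whole
    # track, so relative dynamics are untouched.
    g = target_rms / np.sqrt(np.mean(x**2))
    return g * x

def compress(x, threshold=0.5, ratio=4.0):
    # A crude sample-wise compressor: gain varies with level, so loud
    # parts are turned down relative to quiet parts. Parameters are
    # purely illustrative.
    mag = np.abs(x)
    over = mag > threshold
    out = x.copy()
    out[over] = np.sign(x[over]) * (threshold + (mag[over] - threshold) / ratio)
    return out

# A quiet passage followed by a loud one, 18:1 in level.
x = np.concatenate([0.05 * np.ones(100), 0.9 * np.ones(100)])

norm_out = normalize(x)
comp_out = compress(x)

# Loud-to-quiet level ratio after each process:
print(np.mean(np.abs(norm_out[100:])) / np.mean(np.abs(norm_out[:100])))  # ~18: unchanged
print(np.mean(np.abs(comp_out[100:])) / np.mean(np.abs(comp_out[:100])))  # ~12: reduced
```

The first preserves the quiet-to-loud contrast exactly; the second shrinks it - which is precisely the "within-track" behaviour being described above.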
 

Don Hills

Addicted to Fun and Learning
Joined
Mar 1, 2016
Messages
708
Likes
464
Location
Wellington, New Zealand
If dynamic range compression / normalisation by services and devices becomes ubiquitous, forums like GS will be awash with posts discussing the best ways to game the system...
 

oivavoi

Major Contributor
Forum Donor
Joined
Jan 12, 2017
Messages
1,721
Likes
1,939
Location
Oslo, Norway
I also find it interesting that he mentions that he's doing speakers now - not a word about amplifiers. Either he thinks that amps are a solved problem by now, or he's come to the conclusion that speakers are vastly more important than amps.
 

watchnerd

Grand Contributor
Joined
Dec 8, 2016
Messages
12,449
Likes
10,414
Location
Seattle Area, USA
If dynamic range compression / normalisation by services and devices becomes ubiquitous, forums like GS will be awash with posts discussing the best ways to game the system...

GET YOUR DIRTY HANDS OFF MY BUSS COMPRESSOR YOU MUSIC CENSORING STANDARDS GESTAPO.

Or something similar...
 

watchnerd

Grand Contributor
Joined
Dec 8, 2016
Messages
12,449
Likes
10,414
Location
Seattle Area, USA
I also find it interesting that he mentions that he's doing speakers now - not a word about amplifiers. Either he thinks that amps are a solved problem by now, or he's come to the conclusion that speakers are vastly more important than amps.

Well, yes...amps are a mostly solved problem, and speakers are vastly more important than amps (these days).

Is that really in doubt?
 

oivavoi

Major Contributor
Forum Donor
Joined
Jan 12, 2017
Messages
1,721
Likes
1,939
Location
Oslo, Norway
Well, yes...amps are a mostly solved problem, and speakers are vastly more important than amps (these days).

Is that really in doubt?

No, no rational doubt about that. But Putzeys is most famous for designing amps, and has previously spoken about sonic differences between amps in a very unqualified manner. I therefore find it interesting that he doesn't mention amps at all when he contrasts the MQA mumbo jumbo with things that actually make a difference.
 
OP
Purité Audio

Purité Audio

Master Contributor
Industry Insider
Barrowmaster
Forum Donor
Joined
Feb 29, 2016
Messages
9,154
Likes
12,404
Location
London
I also find it interesting that he mentions that he's doing speakers now - not a word about amplifiers. Either he thinks that amps are a solved problem by now, or he's come to the conclusion that speakers are vastly more important than amps.
Of course speakers are hugely more significant; amps (low-distortion solid state, anyway) are done and dusted, and have been for thirty years.

Keith
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,633
Likes
240,674
Location
Seattle Area
No, no rational doubt about that. But Putzeys is most famous for designing amps, and have previously spoken about sonic differences between amps in a very unqualified manner. I therefore find it interesting that he doesn't mention amps at all when he contrasts the MQA mumbo jumbo with things that actually make a difference.
In that case I don't know that he's in a position to ask for double-blind tests of MQA if he has not done the same for amplifiers.
 