
AP Mastering: "Nyquist theorem debunked: why 44.1kHz sounds bad"

fieldcar
I just saw this suggested video on YouTube from the channel AP Mastering. I shouldn't be surprised, as he put out another video in which he was highly critical of some of ASR's highest-rated speakers while claiming that one of those terrible single-driver monitors is great, ignoring multi-tone performance.

This time, he seems to have an axe to grind with the legendary Monty Montgomery video and how it has rightly been used to defend digital audio. His biggest gotchas are kind of ridiculous, and he appears to just parrot Texas Instruments' marketing material for hi-res DACs and ADCs.

Amplitude above and below the 0 line (1 out of 65,536 and the negative): that's about -90 dBFS. OK, not a big deal unless you plan to set your gain to +80 dB on an incredibly quiet recording.

Another is that quantization error and aliasing exist, therefore delta-sigma/oversampling with slow or no filters is the magic cure. While he's technically correct, he fails to quantify how insignificant this quantization error is. 16/44.1 without dither is good for -98 dB, or 0.00126% THD, and -120 dB at 3 kHz with a modern shaped dither.
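For anyone who wants to check that arithmetic, here's a minimal sketch of my own (not from the video), using the standard textbook formula for an ideal N-bit quantizer with a full-scale sine, SNR ≈ 6.02·N + 1.76 dB:

```python
# Back-of-envelope quantization noise for ideal N-bit PCM (full-scale sine).
# Standard textbook formula; the numbers are illustrative only.
import math

bits = 16
snr_db = 6.02 * bits + 1.76              # ~98.1 dB for 16 bits
noise_pct = 10 ** (-snr_db / 20) * 100   # noise as a percentage of full scale

print(f"{bits}-bit SNR: {snr_db:.1f} dB")        # 98.1 dB
print(f"noise floor: {noise_pct:.5f}% of FS")    # ~0.00125%, i.e. the ~0.00126% quoted above
```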

The funny thing is, most audio ADCs have a noise floor right at this -100 dB mark, and each track contributes its share of ambient and hardware-related noise to the mix. This noise summing is far more significant than the quantization error introduced by 16/44.1. I do still believe that in the music-production world there is some value in 24/48k tracks, especially for keeping that summed noise floor down. But the final master can be 16/44.1k without any audible loss in quality.
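To put a rough number on the noise-summing point, a quick sketch of my own (the -100 dBFS per-track floor is the ballpark ADC figure from above): N equally loud uncorrelated noise sources add on a power basis, raising the combined floor by 10·log10(N) dB.

```python
# Uncorrelated noise floors add in power, so N tracks at the same level
# raise the combined floor by 10*log10(N) dB. The -100 dBFS per-track
# figure is an assumption, roughly a modern ADC floor plus room noise.
import math

track_floor_dbfs = -100.0
for n_tracks in (1, 8, 32, 80):
    rise = 10 * math.log10(n_tracks)
    print(f"{n_tracks:3d} tracks: ~{track_floor_dbfs + rise:6.1f} dBFS (+{rise:.1f} dB)")
```

Even at 80 tracks the summed floor sits around -81 dBFS, well above the 16-bit quantization floor, which is the point: the acoustic and hardware noise dominates.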

What are your thoughts?

 
"Nyquist theorem debunked"

Didn't watch, but I assume no mathematical proofs offered...

I think we can ignore this one...

This guy is just an example of how there are too few consequences for BS in clickbait titles.
 
"Foundational theoretical underpinning of conversion DEBUNKED: why every CD sounds bad" should be the hint that this guy is a clickbait peddling clown
It takes about two minutes in Audacity to discover for yourself that if you do something approximating sinc reconstruction of his allegedly "beating" signal, the one that supposedly fades in and out, no such beating or anything untoward occurs (provided the tone is sufficiently far from Nyquist, of course).

He is right that many tech laypeople who bring up the sampling theorem don't really understand it, and that merely having the samples doesn't mean you're capturing or reconstructing the sampled signal in the manner the math prescribes. But any decent off-the-shelf converter not from the stone age will do a fine job of this, even at 44.1 kHz.
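Here's a script version of that Audacity experiment, a sketch of my own (the 19 kHz tone and 16x grid are arbitrary choices): sample a tone at 44.1 kHz, evaluate the sinc-series reconstruction on a dense time grid, and check that the envelope is flat.

```python
import numpy as np

fs, f0, dur = 44100, 19000.0, 0.01
k = np.arange(int(fs * dur))                     # sample indices
x = np.sin(2 * np.pi * f0 * k / fs)              # the 44.1 kHz samples

# Sinc-series reconstruction y(t) = sum_k x[k] * sinc(fs*t - k),
# evaluated on a 16x denser grid (np.sinc is the normalized sinc).
t = np.arange(int(16 * fs * dur)) / (16 * fs)
y = np.array([np.dot(x, np.sinc(fs * ti - k)) for ti in t])

# A genuinely "beating" signal would show a wobbling short-window RMS;
# a clean sine sits at A/sqrt(2) ~ 0.707. Skip the edges, where the
# finite sum truncates the sinc tails.
mid = y[len(y) // 4: 3 * len(y) // 4]
win = 705                                        # ~1 ms at the dense rate
rms = np.sqrt((mid[:len(mid) // win * win].reshape(-1, win) ** 2).mean(axis=1))
print(f"short-window RMS spread: {rms.min():.4f} .. {rms.max():.4f}")
```

The raw sample values do wiggle up and down near Nyquist, which is what fools people; the reconstructed waveform does not.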
 
"Nyquist theorem debunked"

Didn't watch, but I assume no mathematical proofs offered...

I think we can ignore this one...

This guy is just an example of how there are too few consequences for BS in clickbait titles.
We get digital stairsteps at 0:12. And of course rabbit-ear quotes around "Nyquist theorem". I really don't see any reason to watch other than macabre interest. Which I have, so let's see how this rips.
 
Looking at the AP Mastering website, I have to say it does not instill any confidence. There is always a reason when a picture of the actual mastering suite is absent.
 
What are your thoughts?
That I'm not going to gift him my click on what is quite obviously going to be typical boring nonsense. We've heard it all before, ad infinitum.

Nor should anyone else.
 
No idea about the video, but starting with "Nyquist" and then focusing on bit depth/quantization in the commentary here about the video doesn't make me think anyone is really following the scent.
 
and then focusing on bit depth/quantization in the commentary here
I think the commentary here is focussing on what is said in the video. It is the title of the video that highlights Nyquist, but the content then goes on to parrot all the usual myths about quantisation and discrete time, all at once.

I know this and I've not even watched the damn thing.
 
AI-generated summary (as I ain't watching that):

Modern Audio Sampling Techniques
- The Delta Sigma approach to digital audio sampling, involving oversampling and averaging, is far superior to the outdated SAR method, allowing for less noise and greater dynamic range.
- Delta Sigma sampling can achieve high-quality signals with just one bit of resolution through averaging multiple samples over time.

Noise Reduction and Frequency Response
- Delta Sigma modulators can shift noise to higher frequencies, enabling cleaner audio in the desired frequency band through low-pass filtering.
- The Delta Sigma method eliminates the need for the steep filters used in SAR, which can produce terrible and resonant sounds.

Industry Standards and Applications
- Delta Sigma sampling is widely used in modern audio interfaces and converters, becoming the industry standard for digital audio equipment.

Nyquist Theorem Misconceptions
- The commonly taught Nyquist theorem is outdated and inaccurate, as it fails to account for zero crossings and phase shifting of signals.
Last line is the kicker... zero crossings will be reconstructable as long as the signal is sampled above the Nyquist rate and the phase information can be reconstructed correctly as well (assuming no aliasing and ideal sampling). The signal's frequency content can be accurately captured and reconstructed as long as the sampling rate is sufficient.
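To make that concrete, a little sketch of my own (the 10 kHz frequency and 0.7 rad phase are arbitrary choices): sample a sine whose zero crossings fall between sample instants, sinc-interpolate at random off-grid times, and compare with the true continuous waveform.

```python
import numpy as np

fs, f0, phase = 44100, 10000.0, 0.7           # phase puts crossings off the grid
k = np.arange(2048)
x = np.sin(2 * np.pi * f0 * k / fs + phase)   # the samples

rng = np.random.default_rng(0)
t = (200 + 1600 * rng.random(50)) / fs        # off-grid times, away from edges
y = np.array([np.dot(x, np.sinc(fs * ti - k)) for ti in t])
truth = np.sin(2 * np.pi * f0 * t + phase)

# Error is limited only by truncating the sinc series to 2048 terms.
print(f"max off-grid reconstruction error: {np.max(np.abs(y - truth)):.2e}")
```

The samples "know" exactly where the zero crossings and phase are, even though no sample lands on them.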


JSmith
 
I think the commentary here is focussing on what is said in the video. It is the title of the video that highlights Nyquist, but the content then goes on to parrot all the usual myths about quantisation and discrete time all together.

I know this and I've not even watched the damn thing.

If one comes for the Nyquist, why do they then stay for the quantization? (Whittaker probably deserves a mention if we're lugging Shannon into the discussion.) If I were going to bring this nonsense to broader attention (thus generating clicks that some above wisely refuse to give out), I'd simply say the clickbait title shows the dude doesn't understand the subject matter and leave it at that.

With that said, 44.1 is cutting it a bit close, or we wouldn't have to oversample, interpolate, and dither so much. DSD and similar "1-bit" approaches address the issues better, in my opinion. (In music production, 32-bit float or more time-honored hi-rez formats make sense for the same reason that your pocket calculator uses a lot more precision than it shows. On that subject: don't try to tell me IEEE 754 is good enough for anyone if you haven't written software that handles $millions of other people's money ...)
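On the "cutting it close" point: one way to see it is to estimate reconstruction-filter lengths. A sketch of my own, using scipy's Kaiser-window estimator (the 100 dB target and the 28 kHz stopband for the higher rate are arbitrary illustrative choices, not anyone's spec):

```python
# Kaiser-window FIR length estimate: how many taps to go from flat at
# 20 kHz to ~100 dB down by the stopband edge. Narrower transition
# band at 44.1 kHz means a much longer (steeper) filter.
from scipy.signal import kaiserord

for fs, stop_hz in ((44100, 22050.0), (96000, 28000.0)):
    width = (stop_hz - 20000.0) / (fs / 2)   # transition width / Nyquist
    numtaps, beta = kaiserord(100.0, width)
    print(f"fs = {fs:5d} Hz: ~{numtaps} taps (transition {stop_hz - 20000.0:.0f} Hz)")
```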
 
(assuming no aliasing and ideal sampling).

"The road to hell is paved with good intentions." :)

The problem is likely non-existent with real-world gear when dealing with straightforward recordings like live acoustic ensembles playing in front of an ORTF pair.

But when someone piles 80 channels of this stuff up and then applies stupendous numbers of plugins--that can very well bring some of the demons up from the numeric depths.
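A toy model of those demons, my own sketch (the 200 gain-staging round trips and +7 dB figure are arbitrary stand-ins for a heavy plugin chain): run the same audio through repeated unity-gain stages in float32 and compare against a float64 reference.

```python
# Round-off accumulation: each multiply/divide rounds, and over many
# stages the float32 path drifts away from a float64 reference.
import numpy as np

rng = np.random.default_rng(1)
ref = rng.standard_normal(1 << 15) * 0.1      # test "signal"
x32 = ref.astype(np.float32)
x64 = ref.copy()

g = 10 ** (7.0 / 20.0)                        # +7 dB up, then back down
for _ in range(200):                          # 200 unity-gain round trips
    x32 = (x32 * np.float32(g)) / np.float32(g)
    x64 = (x64 * g) / g

resid = x32.astype(np.float64) - x64
rel_db = 20 * np.log10(np.sqrt(np.mean(resid ** 2)) / np.sqrt(np.mean(x64 ** 2)))
print(f"float32 residual after 200 stages: {rel_db:.1f} dB below the signal")
```

The drift is easy to measure; whether any given session pushes it into audibility is a separate question.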
 
Amplitude above and below the 0 line (1 out of 65,536 and the negative)

Clarification:

That's 1 out of +32767 or -1 out of -32768, to be specific, for the data digits.
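For reference, the same numbers straight from the 16-bit integer type (a trivial check of my own):

```python
import numpy as np

info = np.iinfo(np.int16)
print(info.min, info.max)               # -32768 32767
print(20 * np.log10(1 / 32768))         # one LSB vs full scale: ~ -90.3 dB
```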
 
If one comes for the Nyquist, why do they then stay for the quantization? (Whittaker probably deserves a mention if we're lugging Shannon into the discussion.) If I were going to bring this nonsense to broader attention (thus generating clicks that some above wisely refuse to give out), I'd simply say the clickbait title shows the dude doesn't understand the subject matter and leave it at that.

With that said, 44.1 is cutting it a bit close, or we wouldn't have to oversample, interpolate, and dither so much. DSD and similar "1-bit" approaches address the issues better, in my opinion. (In music production, 32-bit float or more time-honored hi-rez formats make sense for the same reason that your pocket calculator uses a lot more precision than it shows. On that subject: don't try to tell me IEEE 754 is good enough for anyone if you haven't written software that handles $millions of other people's money ...)
Of course IEEE 754 is good enough…if you are using decimal128 on a z/Architecture machine :)
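The joke lands because binary float really can't represent most decimal fractions exactly, while decimal formats can. A quick sketch, with Python's decimal module standing in for hardware decimal128:

```python
from decimal import Decimal

print(0.1 + 0.2 == 0.3)                                    # False: binary float
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True: decimal math
```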
 
Clarification:

That's 1 out of +32767 or -1 out of -32768, to be specific, for the data digits.
Right! Thanks for adding.

Yeah. Another day, another troll. I just feel that he's almost begging for @amirm to make a video with actual repeatable, objective proof against everything he has said. We could even do some simple nulling experiments to prove it pretty easily.
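For what it's worth, the nulling experiment fits on one screen. A sketch of my own (scipy's polyphase resampler stands in for an actual converter chain, and the tone frequencies are arbitrary): round-trip a sub-20 kHz multitone through 44.1 kHz and subtract.

```python
# Null test: downsample a multitone to 44.1 kHz, bring it back, subtract.
# If 44.1 kHz "loses" anything in-band, it should show in the residual.
import numpy as np
from scipy.signal import resample_poly

fs = 88200
t = np.arange(1 << 16) / fs
x = sum(np.sin(2 * np.pi * f * t + p) for f, p in
        ((440.0, 0.0), (3150.0, 1.0), (9600.0, 2.0), (17500.0, 3.0))) / 4

down = resample_poly(x, 1, 2)        # 88.2 kHz -> 44.1 kHz
back = resample_poly(down, 2, 1)     # ... and back up

resid = (back - x)[4096:-4096]       # drop the resampler's edge transients
null_db = 20 * np.log10(np.sqrt(np.mean(resid ** 2)) / np.sqrt(np.mean(x ** 2)))
print(f"null depth: {null_db:.1f} dB")
```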
 
Of course IEEE 754 is good enough…if you are using decimal128 on a z/Architecture machine :)

Hah, this actually came after my days of handling said $millions in the electric utility biz. I had to use BCD math libraries and sometimes an HP-12C when hand-checking things.
 
"The road to hell is paved with good intentions." :)

The problem is likely non-existent with real-world gear when dealing with straightforward recordings like live acoustic ensembles playing in front of an ORTF pair.

But when someone piles 80 channels of this stuff up and then applies stupendous numbers of plugins--that can very well bring some of the demons up from the numeric depths.
I would imagine this is very common with any sort of live recording, especially orchestral work. I wonder how many mics they use on those John Williams recordings. It's got to be a nightmare to deal with.
 