
Let's develop an ASR inter-sample test procedure for DACs!

I proved it: I liked that song without knowing it was distorting, and I like it just the same knowing it is.
On a forum mostly dedicated to the accurate and neutral reproduction of recorded music, I don't see how your arguments can be convincing? :confused:

Of course, distortion, weird frequency responses/EQs and other playback issues won't stop you from enjoying your music. I know I can still enjoy good songs on a crappy Bluetooth speaker. Some people even have their speakers wired out-of-phase and don't know what they've missed until someone (or themselves) notices and corrects it.
And yes, distortion can sound pleasant in certain cases too. That's pretty well documented. However, with ISPs the distortion is random and unpredictable, so I fail to see how it could be of any benefit.
 
What could change is better awareness of the issue for all users and audio enthusiasts.

If a DAC has a digital volume control, it would be nice to have a line in a review that says "be careful, you can get extra distortion from ISPs past -3dB on the volume control".
If a DAC has an analogue volume control, it would be nice to have a line in a review saying "be careful, you can get extra distortion from ISPs. Be sure to reduce the volume in your media player by x dB to experience it cleanly".
With a CD player, it would be nice to have a line in a review that says "be careful, the output can clip significantly on many commercial releases, with no way to prevent this" or "this unit handles ISPs very well".
This should be seen only as "technical nerd info" of sorts. Audibility of clipped IS-overs is very (very!) questionable as far as I can see (it would be worth some blind testing), except in the most extreme cases.
 
Audibility of clipped IS-overs is very (very!) questionable as far as I can see (it would be worth some blind testing), except in the most extreme cases.
If an oversampled hard-clipper plug-in can properly simulate a clipping DAC, then setting up a blind test, sharing files and results is right up my street :)

Would that be a good enough approximation in your opinion?
 
If an oversampled hard-clipper plug-in can properly simulate a clipping DAC, then setting up a blind test, sharing files and results is right up my street :)

Would that be a good enough approximation in your opinion?
I'd probably do it like this, manually, for a given music track with known (or well-estimated) IS-over peak values, let's say below +6dBFS:
- convert to 24bit (if required) and scale down by 6dB (0.5x)
- upsample to 8x or 16x with a precise, very sinc-like upsampler, producing the reference track
- hard-clip (or soft-clip) a copy at 0dBFS, producing the test track
- downsample both to 4x so that the DAC's internal filter is still irrelevant. Users' DACs will be happy, as most can do 176.4kHz
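A minimal pure-Python sketch of the upsample and clip steps above (the hand-rolled windowed-sinc interpolator and the function names are illustrative only; a real test would use a proper resampler):

```python
import math

def upsample_sinc(samples, factor, half_taps=64):
    """Upsample by an integer factor with a Hann-windowed-sinc
    interpolator (a rough stand-in for a 'very sinc-like' upsampler)."""
    n = len(samples)
    out = []
    for i in range(n * factor):
        t = i / factor                      # position in input-sample units
        acc = 0.0
        for k in range(max(0, int(t) - half_taps),
                       min(n, int(t) + half_taps)):
            x = t - k
            sinc = math.sin(math.pi * x) / (math.pi * x) if x else 1.0
            window = 0.5 + 0.5 * math.cos(math.pi * x / half_taps)  # Hann
            acc += samples[k] * sinc * window
        out.append(acc)
    return out

def hard_clip(samples, ceiling=1.0):
    """Hard-clip a copy at the given ceiling (0dBFS = 1.0)."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]
```

The reference track would be the unclipped 8x signal and the test track `hard_clip(upsample_sinc(x, 8))`, with both then decimated to 4x as in the last step.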
 
On a forum mostly dedicated to the accurate and neutral reproduction of recorded music, I don't see how your arguments can be convincing? :confused:

Of course, distortion, weird frequency responses/EQs and other playback issues won't stop you from enjoying your music. I know I can still enjoy good songs on a crappy Bluetooth speaker. Some people even have their speakers wired out-of-phase and don't know what they've missed until someone (or themselves) notices and corrects it.
And yes, distortion can sound pleasant in certain cases too. That's pretty well documented. However, with ISPs the distortion is random and unpredictable, so I fail to see how it could be of any benefit.
Convincing of what? That I don't care about IS peaks? Well, take my word for it!
And what's this "dedicated to the accurate and neutral reproduction of recorded music" thing? Do you really care about accurate and neutral reproduction of heavily limited/clipped (read: destroyed!) signals?
Doubt it...
 
user @ayane posted a music file which contains +6dBFS IS-overs. After a quick 4x upsampling with Adobe Audition, the highest peak looks like this:
[Attachment 322246: the upsampled waveform around that peak]

This is a signal very close to fs/2 with a sort of "break" sample inserted (basically causing a 180-degree phase flip in that near-Nyquist tone), readily observed in the original sample stream, with the effect already visible in Audition's upsampled graphical representation:
[Attachment 322251: the sample stream with the phase flip, as rendered in Audition]
It's trivial to come up with examples. Here's an example using Multitone. Red are the PCM samples connected by straight lines, blue is the reconstructed waveform using sinc interpolation (DeltaWave can do the same):
[Attached image: Multitone plot of the PCM samples (red) and the sinc-reconstructed waveform (blue)]
 
You'll see that the specification used for true peak allows for overshoots of up to +/-0.55dB, depending on the oversampling factor implemented in the limiter/meter.
Why use true peak limiting, though? Wouldn't it be better to prevent high true-peak overshoot in the first place by using oversampling? e.g. 4x oversampling with a small lookahead time of 0.1ms.


JSmith
 
...This means that sometimes we try to educate artists, labels and whoever we work with that there's no need to blindly push things for no good reason...
however, in that/this industry - it's like trying to herd feral cats... I got tired of complaining about it long ago... it's fruitless...
 
Fulltime Mastering Engineer here.

There's been some discussion about maximum possible inter-sample peaks.

The worst I've measured was +12.5 dB using the Meters in iZotope RX. This was on a remix for a very well known artist on a giant hit song about a year ago.

I never decided if the true peaks were in fact that egregious or if the signal in some way tripped up the metering algorithm.
 
Fulltime Mastering Engineer here.

There's been some discussion about maximum possible inter-sample peaks.

The worst I've measured was +12.5 dB using the Meters in iZotope RX. This was on a remix for a very well known artist on a giant hit song about a year ago.

I never decided if the true peaks were in fact that egregious or if the signal in some way tripped up the metering algorithm.
I may be wrong about this, but I didn't think +12.5 dB was possible for inter-sample overs. Perhaps upsampling the song in question and looking at the peaks would be instructive, in case something is tripping up the iZotope meters.
 
Why use true peak limiting, though? Wouldn't it be better to prevent high true-peak overshoot in the first place by using oversampling? e.g. 4x oversampling with a small lookahead time of 0.1ms.
Oversampling really does help. Most people use oversampled limiters these days. However, depending on the DSP/implementation, they are sometimes not enough, even with lookahead. Worse, they can sometimes produce even more overshoot, because any non-linear process produces harmonics (or distortion, depending on how you look at it). See the first table quoted in post #19.
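A toy illustration of that overshoot mechanism in pure Python: hard-clip a full-scale sine at 0.8, then look at what survives a brickwall lowpass (here just the fundamental, extracted by correlation). The band-limited content comes back above the clip ceiling, which is exactly what a later resampling or reconstruction filter exposes. All numbers are illustrative:

```python
import math

N, CYCLES = 1024, 8          # one analysis block, 8 sine cycles
CEILING = 0.8                # hard-clip level (about -1.9dBFS)

sine    = [math.sin(2 * math.pi * CYCLES * n / N) for n in range(N)]
clipped = [max(-CEILING, min(CEILING, s)) for s in sine]

# Amplitude of the fundamental after clipping (correlation with the
# original frequency); this is what a brickwall lowpass would keep.
fundamental = (2 / N) * sum(c * s for c, s in zip(clipped, sine))
# fundamental comes out near 0.896: above the 0.8 ceiling, i.e. the
# band-limited waveform overshoots the clip level.
```

The same effect is why a non-oversampled limiter can leave true peaks above its threshold: the clipping adds harmonics, and removing them again pushes the in-band waveform back over the ceiling.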

I may be wrong about this, but I didn't think +12.5 dB was possible for inter-sample overs. Perhaps upsampling the song in question and looking at the peaks would be instructive, in case something is tripping up the iZotope meters.
Please read the first post. I've linked an article with a mathematical demonstration of why ISPs have no theoretical maximum value.
 
Please read the first post. I've linked an article with a mathematical demonstration of why ISPs have no theoretical maximum value.
Okay, but samples alternating between +1 and -1 represent a signal at exactly half the sample rate. That basically isn't going to happen. In fact, even with a perfect infinite brickwall filter, you have to stop the response slightly below half the sample rate.
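For reference, the unbounded construction from the linked article is easy to check numerically: choose each sample as the sign of the sinc kernel centered between two samples, so every sample is exactly ±1 (0dBFS), and evaluate the reconstructed value at that midpoint. The sum grows like a harmonic series, i.e. without bound. A pure-Python sketch (the function name is mine):

```python
import math

def midpoint_peak(n_side):
    """Reconstructed value at t = 0.5 when each sample x[k] is chosen
    as sign(sinc(0.5 - k)), i.e. every sample is exactly +/-1 (0dBFS).
    Equals the sum of |sinc(0.5 - k)|, which grows like log(n_side)."""
    total = 0.0
    for k in range(-n_side + 1, n_side + 1):
        t = 0.5 - k
        total += abs(math.sin(math.pi * t) / (math.pi * t))
    return total

peak = midpoint_peak(100)            # 200 full-scale samples
peak_db = 20 * math.log10(peak)      # roughly +12dB over 0dBFS already
```

As the reply above says, such a signal is adversarial and will never occur in music; the point is only that there is no mathematical ceiling.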
 
I dug out the file to refresh, this is the mix untouched. Currently has 3.5 million plays on Spotify :)
This is ridiculous. That file is peaking at ~ +12dBFS in plain sample peaks, not inter-sample peaks. You certainly need to check the source of your file. I'm not sure 32-bit floating-point files are distributed in the music industry, so someone must have taken the source file, applied some DSP to it (probably positive gain, maybe other things, who cares), and sent the resulting floating-point file out for people to be fooled by.

Btw, if you normalize that file to 0dBFS, inter-sample peaks sit at ~ +0.3dBFS.
 
That basically isn't going to happen.
Yes, this is a theoretical demonstration, so of course you won't come across this in your music collection.
However, I have posted examples of commercially released (and successful!) music where ISPs reach almost +5dBTP. Wouldn't you like to be able to choose a CD player that you know can handle these extreme cases with ease? :)
 
Yes, this is a theoretical demonstration, so of course you won't come across this in your music collection.
However, I have posted examples of commercially released (and successful!) music where ISPs reach almost +5dBTP. Wouldn't you like to be able to choose a CD player that you know can handle these extreme cases with ease? :)
Yes, but it is rare to encounter more than 3 or 3.5 dB. I think I've found a couple that were right at 6 dB. So we don't need the idea of more headroom than that. As stated in the other thread about inter-sample overs, you create a raised noise floor by leaving excessive headroom. And a subtle point: the Shannon-Nyquist theorem does require limiting bandwidth to less than half the sample rate, though it is usually stated as half the sample rate. In practice, filters are far enough from theoretical perfection that this isn't an issue by itself.

Another example is the test I first saw in Stereophile where you play white noise to see the shape of the reconstruction filter. Jurgen Reis of MBL suggested it. His idea was to use -4 dBFS for that test signal. Pure white noise can have peaks of over +10 dB. However, you can look at results yourself. As you lower it from 0 dBFS, clipping becomes less and less common. By -4 dBFS it might occur once in several minutes. So for a short test that is enough headroom. It would also be enough to make inter-sample over issues very uncommon and, when they occur, likely inaudible.
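The crest-factor claim is easy to check: Gaussian white noise routinely throws samples 12dB or more above its RMS level, and the rate of overs falls off steeply as you add headroom. A quick sketch, seeded for repeatability (the headroom values are illustrative, not from the Stereophile test):

```python
import math, random

random.seed(1)
N = 100_000
noise = [random.gauss(0.0, 1.0) for _ in range(N)]

rms = math.sqrt(sum(x * x for x in noise) / N)
peak = max(abs(x) for x in noise)
crest_db = 20 * math.log10(peak / rms)   # well above 10dB for this N

# How quickly overs die off as headroom above the RMS level grows:
for headroom_db in (6, 9, 12):
    ceiling = rms * 10 ** (headroom_db / 20)
    overs = sum(1 for x in noise if abs(x) > ceiling)
    print(f"{headroom_db}dB above RMS: {overs} overs in {N} samples")
```

With 6dB of headroom thousands of samples still clip; by 12dB only a handful do, which matches the observation that lowering the test level makes clipping rarer and rarer.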
 
Yes, but it is rare to encounter more than 3 or 3.5 dB. I think I've found a couple that were right at 6 dB. So we don't need the idea of more headroom than that. As stated in the other thread about inter-sample overs, you create a raised noise floor by leaving excessive headroom.
I do understand your point (and I'd be plenty happy if 3dB of headroom for SRC/decoding were already a standard), but I don't think it's reasonable, from a digital point of view, to treat the least significant bit with the same importance as the most significant bit. That extra headroom could very well be optional/user-selectable.

And a subtle point: the Shannon-Nyquist theorem does require limiting bandwidth to less than half the sample rate, though it is usually stated as half the sample rate.
Yes, filters are a bottleneck. But wouldn't really accepting all the logical consequences of the sampling theorem mean providing sufficient headroom for every process that follows sampling itself? How could a DAC outputting clipped values be considered acceptable?
 
How could a DAC outputting clipped values be considered acceptable?
All engineering is about compromises.
- IS-overs are a garbage-in, garbage-out scenario (peaks larger than 3dB require very broken signal input)
- clipped IS-overs are inaudible anyway, notably given the broken input they require in order to happen
- while a DAC chip could easily allow for 12dB or more of headroom for IS-overs, this would compromise SNR/DNR by the same amount and severely complicate the analog design
- compromised SNR/DNR is audible
==> no reason to design for unclipped IS-overs. We should be grateful that some DAC chips like the AK4493 have ~3dB of headroom, which is plenty
 