
Help explain intersample overs, please?

Here, let me help now. What I just did was take a very popular CD track from the 2000s and accurately interpolate it by a factor of 8.

First, I plotted the interpolated values in red. Then I plotted the ORIGINAL values over that in green (with sample repeat, so they cover the interpolated data wherever it isn't bigger than the original samples).

Now, any red that's above 1 or below -1 is an actual ISP. No error, no muss, no fuss, just an ISP that many DACs simply cannot handle. Those who persist in arguing that they shouldn't exist don't matter, because here is concrete proof from a popular, successful track (one that I happen to like, even), even across the alien nation!

[Attached plot "ispdemo.jpg": the 8x-interpolated values in red against the original samples in green.]


As you can plainly see, there are some pretty big ISPs. Some DACs, including some computers, simply will not reproduce that. Yes, you can work around it with a digital volume control, at the cost of raising your overall noise floor, but I'm not a big fan of that, either. The answer is to produce responsibly and not do that. Anything else is being mean to the client, whose material will not sound the same on all machines. :(

Note: the red spots that are not above an absolute value of 1 are not an error either; that kind of overshoot between samples is exactly what you expect from interpolation, which is, of course, how ISPs come to exist.
 
@j_j So, can you post/link a small part of that actual track (rip the 16/44 file or part thereof) for me to experiment with actual CD players and standalone D/As?

Those large chopped peaks should be plainly obvious if missing, even when captured via an A/D from the analogue outputs.
 
@j_j So, can you post/link a small part of that actual track (rip the 16/44 file or part thereof) for me to experiment with actual CD players and standalone D/As?

Those large chopped peaks should be plainly obvious if missing, even when captured via an A/D from the analogue outputs.

I will PM you the identity of the track and the time index; how's that? There are worse examples, but I'm not at work, and I can't actually grab the worst of them from the limited set of tracks I have here.

The biggest ISP in that track is at sample 3321637/8 in channel 1 (and it's the third track on the album, not the first). Pretty atrocious. The peak value and its index:

1.2727

ind = 3321637

Note that the index is into the 8x-interpolated data.
 
@j_j So, can you post/link a small part of that actual track (rip the 16/44 file or part thereof) for me to experiment with actual CD players and standalone D/As?

Those large chopped peaks should be plainly obvious if missing, even when captured via an A/D from the analogue outputs.

Wait, do you have either Octave or MATLAB? I ***can*** give you the (trivial) MATLAB script for this.
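Not the actual script, just a rough sketch of what such a check might look like in MATLAB/Octave (the filename is a placeholder; resample needs the Signal Processing Toolbox in MATLAB or the signal package in Octave):

```matlab
% Minimal ISP check: interpolate by 8 and look for values beyond full scale.
% 'track.wav' is a placeholder -- substitute a rip of the file in question.
[x, fs] = audioread('track.wav');   % floating-point samples, nominally within [-1, 1)
y = resample(x(:,1), 8, 1);         % 8x polyphase interpolation of channel 1
[pk, ind] = max(abs(y));            % largest interpolated magnitude and its 8x-domain index
fprintf('peak %.4f at 8x index %d (original sample ~%d)\n', pk, ind, round(ind/8));
if pk > 1
    fprintf('intersample over: +%.2f dB above full scale\n', 20*log10(pk));
end
```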
 
Right, you’re trading one thing for another
Do the attenuation in mastering, and the dynamic range is lost forever relative to the source the mastering engineer has, and it is lost at the release format's bit depth (e.g. 16 bits). This affects everyone. Even worse, some engineers prefer to apply more limiting instead of simply reducing the overall level; in that case the damage added by the limiter cannot be undone by digital volume controls during playback.

Do the attenuation during playback/streaming/broadcasting etc., and the result will be at least 24-bit (except on "vintage" products); the loss of dynamic range depends on the DAC being used, and it is only temporary. Users have full control over whether to attenuate and by how much.
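To put rough numbers on that trade-off (just the textbook ideal-SNR arithmetic, about 6.02·N + 1.76 dB for N-bit PCM, nothing measured):

```matlab
% Ideal SNR of N-bit PCM is about 6.02*N + 1.76 dB.
snr16 = 6.02*16 + 1.76   % ~98 dB for a 16-bit release
snr16 - 6                % ~92 dB if 6 dB of headroom is baked into the 16-bit master, for everyone
snr24 = 6.02*24 + 1.76   % ~146 dB for a 24-bit playback path
snr24 - 6                % ~140 dB after a 6 dB digital volume cut, still beyond any real-world DAC
```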
 
They are correct in that few, if any, DAC chips handle arbitrary signals without clipping.
Thank you @mansr and @MC_RME for the replies on this. It really surprises me; it seems primitive for the chips not to handle this. AKM's 2 dB of headroom seems a lot better than doing nothing. So I would say Benchmark's claim that "virtually every audio device on the market has an intersample overload problem" is a bit overstated.

Also, I found this article by @Archimago on the subject. Not recent but new to me, very helpful to a non-expert like me.
http://archimago.blogspot.com/2018/09/musings-measurements-look-at-dacs.html
 
Not at all. Intersample overs are a real thing, they demonstrably exist, and some DAC's do not handle them very well at all.

A proper production, of course, wouldn't have any, in my book, but that's a separate issue.

Didn't we just make the same two points? Digital clipping, hitting max recording level by accident or on purpose, is a real thing, and some DACs don't handle it well. I agree that not handling the max level case is a design flaw. I suppose I just dislike the term "inter-sample over", hence my negative "marketing" connotation.

Recording and mastering engineers have known about digital clipping since the 1970s, and yet even with 24-bit word depths for audio production there are still recordings being made that have this problem. Silly, when you think about it.
 
IMO, this is a problem that should not exist. It comes from using too-hot levels when recording. This is completely unnecessary because the bit depth of the digital format virtually always exceeds that of the music being recorded. The only reason to set levels this high is to make the recording sound as loud as possible. That makes this not a real engineering limitation, but just a side effect of the loudness wars.

In that sense, it's sad to see companies having to engineer workarounds or features into their products to overcome a problem caused by the intentional abuse of the recording format.
 
IMO, this is a problem that should not exist. It comes from using too-hot levels when recording. This is completely unnecessary because the bit depth of the digital format virtually always exceeds that of the music being recorded. The only reason to set levels this high is to make the recording sound as loud as possible. That makes this not a real engineering limitation, but just a side effect of the loudness wars.

In that sense, it's sad to see companies having to engineer workarounds or features into their products to overcome a problem caused by the intentional abuse of the recording format.

While I agree in principle, I'm not sure whether the clipping results from the recording process itself or the mastering process (or both, I suppose).
 
IMO, this is a problem that should not exist. It comes from using too-hot levels when recording. This is completely unnecessary because the bit depth of the digital format virtually always exceeds that of the music being recorded. The only reason to set levels this high is to make the recording sound as loud as possible. That makes this not a real engineering limitation, but just a side effect of the loudness wars.

In that sense, it's sad to see companies having to engineer workarounds or features into their products to overcome a problem caused by the intentional abuse of the recording format.

Well-designed products take existing real-world issues into account. If you know intersample overs exist, you should plan for them occurring, just as a well-designed car accounts for uneven road surfaces.
 
It comes from using too-hot levels when recording

Every self-respecting recording engineer keeps many dB of headroom during recording (at 24-bit depth that's no issue). This is not where the potential problem exists, and most likely not during mixing either, nor while they are getting the sound right.
All of that happens well below 0 dBFS (or should).
The potential problem arises in mastering, where the idiot behind the controls compresses the hell out of the recording so the music sounds as loud as possible and then normalizes so the highest sample values are just below, or hitting, 0 dBFS.

Not all recordings suffer from ISPs, and not all ISPs are equally audible (if at all).
The problem exists, but not in all circumstances/recordings/cases.
It doesn't hurt to add headroom for this, or to simply attenuate digitally before the signal is sent to the DAC, if one wants to be sure.

I don't think it is wise to blow all of this out of proportion.
Just attenuate digitally before it is sent to the DAC and you are safe in all cases.
The loss of 6 dB is no problem at all for 24-bit DACs, and given that distortion often increases near 0 dBFS, it may even be beneficial.
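A minimal sketch of that kind of pre-DAC attenuation, purely as an illustration (the filenames are placeholders, and 6 dB is just the margin mentioned above):

```matlab
% Fixed digital attenuation ahead of the DAC; keep the math in floating point
% and only reduce to the final word length (>= 24 bits) at the output.
headroom_dB = 6;                    % margin discussed above
g = 10^(-headroom_dB/20);           % linear gain, ~0.501
[x, fs] = audioread('track.wav');   % placeholder filename
audiowrite('track_padded.wav', x * g, fs, 'BitsPerSample', 24);
```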
 
Every self-respecting recording engineer keeps many dB of headroom during recording (at 24-bit depth that's no issue). This is not where the potential problem exists, and most likely not during mixing either, nor while they are getting the sound right.
All of that happens well below 0 dBFS (or should).
The potential problem arises in mastering, where the idiot behind the controls compresses the hell out of the recording so the music sounds as loud as possible and then normalizes so the highest sample values are just below 0 dBFS.
...
Yes, I know, and that's what I meant. Most likely the recordings themselves are fine and the too-hot levels are introduced in mixing, mastering, etc. I used the word "recording" in the most general sense, covering the entire process of recording, mixing, and mastering.

...
I don't think it is wise to blow all of this out of proportion.
Just attenuate digitally before it is sent to the DAC and you are safe in all cases.
The loss of 6 dB is no problem at all for 24-bit DACs, and given that distortion often increases near 0 dBFS, it may even be beneficial.
This is a good point. Many/most of us apply EQ in DSP upstream of the DAC, which involves some negative digital gain, typically -3 to -6 dB. That alone should eliminate the issue without requiring any special handling by the DAC itself.
 
Would a sample-and-hold not simply hold the sample values, and thus never exceed the minimum or maximum value? That is why I specifically mentioned NOS and filterless (both before and after conversion).
Of course, the analog stage after the DAC chip could be so poorly designed that it clips before the DAC chip reaches FSD.

I am sure we both agree a filterless NOS R2R design is broken to begin with though.
 
Wait, do you have either Octave or MATLAB? I ***can*** give you the (trivial) MATLAB script for this.

I want to see what a range of actual players or D/As with varying conversion systems do with the sample content at the analogue outs.
 
Well-designed products take existing real-world issues into account. If you know intersample overs exist, you should plan for them occurring, just as a well-designed car accounts for uneven road surfaces.

Yes. Just make the DAC so it can handle any signal that is input. Please.
 
Would a sample-and-hold not simply hold the sample values, and thus never exceed the minimum or maximum value? That is why I specifically mentioned NOS and filterless (both before and after conversion).
Of course, the analog stage after the DAC chip could be so poorly designed that it clips before the DAC chip reaches FSD.

I am sure we both agree a filterless NOS R2R design is broken to begin with though.

Well, S&Hs have been known to be not so good, and I've encountered a few with interesting gain curves, but what you say *SHOULD* be true. (That word, "should"... brrr.)

Yes, a filterless R2R is not good. Worse time resolution, worse frequency resolution, worse period, and with tons of HF noise to boot.
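As an aside, the textbook worst case makes the gap between sample values and the reconstructed waveform concrete (a standard test construction, purely illustrative, not something from this thread):

```matlab
% A sine at fs/4 whose samples all land 45 degrees away from the waveform peaks:
% every sample has magnitude 0.7071, so normalizing the SAMPLES to full scale
% pushes the reconstructed waveform about 3 dB over it.
fs = 44100;
n = (0:4095).';
x = sin(2*pi*(fs/4)/fs*n + pi/4);   % sample values: +0.7071, +0.7071, -0.7071, -0.7071, ...
x = x / max(abs(x));                % samples now hit exactly +/-1.0
y = resample(x, 8, 1);              % 8x interpolation approximates the reconstruction
20*log10(max(abs(y)))               % ~ +3.0 dB over full scale, with no clipped samples anywhere
```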
 
So, given that Benchmark has +3.5 dB of headroom (a gain factor of ~1.496), is this enough headroom?

I've processed my music collection (all CD) with ReplayGain at 8x oversampling and have a few offenders above 1.5 (wow!). Has anyone else seen this in their collections? For reference, all of my collection at 44.1 kHz has album peak levels of 0.999 or 1.0, and 90%+ of it has 8x-oversampled peaks above 1.0. Three of the offenders are from Metallica: Garage Inc.
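Putting those numbers in dB (the values come from the posts above; the arithmetic is mine):

```matlab
20*log10(1.496)    % ~ +3.50 dB: Benchmark's stated headroom (gain factor ~1.496)
20*log10(1.5)      % ~ +3.52 dB: the worst offenders in this collection, just over that margin
20*log10(1.2727)   % ~ +2.09 dB: j_j's example earlier in the thread, comfortably within it
```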

Just FYI for those reiterating the Benchmark statement... Intersample Overs in CD Recordings

Every D/A chip and SRC chip that we have tested here at Benchmark has an intersample clipping problem! To the best of our knowledge, no chip manufacturer has adequately addressed this problem. For this reason, virtually every audio device on the market has an intersample overload problem. This problem is most noticeable when playing 44.1 kHz sample rates.

I consider myself fairly rational; I read this statement knowing it is highly unlikely that Benchmark has tested every single DAC implementation. They are merely making the point that this is a fairly common problem that most DACs do not deal with.
 