
Let's develop an ASR inter-sample test procedure for DACs!

I think the case has been made. The evidence is here. It’s a real issue.

If anyone wants to argue otherwise, then please provide evidence to the contrary. After all, that is what science is all about.

My humble take is that we should measure for inter-sample over clipping and let customers know about the performance of their devices.
It would also be great to measure whether lowering a particular DAC’s digital volume can actually prevent clipping, since AFAIU that is not always the case.
I respectfully disagree with @amirm ‘s take that measuring clipping would incentivize manufacturers to lower the dynamic range of their products. If the argument is that ASR has an influence on product development, then I believe this issue can be solved by presenting the data in a balanced manner. What I mean is that we could simply write in a review that “this product does not have oversampling headroom, therefore it preserves dynamic range at the expense of potential for digital clipping due to inter-sample overs”, or that “this product has oversampling headroom, which means dynamic range is sacrificed in order to prevent most digital clipping due to inter-sample overs”. This way the user will be aware of the benefits and the downsides of either approach.

Ideally, it would be great if the customer could *choose* between the increased dynamic range and the clipping protection. Something like a “-3dB pad” option/button before the oversampling stage in DACs and SRCs could put an end to all concerns.

About the audibility issue. It probably is the case that inter-sample over clipping won’t be detected by the majority of people out there (I’m pretty sure I wouldn’t be able to tell). However, the same could be said about digital clipping at the mastering stage. Why is it that if I produce a song with transient peaks at +0.1dBFS, all of a sudden my master is “illegal”, while if my DAC is producing such clipping it’s a non-issue? To me, this argument falls apart in a spectacular manner.

If digital clipping is to be avoided, then that’s that. End of story.

Of course, as we’ve seen in this thread, clipping is not the only issue caused by inter-sample overs and there is much worse stuff that can happen. Again, we’re talking about things that may not be easily audible but personally I still care about signal preservation. I want my converter to have a “what comes in, comes out” approach to the audio that I’m feeding it.
 
I respectfully disagree with @amirm ‘s take that measuring clipping would incentivize manufacturers to lower the dynamic range of their products.
But that is essentially what is required. It's just the stupid numbers game -- and ASR is fueling that game -- that makes people think an increased noise floor of 3...6dB would have any practical consequences... but it doesn't, given the performance of today's DAC chips. There might be some corner cases but these do not happen at home listening to final production music.
Clipped IS-overs, or worse IS-over effects (as seen with most ESS DAC chips), have far more impact.
 
But that is essentially what is required.

Yes. What I mean is that we shouldn’t present the “high headroom“ solution as if it’s the greatest thing with absolutely no downsides whatsoever. It’s a trade off (a well worth one in my opinion), and we should be transparent about it.

If anything, we should ask manufacturers to give us the option to lower the digital volume at the input stage. We certainly don’t want them to decide for us what is best. While I understand Benchmark’s approach of streamlining things for the end user, I ultimately believe that taking away choice from the consumer is a bad thing.

It’s fine if designers want to make the “high headroom” option the default one, but there should be a way to disable it. As we’ve seen in this thread, not everyone agrees on what the best approach is, so the best way to satisfy everyone’s needs is to give the user the option to choose for themselves.
 
I agree, a switchable 0dB/-6dB setting would be most appropriate. -6dB is just shifting the binary input values one bit to the right, which can be done even in simple hardware, and it also removes the need to re-dither the data (relevant for 16-bit input data such as CD).
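As a sanity check of the bit-shift claim, here is a minimal Python/numpy sketch (illustrative only, not any particular DAC's firmware): an arithmetic right shift by one bit halves a PCM sample, i.e. attenuates by 20·log10(1/2) ≈ -6.02dB, and the result lands exactly on the integer grid, so no re-dithering is required.

```python
import math

import numpy as np

# 16-bit PCM samples held in int32; an arithmetic right shift by one
# bit halves each value, i.e. attenuates by ~6.02 dB.
x = np.array([32767, -32768, 1000, 1], dtype=np.int32)
y = x >> 1

# The shifted values stay on the integer sample grid (no fractional
# results), which is why no re-dither is needed, unlike an arbitrary
# gain multiply.
print(y.tolist())                 # [16383, -16384, 500, 0]
print(20 * math.log10(0.5))       # ~ -6.0206 dB
```

The same operation in hardware is literally a one-bit wire shift, which is presumably why the poster calls it "simple hardware".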

EDIT: One also could implement an analog make-up gain to compensate for the 6dB loss, assuming the analog hardware (including downstream devices) has headroom for another 6dB for the (very) occasional peak. That would make A/B comparisons easy.
 
One also could implement
Maybe some measures are already implemented? In actual fact we don't know, and this is the main problem we could solve first. That's the reason for requesting a test procedure.
I use an AVR for music. This means that digital signals are heavily processed before reaching the DAC. It is difficult to guess which part of the digital signal path is susceptible to overload. Maybe it is not the DAC? What tests should be done to verify that a modern audio setup works properly? Measuring sine output power and SINAD in direct mode, while very interesting, seems far removed from reality.
Testing the internal processing of a DAC looks like a first step towards measuring modern equipment.
 
I've been scanning my library for the last week or so (takes time!)
Note that 90% of it is classical, so not very susceptible to IS-overs, and the truth is I only found a very small percentage of it to have any.

For the remaining 10% though (mainly old rock, progressive, experimental, etc.) the percentage was not low, and the affected releases were not particularly old either.
The worst I saw was around +2 to +3dB.
Thankfully, in my case I have an ESS controller for the KTB that probably controls the input, because otherwise it is bad (like, really bad) at IS-overs, as I measured with MTA's presets, and the result was a horror show. Lowering the level by 3dB or so fixes it.
For the rest of my DACs I'm afraid to measure... :oops:
 
Maybe some measures are already implemented? In actual fact we don't know, and this is the main problem we could solve first.
The RME ADI-2 DAC and Pro as well as the ADI-2/4 have that compensation with their auto ref-level feature, besides that their volume control is in-front of the DAC chip.
If you adjust absolute output level to just above a given reference level, it has all the headroom up to next reference level for any IS-overs, so about 6..10dB of room.
 
I've been scanning my library for the last week or so (takes time!)
Note that 90% of it is classical, so not very susceptible to IS-overs, and the truth is I only found a very small percentage of it to have any.

For the remaining 10% though (mainly old rock, progressive, experimental, etc.) the percentage was not low, and the affected releases were not particularly old either.
The worst I saw was around +2 to +3dB.
Thankfully, in my case I have an ESS controller for the KTB that probably controls the input, because otherwise it is bad (like, really bad) at IS-overs, as I measured with MTA's presets, and the result was a horror show. Lowering the level by 3dB or so fixes it.
For the rest of my DACs I'm afraid to measure... :oops:
Interesting and thank you for sharing. Could you hear the IS overs before scanning?
 
Could you hear the IS overs before scanning?
Admittedly no, but my experience probably doesn't count, as I listen to this 10% very little and I always use -3dB on this DAC, even from before I measured it.
But after scanning I did listen with all input at 0dB to hear what it sounds like. I think I heard what most complain about with digital in general: smeared, congested, etc. I call it hard soup.
They should probably learn about IS-overs and clipping in general, because otherwise I find nothing wrong with digital.

Would I have known before scanning? No, but I would still complain about the recording in general (which is the truth after all; such a recording will always be susceptible, and if the people who made it didn't take care of this, who knows what else they got wrong).

One example came from my test folder which was probably about this: David Aude's cover of Coldplay's "Charlie Brown".
I believe it was about +2dB. If you find it, have a listen.
 
Admittedly no, but my experience probably doesn't count, as I listen to this 10% very little and I always use -3dB on this DAC, even from before I measured it.
But after scanning I did listen with all input at 0dB to hear what it sounds like. I think I heard what most complain about with digital in general: smeared, congested, etc. I call it hard soup.
They should probably learn about IS-overs and clipping in general, because otherwise I find nothing wrong with digital.

Would I have known before scanning? No, but I would still complain about the recording in general (which is the truth after all; such a recording will always be susceptible, and if the people who made it didn't take care of this, who knows what else they got wrong).

One example came from my test folder which was probably about this: David Aude's cover of Coldplay's "Charlie Brown".
I believe it was about +2dB. If you find it, have a listen.
Thanks! Great description. It will help me to know what to listen for. I’ll look for Aude’s version.
 
Could you hear the IS overs before scanning?

People in this thread have got good results in ABX testing, however I don’t think it ultimately matters, because I don’t think anyone in this forum would find a master with +0.1dBFS peaks (the non-inter-sample kind) acceptable, even though they are likely inaudible. In this thread we’re talking about +3dBFS or more of clipping and modulation and other potential garbage, so arguably much worse problems.

Why the double standard?

Would I have known before scanning? No, but I would still complain about the recording in general (which is the truth after all; such a recording will always be susceptible, and if the people who made it didn't take care of this, who knows what else they got wrong).

As @splitting_ears pointed out in the first page of the thread, producers and mastering engineers use tools such as true peak limiters that don’t always work the same way. It is possible for a limiter plugin to report safe dBTP levels and for a DAC to still introduce clipping in the finished master.

One example I can think of (and I encourage more educated people to correct me if I’m wrong) is a limiter using 4x oversampling and a DAC using 8x oversampling. In this case, the samples examined by the limiter might not exceed the 0dBFS threshold, but since the DAC is using double the sampling points it might catch the inter-sample peaks and still introduce clipping.
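To make the oversampling-ratio mismatch concrete, here is a toy numpy sketch (not any real limiter's algorithm; the 0.24·fs tone and block length are arbitrary choices made so the effect shows up): the same signal is scanned at 4x and 8x using ideal FFT interpolation, and only the 8x grid lands on the true peak.

```python
import numpy as np

def oversample(x, L):
    """Ideal band-limited L-times upsampling via FFT zero-padding."""
    N = len(x)
    X = np.fft.rfft(x)
    Xp = np.zeros(L * N // 2 + 1, dtype=complex)
    Xp[:len(X)] = X
    return np.fft.irfft(Xp, n=L * N) * L

# A 0.24*fs tone (12 cycles in 50 samples), scaled so the highest
# *sample* sits exactly at 0 dBFS.
n = np.arange(50)
x = np.sin(2 * np.pi * 0.24 * n)
x /= np.max(np.abs(x))

peak4 = np.max(np.abs(oversample(x, 4)))  # what a 4x true-peak scan sees
peak8 = np.max(np.abs(oversample(x, 8)))  # what an 8x stage sees

# For this contrived tone the 4x grid never lands closer to the crest
# than the original samples do, so the 4x scan reports 0.0 dBTP, while
# the 8x grid hits the crest exactly and reveals ~ +0.02 dB over.
print(20 * np.log10(peak4), 20 * np.log10(peak8))
```

The gap here is tiny because it's a single steady tone; the point is only the principle that a coarser scanning grid can under-read the peak that a finer reconstruction stage will actually produce.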

This is to say, the people at the production stage are probably doing their job correctly, but since they don’t all use the same tools, the results may vary. We could discuss how modern music only ever utilizes the top 5dB of the 144dB of theoretical dynamic range offered by 24 bits… but that’s another topic.
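For reference, the ~144dB figure follows directly from the 20·log10(2^N) rule of thumb (about 6.02dB per bit); a one-line check:

```python
import math

bits = 24
dr_db = 20 * math.log10(2 ** bits)  # ideal dynamic range of a 24-bit quantizer
print(round(dr_db, 1))              # 144.5
```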

EDIT: reposting the source: https://www.saintpid.se/isp-true-peak-limiters-test/
 
Guys, Audioholics are doing a livestream on inter-sample overs on Wednesday 6th November evening (USA time):

Here is another issue that we have not discussed on this thread:

One effect of overloading the interpolation filter is that it destroys the stopband attenuation of the brick-wall filter.

Start at 45 min into the video to get some context, and then take a look at the FFT shown at about 48 min into the video. The overload produces very high level ultrasonic output caused by the images of the baseband signal. The low-pass filter is non-functional when it overloads. The stopband attenuation goes to almost 0 dB.
 
Thanks for all the efforts to document the issue.

I only recently discovered it and started testing several old CD players. And thanks to @John_Siau and other members here who made me want to investigate more.

So let me share the below. It's an inter-sample over test at 5512.50Hz, which will generate a reconstructed signal of up to +0.69dB (only). In the view below I overlaid the Onkyo C-733 (blue trace) and the Teac VRDS-25X (red trace):

[FFT overlay: Onkyo C-733 (blue) vs Teac VRDS-25X (red)]


We can suspect that saturation in the SRC of the Teac generates all the additional distortion and noise we see here. The Onkyo handles this test well. We also see that the Teac suffers from additional, unwanted in-band distortion (below 20kHz).

The calculated ENOB of the Onkyo remains 16 bits with this test, while it's only 5.2 bits with the Teac. Yes, it's as bad as it looks.

The Teac has a volume control, but that does not change anything. I have not checked the service manual yet, but the control probably intervenes after the SRC that generates the issue.

For additional information, with a single tone @0dBFS (with a few samples “clipped”, as seen in Audacity in the test file used here), the Teac does not show any issue:

[FFT: Teac VRDS-25X with a single 0dBFS tone, no issue visible]
 
I too appreciate all this information and the actual test results being posted. I think it would be great to see IS overs become part of DAC testing - if @amirm feels it's feasible to do so on a wide enough range of equipment and models to make it fair and meaningful.

For myself, the simplest solution is to use active speakers with digital inputs whose volume control is in the proper place in the internal signal chain. I use the digital input on my Genelecs and their volume control comes early enough in the circuit that it prevents IS overs. At around -25dB the volume control results in output that's as loud as I can stand with most material.
 
That depends a lot on the material played, on how exactly the DAC reacts to overs (ASRCs may spit out a slew of nasty anharmonics), and on what the playback chain looks like.

When using RG with 4x oversampling using the SoX resampler DSP for peak scanning, the gnarliest loudness war era CDs in my collection (recorded prior to the use of oversampling brickwall limiters and true peak monitoring, generally in the late 2000s) are clustering around 1.35 - 1.38 - 1.39, there's even one a tad over 1.40. So in theory, about 3 dB of digital attenuation would entirely take care of this issue, and RG wants to attenuate those CDs by around 10 dB anyway. These days I'm rarely seeing more than about 1.15 on new releases. MP3s may peak higher, I've seen up to 1.59 in extreme cases.
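For context, the ReplayGain-style peak values quoted above are linear ratios relative to full scale; converting them to dBTP with 20·log10 shows why roughly 3dB of attenuation covers even the worst cases mentioned:

```python
import math

# Linear true-peak ratios (relative to full scale) -> dBTP.
# 1.40 -> ~2.92 dBTP, 1.59 -> ~4.03 dBTP.
for peak in (1.15, 1.35, 1.40, 1.59):
    print(peak, "->", round(20 * math.log10(peak), 2), "dBTP")
```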

It's those folks running CD players directly into a DAC who are the worst off, as they basically are at the mercy of DAC performance.

If the DAC has a volume control that acts before conversion, it can attenuate the CD player's unattenuated signal before conversion, reducing the likelihood of inter-sample overs.
 
Guys, Audioholics are doing a livestream on inter-sample overs on Wednesday 6th November evening (USA time):
Thanks a lot for the suggestion. As profound as one would expect from Benchmark and John Siau.

I think the case has been made. The evidence is here. It’s a real issue.
Well, it is certainly as real as the consequences of raised jitter levels nominally are, but the equally real question is to what extent it alters the perceived audio beyond test signals. Playing devil's advocate here; this issue is definitely interesting and should be kept an eye on.

If anyone wants to argue otherwise, then please provide evidence to the contrary. After all, that is what science is all about.
As mentioned in a post before, one systematically cannot prove that something has no effect. In terms of measurements, I have no doubt that there is evidence for a decrease in reproduction quality when intersample clipping occurs, as shown by Benchmark.

When it comes to audibility, the usual blind tests would have to show that differences can be recognized with statistical significance. The burden is always on those claiming differences to show them; the "deniers" can righteously lay back and relax until proven otherwise.

I am not claiming that there are no differences, so again: devil's advocate for the scientific correctness.

My humble take is that we should measure for inter-sample over clipping and let customers know about the performance of their devices.
Definitely. Audible in daily life or not, from an academic point of view: if the last fractions of a dB in SINAD performance are tested for in this forum, then intersample peak behavior should be tested all the more, because even slight distortion from it would have at least as much theoretical relevance.


It would also be great to measure whether lowering a particular DAC’s digital volume can actually prevent clipping, since AFAIU that is not always the case.
I would be interested in that as well. My guess would be that many modern DACs with digital gain control prevent it, as long as they don't use oversampling.


I respectfully disagree with @amirm ‘s take that measuring clipping would incentivize manufacturers to lower the dynamic range of their products.
Even if that premise is true, the question arises: so what? No matter how big the issue of intersample peaks really is, it is surely bigger than the loss of a few dB of SNR that no one is going to use, let alone be able to perceive, anyway.

At this point, if one wants to criticize Amir's rankings, it's maybe for the lack of audibility markings. Although the borderline is individual, of course, there is a broad consensus/median about what can be heard and what not. So for instance, any DAC with a SINAD of 90dB, or maybe even lower, could probably already be marked green in terms of "it's audibly not going to get better for you", etc. Then one could still get excited about some 0.23dB improvement, but would see the whole range, with a lot of worse DACs altogether in green land, grounding and reassuring oneself that everything is alright, even if GoldenSound, GrowingGrass and VoodooPope (the latter two yet to be invented) on Youtube claim otherwise.

If the argument is that ASR has an influence on product development, then I believe this issue can be solved by presenting the data in a balanced manner.
I hope that Amir's tests have an influence and history has shown that they already did.

The lack of proper de-emphasis support is one thing for tests to steer ignorant developers in the right direction, and the intersample issue is another. Maybe a decent equalizer, like the one RME has, would be yet another neglected topic.

[...]therefore it preserves dynamic range[...]
Which is lost by definition anyway as soon as one no longer utilizes the maximum possible SPL, which is virtually always the case.

“this product has oversampling headroom, which means dynamic range is sacrificed in order to prevent most digital clipping due to inter-sample overs”.
I would add and stress "sacrificed on paper" in that kind of explanation, as it doesn't really matter at all. The whole reason why DAC chip developers don't already implement that kind of headroom right away is the very numbers game; in that way, we are all numberphiles, falling for the marketing nonsense.


Ideally, it would be great if the customer could *choose* between the increased dynamic range and the clipping protection.
Except for causing uncertainty for some, I don't see any disadvantage in providing choices. But again, the increased dynamic range is purely hypothetical and never to be experienced anyway, as the noise floors are below the hearing threshold.

About the audibility issue. It probably is the case that inter-sample over clipping won’t be detected by the majority of people out there (I’m pretty sure I wouldn’t be able to tell). However, the same could be said about digital clipping at the mastering stage. Why is it that if I produce a song with transient peaks at +0.1dBFS, all of a sudden my master is “illegal”, while if my DAC is producing such clipping it’s a non-issue? To me, this argument falls apart in a spectacular manner.

If digital clipping is to be avoided, then that’s that. End of story.
In the linked discussion, John Siau also says that non-clipped tracks containing only intersample peaks are legitimate data and per se correct masters, and thus should be reconstructed properly and not be clipped. So yes, signals higher than the digit-based 0dBFS should be reconstructed correctly, but it is also a problem provoked without any need, which is insane and frustrating at the same time.

That is one point I personally missed in the discussion with Benchmark: criticism of today's production and mastering habits, which on average are lousier than ever while we have better equipment than ever. A staggering irony.

While I certainly agree from a technical point of view, taking his favorite example of Steely Dan's "Two Against Nature", I would add that, despite being a per se well-produced recording, the tracks of this album in the CD version, whether they contain real clipping or not, are already an example of a "too loud" mastering, with the pathological temptation of always wanting to fully utilize the maximum levels.
If the whole thing peaked at -6dBFS or even -10dBFS, with, say, one or two samples for the whole album at 0dBFS or even -3dBFS (god forbid any track doesn't reach 0dBFS at least once: "oh no, but the SNR! Someone think about the SNR!"), it wouldn't sound any worse noise- and dynamics-wise, for sure.

Quite the contrary: the DVD-Audio counterpart of that very album, recorded in multichannel, is more dynamic in terms of crest factor (the DR meter confirms it), with a lower average level when properly downmixed to stereo. This shows that while the CD version is far from bad (compared to most of today's releases, anyway), it could have been mastered even better, and is also too "loud" for no good reason, other than maybe to act as good demonstration material for Benchmark in terms of intersample peaks.

The same goes for all that mixing and mastering: if we didn't have the idiocy of everyone nowadays thinking they have to maximize every channel or transmission path by pushing everything to the max, intersample peaks would hardly ever occur (they should still be handled correctly, for the sake of science). The absurdity is that in order to preserve theoretical SNR values, with noise floors no one has gotten to hear for a long time, we constantly risk running into clipping. And in a world where today's equipment uses 24-bit or even 32-bit floating point for recording and processing, maximizing levels is probably the least important thing to do. Even if I were dealing with 8-bit audio (which, properly dithered, doesn't sound worse, rather better, than a good compact cassette recording, still widespread in the early 90s), I would always prefer a higher noise floor over clipping.

A lot would be done right if everything were just turned down, and the volume/gain at the end-user stage simply turned up. Despite all fears, the volume control (be it virtual or physical) usually doesn't bite, but for many, that seems to go against nature, just like Steely Dan.
 
A lot would be done right if everything were just turned down, and the volume/gain at the end-user stage simply turned up. Despite all fears, the volume control (be it virtual or physical) usually doesn't bite, but for many, that seems to go against nature, just like Steely Dan.
Amen brother.

I've been running lower and lower digital levels over the years.
 
Amen brother.

I've been running lower and lower digital levels over the years.

The gift of digital recording gave us ~100dB to play with, and yet commercial interests think it's clever to jam it all into the last few dB before overload.
 
The signal that is usually shown to illustrate intersample-overs is a "double unicorn". To show a sine wave that has no sample >1 and yet peaks at +3 dBFS, you have to satisfy two conditions exactly:
  1. Signal frequency is exactly fs/4 (i.e. 1/4 of the sampling frequency)
  2. Timings of when the samples are taken at phase angles: π/4, 3π/4, 5π/4, 7π/4, ...
[Plot: fs/4 sine sampled at phases π/4, 3π/4, 5π/4, ...]


That means the probability of this signal happening is similar to winning two lotteries in a row. If the signal is just a teeny tiny bit off, you'll find yourself with clipped samples, and that means the signal is irreversibly corrupted and cannot be reconstructed to resemble its original analog form.
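The two conditions are easy to reproduce numerically. Here is a small numpy sketch (ideal FFT interpolation standing in for the DAC's reconstruction filter): every sample of the fs/4, π/4-phase sine sits at exactly 0dBFS, yet the reconstructed waveform peaks at +3.01dBTP.

```python
import numpy as np

def oversample(x, L):
    """Ideal band-limited L-times upsampling via FFT zero-padding."""
    N = len(x)
    X = np.fft.rfft(x)
    Xp = np.zeros(L * N // 2 + 1, dtype=complex)
    Xp[:len(X)] = X
    return np.fft.irfft(Xp, n=L * N) * L

# fs/4 sine sampled at phases pi/4, 3pi/4, 5pi/4, ...: every sample has
# the same magnitude, |sin(pi/4)| of the true amplitude.
n = np.arange(64)
x = np.sin(2 * np.pi * n / 4 + np.pi / 4)
x /= np.max(np.abs(x))            # largest sample = 0 dBFS exactly

peak = np.max(np.abs(oversample(x, 8)))
print(20 * np.log10(peak))        # ~ +3.01 dBTP (sqrt(2) over full scale)
```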

Below is a sine wave at 0.24 fs with peaks 1 dB over full scale. The animation shows what happens when the timing of the sampling changes. The plot shows a little less than 4 full periods of the sine signal (each horizontal division is 2 sampling periods = 1 period at the Nyquist frequency). The blue curve is the original signal, and the dashed red curve is the mathematically correct reconstructed signal. With less than 4 wave periods, in no case do we have all samples unclipped. Therefore the original and reconstructed signals cannot be the same.

[Animation: 0.24 fs sine at +1dB true peak, original (blue) vs reconstructed (dashed red) as the sampling phase shifts]


So, if you have more than a few intersample-over instances, it is almost guaranteed that you'll have clipped samples. While it is nice to be able to reconstruct the clipped signals in the mathematically correct way, they are not going to resemble the original. The information needed for accurate reconstruction is forever lost to clipping.
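The irreversibility can be sketched with numpy (ideal FFT interpolation as a stand-in for reconstruction; the 0.24·fs tone and +1dB level follow the animation above): once samples are hard-clipped, the band-limited reconstruction of the clipped data no longer matches the original.

```python
import numpy as np

def oversample(x, L):
    """Ideal band-limited L-times upsampling via FFT zero-padding."""
    N = len(x)
    X = np.fft.rfft(x)
    Xp = np.zeros(L * N // 2 + 1, dtype=complex)
    Xp[:len(X)] = X
    return np.fft.irfft(Xp, n=L * N) * L

# 0.24*fs sine with a true peak 1 dB over full scale.
n = np.arange(50)
x = 10 ** (1 / 20) * np.sin(2 * np.pi * 0.24 * n)
clipped = np.clip(x, -1.0, 1.0)   # hard clip at 0 dBFS

n_clipped = int(np.sum(np.abs(x) > 1.0))
err = np.max(np.abs(oversample(x, 8) - oversample(clipped, 8)))

print(n_clipped)  # 16 of the 50 samples clip
print(err)        # worst-case reconstruction error, > 0.1 of full scale
```

Reconstructing the clipped data "the mathematically correct way" still yields a different waveform: the information above 0dBFS was destroyed at the clipping stage, which is exactly the point being made.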

The real problem is that we have signal peaks over full scale in the source (hello, loudness war). Having headroom to "reproduce intersample-overs" is just putting lipstick on a pig.
 
The signal that is usually shown to illustrate intersample-overs is a "double unicorn". To show a sine wave that has no sample >1 and yet peaks at +3 dBFS, you have to satisfy two conditions exactly:
  1. Signal frequency is exactly fs/4 (i.e. 1/4 of the sampling frequency)
  2. Timings of when the samples are taken at phase angles: π/4, 3π/4, 5π/4, 7π/4, ...

And at 44.1kHz sampling, with the 11.025kHz (fs/4) tone, even if it clips, the harmonics produced are outside the audible bandwidth in any case.
 