
Let's develop an ASR inter-sample test procedure for DACs!

The authors of the limiters table are discussing TP here: https://www.facebook.com/groups/1458538374409630/posts/3684754468454665/

and this is an interesting fact:

[attached image from that Facebook discussion]


some TPs are truer than others, OR none of them are "absolutely true" anyway? lol
 
the reason the -1dBFS ceiling exists is file compression algorithms
Sorry, but no: dBFS and dBTP are different, because PCM sample values can't capture the peaks of the true, underlying waveform they encode. However, it's true that lossy compression can create even more overshoots.

they won't care if they master for CD
In my work as a mastering engineer, I take great care to deliver CD masters with sufficient headroom. Sometimes I also take the time to listen to how it behaves with lossy encoders and adjust the output ceiling accordingly.

some TPs are truer than others, OR none of them are "absolutely true" anyway? lol
Please take a look at the second table I quoted in post #19. You'll see that the specification used for true peak allows for errors of up to +/-0.55dB, depending on the oversampling factor implemented in the limiter/meter. This specification is only an imperfect attempt to estimate the true underlying waveform from the PCM values, which can only be known after a DAC has interpolated it. Hence the need to see how DACs behave, because we can only guess at what happens then.
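To make that concrete, here's a minimal sketch of the textbook worst case (Python with NumPy/SciPy assumed; the signal is a standard illustration, not anything measured in this thread): a sine at fs/4 with a 45-degree phase offset, whose samples all sit exactly at 0dBFS while the reconstructed waveform peaks about 3dB higher. Running the same oversampling-based estimate at several factors shows why two meters can legitimately disagree.

Code:
import numpy as np
from scipy.signal import resample_poly

fs = 44100
n = np.arange(4096)
# Sine at exactly fs/4 with a 45-degree phase offset: every sample lands on
# +/- sin(45 deg) = 0.7071 of the underlying amplitude.
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)
x /= np.max(np.abs(x))   # normalize the *samples* to exactly 0dBFS

for factor in (2, 4, 8, 16):
    up = resample_poly(x, factor, 1)              # crude oversampling "TP meter"
    tp_db = 20 * np.log10(np.max(np.abs(up)))
    print(f"{factor}x oversampling: estimated true peak = {tp_db:+.2f} dBTP")

# A sample-peak meter reads 0.00 dBFS, yet the oversampled estimates head
# towards the real +3.01 dBTP; lower factors can read noticeably lower, which
# is one reason the specification has to allow a tolerance.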
 
What do you think?

Intersample overs are an entirely manufactured error by people pushing levels too far. Even then, they are marginal. Honestly, I don't care one bit. If the so-called "recording engineers" are so inept that they cannot keep their levels below 0dBFS, they have no business being in the business.

Take some of the very best early digital (Pure DDD) recordings where the peak level is some 10dB below 0dBFS. They knew what they were doing...

Don't pass the buck to the D/A for poorly recorded content. 16 bits give everyone 96dB to play with - why squash it all into the last 6dB?
 
Don't pass the buck to the D/A for poorly recorded content. 16 bits give everyone 96dB to play with - why squash it all into the last 6dB?
Because, in no particular order: creativity, lack of standards, imperfect DSP/DAWs, lossy distribution, artistic decisions, loudness wars, competition, ...
Please read the first post again to see why it's not just a matter of headroom and recording/processing techniques.

Engineers have a role to play (more and more of them are aware of the problem and are trying to keep true-peak levels in check), but they are certainly not the only ones responsible for this situation. Even an engineer trying to do his/her job properly can only rely on imperfect true-peak meters that also overshoot and undershoot themselves.

More seriously, are you really comfortable with the fact that commercially released music from almost three decades (!) can sound even more distorted on certain devices, regardless of price?

Wouldn't it be great if each and every DAC out there could handle worst-case signals too? If not, why do we measure DACs at all?
And if ISPs are not audible, why do we care about jitter below 175ns? Linearity below -70dB? THD below 1%?
 
Because, in no particular order: creativity, lack of standards, imperfect DSP/DAWs, lossy distribution, artistic decisions, loudness wars, competition, ...
Plus, I somehow doubt that proper volume normalization is consistently used when picking a mastering level, and we all know how easy it is to fall prey to the "louder = better" effect. Listeners generally do not think twice about using their volume control either... not necessarily the case in a studio.
 
Because, in no particular order: creativity, lack of standards, imperfect DSP/DAWs, lossy distribution, artistic decisions, loudness wars, competition, ...
Please read the first post again to see why it's not just a matter of headroom and recording/processing techniques.

Engineers have a role to play (more and more of them are aware of the problem and are trying to keep true-peak levels in check), but they are certainly not the only ones responsible for this situation. Even an engineer trying to do his/her job properly can only rely on imperfect true-peak meters that also overshoot and undershoot themselves.

More seriously, are you really comfortable with the fact that commercially released music from almost three decades (!) can sound even more distorted on certain devices, regardless of price?

Wouldn't it be great if each and every DAC out there could handle worst-case signals too? If not, why do we measure DACs at all?
And if ISPs are not audible, why do we care about jitter below 175ns? Linearity below -70dB? THD below 1%?

Just another whole lot of hand-waving justifying poor recording practices.

I don't care one iota for poor recordings. If they are crap, I don't buy them or if I do, I return them for a full refund.

Intersample overs are a complete non-issue. Companies like Benchmark love creating a problem that doesn't really exist and then miraculously solve it for everyone's benefit. Do you think for one second anyone gave a rat's ass when someone pushed some peaks in the analogue days a bit too hard? Nope.

And in digital, everyone knew it sounded like chit when you ran out of bits. Big hint: turn down the level.

Intersample overs are nothing in the scheme of things. If you are right on 0dBFS, you are way too high. End of story. Or just go FP.
 
You have to live with the context: no artist, no label, no producer would ever accept masters peaking at 10dB below 0, and they don't give a rat's ass about your opinion of their music or of how loud it should be.

The notion of headroom has been 'forgotten' in the design of digital audio. And that had the (humanly logical) consequence of creating a race for maximum bit utilization. We can't really blame anyone for that. While this is not technically wrong, it does force DAC manufacturers to pay more attention to their designs, especially with oversampling.

Digital signals peaking at 0 are technically valid. A properly designed DAC shall reproduce digital signals peaking at 0.
 
I don't care one iota for poor recordings. If they are crap, I don't buy them or if I do, I return them for a full refund.
[...]
Intersample overs are nothing in the scheme of things. If you are right on 0dBFS, you are too high. End of story.
I'm happy you got that problem all worked out. But do you really think that's a reasonable answer for everyone out there?

If I had to throw away every album of my music collection with ISPs... Wow, I don't think much would survive such a test. Does that mean only good music will remain?
Is it really so far-fetched to expect a premium DAC to be able to handle difficult signals without adding more distortion by itself?

My RME ADI-2 Pro FS is designed well enough to handle unstable S/PDIF signals from an entry-level preamplifier I have without any glitches or clicks. This was not the case with my Audient iD22. Of course, in a perfect world I would only use premium equipment and RME's FS technology would be useless. But I guess you see where I'm going with this example? :)
 
I'm happy you got that problem all worked out. But do you really think that's a reasonable answer for everyone out there?

If I had to throw away every album of my music collection with ISPs... Wow, I don't think much would survive such a test. Does that mean only good music will remain?
Is it really so far-fetched to expect a premium DAC to be able to handle difficult signals without adding more distortion by itself?

It's yet another manufactured problem with a "solution" sold to you, and you bought it, hook, line and sinker.
 
I'm happy you got that problem all worked out. But do you really think that's a reasonable answer for everyone out there?

If I had to throw away every album of my music collection with ISPs... Wow, I don't think much would survive such a test. Does that mean only good music will remain?
Is it really so far-fetched to expect a premium DAC to be able to handle difficult signals without adding more distortion by itself?

My RME ADI-2 Pro FS is designed well enough to handle unstable S/PDIF signals from an entry-level preamplifier I have without any glitches or clicks. This was not the case with my Audient iD22. Of course, in a perfect world I would only use premium equipment and RME's FS technology would be useless. But I guess you see where I'm going with this example? :)

You're arguing for DACs to fix a problem that is caused by mastering. Why not argue that mastering should be done in a way that doesn't cause these problems downstream? The test for ISOs is not hard, and can be done by any mastering engineer. All you need to do is oversample the signal relative to the desired target rate and check for samples exceeding 0dBFS.

A sample over 0dBFS is not valid according to the PCM format. Therefore, it should not be allowed in the recording, and it's the mastering engineer's job to ensure that this is the case, not the DAC designer's. Certainly there are devices, such as the ADI-2 Pros of the world, that handle ISOs gracefully. But let's fix the problem at the source and not try to patch it up after it's already baked into a recording.
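For anyone who wants to try it, here's a hedged sketch of that check (Python assumed, with NumPy/SciPy and the soundfile package; the 8x factor and the full-scale threshold come from the description above, not from any standard):

Code:
import numpy as np
import soundfile as sf                       # assumed dependency
from scipy.signal import resample_poly

def report_intersample_overs(path, factor=8):
    """Oversample a file 'factor' times and report anything above full scale."""
    data, rate = sf.read(path, always_2d=True)       # float64, nominally within +/-1.0
    print(f"{path} ({rate} Hz)")
    for ch in range(data.shape[1]):
        up = resample_poly(data[:, ch], factor, 1)   # approximate reconstruction
        peak = np.max(np.abs(up))
        overs = np.count_nonzero(np.abs(up) > 1.0)
        print(f"  channel {ch}: ~{20 * np.log10(peak):+.2f} dBTP, "
              f"{overs} oversampled values above 0dBFS")

# Hypothetical usage:
# report_intersample_overs("some_master_44k1.wav")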
 
It's yet another manufactured problem with a "solution" sold to you, and you bought it, hook, line and sinker.

Thanks for your concern, but it's a real and tested solution... even on ASR:
After testing with the RME only, I wrote "Those devices are adding or subtracting nothing to the signal they receive".

After a further look and a check with the more jitter-sensitive (and way cheaper) Topping E30 II Lite, this is not absolutely true.
There are measurable performance degradations.

You can look here for the measurements: https://www.audiosciencereview.com/...pdif-converter-review-and-measurements.47732/
 
A sample over 0dBFS is not valid according to the PCM format. Therefore, it should not be allowed in the recording, and it's the mastering engineer's job to ensure that this is the case, not the DAC designer's. Certainly there are devices, such as the ADI-2 Pros of the world, that handle ISOs gracefully. But let's fix the problem at the source and not try to patch it up after it's already baked into a recording.
But that's precisely the point I'm trying to make! The problem with ISPs is that they occur even at 'on spec' PCM levels below 0dBFS.

Even a mastering engineer who makes things very, very loud for whatever reason will deliver files below 0dBFS. However, the real, underlying value of the waveform will overshoot much higher when decoded. For instance, these files all have peak values below 0dBFS, yet they overshoot like crazy.

EDIT: Just to be extra clear, as a mastering engineer I do my best to deliver files that are free of problems. More and more engineers are becoming aware of the issue too. This means that sometimes we try to educate artists, labels and whoever we work with that there's no need to blindly push things for no good reason.
But the solution to this ISP problem is *not* going to come from mastering engineers alone. And what about the 'problematic' music that has already been released? Should we just throw it away and never enjoy it again?
 
@restorer-john: it is correct to say that with enough headroom, the issue is mostly gone. But the thing is that the headroom is not there, neither from music makers nor from manufacturers, so there is an actual issue; you cannot ignore it.
I get your position, but unfortunately it is not reasonable in a context where basically nobody was sufficiently educated, be it music makers or digital equipment manufacturers.

DAC manufacturers need to account for what are valid digital signals, but which may not be reproduced distortion-free in their system.
 
But that's precisely the point I'm trying to make! The problem with ISPs is that they occur even at 'on spec' PCM levels below 0dBFS.

Even a mastering engineer who makes things very, very loud for whatever reason will deliver files below 0dBFS. However, the real, underlying value of the waveform will overshoot much higher when decoded. For instance, these files all have peak values below 0dBFS, yet they overshoot like crazy.

I don’t think you understood what I said. Testing for 0dBFS at 44.1k isn’t going to reveal ISOs. Oversample this to 8x the rate and then check for 0dBFS or overs and scale (or compress) appropriately. This will more than likely eliminate all ISOs at 44.1k.

A 44.1k PCM signal that has samples below 0dBFS isn’t a guarantee of no ISOs, because PCM is a sampled format and therefore doesn't encode every possible value of the waveform. The actual signal encoded by PCM can exceed 0dBFS, therefore it's up to the mastering engineer to ensure that this does not occur. And, as I said, this is not hard.
 
I don’t think you understood what I said. Testing for 0dBFS at 44.1k isn’t going to reveal ISOs. Oversample this to 8x the rate and then check for 0dBFS or overs and scale (or compress) appropriately. This will more than likely eliminate all ISOs at 44.1k.

A 44.1k PCM signal that has samples below 0dBFS isn’t a guarantee of no ISOs, because PCM is a sampled format and therefore doesn't encode every possible value of the waveform. The actual signal encoded by PCM can exceed 0dBFS, therefore it's up to the mastering engineer to ensure that this does not occur. And, as I said, this is not hard.
I think I understood your argument, but please tell me if I didn't. Since you used dBFS (which represents sample values), I answered with sample values in mind. I'm sorry if that's not what you meant. By the way, I edited my post too because I thought I might have been unclear. I'll try to avoid doing this in the future :)

I'm well aware that oversampling can help identify potential ISPs. In fact, it's one of the things recommended by the ITU BS.1770-4 specification. Many engineers now use dBTP instead of dBFS to see how an artificial interpolation might affect the resulting output. The problem is that the specification is not very precise. In other words, TP meters vary from one another, and this is a potential source of confusion.

Taking care of ISPs at the production stage is fine to me personally, and a good idea. Again, I do my best to deliver files with sufficient headroom and 'true-peak compliance', if I may say. I'm not at all against that.
The problem is: how do we deal with already-released music that's not going to be remastered any time soon?

That's why I started this thread. By establishing an accurate test for this, we could:

1) educate listeners and enthusiasts about digital audio principles
2) raise awareness of ISPs and how to avoid them at any stage (be it production or listening)
3) encourage manufacturers to play their part in helping listeners enjoy less-than-ideal audio in the best possible way

Is it really so unreasonable? :(
 
The problem is: how do we deal with already-released music that's not going to be remastered any time soon?
Those are already irreversibly messed up. So garbage in, what do you expect to get out?

If you want consistent reconstruction of this music across all the different DAC implementations, just digitally attenuate it, say by 3 or 6dB; then the intersample-over reconstruction problem is eliminated. It is not hard.
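For illustration, a minimal sketch of that attenuation in floating point (Python/NumPy assumed; the 3dB default is just the figure mentioned above, not a recommendation):

Code:
import numpy as np

def attenuate(samples, db=3.0):
    """Scale floating-point samples down by 'db' decibels before they reach the DAC."""
    gain = 10.0 ** (-db / 20.0)          # 3dB -> ~0.708, 6dB -> ~0.501
    return np.asarray(samples, dtype=np.float64) * gain

Applied before the data reaches the DAC's interpolation filter, this is essentially what turning a player's digital volume control down a few dB already does.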
 
Good recordings, which are nowhere near 0dBFS, don't need any "help" from the DAC, and those that do need help are nowhere near worth listening to.
There...
 
@pkane

Any valid PCM data should be decoded by a decoder. Simple as that.

The PCM format does not account for further oversampling issues: it's up to whoever oversamples (for example, manufacturers of oversampling DACs) to account for any issues; for that matter, any DSP implementer should account for issues they may introduce; this is just basic DSP practice, isn't it?
Also, mastering engineers are facing a gigantic variety of DACs that listeners will play music on, some of them non-oversampling; are you saying that MEs are supposed to know each DAC implementation, know exactly what oversampling ratio and, most importantly, what oversampling filter it uses -- all of which give different peak readings? That doesn't sound reasonable or rational. Plus, there's what was just said: what about the gazillion pieces of music already out there?

All in all, manufacturers of oversampling DACs need to account for the issues introduced by oversampling a perfectly valid PCM signal. But since inter-sample values can theoretically be 'infinite', oversampling DACs cannot provide 'infinite' headroom. So there should be a practical headroom value that accounts for most material. Benchmark chose -3dB. Others advocate for -6dB. The Rolls-Royce of digital consoles, the Oxford R3, had 14dB of headroom! Because the designer was well-educated enough, and didn't put the blame on music makers and mastering engineers.
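To put those headroom figures in perspective, a quick back-of-the-envelope conversion (Python; interpreting 'headroom' as the clip-free margin above 0dBFS in the oversampled domain, which is my reading of the designs mentioned above):

Code:
# Convert a headroom figure in dB into the largest inter-sample peak, relative
# to full scale, that the oversampling stage could pass without clipping.
for headroom_db in (3.0, 6.0, 14.0):
    ratio = 10 ** (headroom_db / 20)
    print(f"{headroom_db:4.1f} dB of headroom -> inter-sample peaks up to "
          f"{ratio:.2f}x full scale pass unclipped")

# 3dB covers the classic fs/4 case (1.41x); larger figures cover more pathological material.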
 
I think I understood your argument, but please tell me if I didn't. Since you used dBFS (which represents sample values), I answered with sample values in mind. I'm sorry if that's not what you meant. By the way, I edited my post too because I thought I might have been unclear. I'll try to avoid doing this in the future :)

I'm well aware that oversampling can help identify potential ISPs. In fact, it's one of the things recommended by the ITU BS.1770-4 specification. Many engineers now use dBTP instead of dBFS to see how an artificial interpolation might affect the resulting output. The problem is that the specification is not very precise. In other words, TP meters vary from one another, and this is a potential source of confusion.

Taking care of ISPs at the production stage is fine to me personally, and a good idea. Again, I do my best to deliver files with sufficient headroom and 'true-peak compliance', if I may say. I'm not at all against that.
The problem is: how do we deal with already-released music that's not going to be remastered any time soon?

That's why I started this thread. By establishing an accurate test for this, we could:

1) educate listeners and enthusiasts about digital audio principles
2) raise awareness of ISPs and how to avoid them at any stage (be it production or listening)
3) encourage manufacturers to play their part in helping listeners enjoy less-than-ideal audio in the best possible way

Is it really so unreasonable? :(

I think that "deliver files with sufficient headroom" isn't the answer. The answer is to specifically check for ISOs in production and scale/compress appropriately to ensure the waveform (not the samples at 44.1k) doesn't exceed 0dBFS. There's no guessing needed for the necessary headroom in this case.

So what's "an accurate test"? Again, just oversample the file, keep the samples floating point, and check for any exceeding +/-1.0 level. Easily done, and I believe already built-in to some software, such as SoX for example. If there's no plugins that do this already, this is easily fixed (if I thought it was important, I could probably write on in a day or two :) )
 