
Let's develop an ASR inter-sample test procedure for DACs!
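The classic inter-sample-over test signal, a sine at fs/4 with a 45-degree phase offset so that every sample sits 3.01dB below the waveform's crests, can be generated and checked in a few lines. A minimal sketch under my own assumptions (1024-sample block, 8x FFT oversampling as a stand-in for a real reconstruction filter), not a finished test procedure:

```python
import numpy as np

fs = 44100
n = np.arange(1024)
# Sine at fs/4 with a 45-degree phase offset: every sample lands at
# +/-0.7071, but the underlying waveform peaks 3.01 dB higher.
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)

# Normalize the SAMPLES to 0dBFS, as a loudness-maximized master would.
x /= np.max(np.abs(x))

# Estimate the true (inter-sample) peak by 8x FFT oversampling;
# the tone is exactly periodic over the block, so this is near-exact.
X = np.fft.rfft(x)
up = np.zeros(len(x) * 8 // 2 + 1, dtype=complex)
up[: len(X)] = X
x8 = np.fft.irfft(up, n=len(x) * 8) * 8

sample_peak_db = 20 * np.log10(np.max(np.abs(x)))
true_peak_db = 20 * np.log10(np.max(np.abs(x8)))
print(f"sample peak {sample_peak_db:.2f} dBFS, true peak {true_peak_db:+.2f} dBFS")
# -> sample peak 0.00 dBFS, true peak +3.01 dBFS
```

Feeding this file to a DAC and looking for distortion products is the essence of the test: a DAC with reconstruction headroom renders it cleanly, one without it clips.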

@pkane That’s another story and not what I’m trying to address with NTK. I’m not addressing the legality/validity of samples sitting at 0dBFS.
But sure, as far as the general topic goes, it’s all about headroom.
 
That’s another story and not what I’m trying to address here. I’m not addressing the legality/validity of samples sitting at 0dBFS.
But sure: it’s all about headroom.

Normalizing sampled values to 0dBFS in mastering is an incorrect way to normalize a sampled waveform: simple to do, but not valid in all cases, because the reconstructed waveform can exceed the sample peaks. It’s easy enough to fix with true-peak (TP) and similar meters/plugins, but that isn’t always done.
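A true-peak-aware normalization, scaling to the oversampled peak rather than the sample peak, could be sketched like this. The FFT-based oversampling and the -1dBTP ceiling are my assumptions, not any mastering standard:

```python
import numpy as np

def true_peak(x, ratio=8):
    """Estimate the inter-sample peak by FFT oversampling (assumes a
    band-limited signal, roughly periodic over the block)."""
    X = np.fft.rfft(x)
    up = np.zeros(len(x) * ratio // 2 + 1, dtype=complex)
    up[: len(X)] = X
    return np.max(np.abs(np.fft.irfft(up, n=len(x) * ratio))) * ratio

def normalize_true_peak(x, ceiling_db=-1.0):
    """Scale so the TRUE peak, not the sample peak, hits the ceiling."""
    return x * 10 ** (ceiling_db / 20) / true_peak(x)

# The fs/4 "0dBFS+" tone: sample peak 1.0, true peak ~1.414
n = np.arange(1024)
tone = np.sqrt(2) * np.sin(np.pi * n / 2 + np.pi / 4)
safe = normalize_true_peak(tone)   # true peak now ~0.891, i.e. -1 dBTP
```

Dedicated true-peak meters (as specified in ITU-R BS.1770) do essentially this with a polyphase interpolation filter instead of a block FFT.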
 
Again, the very hard fact is that there is no norm, no standard, no nothing regarding headroom.
I won't repeat John's posts, but I agree with him when he says that full-scale samples are technically correct with respect to the sampling theorem. To this day, nothing has forbidden them.
Headroom has unfortunately been a forgotten/overlooked part of the digital realm. That’s just how it is today.

Amir, John, those guys have the ability to influence the industry and change things. On the mastering side, guys like Ian Shepherd also do a lot. But mere discussions on forums don’t.

Edit: As a side note on mastering: even if mastering is done at a maximum of -1 or -2dB True Peak, there’s the issue of lossy encoding/decoding, which will inevitably produce overshoots above 0, either at sample positions or between them.
So on the mastering and audio distribution side, things are more complex and far, far from being addressed (and maybe won’t ever be?).
 
Again, the very hard fact is that there is no norm, no standard, no nothing regarding headroom.
I won't repeat John's posts, but I agree with him when he says that full-scale samples are technically correct with respect to the sampling theorem. To this day, nothing has forbidden them.
Headroom has unfortunately been a forgotten/overlooked part of the digital realm. That’s just how it is today.

Amir, John, those guys have the ability to influence the industry and change things. Mere discussions on forums don’t.
With hardware that can routinely deliver dynamic range of >120 dB, we need more headroom? Have you gotten it backwards?
 
With hardware that can routinely deliver dynamic range of >120 dB, we need more headroom? Have you gotten it backwards?

It seems we have trouble understanding each other. @melowman is not arguing that we need more bits of digital headroom, he’s saying that the mastering engineers have stopped leaving headroom between the peaks of their recordings and the limit of full scale.
As you put it, they’re being stupid with their signal.

Masters nowadays are regularly peaking at 0dBFS or at -0.1dBFS thanks to the use (or abuse) of limiters.

Labels are cramming their whole mixes into the last 5dB (or less, sigh) of dynamic range to stay competitive with other labels because they think that if their song is quieter they will lose the loudness war and therefore lose money. It’s sad, but that’s the way things are right now. We could go on a tangent about how capitalism, competition and greed are ruining music, but I think that’s off topic.

These are the signals we’re dealing with, and so we have to make peace with ISOs going over full scale, at least for existing material.

 
If everybody used digital attenuation this whole debate would become academic. But no, people are hell-bent on feeding DACs right up to 0dBFS, out of an irrational fear of digital noise, squashed dynamic range or loss of "bit-perfect" playback.

I use DSP (Dirac) in my playback system, which attenuates by 10dB before applying its corrections (in my case anyway; I think the Dirac default is -12.5dB). On top of that, my typical volume adjustment range is -3 to -9dB. My DACs will never see samples higher than -10dBFS.

No other piece of machinery or technology would routinely be driven at the absolute maximum of the allowable range. Why DACs?
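The arithmetic behind this is simple: a dB figure maps to a linear gain of 10^(dB/20), so even the worst-case +3.01dB inter-sample over of the fs/4 test case lands well below full scale after -10dB of digital attenuation. A sketch, reusing the numbers from the post above:

```python
import numpy as np

def gain_db(x, db):
    """Apply a digital gain of `db` decibels (negative = attenuation)."""
    return x * 10 ** (db / 20)

# Worst-case inter-sample over of the fs/4 test tone: +3.01 dBFS (linear ~1.414)
iso_peak = np.sqrt(2.0)

after_dirac = gain_db(iso_peak, -10.0)    # Dirac-style -10 dB of headroom
peak_db = 20 * np.log10(after_dirac)
print(f"reconstructed peak after -10dB: {peak_db:.2f} dBFS")
# -> reconstructed peak after -10dB: -6.99 dBFS
```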
 
If everybody used digital attenuation this whole debate would become academic. But no, people are hell-bent on feeding DACs right up to 0dBFS, out of an irrational fear of digital noise, squashed dynamic range or loss of "bit-perfect" playback.

I use DSP (Dirac) in my playback system, which attenuates by 10dB before applying its corrections (in my case anyway; I think the Dirac default is -12.5dB). On top of that, my typical volume adjustment range is -3 to -9dB. My DACs will never see samples higher than -10dBFS.

No other piece of machinery or technology would routinely be driven at the absolute maximum of the allowable range. Why DACs?

...because not every digital source offers the option of digital attenuation before the DAC. CD players, for example.

All of the things we've been talking about in the last few pages have already been addressed in the thread. We're kind of going in circles right now...
 
If everybody used digital attenuation this whole debate would become academic. But no, people are hell-bent on feeding DACs right up to 0dBFS, out of an irrational fear of digital noise, squashed dynamic range or loss of "bit-perfect" playback.

I use DSP (Dirac) in my playback system, which attenuates by 10dB before applying its corrections (in my case anyway; I think the Dirac default is -12.5dB). On top of that, my typical volume adjustment range is -3 to -9dB. My DACs will never see samples higher than -10dBFS.

No other piece of machinery or technology would routinely be driven at the absolute maximum of the allowable range. Why DACs?

Because the best system noise and performance is obtained with the highest source signal and the lowest gain in subsequent amplification.

People can do what they like- you like playing with the source level- that's fine, for you.

I like level-matching all my sources where they have variable outputs, keeping them as close to full output as possible, and doing my global level control in a preamplifier. I have no reason to change, as it makes sense when you have more than one source.
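For perspective, the cost of digital attenuation can be put in numbers. Assuming a DAC with 120dB of dynamic range (the figure quoted earlier in the thread), 10dB of digital attenuation shrinks the effective SNR by exactly that amount, still leaving it far below audibility in any real room:

```python
# Back-of-the-envelope gain staging, assuming a 120 dB dynamic-range DAC
# (a figure quoted earlier in the thread) and purely digital attenuation.
dac_dynamic_range_db = 120.0
digital_attenuation_db = 10.0

# Attenuating digitally lowers the signal while the DAC's analog noise
# floor stays put, so the effective SNR shrinks by the attenuation amount.
effective_snr_db = dac_dynamic_range_db - digital_attenuation_db
print(f"effective SNR: {effective_snr_db:.0f} dB")
# -> effective SNR: 110 dB
```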
 
...because not every digital source offers the option of digital attenuation before the DAC. CD players, for example.

You'd be surprised how many CD players in the past used digital level control prior to the OS filters and D/As. Yamaha, Pioneer etc. Many of course used post D/A level controls in the form of remote motorized volume controls on the variable outputs.
 
Because the best system noise and performance is obtained with the highest source signal and the lowest gain in subsequent amplification.

In theory, yes. But what is there to gain if the noise is already inaudible (ear to speaker)? Especially when the trade-off is an increased risk of things like inter-sample overs?

...because not every digital source offers the option of digital attenuation before the DAC. CD players, for example.

Ah yes, I think I remember those... ;)
 
Many of course used post D/A level controls in the form of remote motorized volume controls on the variable outputs.

Yes, like every Sony CD player I ever owned.

In my mind, CD players were digital audio 1.0. These days, CDs should be seen as a medium for distributing digital content, not for playback. Rip them, then play the files. Read (and search!) the digital booklet instead of straining your eyes on the 12cm paper insert. Don't be bothered by the noise the player mechanism makes, which is far louder than any noise produced by 10-20dB of digital attenuation.
 
Yes, like every Sony CD player I ever owned.

In my mind, CD players were digital audio 1.0. These days, CDs should be seen as a medium for distributing digital content, not for playback. Rip them, then play the files. Read (and search!) the digital booklet instead of straining your eyes on the 12cm paper insert. Don't be bothered by the noise the player mechanism makes, which is far louder than any noise produced by 10-20dB of digital attenuation.

You seem to think your way of listening to music is the way everyone else should listen to music. How presumptuous is that? Your justifications for doing what you do are just that: yours. Not mine.

I have zero interest in reading 'digital booklets' on a stupid screen for CDs I already own. If I play the disc and for some reason want to read the booklet, I do. Not rocket science, last time I checked.

I have zero interest in ripping the 7-8 thousand CDs I already own to yet another device so I can listen to exactly the same content. Life is way too short to undertake such a pointless exercise, especially when I have many excellent, fully functioning CD players I absolutely love using. I have enough players to outlast me and all my family for many, many decades. Long after your SSD/HDD/music subscription has failed/lapsed and your entire collection is gone or corrupted. I don't have to backup anything, create duplicates or generally waste time on 'curating' my digital music collection.

I am 100% happy with Compact Disc. I have been since the format debuted and continue to be happy with it 42 years later.
 
Normalizing sampled values to 0dBFS in mastering is an incorrect way to normalize a sampled waveform.
Maybe you can elaborate on why you come to this conclusion, assuming you intend to "protect" the DAC from having to deal with intersample peaks > 0dBFS.

Setting aside the (depending on the scenario) questionable process of normalizing in general (no gain SNR-wise, introduction of further artefacts due to rounding errors, etc.): as has been pointed out here already, audio material containing sample points at 0dBFS that calls for a reconstructed waveform exceeding 0dBFS is, per se, entirely legitimate. It is not the digital data's task to remain at lower levels just to fit some higher signal range which might not even exist in the realm of numbers anymore (e.g. when using a simple DAC without oversampling).

One could even argue that there isn't such a thing as intersample "overs", as those 0dBFS+ equivalents either occur in the analog chain during low-pass filtering, or still in the domain of numbers, where they have to be accounted for mathematically so the values still fit the defined range. Again, none of that is something the original data has to anticipate.

So while I think that intersample peaks would hardly be an issue anyway if today's masters weren't so stupidly high in level, I also believe, in tune with Benchmark, that a decent DAC should make sure that any intersample peaks, occurring for whatever reason, are rendered correctly and not clipped. Dealing with raised levels is a fundamental part of the reconstruction process, and any lack of the required headroom is an engineering oversight.

Hence, in the ideal case, a track normalized to 0dBFS should lead to exactly the same result in terms of distortion and SNR as the below-0dBFS one.
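Benchmark's approach, as I understand it (treat this sketch and the 3.5dB figure as my assumptions, not their implementation), is to attenuate before the interpolation filter so that reconstructed peaks fit the numeric range, recovering the gain afterwards in the analog domain:

```python
import numpy as np

HEADROOM_DB = 3.5                        # assumed pre-filter attenuation
g = 10 ** (-HEADROOM_DB / 20)

def oversample_with_headroom(x, ratio=8):
    """Attenuate, then band-limited-interpolate: peaks that would have
    exceeded 1.0 now fit the converter's numeric range."""
    X = np.fft.rfft(x * g)
    up = np.zeros(len(x) * ratio // 2 + 1, dtype=complex)
    up[: len(X)] = X
    return np.fft.irfft(up, n=len(x) * ratio) * ratio

# The fs/4 tone with a +3.01 dBFS true peak survives intact
n = np.arange(1024)
tone = np.sqrt(2) * np.sin(np.pi * n / 2 + np.pi / 4)
y = oversample_with_headroom(tone)
print(np.max(np.abs(y)) <= 1.0)          # -> True: nothing left to clip
```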
 
I believe the culprit in most modern music isn’t normalization but hard limiting. And in a way, that’s even worse: instead of just a few rogue peaks here and there, you end up with a constant stream of samples sitting right below 0dBFS.

One thing worth mentioning that I forgot to add before is that hard limiting, or even clipping (a popular alternative to limiting) done at the mixing and mastering stages, can be very analog-like and sound very different from flat-line digital clipping (the kind popular on TikTok). In fact, you could get a similar result (though not as “flat”) by running a mix through analog gear and then converting it back to digital.

I really hope that no mastering engineer is actually digitally clipping every transient only to then normalize everything to -0.1dBFS, because that would be just awful.

Whatever the case might be, though, you’d want your DAC to accurately reproduce both hard-limited and normalized signals. Right…?
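That "constant stream of samples sitting right below 0dBFS" is easy to demonstrate with pure digital hard clipping; the 100Hz tone and the 1.5x drive level here are arbitrary choices of mine:

```python
import numpy as np

fs = 44100
t = np.arange(fs // 10) / fs              # 100 ms of audio
x = 1.5 * np.sin(2 * np.pi * 100 * t)     # a mix driven ~3.5 dB past full scale

clipped = np.clip(x, -1.0, 1.0)           # flat-topped digital hard clipping

# Fraction of samples pinned at the ceiling: the "constant stream"
pinned = np.mean(np.abs(clipped) >= 1.0)
print(f"{pinned:.0%} of samples sit at full scale")
```

With this drive level, over half the samples end up pinned at full scale, which is exactly the waveform shape a limited-to-death master presents to the DAC.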
 
For the first digital recorders we had in the studio (Otari, Tascam, Sony), 0dB analog input level sat somewhere between -18 and -12dBFS (digital).

-18, -12, and full scale...

[attached image: 1731498071711.png]


An old Carla Bley tune on early CD, surely AAD...

[attached image: 1731498315208.png]
 
:facepalm: Maybe we should rename the thread to “Loudness War = BAD”. What do you guys think?
 
:facepalm: Maybe we should rename the thread to “Loudness War = BAD”. What do you guys think?
I find it useful to learn both ends.
The loudness war is one end, and DACs able or unable to deal with IS-overs are the other.

Wouldn't be bad to cover both ends.
I have a very small horse in the race in terms of the former; classical is mostly decent.
Covering the other end too can be good for peace of mind.
 
I find it useful to learn both ends.
The loudness war is one end, and DACs able or unable to deal with IS-overs are the other.

Wouldn't be bad to cover both ends.
I have a very small horse in the race in terms of the former; classical is mostly decent.
Covering the other end too can be good for peace of mind.

Look, I agree that they are both fascinating topics worth discussing. But that isn't the purpose of this thread.

ISOs are only an issue on oversampling DACs. As @little-endian pointed out, if you buy a NOS DAC, you won't get any clipping regardless of the content played!

This is an issue SPECIFIC to oversampling DACs. It has absolutely nothing to do with the Loudness War.

Arguing that the mastering engineers should lower their volume because the DAC I bought can't reproduce their song accurately is absolutely ridiculous. It's the same as saying that they should produce quieter masters because my cr@ppy speakers are distorting. Or that we should go back to mono masters because mono Bluetooth speakers are incredibly popular.

That is not to say that any argument against the Loudness War is invalid and that we shouldn't raise awareness of the issue. I myself want to have dynamics back in my music. It's just that it's completely irrelevant to this particular conversation.
 
Look, I agree that they are both fascinating topics worth discussing. But that isn't the purpose of this thread.

ISOs are only an issue on oversampling DACs. As @little-endian pointed out, if you buy a NOS DAC, you won't get any clipping regardless of the content played!

This is an issue SPECIFIC to oversampling DACs. It has absolutely nothing to do with the Loudness War.

Arguing that the mastering engineers should lower their volume because the DAC I bought can't reproduce their song accurately is absolutely ridiculous. It's the same as saying that they should produce quieter masters because my cr@ppy speakers are distorting. Or that we should go back to mono masters because mono Bluetooth speakers are incredibly popular.

That is not to say that any argument against the Loudness War is invalid and that we shouldn't raise awareness of the issue. I myself want to have dynamics back in my music. It's just that it's completely irrelevant to this particular conversation.
Ok, if we want to take it to the edge, why stop at DACs and speakers?
Let's go straight to our ears and our brain.

Is it technically correct to have clipped samples (any clipping)?
Of course not.
The thing now is that this stuff sounds exactly like that: clipped to hell.
So it is relevant.
 