
Let's develop an ASR inter-sample test procedure for DACs!

What position do the chip makers, Texas Instruments (I know that a DRC is sometimes available), AKM, and ESS, take to manage this problem? It would be interesting to know, if they read us.
 
And yet no one fires the mastering engineers or producers who slam everything to 0 dBFS, even though they're the ones who created these overs in the first place.

This is not true and we went over this already.

A summary of the counter-arguments:

No one "created" ISPs. Virtually every reconstructed wave goes above the sample points. It's just a consequence of the sampling theorem. When the samples happen to be closer to 0dBFS, the probability of the reconstructed wave going past 0dBFS increases, but this doesn't mean that you can completely get rid of ISPs by keeping your levels down.

ISPs were not a problem (meaning, they didn't result in digital clipping) before oversampling DACs hit the market.

True Peak metering is not a standard and it gives different results based on the implementation.

Oversampling filters in DACs are themselves not standardized and can produce different results based on the implementation.

Even if we take all mastering engineers and fire them and have no new music, this will still not solve the problem:

There are millions of already existing songs that have been recorded without taking oversampling into account that cannot be fixed without remastering, to say nothing of non-musical audio content like videos, movies, podcasts etc.

Millions of digital/sampler instruments have DACs. Effect pedals have DACs. Audio interfaces have DACs. Speakers have DACs. The list goes on...

This is an issue related to audio reproduction.

Blaming mastering engineers for a problem introduced by oversampling DACs is ridiculous and achieves nothing.

Going forward, it would be ideal for everyone to adhere to standards that can allow us to have digital-clipping-free audio, just like recording engineers (more or less) adhere to the SOL standard. But right now there is no such standard, and solving the problem at the DAC level is by far the best and easiest path.
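The point that reconstruction routinely exceeds the sample values is easy to demonstrate. A minimal sketch (my own illustration, not from the thread) using the classic fs/4 tone with a 45-degree phase offset: every sample sits exactly at 0 dBFS, while the reconstructed wave peaks about 3 dB higher.

```python
import numpy as np

fs = 44100
n = np.arange(256)
# fs/4 sine with a 45-degree phase offset: every sample lands at +/-0.7071
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)
x /= np.max(np.abs(x))  # normalize so the *samples* peak at exactly 0 dBFS

# Whittaker-Shannon (sinc) interpolation at 8x approximates the analog wave
t = np.arange(0, len(n), 1 / 8)
recon = np.array([np.dot(x, np.sinc(ti - n)) for ti in t])

print(f"sample peak: {20 * np.log10(np.max(np.abs(x))):+.2f} dBFS")     # +0.00
print(f"true peak:   {20 * np.log10(np.max(np.abs(recon))):+.2f} dBFS") # about +3.0
```

No sample ever exceeds full scale, yet the reconstructed waveform does; no amount of "correct" mastering of the sample values alone can prevent this.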
 
This is true, unless there is a need for SRC, which is often done before digital volume control. In that case you have to take care to prevent clipping during SRC (especially when upsampling).

Very good point! I do wonder, though: does this kind of resampling/upsampling-induced digital clipping carry the same risk of audible clipping as the analogue clipping produced by inter-sample overs from a brick-walled signal? My impression, from manually doing non-integer resampling of audio files in an audio editor, is that digital resampling of a "hot" signal can produce a waveform that doesn't look very nice, but when you zoom in it mostly consists of individual clipped samples, and I don't know that we can hear those unless there are quite a few of them in a row during an extended peak.

But I’m a rank amateur in this regard and would be interested in your and others’ thoughts on digital vs analogue clipping.
 
True Peak metering is not a standard and it gives different results based on the implementation.

There is an implementation described in Recommendation ITU-R BS.1770. As the problem has been recognized for decades, one would think that the mastering engineers might want to ensure that the produced waveform does not exceed 0dBFS anywhere, and not just at the sampled points. For example, EBU R 128 recommended -1dBTP as the desired mastering level for many years:


(attachment: excerpt from EBU R 128 showing the maximum permitted true peak level of -1 dBTP)
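The BS.1770 true-peak measurement boils down to "oversample by at least 4x, then take the peak of the absolute value". A rough sketch of such a meter; note that the generic windowed-sinc interpolation filter here is a stand-in for the specific FIR coefficients given in BS.1770 Annex 2, so readings will differ slightly from a compliant meter.

```python
import numpy as np

def true_peak_dbtp(x, oversample=4, half_taps=48):
    """Estimate true peak in dBTP: zero-stuff, interpolate with a
    windowed-sinc lowpass, take the peak. Approximation only -- the
    exact FIR is specified in ITU-R BS.1770 Annex 2."""
    up = np.zeros(len(x) * oversample)
    up[::oversample] = x
    m = np.arange(-half_taps, half_taps + 1)
    h = np.sinc(m / oversample) * np.hanning(len(m))  # cutoff at the original Nyquist
    y = np.convolve(up, h, mode="same")
    return 20 * np.log10(np.max(np.abs(y)))

# fs/4 tone with a 45-degree phase: samples peak at 0 dBFS, true peak near +3 dBTP
n = np.arange(1024)
x = np.sin(np.pi * n / 2 + np.pi / 4)
x /= np.max(np.abs(x))
print(f"{true_peak_dbtp(x):+.2f} dBTP")
```

A plain sample-peak meter would report 0.00 dBFS for this signal and miss the over entirely.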
 
Going forward, it would be ideal for everyone to adhere to standards that can allow us to have digital-clipping-free audio, just like recording engineers (more or less) adhere to the SOL standard. But right now there is no such standard, and solving the problem at the DAC level is by far the best and easiest path.
I cannot agree with you. The true peaks should be recognised by the mastering engineers. Why wouldn't they be?
 
I recently switched to a MoOde player using CamillaDSP. CamillaDSP performs digital room correction using convolution files. To reduce the number of filter files needed for different sample rates, I upsample the source material to 176.4/192 kHz and then feed the upsampled signal into the DSP. When I listened to some modern tracks upsampled from 44.1 kHz, I began to hear frequent clicks, similar to the quiet clicks of vinyl. A look at CamillaDSP's pipeline editor revealed numerous clipping events in the input and output signals. Experimentally, I found that setting the gain to -4.5 dB in the mixer eliminates clipping in the output signal, while the input signal continues to clip happily. The clipped-sample counter stays at zero across more than ten tracks. I can't claim that this attenuation is guaranteed to be sufficient for every track ever released, but the clicks did stop. So this "harmless" clipping during upsampling and DSP processing is anything but harmless. Incidentally, it's precisely this method, 4x oversampling, that ITU-R recommends for calculating true peaks.
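For anyone wanting to try the same fix, the attenuation can be applied in a CamillaDSP mixer. A sketch for a stereo pipeline, roughly along these lines (the mixer name "headroom" is my own choice, and the exact syntax can vary between CamillaDSP versions, so check the documentation for yours):

```yaml
mixers:
  headroom:
    channels:
      in: 2
      out: 2
    mapping:
      - dest: 0
        sources:
          - channel: 0
            gain: -4.5   # pre-DSP headroom so upsampling/convolution cannot clip
            inverted: false
      - dest: 1
        sources:
          - channel: 1
            gain: -4.5
            inverted: false
```

The mixer then goes into the pipeline before the convolution step, so the headroom is in place before any processing that could push peaks past full scale.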
 
Ha, just as I was writing this, a masterpiece was discovered that needs more than 4.5 dB of attenuation :). Meet Infected Mushroom's "Wanted To", which produces this at the end of the track:
 

(attachment: tp.png)
I cannot agree with you. The true peaks should be recognised by the mastering engineers. Why wouldn't they be?
ISOs are (sometimes) a playback problem, not a recording problem. Any sample stream must be considered valid data.
IMHO, the reproducing hardware/software should therefore take care of that, by providing some headroom in the digital and analog parts, like 1-2 dB or so, and anything above that should be soft-clipped within another 1-2 dB on top of that.
Of course, the recording industry would help us a lot by avoiding ISOs from the start, in new productions (or remasters).
 
ISPs were not a problem (meaning, they didn't result in digital clipping) before oversampling DACs hit the market.
Wasn't Philips TDA 1540, introduced already in 1982, an oversampling DAC?
 
Wasn't Philips TDA 1540, introduced already in 1982, an oversampling DAC?
Not this DAC chip itself; the 4x oversampling was done in a separate digital interpolation filter chip (the SAA7030).
 
does this kind of resampling/upsampling induced digital clipping present the same risk of potentially audible clipping as the analogue clipping produced by inter sample overs from a brick walled signal?

In the context of oversampling DACs, inter-sample peaks over 0dBFS produce either digital clipping or filter overload. Since samples can never go above 0dBFS, in theory you get a "flattened" wave that introduces distortion, but in practice it can be worse if the filter overloads.

NOS DACs produce analog ISPs (meaning that they cause the voltage to go past 1/-1), which generally generate less clipping and can easily be fixed by just lowering the analog volume post-conversion. (And yes, NOS DACs do have a lot of other different issues)

There are posts about this earlier in the thread.

For example, EBU R 128 recommended -1dBTP as the desired mastering level for many years:

Which is fine, but if the True Peak meter oversamples by 4x and a DAC oversamples by 64x, we have mitigated the issue but we still haven't solved the problem.

I cannot agree with you. The true peaks should be recognised by the mastering engineers. Why wouldn't they be?

Of course they should. But even if all mastering engineers were to use proper 4x True Peak metering, this wouldn't solve anything. Everybody needs to adhere to standards for them to work. And again, it would do nothing for audio material that already exists and yadda yadda yadda...
 
Which is fine, but if the True Peak meter oversamples by 4x and a DAC oversamples by 64x, we have mitigated the issue but we still haven't solved the problem.
Sure. It actually doesn't matter by how much the DAC oversamples -- it is just filling in more points on the same analog waveform. The original waveform either exceeds 0dBFS level in the analog domain, or it doesn't.

ISP peaks can potentially reach higher the higher the frequency of the signal, but most music doesn't contain that much energy at higher frequencies. In fact, here are the possible "under-shoots" for various oversampling rates. Note that a 4x oversampling meter can under-read an ISP by at most 0.688 dB, for a signal at 1/2 the original sampling rate (the maximum valid frequency).

So, a -1dBTP is perfectly acceptable as a target:

(attachment: table of worst-case true-peak under-reads at various oversampling ratios)
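The quoted worst case is easy to derive: a full-scale sine at the Nyquist frequency whose true peak falls exactly midway between two of the meter's oversampled points. The nearest point then reads cos(π/(2N)) of the true peak for N× oversampling, so the maximum under-read is -20·log10(cos(π/(2N))) dB. A quick check of the 0.688 dB figure quoted above:

```python
import math

def max_undershoot_db(n_oversample):
    """Worst-case true-peak under-read for an Nx oversampling meter:
    a Nyquist-frequency sine whose peak falls midway between two points,
    so the highest point read is cos(pi / (2 * N)) of the true peak."""
    return -20 * math.log10(math.cos(math.pi / (2 * n_oversample)))

for n in (2, 4, 8, 64):
    print(f"{n:2d}x: {max_undershoot_db(n):.3f} dB")
# 4x gives ~0.688 dB, matching the figure quoted above
```

This is why a 4x true-peak meter plus roughly 1 dB of headroom (the -1 dBTP target) is enough in practice, even though the meter itself does not catch the very last fraction of a dB.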
 
No one "created" ISPs. Virtually every reconstructed wave goes above the sample points. It's just a consequence of the sampling theorem. When the samples happen to be closer to 0dBFS, the probability of the reconstructed wave going past 0dBFS increases, but this doesn't mean that you can completely get rid of ISPs by keeping your levels down.
ISOs above 0dBFS can only occur when the analog signal fed into the ADC is higher than its maximum allowed input voltage (ignoring bad postprocessing). This needs to be avoided. Just because the digital samples don't look clipped doesn't mean the signal is not clipped. To rely on the DAC to be able to reproduce signals at higher output voltages than its own specs allow is wishful thinking. IME ISOs violate the transparency of the AD/DA chain in digital audio.
[..]
There are millions of already existing songs that have been recorded without taking oversampling into account that cannot be fixed without remastering, to say nothing of non-musical audio content like videos, movies, podcasts etc.
.. and all of them are the result of bad sampling and/or processing. We can fix this easily using a digital volume control in front of the DAC chip, like RME's ADI-2 series does. The S/N of modern DACs is so high that nobody can detect a loss of 6 dB, which would be enough to handle almost all ISOs without clipping during upsampling or in the analog stage.
This is an issue related to audio reproduction.

Blaming mastering engineers for a problem introduced by oversampling DACs is ridiculous and achieves nothing.
It's an issue for audio reproduction created by the engineers. Blaming them of course does not fix the issue, but maybe they can learn from past mistakes.

Going forward, it would be ideal for everyone to adhere to standards that can allow us to have digital-clipping-free audio, just like recording engineers (more or less) adhere to the SOL standard. But right now there is no such standard, and solving the problem at the DAC level is by far the best and easiest path.
One way to prevent ISOs is to add an analog peak detector at the input of the ADC, which fires when the signal exceeds what the ADC is specified to accept.
 
ISOs are (sometimes) a playback problem, not a recording problem. Any sample stream must be considered as valid data..
Nope. ISOs are invalid data because the input of the ADC has been overdriven.

Another example is a simple rectangular signal with alternating runs of +/- maximum values. Due to the Gibbs phenomenon, a DAC must create a signal higher than its output is specified for. Likewise, the anti-aliasing filter in an ADC creates those peaks when fed the original analog square signal, and can then overload the ADC.
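The square-wave case is easy to demonstrate: even when every sample sits exactly at full scale (perfectly "legal" data), the bandlimited reconstruction rings past 0 dBFS. A small sketch using plain sinc interpolation as an idealized reconstruction filter:

```python
import numpy as np

# Full-scale square wave: 8 samples high, 8 samples low -- every sample legal
x = np.tile(np.concatenate([np.ones(8), -np.ones(8)]), 16)
n = np.arange(len(x))

# Bandlimited (sinc) reconstruction at 8x, as an idealized DAC filter would do
t = np.arange(0, len(x), 1 / 8)
recon = np.array([np.dot(x, np.sinc(ti - n)) for ti in t])

peak_db = 20 * np.log10(np.max(np.abs(recon)))
print(f"sample peak: +0.00 dBFS, reconstructed peak: {peak_db:+.2f} dBFS")
```

The overshoot appears as ringing around each transition: the unique bandlimited signal passing through those full-scale sample points must swing beyond them.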
IMHO, the reproducing hardware/software should therefore take care of that, by providing some headroom in the digital and analog parts, like 1-2 dB or so, and anything above that should be soft-clipped within another 1-2 dB on top of that.
Agreed, because too many recordings have been clipped. 6 dB can be spared easily as S/N of modern DACs is high enough.
Of course, the recording industry would help us a lot by avoiding ISOs from the start, in new productions (or remasters).
This!
 
Nope. ISOs are invalid data because the input of the ADC has been overdriven.
Are you sure about that? My understanding is that ISOs are a random phenomenon due to the sampling theorem itself, which only becomes a problem when sampling at or near 0 dBFS. Are ADCs even used extensively, besides for vocals, in most modern recordings? Seems like most of the "instruments" are just digital synthesisers? Could be wrong and trying to understand.
 
Are you sure about that? My understanding is that ISOs are a random phenomenon due to the sampling theorem itself, which only becomes a problem when sampling at or near 0 dBFS.
This is correct. Leaving enough headroom is the proper way to prevent them.
Are ADCs even used extensively, besides for vocals, in most modern recordings?
All instruments which are recorded through a microphone or an analog output (guitar pickup, amplifier output) need an ADC.
Seems like most of the "instruments" are just digital synthesisers? Could be wrong and trying to understand.
I wouldn't say most, it depends on the musical style.
 
Nope. ISOs are invalid data because the input of the ADC has been overdriven.
What ADC are you speaking of? There might not even be an ADC involved in the production, for electronic music etc.
 
Seems like most of the "instruments" are just digital synthesisers?
Most "instrument sounds" these days were captured in analog originally but have undergone extensive digital post-processing. Think of any kind of samples being used, for example.
 
A lot rides on the makers of the software the music industry uses. I can only guess that results are varied :)

Some probably indicate or prevent problems with varying success.

Some may not? Never worked with this personally.
 