
Let's develop an ASR inter-sample test procedure for DACs!

splitting_ears
Hi there,

I've been wondering about this for a while: why aren't DACs tested for their ability to handle inter-sample overs (also called ISPs, inter-sample peaks) in ASR reviews?

Here are some established facts:
  1. ISPs are very real, and almost any material released after the '90s can have a significant number of them 'baked in', regardless of engineering, artist calibre and technical excellence. Most of the time they stay below +3dBFS, but it's possible to find commercial releases that almost reach +6dBFS. Please also note that there is no mathematical maximum to ISPs, as demonstrated here.
  2. ISPs are a necessary by-product of PCM encoding, which means that the 'true peak' values will always be greater than or equal to the sample values. This is due to the nature of encoding itself: samples are not the signal but an intermediate representation of it. It becomes a "real" signal only after decoding, i.e. reconstruction/interpolation (the short sketch after this list makes this concrete).
  3. DACs handle these overshoots differently: some of them distort, and some of them implement an internal safety margin, effectively reducing SNR. For instance, Benchmark and RME do this with great merit. In most cases, this margin adds between 2 and 3dB of tolerance for overshoots.
  4. Distortion from ISPs only occurs when the DAC is 'pushed' to its maximum output volume. If the unit has a digital volume control, turning it down will usually solve the issue. However, some designs can't do this (fixed output, analogue volume pots, ...).
  5. Sample rate conversion can further increase the reconstructed peak levels due to the Gibbs phenomenon. Lossy encoding also creates many overshoots that are even harder to predict.
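To make point 2 concrete, here is a minimal sketch (assuming numpy and scipy; the 8x polyphase upsampler is only a stand-in for ideal reconstruction). It builds the classic fs/4 tone with a 45° phase offset, whose samples peak at exactly 0dBFS while the reconstructed waveform peaks about 3dB higher:

```python
import numpy as np
from scipy.signal import resample_poly

# Sine at exactly fs/4 with a 45-degree phase offset: every sample lands
# at +/- sqrt(2)/2 of the continuous peak.
n = np.arange(4096)
x = np.sin(2 * np.pi * n / 4 + np.pi / 4)
x /= np.max(np.abs(x))  # normalise so the samples peak at 0 dBFS

# Approximate the reconstructed (analogue) waveform by 8x upsampling.
y = resample_poly(x, 8, 1)

print("sample peak: %+.2f dBFS" % (20 * np.log10(np.max(np.abs(x)))))
print("true peak:   %+.2f dBTP" % (20 * np.log10(np.max(np.abs(y)))))
# -> roughly +0.00 dBFS vs +3.01 dBTP
```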
Ok, so how could we solve this problem to get better fidelity, even with difficult material?

Firstly, by creating a standard test (a test tone plus a documented procedure), very similar to the J-test for jitter. Currently, there are many inter-sample test files floating around, but none of them is really established, and their true peak levels are all over the place. The test procedure itself is also unclear to many people. I'm sure the whole community here could come up with a very robust yet simple test.

Once the test looks robust enough, adding it to the main reviews would not only add an interesting angle not covered elsewhere, but would also push manufacturers to trade some SNR for higher safety margins to avoid potential playback distortion. In other words, put more emphasis on a "clean" most significant bit (MSB) than on the least significant bit (LSB). This would be a significant quality improvement, far bigger than a 2dB improvement in noise floor.

Today, most properly designed DACs perform well beyond the limits of human hearing, but some still fail to provide adequate protection against a necessary by-product of PCM encoding. Only by publishing objective data and reviews can we encourage more manufacturers to solve this problem. ASR has changed many things for the better in the hi-fi/pro audio world, so I'm sure this would be a great next challenge to tackle.

What do you think?
 
There's already a test like this (three actually) in the presets of Multitone Analyzer.


[Attachment: Chart 44.1kHz, 64k fft, In L Out L+R.jpg]
[Attachment: ISO.PNG]
 
Inter-sample is a niche test for a niche situation.

At worst, your track may contain like three or four inter-sample overs.
Do you think that how gracefully a DAC handles a few problematic samples, among the ~10 million samples that make up a typical song, can meaningfully affect the user experience? I doubt it.
 
There's already a test like this (three actually) in the presets of Multitone Analyzer.
Yes, I'm aware of it! But thanks for the reminder. Personally, I find +3dBFS enough to test ISP protection, but it would be interesting to discuss whether going further would be useful from a testing perspective, i.e. checking that a 3dB margin is correctly implemented and then seeing how it distorts.
Is there any way someone (you know who I'm thinking of) could use that test signal directly with an Audio Precision analyser?

Inter-sample is a niche test for a niche situation.

At worst, your track may contain like three or four inter-sample overs.
Do you think that how gracefully a DAC handles a few problematic samples, among the ~10 million samples that make up a typical song, can meaningfully affect the user experience? I doubt it.
I beg to differ. Below are three albums from my music collection, with one of them selling almost 400,000 copies in a few weeks on a major label. Admittedly, these are rather extreme examples (and I did some searching to find examples of ISPs above +3dBFS), but they really are ubiquitous in modern recordings.
In fact, finding a pop/rock/EDM release from the 2000s with no overs would be an interesting game :D

[Attachment: Current Value - The All Attracting (2021).png]

[Attachment: Tokyo Jihen - Kyoiku (2004).png]

[Attachment: Ryoji Ikeda - dataphonics (2010).png]
 
Please also note that there is no mathematical maximum to ISPs, as demonstrated here.
Sounds unbelievable :)

I think the proof is that, given an arbitrary sequence of sample values, you can get any ISP value you want. The fallacy is that you can't get these values from a proper digitisation of an analogue signal, because an anti-aliasing filter is applied first. Quote: "The pathological waveform we are interested in is a series of N alternations between -1 and 1, followed by silence". Alternating between -1 and 1 is frequency Fs/2, which is not allowed in proper data, right?

Probably the maximum real-world ISP comes from digitising a single impulse, which would be about 4dB.

It could be interesting to test how a DAC behaves with artificial data; this would give some info on how resilient the internal processing is. But a test with a proper real signal would probably be more meaningful.
 
The fallacy is that you can't get these values from a proper digitisation of an analogue signal, because an anti-aliasing filter is applied first.
Well, I don't think it's a fallacy, because the article focuses on signal synthesis and measurement, not digitisation, which is a separate issue. But you're right to point out the difference.

Let's not forget that some very good albums have been made without any microphone, or any analogue transfer at all :)
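For what it's worth, the article's construction is easy to approximate numerically. Here is a quick sketch (brute-force sinc reconstruction with numpy, so only a finite-N approximation of the ideal case):

```python
import numpy as np

def worst_case_peak_db(N, oversample=64):
    # N alternations between +1 and -1, followed by silence.
    x = np.tile([1.0, -1.0], N)
    n = np.arange(2 * N)
    # Ideal (sinc) reconstruction, evaluated on a fine grid around the
    # end of the burst, where the overshoot appears.
    t = np.arange(2 * N - 4, 2 * N + 4, 1.0 / oversample)
    y = [abs(np.sum(x * np.sinc(ti - n))) for ti in t]
    return 20 * np.log10(max(y))

for N in (2, 8, 32, 128, 512):
    print("N = %4d -> %+.2f dBTP" % (N, worst_case_peak_db(N)))
# The reconstructed peak keeps growing (roughly logarithmically) with N.
```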
 
I've been curious about this too... There is no inherent 0dB limit on the analog side, so I've wondered if it's really a problem.

But it doesn't worry me because I've never heard inter-sample clipping myself, nor have I ever heard of it being audible. I'm not afraid of 0dBFS...
 
@splitting_ears Have you seen these threads?


 
Given most DACs now offer digital volume control, isn't this issue easily avoided by running your DAC at -3dB volume or lower?
As I said in the first post, yes, this usually solves the problem. However, many DACs (even excellent and/or competently designed ones) either have no digital volume control or have analogue volume controls that operate at the end of the signal chain. See here for an illustration.

There is also the option of reducing the volume directly in the audio player of your choice. But then you are asking the listener to solve a problem he or she is not responsible for, without knowing what margin is safe. And if your volume control is mapped from 0 to 100, how do you know if 90 is enough?
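To put numbers on that, here is the kind of guesswork the user is left with. This assumes a purely hypothetical player whose 0-100 control is linear in amplitude; real mappings vary and are rarely documented:

```python
import math

def attenuation_db(setting, full_scale=100):
    # Hypothetical volume control that is linear in amplitude.
    return 20 * math.log10(setting / full_scale)

for s in (90, 70, 50):
    print("%d/100 -> %.2f dB" % (s, attenuation_db(s)))
# 90/100 -> -0.92 dB: nowhere near enough for a +3dBTP over.
# 70/100 -> -3.10 dB: just about covers the common case.
# 50/100 -> -6.02 dB: safe even for fairly extreme material.
```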

The purpose of this thread is not to start a debate about whether ISPs are audible or not, or how to solve it at user level. Rather, the idea is to establish a solid test standard and incorporate it into ASR reviews, so that it encourages manufacturers to take ISPs into account from the design stage... even if it means a slightly lower SINAD.

Take jitter, for example. We measure it and we like to see manufacturers take care of it, even though the audibility of jitter is much lower than people usually think. I would like to see the same for ISPs, and be assured that a DAC will not distort no matter how extreme the incoming signal is. This distortion may be short and hard to hear in a very dense recording, but it's real, measurable and variable between devices.
By measuring just three DACs, Archimago found three different "ISP overhead" results:
  • TEAC UD-501 - 0dBFS
  • RME ADI-2 Pro FS - +2.1dBFS
  • Oppo UDP-205 - +3.5dBFS
Aren't you curious to see how other devices behave with this? :)
 
If the reconstruction of the sampled signal ever goes over full scale, i.e. 0 dB (or +/-1), the signal is in all likelihood clipped at the source. As clipping irreversibly damages the signal, how it is "reconstructed" is of little importance. The reconstruction will be distorted and will not be the same as the original.

Usually a plot like the one below is used to illustrate intersample overs. Note that you can get this only if the samples are taken with perfect timing and at the perfect frequency (in this case 1/4 of the sampling frequency). If sampling starts a little earlier or a little later, you will get clipped samples.

[Attachment: intersample_overs_1.png]


The plot below shows samples taken a little earlier than in the previous plot. You can see we've got clipping.

[Attachment: intersample_overs_2.png]


Here is the reconstruction from the clipped samples, which is not the same as the original. Therefore, the correct solution to the problem is not to have intersample overs in the first place, i.e. take care of your signals properly. If you have intersample overs, the source signal is broken, and it is rather pointless to argue over what is the right way to reconstruct it or measure it with artificial signals that we will never encounter in real life.

[Attachment: intersample_overs_3.png]
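For anyone who wants to reproduce this, here is a minimal numpy sketch (the 0.2 rad phase shift is an arbitrary choice):

```python
import numpy as np

def sampled_sine(phase, n_samples=64):
    # A sine at fs/4 whose continuous peak is +3 dBFS (amplitude sqrt(2)).
    n = np.arange(n_samples)
    return np.sqrt(2) * np.sin(2 * np.pi * n / 4 + phase)

lucky = sampled_sine(np.pi / 4)        # samples land exactly on +/-1.0
early = sampled_sine(np.pi / 4 - 0.2)  # sampling starts a little earlier

print(np.max(np.abs(lucky)))  # 1.00  -> fits within full scale
print(np.max(np.abs(early)))  # ~1.18 -> must be clipped to fit
clipped = np.clip(early, -1.0, 1.0)    # irreversibly damaged, as in the plots
```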
 
What do you think?
IS-overs are pretty much irrelevant for all practical purposes, with one or two notable exceptions:
  • The analog circuitry clips so hard, and has such a nasty clipping recovery, that we may approach hearing thresholds (all "normal" IS-over clipping is completely inaudible, not only because of masking; this can easily be tested with emulated IS-over clipping).
  • The IS-overs result not in clipping but in a sample-value wraparound... some '90s CD players behaved like this (in a digital volume control), which of course has a better chance of becoming actually audible.
Therefore, testing for IS-over behavior has some merit. For a complete overview, all available digital filter choices must be tested individually, as the filter type governs everything. "Better" (more brickwall-ish) filters are the worst (they produce the highest IS-overs).

And the obvious method for this would be looking at the final output signal with an oscilloscope and checking for ill effects other than benign soft-clipping (wraparound, oscillation, extended rail-sticking, stuff like that). The test signal could be any of the known sequences producing known worst-case (with ideal reconstruction filter) peaks of, say, +6dB, ramping them up in level.
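One possible realisation of such a ramp, as a sketch (assuming numpy; it uses the fs/4 tone, which tops out at +3.01dBTP, so one of the pathological sequences discussed above would be needed to go higher):

```python
import numpy as np

FS = 44100

def isp_burst(level_dbtp, seconds=0.5):
    # fs/4 tone, 45-degree phase: the true peak sits 3.01 dB above the
    # sample peak, so we can place the true peak wherever we want.
    n = np.arange(int(FS * seconds))
    x = np.sin(2 * np.pi * n / 4 + np.pi / 4)
    x /= np.max(np.abs(x))                       # sample peak -> 0 dBFS
    return x * 10 ** ((level_dbtp - 3.01) / 20)  # scale to the target dBTP

# Ramp the worst-case true peak from -1 dBTP up to +3 dBTP in 0.5 dB steps.
ramp = np.concatenate([isp_burst(db) for db in np.arange(-1.0, 3.01, 0.5)])
```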
 
Therefore, the correct solution to the problem is not to have intersample overs in the first place, i.e. take care of your signals properly. If you have intersample overs, the source signal is broken, and it is rather pointless to argue over what is the right way to reconstruct it or measure it with artificial signals that we will never encounter in real life.
This sounds reasonable, but unfortunately this is not how things work out there.
Yes, in a perfect world, the PCM specification would include some mandatory headroom so that any decoding or post-processing would not result in clipping.

But to describe signals whose decoded peaks are higher than their encoded peaks as 'clipped' is simply wrong. That's how the whole concept of PCM works, and it's perfectly normal. Of course, heavily clipped and/or limited signals will get higher inter-sample values than others, but you can also get reconstruction overs without going astray. Even on Steely Dan recordings, if I may use an overused example. Besides, limiting/clipping is a perfectly valid production/processing technique, even in the digital domain. It's an integral part of the sound of certain genres, and why would anyone want their DACs to add more distortion than the artists intended?

Digital clipping is also not the only factor responsible for ISP. The Gibbs phenomenon plays a role too, as mentioned in my first post or demonstrated in an AES paper here:
2. Gibbs phenomenon. Occurs when limiting the bandwidth of a wide-band signal (or truncating an impulse response). This is particularly important when the signal is clipped in the digital domain, but it applies generally. What happens is that a square wave (or hard-clipped signal) can be viewed as a sum of individual sine waves of frequencies 1, 3, 5, ... times the fundamental frequency. The flat top of the square wave depends on the presence of all harmonics at the right levels and phases. If some of the harmonics are removed by lowpass filtering, the peak value of the signal rises. When converting from digital to analog a low-pass filter is always applied, so the analog level may be higher than expected.
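This effect is easy to reproduce: hard-clip a sine in the digital domain, then band-limit it again. A minimal sketch (assuming numpy/scipy; the upsampler's anti-imaging lowpass stands in for the DAC's reconstruction filter):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 48000
t = np.arange(fs) / fs
x = np.clip(1.5 * np.sin(2 * np.pi * 997 * t), -1.0, 1.0)  # hard-clipped sine

# Band-limiting the harmonic-rich clipped signal makes its flat tops ring.
y = resample_poly(x, 8, 1)

print("clipped peak:      %.3f" % np.max(np.abs(x)))  # exactly 1.000
print("band-limited peak: %.3f" % np.max(np.abs(y)))  # > 1.0: Gibbs overshoot
```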



I'll also quote Christine Tham here:
So, if 0dBFS+ levels are "illegal" then why should we worry about them? If they represent "mastering errors" then recording engineers shouldn't be creating or releasing titles containing 0dBFS+ levels. Why should manufacturers create workarounds for problems caused in the recording studio?
If only everything was so simple!

[...]

Speaking purely as a consumer, the fact of life is, regardless of whose "fault" it is, recordings with 0dBFS+ levels already exist today (or easily created by unsuspecting consumers). I have detected the presence of 0dBFS+ levels in a small but significant portion of my personal CD collection. Sure, I can rant and complain about the incompetence of the recording engineers that created these "faulty" and "illegal" recordings, but that's not going to change the fact that I already own these discs, which I don't intend to sell, and I have no way of forcing the studios to remaster these recordings and re-release them. Well, I could initiate a class action suit, but I suspect that would benefit lawyers more than consumers …

I was surprised to discover in the course of my testing that some manufacturers are aware of the 0dBFS+ issue and have designed players capable of reproducing 0dBFS+ levels without distortion (or at least no more distortion than below-0dBFS levels). The Sony SCD-XA777ES is clearly one such player, and the Panasonic DVD-RP82 has partial support for handling 0dBFS+ levels. Others are either unaware of the issue or are unwilling to make the compromises necessary in order to handle 0dBFS+.

Of course, a player reproducing 0dBFS+ levels can create problems further down the chain, as 0dBFS+ levels can overload the preamp stage or compromise the preamp design (it requires a preamp stage that can handle up to 4Vrms instead of 2V).

I would like to salute those manufacturers who have taken the trouble to provide players that handle 0dBFS+. It gives me, as a consumer, the chance to experience "faulty" recordings without the distortion that I would otherwise be forced to hear, provided I am very careful about the choice of equipment I use to play back these recordings.
 
For a complete overview, all available digital filter choices must be tested individually, as the filter type governs everything. "Better" (more brickwall-ish) filters are the worst (they produce the highest IS-overs).

And the obvious method for this would be looking at the final output signal with an oscilloscope and checking for ill effects other than benign soft-clipping (wraparound, oscillation, extended rail-sticking, stuff like that). The test signal could be any of the known sequences producing known worst-case (with ideal reconstruction filter) peaks of, say, +6dB, ramping them up in level.
That's a great point, thank you very much! Indeed, filters are the key factor and the most unpredictable part of 'true peak' measurement per the ITU BS.1770-4 specification. This is a real problem for mastering and audio engineers, as TP readings can vary widely between different software, depending on the implementation.

And so you think going to +6dBTP would be better than +3dBTP for testing purposes?

I would be very curious to see how something like a Chord DAC (with their trademark ultra-steep reconstruction filters) behaves with a test like this.
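As a quick illustration of why TP meters disagree, here is a toy measurement (assuming numpy/scipy): a band-limited impulse whose crest deliberately falls between the 4x oversampling grid points.

```python
import numpy as np
from scipy.signal import resample_poly

def true_peak_db(x, oversample):
    # BS.1770-style true-peak estimate: upsample, then take the max.
    y = resample_poly(x, oversample, 1)
    return 20 * np.log10(np.max(np.abs(y)))

# Band-limited impulse centred between sample instants: its real crest
# is exactly 1.0 (0.00 dBTP), but it never lands on a 4x grid point.
n = np.arange(2048)
x = np.sinc(n - 1024.13)

for os in (4, 8, 16, 64):
    print("%2dx -> %+.2f dBTP" % (os, true_peak_db(x, os)))
# The 4x minimum required by BS.1770 under-reads by roughly 0.2 dB here;
# add filter-quality differences and readings quickly drift apart.
```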
 
And so you think going to +6dBTP would be better than +3dBTP for testing purposes?
You can always make it lower, so yes. One signal, used at various levels, should be enough.
I would be very curious to see how something like a Chord DAC (with their trademark ultra-steep reconstruction filters) behaves with a test like this.
Beyond some point the increase of peak values is negligible, IME. The typical linear-phase "sharp" filters in DAC chips already come quite close to the "infinite" sinc() reconstruction wrt IS-over peak values.
 
IMV, almost all occurrences of ISPs in recordings are due not to ADC overflow but rather to bad processing of the audio samples (changing amplitude, adding effects, compression, ...) at the mixing and/or mastering stage. I think in those cases the processing has more effect on bad sound than the ISPs do.
 
IMV, almost all occurrences of ISPs in recordings are due not to ADC overflow but rather to bad processing of the audio samples (changing amplitude, adding effects, compression, ...) at the mixing and/or mastering stage. I think in those cases the processing has more effect on bad sound than the ISPs do.
ISPs are an entirely digital phenomenon. As stated by @popej, you can't get ISPs from an ADC stage.

Again, creative processing in mixing/mastering is not the only way to get ISPs. Think Gibbs phenomenon, sample rate conversion, digital equalisation. Depending on how these stages are implemented, they can exacerbate the problem by creating overshoots even when the source signal was below 0dBTP.
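A quick example of the SRC case (assuming numpy/scipy): material whose samples peak at exactly 0dBFS at 44.1kHz grows new overs when resampled to 48kHz, because the new sample instants land on the reconstructed overshoots:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = np.clip(1.4 * np.sin(2 * np.pi * 1001 * t), -1.0, 1.0)  # peaks at 0 dBFS

y = resample_poly(x, 160, 147)  # sample rate conversion to 48 kHz
print("44.1k sample peak: %.3f" % np.max(np.abs(x)))  # 1.000
print("48k sample peak:   %.3f" % np.max(np.abs(y)))  # > 1.0: new overs appear
```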
 
Without even speaking of the Gibbs phenomenon, max inter-sample values are by definition greater than or equal to max sample values. It is simply wrong to incriminate any processing, or any "bad" processing; that would be a pure misunderstanding of what inter-samples are. (I dare call them 'inter-samples'.)
 
Ok, since many people make the argument that "ISP overs are simply a result of bad mixing/mastering", here is a table (source) comparing many different limiters that are typically used as the last step in the mastering process. In some cases, even a -1dBFS ceiling (which is a fairly conservative setting) can result in +1dBTP peaks.

Even when the mastering engineer wants to keep an eye on TP values, it is common to use a true-peak meter conforming to the ITU BS.1770-4 specification. The problem is that the specification tolerates a margin of error of up to +/-0.55dB (table below, source).
| # | Developer | Plugin | True Peak? | Sample Peak (dBFS) | dBTP | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| 2 | Magix | Sequoia sMax11 v13.0 | Yes | -1.16 | -1.11 | Balanced setting. We're guessing Sequoia compensates for TP by lowering the gain post limiting. This is a great way to handle ISP but it will make your master -1dB quieter. |
| 3 | Kuassa | Kratos Maximizer 2 | Yes | | -1.07 | Result with 4x Oversampling – search the list for the result without Oversampling |
| 4 | Izotope | Maximizer v8 | Yes | -1.01 | -1.01 | |
| 5 | Izotope | Maximizer v7.0 | Yes | | -1 | As stated above – This is the result from IzotopeRX – Measurements with other tools varied between -1.0 and -0.8 |
| 6 | Izotope | Vintage Limiter v7.0 | Yes | | -1 | As stated above – This is the result from IzotopeRX – Measurements with other tools varied between -1.0 and -0.8 |
| 7 | Voxengo | Elephant v4.4 | No | | -1 | Oversampling at 8x |
| 8 | Izotope | Vintage Limiter v8.0 | Yes | -0.99 | -0.99 | |
| 9 | FabFilter | Pro-MB | No | | -0.98 | Not a limiter per se but handles TP very well if you use max Lookahead. This result is with Single Band – max ratio – both Att/Rel at 20% |
| 10 | ToneBooster | Barricade v3.1.3 | Yes | | -0.97 | |
| 11 | ToneBooster | Barricade v4 | Yes | | -0.97 | Transparent profile - Limiter only, compressor and pre bypassed |
| 12 | Vladgsound | Limiter No6 OSX | Yes | | -0.97 | Free |
| 13 | Thomas Mundt | LoudMax v1.23 | Yes | -1 | -0.96 | Free - We used the wrong settings with our first test of version 1.23; this is the updated result after the developer reached out to us and made us aware. |
| 14 | Signum | Bute Limiter 2 | Yes | -1.05 | -0.94 | To achieve comparable settings we used a Threshold at -7.5 and Post Gain at +6.5 |
| 15 | TBProAudio | LAxLimit2 | Yes | | -0.94 | Remember to change detection from "Peak" to "ISP" |
| 16 | Sonible | Smart:Limit 1.00 | Yes | -1.09 | -0.93 | Universal setting. Clip set to 0. Saturation, color, bass management set to off |
| 17 | PSP | Xenon v1.52 | Yes | | -0.92 | We used version 1.3 at first, which did not give as good results (-0.61). Upgrade if you're using an older version. |
| 18 | Tokyo Dawn Records | TDR Limiter 6 GE | Yes | | -0.91 | We used the True Peak module alone; none of the other modules were active |
| 19 | Cockos | ReaLimit | Yes | -1.01 | -0.87 | |
| 20 | FabFilter | Pro-L v1.13 | No | | -0.87 | Pro-L does not handle ISP automatically – you need to use Lookahead and keep an eye on the meter. |
| 21 | Kazrog | KClip | Yes | | -0.87 | |
| 22 | Newfangled Audio | Elevate 1.5.7 | Yes | -1.14 | -0.87 | |
| 23 | Sonnox | Limiter v3.03.17 | Yes | | -0.87 | |
| 24 | MeldaProduction | MDynamicsLimiter v9.21 | No | | -0.85 | |
| 25 | Nugen | ISL2 v2.0 | Yes | | -0.85 | |
| 26 | Brainworx | BX_Limit True Peak | Yes | -1 | -0.84 | Tone controls bypassed, just the limiter. |
| 27 | Lively Audio | Maxwell Smart v3 | Yes | -1 | -0.84 | Free, Windows only. |
| 28 | Fabfilter | Pro-L2 | Yes | | -0.84 | These results are with the Modern setting, no oversampling... The result was pretty much the same at 0.69ms Lookahead and 16x oversampling. |
| 29 | LVC-Audio | Limited-Z 2.0.0 | Yes | -1.17 | -0.83 | Mode: LVC Basic. We noted that the level/perceived loudness went down with ISP protection activated. We also noted that the peaks measured -1.17dBFS when set to -1dBFS limiting. The paid version has oversampling, which probably will have an impact on the result. |
| 30 | Waves | WLM | Yes | | -0.83 | |
| 31 | Flux | Pure Limiter v3.5 | No | | -0.81 | Handles it like Sequoia, with a volume knob last in the chain |
| 32 | Vladgsound | Limiter No6 v1.0.2b Win | Yes | -1.01 | -0.71 | Free - ISP Precise and ISP Protect set to on. Everything else bypassed |
| 33 | Acustic Audio | HWMC | No | -0.92 | -0.68 | Because of Acustic Audio's graphics it's hard to set exact values, therefore we used a gain plugin both before and after to set the values. +7.5dB pre and -1.0dB post limiting. |
| 34 | Hofa | IQ-Limiter v1.07 | Yes | | -0.64 | |
| 35 | Sonic Anomaly | Unlimited | Yes | | -0.6 | Free |
| 36 | A.O.M | Invisible Limiter 1.7.5 | Yes | | -0.58 | |
| 37 | PSP | Old Timer v2.0 | No | | -0.56 | Not a limiter per se but Sigurdór made it work with a fast attack/release and a high ratio |
| 38 | DMG Audio | Limitless v1.03 | Yes | | -0.51 | |
| 39 | Kuassa | Kratos Maximizer 2 | Yes | | -0.45 | This is without Oversampling – search the list for the result with Oversampling turned on |
| 40 | T-Racks | Stealth Limiter | Yes | | -0.44 | Mode set to Tight – ISPL at 16X |
| 41 | Presonus | Limiter (Studio One 3.2) | Yes | | -0.4 | |
| 42 | SoundSpot | Velo2 | No | -1 | -0.39 | Set to 8x oversampling |
| 43 | g200kg | VeeMax | Yes | -1.01 | -0.24 | |
| 44 | Waves | L1+ | Yes | | -0.24 | In 'Analog' mode |
| 45 | 112dB | Big Blue Limiter v1.1.5 | No | | -0.23 | |
| 46 | Airwindows | NC-17 | No | -0.91 | -0.18 | |
| 47 | Audacity | Limiter | No | | -0.18 | Soft Limit |
| 48 | Waves | L3 | No | | -0.14 | Extreme Analog Profile |
| 49 | Cockos | Master Limiter v5.1 | No | | -0.1 | Comes with Reaper |
| 50 | Kilohearts | Limiter | No | -1 | -0.09 | Free |
| 51 | Dead Duck | Limiter | No | | -0.04 | Note - The Threshold acts as the Ceiling/output. |
| 52 | Image Line | Maximus | No | | -0.02 | Not really a mastering limiter, more of a multiband compressor, but added upon request. |
| 53 | McDSP | ML4000 ML1 | No | | 0.02 | In 'Smart' mode |
| 54 | Ableton | Limiter | No | | 0.03 | |
| 55 | Voxengo | Elephant v4.4 | No | | 0.05 | Oversampling at Auto – We added this option to make everyone aware of how important oversampling while limiting is. Elephant scored a perfect -1.0dBTP in all measures with x8 oversampling. |
| 56 | UAD | Precision Limiter | Yes | | 0.08 | |
| 57 | Waves | L1 | No | | 0.08 | |
| 58 | Massey | L2007 Plugin Revision 5604 | No | | 0.11 | |
| 59 | Waves | L2 | No | | 0.11 | |
| 60 | Boz Digital Labs | The Wall | Yes* | | 0.13 | Only claims to handle ISP by oversampling; the tests were made with 8x oversampling |
| 61 | Waves | L3 | No | | 0.15 | Basic Profile |
| 62 | Brainworx | bx_limiter v1.3 | No | | 0.21 | |
| 63 | Avid | Dyn 3 v12.5 | No | | 0.36 | |
| 64 | MeldaProduction | MMultibandLimiter v9.21 | No | | 0.54 | |
| 65 | Final Mix Software | M1 Limiter Lite v1.0.1 | No | | 0.59 | |
| 66 | Cockos | Zero Crossing Maximizer v5.1 | No | | 0.68 | Comes with Reaper |
| 67 | Slate Digital | FG-X | No | | 1.04 | |
| 68 | Cockos | Event Horizon v5.1 | No | | 1.09 | Comes with Reaper |
| 69 | MeldaProduction | MLimiter v9.21 | No | | 1.09 | |
| 70 | Kjaerhus | Classic Master Limiter v1.0.6 | No | -1 | 1.1 | The ceiling is set to -0.2 and you can't change it, so this result is with a volume attenuation of -0.8 after the limiter. Without the adjustment it would have been about +0.9dBTP |
| 71 | Venomode | Maximal 2 | No | | 1.14 | In 8x quality mode (we're guessing oversampling) |
| 72 | LVC-Audio | Limited-Z 1.0.1 | No | -1.17 | 1.29 | Version 2.0 has ISP protection; search the list for its score. |
| 73 | Cockos | JS Volume Adjustment | No | -1 | 1.5 | Comes with Reaper |
| 74 | Cockos | NP1136 v5.1 | No | | 1.69 | Comes with Reaper |
| 75 | Image Line | Maximus | No | -0.21 | 2.35 | |
 
Ok, since many people make the argument that "ISP overs are simply a result of bad mixing/mastering", here is a table (source) comparing many different limiters that are typically used as the last step in the mastering process. In some cases, even a -1dBFS ceiling (which is a fairly conservative setting) can result in +1dBTP peaks.

Even when the mastering engineer wants to keep an eye on TP values, it is common to use a true-peak meter conforming to the ITU BS.1770-4 specification. The problem is that the specification tolerates a margin of error of up to +/-0.55dB (table below, source).

The only reason to care about intersample peaks, and also why the -1dBFS ceiling exists, is lossy compression algorithms. Engineers won't care if they master for CD. They would care if it were audible, but there is already so much intentional distortion that it is impossible to hear. I don't think you could ABX it even listening to a single isolated sample. Many of these plugins even have a soft-clip algorithm baked in to add intentional clipping.
 