splitting_ears
Active Member
Hi there,
It's been a while since I thought about this, but why aren't DACs tested for their ability to handle inter-sample overs (or ISP, inter-sample peaks) in ASR reviews?
Here are some established facts:
- ISPs are very real, and almost any material released since the late 90's can carry a significant number of them 'baked in', regardless of engineering quality, artist calibre or technical excellence. Most of the time they stay below +3 dBFS, but it's possible to find commercial releases that almost reach +6 dBFS. Please also note that there is no mathematical maximum to ISP levels, as demonstrated here.
- ISPs are a necessary by-product of PCM encoding, which means the 'true peak' of the reconstructed waveform can exceed the largest sample value. This is due to the nature of the encoding itself: samples are not the signal but an intermediate representation of it. It only becomes a "real" signal after decoding, i.e. reconstruction/interpolation.
- DACs handle these overshoots differently: some of them distort, while others implement an internal safety margin, effectively trading away some SNR. Benchmark and RME, for instance, do this to their great credit. In most cases, this margin adds between 2 and 3 dB of tolerance for overshoots.
- Distortion from ISPs only occurs when the DAC is 'pushed' to its maximum output level. If the unit has a digital volume control, turning it down will usually solve the issue. Some DACs, however, cannot do this because of their design (fixed output, analogue volume pots, ...)
- Sample rate conversion can further increase the reconstructed peak levels due to the Gibbs phenomenon. Lossy encoding also creates many overshoots that are even harder to predict.
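To make the points above concrete, here is a minimal numpy sketch (an illustration, not any established meter) using the classic worst-case signal: an fs/4 sine with a 45° phase offset. Every sample lands at ±0.707 of the waveform's peak, so once the samples are normalised to 0 dBFS, the reconstructed waveform measures about +3 dBFS true peak. True peak is estimated here by bandlimited upsampling via FFT zero-padding.

```python
import numpy as np

# fs/4 sine with a 45-degree phase offset: samples land at +/-0.7071
# while the underlying continuous waveform still peaks at 1.0.
n = np.arange(1024)
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)
x /= np.max(np.abs(x))  # normalise: sample peak is now exactly 0 dBFS

def true_peak_db(x, oversample=8):
    """Estimate true peak (dBFS) via FFT zero-padding (ideal bandlimited
    interpolation), a simplified version of what true-peak meters do."""
    N = len(x)
    X = np.fft.rfft(x)
    Xp = np.zeros(N * oversample // 2 + 1, dtype=complex)
    Xp[:len(X)] = X  # original spectrum, padded with zeros above Nyquist
    y = np.fft.irfft(Xp, n=N * oversample) * oversample
    return 20 * np.log10(np.max(np.abs(y)))

print(true_peak_db(x))  # ~ +3.01 dBFS despite a 0 dBFS sample peak
```

So a file whose samples never exceed full scale still demands ~3 dB of reconstruction headroom from the DAC.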
How could this be done? First, by creating a standard test (a test tone plus a documented procedure), much like the J-test for jitter. There are many inter-sample test files floating around today, but none of them is really established, and their true peak levels are all over the place. The test procedure itself is also unclear to many people. I'm sure the whole community here could come up with a very robust, yet simple test.
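As a sketch of what such a standard test file could contain (the tone choice and parameters here are only an example, not a proposed standard), this generates the +3 dBFS ISP tone and writes it to a 16-bit mono WAV using numpy and the Python standard library:

```python
import io
import wave

import numpy as np

def make_isp_tone(fs=44100, seconds=1.0):
    """fs/4 sine, 45-degree phase: 0 dBFS sample peak, ~+3 dBFS true peak."""
    n = np.arange(int(fs * seconds))
    x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)
    return x / np.max(np.abs(x))

def write_wav_16bit(x, fs, fileobj):
    """Quantise to 16-bit PCM and write a mono WAV to a file-like object."""
    pcm = np.clip(np.round(x * 32767), -32768, 32767).astype('<i2')
    with wave.open(fileobj, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit
        w.setframerate(fs)
        w.writeframes(pcm.tobytes())

buf = io.BytesIO()  # stand-in for a real file on disk
tone = make_isp_tone()
write_wav_16bit(tone, 44100, buf)
```

A documented procedure around a file like this might then specify: play at maximum volume, capture the analogue output, and report any distortion products above the DAC's noise floor.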
Once the test looks robust enough, adding it to the main reviews would not only cover an interesting angle not found elsewhere, but would also push manufacturers to trade some SNR for a higher safety margin against playback distortion. In other words, it would put more emphasis on a "clean" most significant bit (MSB) than on the least significant bit (LSB). That would be a significant quality improvement, well beyond the magnitude of a 2 dB better noise floor.
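The trade-off can be simulated in a few lines (a numpy sketch of a hypothetical oversampling interpolator, not any real DAC's filter): with no headroom, the reconstructed peaks of the worst-case tone clip at full scale; attenuating digitally by ~3 dB before interpolation keeps them inside full scale at the cost of 3 dB of SNR.

```python
import numpy as np

def oversample(x, L=8):
    """Bandlimited 8x upsample via FFT zero-padding (ideal interpolation)."""
    N = len(x)
    X = np.fft.rfft(x)
    Xp = np.zeros(N * L // 2 + 1, dtype=complex)
    Xp[:len(X)] = X
    return np.fft.irfft(Xp, n=N * L) * L

# Worst-case tone: 0 dBFS sample peak, ~+3 dBFS true peak.
n = np.arange(1024)
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)
x /= np.max(np.abs(x))

# A fixed-point interpolator with no headroom hard-clips the overshoot:
no_margin = np.clip(oversample(x), -1.0, 1.0)

# ~3 dB of digital attenuation first: peaks stay at or below full scale.
with_margin = oversample(x * 2 ** -0.5)

print(np.max(np.abs(oversample(x))))  # ~1.414: would clip without margin
```

The clipped version flattens every reconstructed peak, which is exactly the distortion a review measurement could expose.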
Today, most properly designed DACs measure well beyond the limits of human hearing, yet some still fail to provide adequate protection against a necessary by-product of PCM encoding. Only by publishing objective data and reviews can we encourage more manufacturers to solve this problem. ASR has changed many things for the better in the hi-fi/pro audio world, so I'm sure this would be a great next challenge to tackle.
What do you think?