This is a quote from Rob Watts on Head-Fi, who I believe is the owner of Chord Electronics:
So why would somebody choose to misrepresent this test? It may be ignorance; or it may be that the tester has other motives. Conventional delta sigma modulators (noise shapers) have amplitude linearity issues; as the wanted signal approaches the noise shaper's resolution limit, it can no longer respond to the signal, and essentially the amplitude gets smaller. This is easy to see on noise shaper simulations, and it's something I have eliminated (that's one reason why I test (using verilog simulation) my noise shapers with -301dB signals and it must perfectly reconstruct it). If you want to counteract this issue, then simply add the correct amount of noise using the conventional test; the loss in amplitude is balanced by noise replacing it. Thus tweaking the bandwidth to add an exact amount of noise to suit the desired DAC to give a "perfect" linearity plot is a way round this problem. But of course it is not science; it's just a way to tweak measurements you want to present, to suit the narrative that you may have.
Rob
Amirm, could you explain what he is talking about in layman's terms? Thanks.
Sure.
He was brought in to help Jude respond to a great technical point made by someone on Head-Fi:
He absolutely speaks the truth, and it is a point I have repeatedly made.
Let's dig in. As the DAC output gets lower and lower -- which is what a linearity graph shows -- its output starts to get corrupted with noise and distortion. Power supply noise sitting at -110 dB is nothing next to a 0 dB signal. But lower our signal to -120 dB and now the power supply noise is actually quite a bit louder than the signal itself! Ditto for all other noise and distortion sources that are not level dependent.
Ideally we want our linearity measurements to show what happens to the output of the DAC as we lower the volume. After all that is what we hear!
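To illustrate what a linearity plot captures, here is a minimal Python sketch of a hypothetical DAC with a fixed noise floor (the -116 dB figure is my assumption, purely for illustration, not a measurement of any product): as the commanded level approaches that floor, the measured level deviates more and more from the ideal straight line.

```python
import math

NOISE_FLOOR_DB = -116.0  # hypothetical DAC noise floor; an assumption for illustration

def measured_level_db(commanded_db):
    """Level an analyzer would read if the DAC adds a fixed noise floor.
    Powers of uncorrelated signal and noise add, so the reading is the
    dB value of (signal power + noise power)."""
    signal_pwr = 10 ** (commanded_db / 10)
    noise_pwr = 10 ** (NOISE_FLOOR_DB / 10)
    return 10 * math.log10(signal_pwr + noise_pwr)

for level in (-60, -90, -110, -120, -130):
    deviation = measured_level_db(level) - level
    print(f"{level:5d} dBFS commanded -> deviation {deviation:+.1f} dB")
```

Down to -90 dBFS the deviation rounds to zero; at -120 dBFS it is already over 5 dB. That bottom stretch of the plot is exactly where designs separate.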
Alas, we can't do that because our analog to digital converter in our audio analyzer is doing the same thing to some extent, having its result corrupted by distortion and noise just the same.
The "solution" is like a pill that addresses the illness but has a lot of side effects. Namely, we apply filters to whatever the ADC in the analyzer captures. The lower the level at which we measure linearity, the stronger this filter needs to be to get rid of the analyzer's own noise and distortion.
Unfortunately that filtering can't distinguish between ADC noise/distortion in the analyzer versus noise/distortion in the DAC being tested. It cleans up both just the same.
The filter is so strong and so "good" that it does exactly what member lowvolume says. It can render beautiful sine waves out of total garbage produced by the DAC (and the ADC in our analyzer). It is like demonstrating how dirty a plate in a restaurant is after washing it 100 times!
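lowvolume's point can be sketched in a few lines of Python (a hedged illustration, not anyone's actual measurement chain): start with a full-scale square wave, which is nothing like a sine, and apply the narrowest possible "filter" by keeping only the fundamental. A clean sine pops out.

```python
import math

CYCLES = 8                 # periods of the test tone in the capture
n = 1024 * CYCLES          # total number of samples

# "Total garbage": a full-scale square wave, not remotely a sine.
square = [1.0 if math.sin(2 * math.pi * CYCLES * i / n) >= 0 else -1.0
          for i in range(n)]

# The narrowest possible "filter": project onto the fundamental alone
# (a one-bin DFT, i.e. correlate against sin and cos at the tone frequency).
re = sum(square[i] * math.cos(2 * math.pi * CYCLES * i / n) for i in range(n))
im = sum(square[i] * math.sin(2 * math.pi * CYCLES * i / n) for i in range(n))
amplitude = 2 * math.sqrt(re * re + im * im) / n

# What survives is a perfect sine whose amplitude is 4/pi (~1.273),
# the Fourier fundamental of a square wave.
print(round(amplitude, 3))
```

Everything that made the waveform a square wave -- all the odd harmonics -- has been filtered into oblivion, and the "measurement" reports a flawless sine.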
Instead of accepting this as a fact and thinking about how to deal with it, Jude puts up a defense and then asks both Bob Smith (atomicbob) and Rob Watts to defend him. Both put up an improper defense.
More specific to Rob's post: a designer may indeed have a strong desire to measure just the pure tone output of the DAC. In his case, he strives to produce tones accurately at the ridiculously low levels indicated in his post. He can then tweak his design to see if he can get a better outcome.
Those measurements, however, as I explained, do not reflect what we hear. There is no filtering whatsoever at the output of the DAC when we connect it to our amplifiers and speakers. We hear the output of the DAC, noise, distortion and all.
Our goal with any audio measurement needs to be correlation with what we hear. Applying amazingly strong filters to the output of a DAC prior to measurement, which no user does (or can, as otherwise you would only hear a single tone), creates measurements that are simply not that useful.
So what can we do here? Two things:
1. Don't measure so deeply as to require such severe filtering. I stop at -120 dBFS, which is plenty to cover the ear's dynamic range of roughly 116 dB. Jude and atomicbob go to -140 dB. At -140 dB there is so much noise and distortion that they are the signal, not what the DAC is attempting to produce! Filtering out the dominant output of the DAC and then showing the tiny signal within doesn't demonstrate anything useful.
2. Use a filter that is just enough but no more. Here is the response of my filter relative to (one of two of) Jude's filter settings, centered around measuring linearity at 200 Hz:
As you can see, my filter (in red) not only filters less (50 dB versus 60 to 70 dB for Jude's), but also has a much better behaved frequency response. Note the various troughs in Jude's, especially the one around the mains frequency of 50 to 60 Hz, which helps the DAC by not showing as much of its power supply noise in the output.
I have also worked to make sure that, with or without the filter, the output of the DAC at the main frequency of 200 Hz doesn't change. Digital filters can ring and levels can change. While this is not a big error in Jude's case, it is there nevertheless, and I corrected it in my custom filter.
Bottom line: would you like the accuracy of a DAC to be measured through that blue curve or the red one? I hope we both agree that less is more, and that attention needs to be paid to the underlying signal processing to create a correct measurement.
Back to Rob Watts' post: he is defending these ultra-steep filters, or the use of FFT (the steepest filter of all), without addressing the point made by member lowvolume: that we are not measuring what comes out of the DAC and reaches the listener's ears. By filtering out just about any garbage the DAC produces, we can get some semblance of a sine wave from which to derive our linearity figure. That is not what we want to measure.
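The disagreement can be sketched in a toy Python experiment (the levels and the noise model here are my assumptions, not anyone's actual test setup): a -120 dBFS tone buried under -110 dBFS of broadband noise. The unfiltered reading, which is what reaches the listener, sits near the noise level; a single FFT bin reports the tone as if the noise weren't there.

```python
import math
import random

random.seed(0)                  # deterministic noise for the demo
FS = 48000                      # sample rate in Hz (assumed)
F = 200                         # test-tone frequency in Hz
n = FS                          # one second of samples

tone_amp = 10 ** (-120 / 20)    # -120 dBFS test tone
noise_rms = 10 ** (-110 / 20)   # -110 dBFS broadband noise

# The DAC's "output": the tiny tone buried under louder noise.
x = [tone_amp * math.sin(2 * math.pi * F * i / FS) + random.gauss(0, noise_rms)
     for i in range(n)]

# What reaches the listener: total RMS level, noise and all.
total_rms = math.sqrt(sum(v * v for v in x) / n)
unfiltered_db = 20 * math.log10(total_rms)

# What a single FFT bin (the steepest "filter" of all) reports for the tone.
re = sum(x[i] * math.cos(2 * math.pi * F * i / FS) for i in range(n))
im = sum(x[i] * math.sin(2 * math.pi * F * i / FS) for i in range(n))
filtered_db = 20 * math.log10(2 * math.sqrt(re * re + im * im) / n)

print(round(unfiltered_db, 1))  # near -110: the noise dominates what we hear
print(round(filtered_db, 1))    # near -120: the filter reports a "perfect" tone
```

Both numbers are "correct"; they just answer different questions. Only the first one describes the signal the listener actually receives.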
Finally, someone should show him the Yggdrasil measurements using the FFT method he advocates and ask him what he thinks of that:
He will fall off his chair and delete his post.
FFT or not, the Yggdrasil units (all three that I have tested) lose linearity completely at -115 dB and no longer care what you feed them as input.
It doesn't matter which method is used, Jude's or mine: what has been shipped to customers shows the same broken design.
Summary
Rob's defense of Jude misses the point completely. Our goal with linearity measurements is not an academic exercise where we ignore DAC noise and distortion and celebrate what comes out after filtering them away. He as a designer may have an interest in such data (as should Schiit), but not us as users. We should use as little filtering as we can get away with.
Regardless, he was not told about the larger picture: that using any method available to us, the Schiit Yggdrasil DAC produces non-competitive linearity results. I challenge him to defend this. He will not and cannot.
It is disingenuous not to acknowledge the great point lowvolume made: that any signal, including a square wave, can be cleaned up with such filters to produce a sine wave! It was a great teaching moment that was destroyed by defending a company's commercial interests rather than getting to the truth.