
Review and Measurements of Schiit Yggdrasil V2 DAC

Wombat

Master Contributor
Joined
Nov 5, 2017
Messages
6,722
Likes
6,464
Location
Australia
Here are the day-8 measurements of USB input, balanced output, on the Schiit Yggdrasil warm-up test, per my promise to its owner.

[Attachments 13733 and 13734: day-8 warm-up measurement plots]

As the line from the TV series Mythbusters goes: the DAC warm-up "myth is busted!"

My obligation to the owner is now complete (he asked for 7 days).


But, but, but. What was the light level, humidity and cloud cover during each test? ;)
 

derp1n

Senior Member
Joined
May 28, 2018
Messages
479
Likes
629
SBAFtards believe you need three weeks before this thing doesn't sound like garbage. :rolleyes:
 

Jimster480

Major Contributor
Joined
Jan 26, 2018
Messages
2,895
Likes
2,055
Location
Tampa Bay
Amazing work!
It seems that now 3/3 randomly sampled units all measure exactly the same.
Everything is as transparent as possible and the results are still the same...
Looks like Jude fudged his measurements to make Schiit look good. Seems like a desperate attempt by Schiit to save their brand.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,654
Likes
240,828
Location
Seattle Area
Looks like Jude fudged his measurements to make Schiit look good. Seems like a desperate attempt by Schiit to save their brand.
I'd rather not go there. :) I think he was given a hand-picked unit to test. Or alternatively, Schiit realized they made a mistake with the signal processing and has fixed the problem in newer units. But they hate admitting it now.

None of the scenarios "smell" good though.
 
Last edited:

Jimster480

Major Contributor
Joined
Jan 26, 2018
Messages
2,895
Likes
2,055
Location
Tampa Bay
I'd rather not go there. :) I think he was given a hand-picked unit to test. Or alternatively, Schiit realized they made a mistake with the signal processing and has fixed the problem in newer units. But they hate admitting it now.

None of the scenarios "small" good though.
I think you mean "smell" :)

But yes, it's one of those issues, although I would sooner guess that it was a cherry-picked unit or a specific firmware they were running. If the design flaw is there, I am less inclined to believe you could cherry-pick a unit without it, since 3/3 perform the same despite being from different eras, upgraded or not, etc.
 

gvl

Major Contributor
Joined
Mar 16, 2018
Messages
3,491
Likes
4,080
Location
SoCal
I'd rather not go there. :) I think he was given a hand-picked unit to test. Or alternatively, Schiit realized they made a mistake with the signal processing and has fixed the problem in newer units. But they hate admitting it now.

None of the scenarios "small" good though.

I thought Mike M. of Schiit mentioned they reluctantly fixed the 0 crossing glitch at some point, or is it a completely different issue you mean?
 

Jimster480

Major Contributor
Joined
Jan 26, 2018
Messages
2,895
Likes
2,055
Location
Tampa Bay
I thought Mike M. of Schiit mentioned they reluctantly fixed the 0 crossing glitch at some point, or is it a completely different issue you mean?
Totally different problem, honestly.
 

derp1n

Senior Member
Joined
May 28, 2018
Messages
479
Likes
629
But yes, it's one of those issues, although I would sooner guess that it was a cherry-picked unit or a specific firmware they were running. If the design flaw is there, I am less inclined to believe you could cherry-pick a unit without it, since 3/3 perform the same despite being from different eras, upgraded or not, etc.

Firmware seems likely, check out this (unannounced? I can't find anything on schiit.com) Modi Multibit firmware change:
The glitch in V1 firmware previously visible at -70 dBFS is gone in V2. -90 dBFS now renders a sine. While V1 gain linearity was previously quite good for a 16-bit multiplying multibit architecture DAC, with V2 Schiit has set another new standard for this type of 16-bit DAC.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,654
Likes
240,828
Location
Seattle Area
Firmware seems likely, check out this (unannounced? I can't find anything on schiit.com) Modi Multibit firmware change:
Fascinating. That is the best proof we have that our work here has resulted in them going back and fixing these problems. And that their selected measurement folks were given the bits before the general public, without due notice.
 

Palladium

Addicted to Fun and Learning
Joined
Aug 4, 2017
Messages
662
Likes
814
I'm sure they are going to fairly compensate you for testing their products for them gratis, by making sure you have the least possible chance of touching any of their stuff again.
 

rmo

Member
Joined
Jun 25, 2018
Messages
67
Likes
52
This is a quote from Rob Watts on Head-Fi, who I believe is the owner of Chord Electronics:
So why would somebody choose to misrepresent this test? It may be ignorance; or it may be that the tester has other motives. Conventional delta sigma modulators (noise shapers) have amplitude linearity issues; as the wanted signal approaches the noise shaper's resolution limit, it can no longer respond to the signal, and essentially the amplitude gets smaller. This is easy to see on noise shaper simulations, and it's something I have eliminated (that's one reason why I test (using verilog simulation) my noise shapers with -301dB signals and it must perfectly reconstruct it). If you want to counteract this issue, then simply add the correct amount of noise using the conventional test; the loss in amplitude is balanced by noise replacing it. Thus tweaking the bandwidth to add an exact amount of noise to suit the desired DAC to give a "perfect" linearity plot is a way round this problem. But of course it is not science; it's just a way to tweak measurements you want to present, to suit the narrative that you may have.

Rob


Amirm, could you explain what he is talking about in layman's terms? Thanks.
 

maxxevv

Major Contributor
Joined
Apr 12, 2018
Messages
1,872
Likes
1,964
Firstly, Rob isn't the 'owner' of Chord. He is the man behind Chord's FPGA DAC chip (software) development.

Secondly, what is the full context of this answer from him?
 

rmo

Member
Joined
Jun 25, 2018
Messages
67
Likes
52
Firstly, Rob isn't the 'owner' of Chord. He is the man behind Chord's FPGA DAC chip (software) development.

Secondly, what is the full context of this answer from him?
Here's Rob's entire quote. He was talking about Amirm's Schiit Yggdrasil measurements.


Rob Watts


Sponsor: Chord Electronics


Joined: Apr 1, 2014
Posts: 1,700
Likes: 3,358

The term linearity test is a shortened term; the full term is fundamental amplitude linearity test. I quote this because it proves that Jude is absolutely correct in that one categorically must resolve the amplitude of the fundamental; to do this test properly one needs to resolve only the fundamental and not the distortion and noise. To do this completely accurately one needs to do an FFT so that only the fundamental amplitude is measured and absolutely nothing else.

This test grew out of extremely serious and obvious problems that early digital had; it could not resolve small signals accurately, due to inherent problems in R2R, DSD and delta sigma DACs. In the early 1990's, one could employ a simple analogue technique of filtering out all signals apart from the fundamental, then simply measuring and plotting the error as the signal fell. The errors in those days were considerable, in that +/- 2dB was not uncommon at -90dB. Today however, -90dB is pretty accurate, and the tell-tale lift, using this simple test, is simply noise from the DAC and so is unimportant. My pulse array DACs, from 1995, resolved this issue, and meant that the traditional analogue technique was worthless, as it simply measured residual noise. So I always use FFTs, with careful calibration of a -60 dB signal and measuring at -120 dB; indeed even this technique reveals no linearity error once a suitable number of averages are done.

That's not to say the AP is perfect; it's not. I have recently been upgrading this test, and getting it to resolve +/- one LSB of 32 bit data. This is a -186.638 dB signal. To do this I need to set the AP to FFT at 6 kHz with 1.2M points; this is so that I can actually resolve this tiny signal. With synchronous 128 averaging and using a 2.496 kHz signal I can get the observed noise floor to be centred at -214 dB, so that the -186.638 dB signal stands out like a sore thumb. And all my DACs resolve this signal - but always with a +0.6dB error. I am still trying to investigate this error, but since all my DACs (Hugo 2, TT2, Dave) do it with the same error, I am pretty sure it's an AP measurement issue (due to the ADC's fundamental linearity limit). For a signal at -120 dB, this error would translate into +0.0003 dB - not detectable at the usual -120 dB levels.

So why would somebody choose to misrepresent this test? It may be ignorance; or it may be that the tester has other motives. Conventional delta sigma modulators (noise shapers) have amplitude linearity issues; as the wanted signal approaches the noise shaper's resolution limit, it can no longer respond to the signal, and essentially the amplitude gets smaller. This is easy to see on noise shaper simulations, and it's something I have eliminated (that's one reason why I test (using verilog simulation) my noise shapers with -301dB signals and it must perfectly reconstruct it). If you want to counteract this issue, then simply add the correct amount of noise using the conventional test; the loss in amplitude is balanced by noise replacing it. Thus tweaking the bandwidth to add an exact amount of noise to suit the desired DAC to give a "perfect" linearity plot is a way round this problem. But of course it is not science; it's just a way to tweak measurements you want to present, to suit the narrative that you may have.

Rob
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
Here's Rob's entire quote. He was talking about Amirm's Schiit Yggdrasil measurements. [Rob Watts' Head-Fi post, quoted in full in the previous post]
Curious to hear @amirm's reply.
:)
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,654
Likes
240,828
Location
Seattle Area
This is a quote from Rob Watts on Head-Fi, who I believe is the owner of Chord Electronics: [Rob's closing paragraph, quoted in full above]

Amirm, could you explain what he is talking about in layman's terms? Thanks.
Sure. :)

He was brought in to help Jude with respect to a great technical point made by someone on head-fi:

[Screenshot: the Head-Fi post by member lowvolume making this point]


He absolutely speaks the truth, and it is a point I have repeatedly made.

Let's dig in. As DAC output gets lower and lower -- which is what the linearity graph shows -- its output starts to get corrupted with noise and distortion. Power supply noise at -110 dB is nothing compared to a 0 dB signal. But lower our signal to -120 dB and now the power supply noise is actually quite a bit louder than the signal itself! Ditto for all other noise and distortion sources that are not level dependent.
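To put rough numbers on this, here is a minimal numpy sketch (my own illustration with made-up levels, not amirm's measurement code): sweep a test tone down in level over a fixed noise floor at -110 dB and naively read the level from the RMS of the combined output.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 48_000, 200, 48_000
t = np.arange(n) / fs

# Fixed, level-independent noise floor at roughly -110 dB re full scale.
noise = 10 ** (-110 / 20) * rng.normal(size=n)

for level_db in (-60, -90, -110, -120):
    tone = 10 ** (level_db / 20) * np.sin(2 * np.pi * f0 * t)
    out = tone + noise
    # Naive unfiltered level estimate: amplitude inferred from total RMS.
    measured_db = 20 * np.log10(np.sqrt(2 * np.mean(out ** 2)))
    print(f"generated {level_db:>4} dB -> measured {measured_db:6.1f} dB")
```

Down to about -90 dB the readings track the generated level; by -120 dB the fixed noise dominates and the "measured" level flattens out near the noise floor, which is exactly the lift an unfiltered linearity sweep shows.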

Ideally, we want our linearity measurements to show what happens to the output of the DAC as we lower the volume. After all, that is what we hear!

Alas, we can't do that, because the analog-to-digital converter in our audio analyzer is doing the same thing to some extent, having its result corrupted by distortion and noise just the same.

The "solution" is like a pill that addresses the illness but has a lot of side effects. Namely, we apply filters to whatever our ADC in the analyzer captures. The lower in level of linearity measurements we make, the stronger this filter needs to be to get rid of our analyzer noise and distortion.

Unfortunately that filtering can't distinguish between ADC noise/distortion in the analyzer versus noise/distortion in the DAC being tested. It cleans up both just the same.

The filter is so strong and so "good" that it does what member lowvolume says. It can render beautiful sine waves out of total garbage produced by the DAC (and the ADC in our analyzer). It is like demonstrating how dirty a plate in a restaurant is after washing it 100 times!
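As a toy demonstration of lowvolume's square-wave point (my own sketch, not anyone's actual test script): keep only the fundamental's FFT bin of a square wave and you read back a textbook sine amplitude, as if the device were flawless.

```python
import numpy as np

fs, f0 = 48_000, 200
n = fs                                     # one second -> 1 Hz FFT bins
t = np.arange(n) / fs

# "Total garbage": a square wave, i.e. a sine plus a pile of odd harmonics.
garbage = np.sign(np.sin(2 * np.pi * f0 * t))

spectrum = np.fft.rfft(garbage) / (n / 2)  # scaled so a unit sine reads 1.0
k = f0 * n // fs                           # bin index of the fundamental

print(f"fundamental bin only: {abs(spectrum[k]):.3f}")                 # ~1.273 = 4/pi
print(f"true RMS of the waveform: {np.sqrt(np.mean(garbage ** 2)):.3f}")  # 1.000
```

The "measurement" reports a spotless sine of amplitude 4/pi regardless of all the harmonic garbage a listener would actually hear; that is the washing-the-plate-100-times problem in two lines of math.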

Instead of accepting this as a fact and thinking about how to deal with it, Jude puts up a defense and then asks both Bob Smith (atomicbob) and Rob Watts to defend him. Both put up an improper defense.

More specific to Rob's post: a designer may indeed have a strong desire to measure just the pure tone output of the DAC. In his case, he strives to reproduce tones accurately at ridiculously low levels, as indicated in his post. He can then tweak his design to see if he gets a better outcome.

Those measurements, however, as I explained, do not reflect what we hear. There is no filtering whatsoever at the output of the DAC when we connect it to our amplifiers and speakers. We hear the output of the DAC, noise and distortion and all.

Our goal with any audio measurement needs to be correlation with what we hear. Applying amazingly strong filters to the output of a DAC prior to measurement, which no user does (or can, as otherwise you would only hear one tone), creates measurements that are simply not that useful.

So what can we do here? Two things:

1. Don't measure so deeply as to require such severe filtering. I stop at -120 dBFS, which is plenty to cover the ear's dynamic range of roughly 116 dB. Jude and atomicbob go to -140 dB. At -140 dB there is so much noise and distortion that they are the signal, not what the DAC is attempting to produce! Filtering out the dominant output of the DAC and then showing a tiny signal within doesn't demonstrate anything useful.

2. Use a filter that is just enough, but no more. Here is the response of my filter relative to one of Jude's two filter settings, centered around measuring linearity at 200 Hz:

[Figure: Jude's filter response (blue) versus Amir's (red), centered around 200 Hz]


As you can see, my filter in red not only doesn't filter as much (50 dB versus 60 to 70 dB for Jude's), but it also has a much better behaved frequency response. See the various troughs in Jude's, especially the one around the mains frequency of 50 to 60 Hz, which helps the DAC by not showing as much of its power supply noise in the DAC output.

I have also worked to make sure that, with or without the filter, the output of the DAC at the main frequency of 200 Hz doesn't change. Digital filters can ring and levels can change. While this is not a big error in Jude's case, it is there nevertheless, and I corrected it in my custom filter.

Bottom line: would you like the measured accuracy of a DAC to be determined through that blue curve or the red one? I hope we both agree that less is more, and that attention needs to be paid to the underlying signal processing to create a correct measurement.
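For illustration, here is what a "just enough" filter might look like. This is a generic second-order Butterworth band-pass in scipy, purely a stand-in of my own choosing, not amirm's actual custom filter:

```python
import numpy as np
from scipy import signal

fs = 48_000
# Gentle band-pass around the 200 Hz linearity tone: enough to tame the
# analyzer's own noise, without burying the DAC's hum and distortion.
sos = signal.butter(2, [100, 400], btype="bandpass", fs=fs, output="sos")

freqs = [50, 60, 120, 200, 1_000]
w, h = signal.sosfreqz(sos, worN=freqs, fs=fs)
for f, resp in zip(freqs, h):
    print(f"{f:>5} Hz: {20 * np.log10(abs(resp)):7.1f} dB")
```

The 200 Hz tone passes at full level while mains-frequency content is only modestly attenuated, so the DAC's power-supply contribution still shows in the result; a brick-wall FFT-bin "filter" would instead erase it entirely and flatter the DAC.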

Back to Rob Watts' post: he is defending these ultra-steep filters, or the use of FFTs (the steepest filters of all), without paying attention to the point made by member lowvolume: that we are not measuring what comes out of the DAC and reaches the listener's ears. By filtering just about any crap out of the DAC output, we can get some semblance of a sine wave from which to derive our linearity figure. That is not what we want to measure.

Finally, someone should show him the Yggdrasil measurements using the FFT method he advocates and ask him what he thinks of that:
[Figure: Schiit Yggdrasil DAC FFT linearity measurement]


He will fall off his chair and delete his post. :) FFT or not, the Yggdrasil units owners have (as in the three I have tested) lose linearity completely past -115 dB and no longer care what you feed them as input. It doesn't matter what method is used, Jude's or mine. What has been shipped to customers shows the same broken design.

Summary
Rob's defense of Jude misses the point completely. Our goal with linearity measurements is not an academic exercise where we ignore DAC noise and distortion and celebrate what comes out after filtering them away. He as a designer may have an interest in such data (as should Schiit), but not us as users. We should use as little filtering as we can get away with.

Regardless, he was not told about the larger picture: that using any method available to us, the Schiit Yggdrasil DAC produces non-competitive linearity results. I challenge him to defend this. He will not and cannot.

It is disingenuous not to acknowledge the great point lowvolume made: that any signal, including a square wave, can be cleaned up with such filters to produce a sine wave! It was a great teaching moment that got destroyed by defending a company's commercial interest as opposed to getting to the truth.
 

gvl

Major Contributor
Joined
Mar 16, 2018
Messages
3,491
Likes
4,080
Location
SoCal
I totally agree that Schiit's measurements show a lack of engineering hygiene in their TOTL and other offerings. However, the statement that we focus on what we hear contains a contradiction, as we can't really hear at these levels considering background environmental noise and whatnot. So one way or another it is an academic exercise as well, albeit a different one.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,654
Likes
240,828
Location
Seattle Area
I totally agree that Schiit's measurements show a lack of engineering hygiene in their TOTL and other offerings. However, the statement that we focus on what we hear contains a contradiction, as we can't really hear at these levels considering background environmental noise and whatnot.
It is not that academic, actually. Please read the article I wrote on the actual effect of environmental noise: https://audiosciencereview.com/forum/index.php?threads/dynamic-range-how-quiet-is-quiet.14/

In summary, for a transparent channel, we need 120 dB. Anything less and there can be doubt.

Let's remember that the context of the measurements on head-fi is headphones, where we are able to achieve a massive amount of noise reduction as compared to speakers. So there, the issue is even more real.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,654
Likes
240,828
Location
Seattle Area
I was asked to comment on a specific "dynamic range" measurement by Bob Smith (AtomicBob). Specifically, he shows the FFT spectrum of a -60 dB tone and, on it, declares a signal-to-noise ratio of 122 dB. As usual, his charts are impossible to read, so please allow me to annotate them as follows:
[Figure: AtomicBob's FFT spectrum of a -60 dB tone, annotated to show the claimed signal-to-noise ratio]


As you see, his "FFT meters" are declaring that there is 121+ dB of signal-to-noise ratio.

That data directly conflicts with what the FFT is actually showing. The mains hum alone is enough to bring that difference down to around 65 dB, not 122 dB. Add up all the other distortion and noise components and there is no way we have 122 dB of proper dynamic range.

And no, you can't look at the noise floor of the FFT and measure that difference. The FFT noise floor gets artificially lowered based on its parameters (this is called "FFT gain"). But even if we did, that noise floor is at -160 dB, so subtracting our -60 dB signal from it, we get 100 dB, not 122.
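Here is a quick, hedged demonstration of FFT gain with synthetic numbers (nothing below reflects AtomicBob's actual setup): the same noise, analyzed with longer and longer FFTs, shows an ever-lower per-bin floor even though the true RMS noise never changes.

```python
import numpy as np

rng = np.random.default_rng(0)
noise = 1e-3 * rng.normal(size=2**21)            # broadband noise, RMS ~ -60 dB

true_db = 20 * np.log10(np.sqrt(np.mean(noise ** 2)))
print(f"true RMS noise level: {true_db:6.1f} dB")

for n_fft in (2**12, 2**16, 2**20):
    bins = np.abs(np.fft.rfft(noise[:n_fft])) / (n_fft / 2)
    floor_db = 20 * np.log10(np.median(bins))    # typical displayed per-bin level
    print(f"N = {n_fft:>9}: apparent FFT floor {floor_db:6.1f} dB "
          f"(FFT gain ~ {true_db - floor_db:4.1f} dB)")
```

So reading "-160 dB" off an FFT floor says as much about the FFT length as about the device; the gap between the signal bin and the displayed floor is not the SNR.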

So what is going on? First, let's look at the same measurement of the same -60 dB signal using my Audio Precision analyzer:

[Figure: Schiit Yggdrasil DAC THD+N and SINAD measurement on the Audio Precision analyzer]


We see the same mains contribution at 120 Hz and a bunch of harmonic distortion. The dedicated meter in the Audio Precision is reporting about 60 dB of dynamic range above our noise and distortion, which more or less matches the manual math I performed on AtomicBob's graph. And we can confirm the same by eye. Starting with a -60 dB signal, the noise floor would have to be at -180 dB for his math to work, and there is no way we get there.

The key thing for now is that both of our FFT measurements are producing essentially identical results. So the issue is not the device being tested, but what the meters on AtomicBob's graph are saying.

Alas, despite all the shouting that goes on about accuracy and documentation of measurements, we have none here from AtomicBob. The meter says: "USER: DAC SNR Residual Async." Good luck trying to find out what that means. :)

Fortunately I have used the Prism Sound analyzer and still have the software. So I went in there and found this custom script for making measurements from the FFT. This is what it looks like when not minimized, as he has it:
[Figure: Prism Sound FFT THD+N band-reject measurement script, expanded]


I know, I know, it still makes no sense. :) But stay with me. What this is trying to do is filter out the tone at 1000 Hz ("band reject"). As with any filter, the bandwidth matters. Here, we are interested in taking out our main tone and looking at what is left as our distortion+noise power. Unfortunately, the filter used here by default is improper. It has a wide bandwidth of 1/3 octave, instead of just a hertz or two, to take out the 1 kHz tone.

Prism Sound help file explains the motivation and problems with it:

[Figure: Prism Sound help file on the FFT THD+N band-reject method]


Yup, the 1/3-octave filtering is there to emulate old analog THD+N meters! Back then it was hard to filter as sharply as we can today with digital signal processing (and much better analog filters too). As the help file says, using this method causes "residual components .... to be underestimated." And underestimated they are, hence the reason he is showing much better values than he should.
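To see how the wide reject band flatters the number, here is a toy numpy sketch with made-up levels (not the Prism Sound script): a -60 dB tone with close-in spurious sidebands, measured once with a narrow notch and once with a 1/3-octave band reject.

```python
import numpy as np

fs, f0 = 48_000, 1_000
n = 4 * fs                                      # 0.25 Hz FFT bins
t = np.arange(n) / fs
rng = np.random.default_rng(0)

sig = 1e-3 * np.sin(2 * np.pi * f0 * t)         # -60 dB test tone
sig += 2e-5 * np.sin(2 * np.pi * 120 * t)       # mains-related hum
for f_side in (f0 - 100, f0 + 100):             # close-in supply-modulation sidebands
    sig += 5e-5 * np.sin(2 * np.pi * f_side * t)
sig += 3e-6 * rng.normal(size=n)                # broadband noise

amp = np.abs(np.fft.rfft(sig)) / (n / 2)        # per-bin amplitude
freqs = np.fft.rfftfreq(n, d=1 / fs)

def residual_db(lo, hi):
    """Total power left after rejecting the band [lo, hi] Hz, in dB re full scale."""
    keep = (freqs < lo) | (freqs > hi)
    return 10 * np.log10(np.sum((amp[keep] / np.sqrt(2)) ** 2))

print(f"narrow notch (+/- 2 Hz): {residual_db(f0 - 2, f0 + 2):6.1f} dB")
print(f"1/3-octave band reject:  {residual_db(f0 / 2**(1/6), f0 * 2**(1/6)):6.1f} dB")
```

The wide reject band swallows the +/- 100 Hz sidebands along with the tone, so the "residual" reads roughly 10 dB better than it should; the narrow notch keeps them in the measurement, where they belong.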

The lesson here is that custom scripts for making measurements in analyzers need to be read and understood. And the results confirmed to make sure they pass the smell test. Clearly an FFT that shows noise components just 65 dB below the signal can't yield a useful figure of merit of 120+ dB.

Summary
My FFT spectrum of a -60 dB tone essentially matches AtomicBob's data. That results in a SINAD (signal over noise+distortion power) of just 60 dB. The meter used in AtomicBob's graph to derive the signal-to-noise ratio is simply wrong and not configured correctly. We can easily confirm this on his graph, as I have shown.

Let me know if you have any questions.
 