
Understanding Audio Measurements

Wouldn't you solve your problems using a Mutec 3+ USB?
I can't figure out how from a quick read of their page. What needs to happen is to take the S/PDIF output from the AP and turn it around to feed a USB DAC of my choosing.

They don't allow S/PDIF to USB conversion.
Is your S/PDIF electrical or optical?


I.e., input S/PDIF into your PC using the miniStreamer or USBStreamer, then route it to a USB DAC connected to the PC using something like a virtual patchbay or VAB.


It has both. What you describe is essentially what the blog talked about. The only difference is that he tried to get them to run synchronously. I am not sure that matters for my testing.
In Remco Stoutjesdijk's blog, he talks about tapping the I2S output of the DAC and feeding that as a bit clock signal to the sync input of the Audio Precision. I suspect that this is not an option for the majority of DACs that you are measuring, Amir.
True, and he doesn't explain why he felt bit-for-bit timing accuracy is important. The AP has settling parameters that control when to capture a value, which should be enough to deal with latency and timing variations.
It has both. What you describe is essentially what the blog talked about. The only difference is that he tried to get them to run synchronously. I am not sure that matters for my testing.
As far as I can see it shouldn't make a jot of difference as the output is asynchronous USB.

The miniStreamer is only $35 (without a box) or $105 (with a box).


I use it to input S/PDIF to my PC, so I know that part works; I just need to check that the audio patch software works as advertised and that it doesn't introduce any odd artifacts into the measurements.
Hi Amir,

Regarding your THD+N vs. frequency measurements, it appears that a sweep is used. I am curious about the duration of the sweep, more specifically the frequency step size and step duration. I am a big fan of using sweeps for audio analysis, but I share your concern that "THD+N is a rather poor metric since it considers all harmonic distortion products to have the same demerit". Do you have any ideas on how to improve your data? Perhaps a waterfall plot visualizing harmonic distortion over the audible frequency spectrum would be more appropriate?
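One way to see the limitation being discussed: THD+N sums the power in every non-fundamental FFT bin, so a relatively benign low-order harmonic and a nastier high-order one count the same. A minimal Python sketch (synthetic signal with made-up harmonic levels, not AP data) that computes both the lumped figure and the per-harmonic levels a waterfall-style plot would preserve:

```python
import numpy as np

fs, n, f0 = 48000, 4800, 1000          # coherent: f0 falls on an exact FFT bin
t = np.arange(n) / fs
# Synthetic DAC output: fundamental plus a -80 dB 2nd and -90 dB 3rd harmonic
sig = (np.sin(2*np.pi*f0*t)
       + 1e-4    * np.sin(2*np.pi*2*f0*t)
       + 3.16e-5 * np.sin(2*np.pi*3*f0*t))

power = np.abs(np.fft.rfft(sig))**2
k0 = f0 * n // fs                       # fundamental bin index
fund = power[k0]

# Lumped metric: everything except DC and the fundamental, summed together
thd_n_db = 10*np.log10((power[1:].sum() - fund) / fund)

# Per-harmonic view: read each harmonic's bin separately
harmonics_db = [10*np.log10(power[k0*h] / fund) for h in (2, 3)]
print(round(thd_n_db, 1), [round(h, 1) for h in harmonics_db])
```

Here the lumped THD+N comes out near -79.6 dB, while the per-harmonic readout separates the -80 dB second from the -90 dB third harmonic, which is exactly the information a per-harmonic or waterfall view keeps and THD+N throws away.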
@amirm, something I've not seen you discuss (I may have missed it) are the apparent anomalies in the low-frequency portion of the jitter test.

[Attached image: RME ADI-2 DAC Jitter and Noise over USB measurement]

I assume it's power supply related - is that correct?

And what is the significance of it? What can one draw from it when looking at the measurements?

I assume it's power supply related - is that correct?
Not all of them. First, that initial spike at the far left is a "DC" component, an artifact of the measurement, and should be ignored.

The next one does look like it is power-supply related.

The next ones, at 1 kHz and then 2 kHz, could be artifacts of USB packets.

The 2 kHz one, if my memory is right, is present in a lot of my measurements, indicating that it originates in my analyzer.

Good question by the way. :)
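The "ignore the DC spike" point is easy to demonstrate: any constant offset in the captured signal lands entirely in FFT bin 0 and says nothing about jitter. A quick sketch with a made-up 1 mV offset:

```python
import numpy as np

fs, n = 48000, 4800
t = np.arange(n) / fs
sig = np.sin(2*np.pi*1000*t) + 0.001    # 1 kHz tone with a tiny DC offset

spec = np.abs(np.fft.rfft(sig))
dc = spec[0] / n                        # bin 0 divided by n recovers the mean, i.e. the offset
dc_removed = np.abs(np.fft.rfft(sig - sig.mean()))[0] / n   # subtracting the mean kills the spike
print(dc, dc_removed)
```

The whole "spike" is just the 0.001 offset; subtracting the mean removes it without touching anything jitter-related in the rest of the spectrum.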
Hello Amir,

Don't know if you are familiar with this thread, and particularly the discussion happening around these pages: http://www.diyaudio.com/forums/vend...egrated-preamp-crossover-dac-project-107.html

It turns out that implementing the ESS reference output stage causes common-mode noise (and even DC) to appear on the balanced outputs.
Obviously this is a problem for amps with poor common-mode rejection.

It would be great, when testing DACs with balanced outputs, if you could also measure one leg of the XLR (and look for DC offset there), especially with ESS DACs.
I don't know whether mobile ESS implementations can have this issue, but it could be something interesting to check on the SMSL SU8, for example.
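To make the single-leg measurement concrete, here is a small numeric sketch (all signal levels are made up for illustration) of how noise common to both XLR pins cancels in a differential measurement but is fully visible on one leg, and how finite common-mode rejection lets a fraction of it through:

```python
import numpy as np

fs, n = 48000, 4800
t = np.arange(n) / fs
tone = np.sin(2*np.pi*1000*t)

# Hypothetical fault of the kind described: hum plus a DC offset that appears
# identically on both legs of the XLR (values chosen arbitrarily)
common = 0.01*np.sin(2*np.pi*100*t) + 0.005

pin2 = +tone/2 + common                 # hot leg
pin3 = -tone/2 + common                 # cold leg

differential = pin2 - pin3              # ideal balanced input: common part cancels
single_leg = pin2                       # measuring one leg exposes the common-mode junk

# An amp with finite (assumed 60 dB) common-mode rejection leaks an attenuated copy:
cmrr_db = 60
leak = common / 10**(cmrr_db / 20)
```

The differential signal is the clean tone, while the single-leg capture shows both the hum and the 5 mV DC offset, which is why probing one leg (and checking DC there) reveals problems the normal balanced measurement hides.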
@amirm I have a question about interpreting the measurements. The jitter spectrum (J-Test) involves a -3 dBFS signal (if I understand correctly). What conclusions can we draw about jitter when the device is fed a lower-level input signal? Do the jitter levels then drop linearly until they are below the noise floor? Or does something more complex happen?
You don't want to touch a J-Test signal. In reality it is a square wave (not a sine!), which means it is not subject to rounding error or dither. The moment you adjust its level, you have upset that special characteristic.
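For readers who haven't seen it: the Dunn J-Test combines an undithered fs/4 square wave at -3 dBFS with an LSB-level square wave at fs/192 (12 kHz and 250 Hz at a 48 kHz sample rate). A sketch of one common construction; exact sample values vary between generators:

```python
import numpy as np

def jtest_samples(n, bits=16):
    """One common construction of a Dunn-style J-Test signal (illustrative;
    real generators differ in detail): an undithered fs/4 square wave at
    -3 dBFS plus an LSB-level square wave with a 192-sample period."""
    full = 2**(bits - 1) - 1
    amp = int(round(full * 10**(-3/20)))            # -3 dBFS peak code
    i = np.arange(n)
    main = np.where((i // 2) % 2 == 0, amp, -amp)   # +,+,-,- pattern: fs/4 square
    lsb = (i // 96) % 2                             # LSB toggles every 96 samples
    return main + lsb
```

Because every sample is an exact integer code and no dither is applied, any sidebands seen at the analog output can be blamed on the playback chain rather than on the test signal itself, which is why rescaling the level (and forcing rounding/dither) would defeat the test.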

We can accomplish what you describe by using a normal sine wave at the same frequency and varying its level. But by definition it will have some dither noise, so the noise floor of the signal itself, and hence of the equipment, will be higher.
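That noise penalty can be put in numbers: non-subtractive TPDF dither adds Δ²/6 of noise power on top of the quantizer's own Δ²/12, roughly tripling the error power (about +4.8 dB). A 16-bit sketch, assuming the usual round-to-nearest quantizer:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 48000, 1 << 16
t = np.arange(n) / fs
x = 0.9 * np.sin(2*np.pi*997*t)          # 997 Hz so the error decorrelates from the tone

scale = 2**15                            # 16-bit: quantization step = 1/scale
q_plain = np.round(x * scale) / scale
tpdf = (rng.random(n) + rng.random(n) - 1.0) / scale   # triangular dither, +/-1 LSB
q_dith = np.round((x + tpdf) * scale) / scale

e_plain = np.mean((q_plain - x)**2)      # error power without dither
e_dith = np.mean((q_dith - x)**2)        # error power with TPDF dither
ratio_db = 10*np.log10(e_dith / e_plain)
print(round(ratio_db, 2))                # roughly +4.8 dB: dither raises the floor
```

So a dithered sine carries an inherently higher noise floor than the exact, dither-free J-Test codes, which is the trade-off being described.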

Getting more detailed: jitter sideband amplitude is proportional to both the frequency and the amplitude of the source signal. So if you lower the source frequency, the jitter components go down with it. Since jitter values are so small to begin with (often 100 dB or more below the main signal), it doesn't take much for them to disappear into the noise.
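That proportionality is the standard small-angle phase-modulation result (a textbook relation, not something derived in this thread): sinusoidal jitter with peak amplitude τ at rate f_j on a tone of frequency f_0 is phase modulation with index β = 2πf_0τ:

```latex
x(t) = \sin\bigl(2\pi f_0 (t + \tau \sin 2\pi f_j t)\bigr)
     = \sin\bigl(2\pi f_0 t + \beta \sin 2\pi f_j t\bigr),
\qquad \beta = 2\pi f_0 \tau
% For \beta \ll 1 this creates sidebands at f_0 \pm f_j whose level
% relative to the carrier is
20\log_{10}\!\left(\tfrac{\beta}{2}\right) = 20\log_{10}\!\left(\pi f_0 \tau\right)\ \text{dB}
```

So halving either the tone frequency or the jitter amplitude drops the sidebands by 6 dB; 10 ns of peak jitter on a 12 kHz tone, for example, would sit near 20·log10(π·12000·10⁻⁸) ≈ -68.5 dB relative to the carrier.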

Our jitter measurements are indirect. We do NOT, let me repeat, do NOT measure jitter itself. We look at what it creates in the spectrum of the tone we feed the device: jitter creates sidebands around the main tone, which show up when we perform an FFT and look at the spectrum of the signal. This spectral method is far more sensitive than a time-domain measurement of the actual timing, so much so that you can measure audio jitter with even a simple sound card.
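The spectrum method can be simulated end to end in a few lines, assuming made-up numbers (10 ns of sinusoidal jitter at 1 kHz): perturb the sample instants, FFT, and read the sideband level relative to the carrier:

```python
import numpy as np

def sideband_db(f0, tau, fj=1000, fs=48000, n=48000):
    """Sample a sine of frequency f0 with sinusoidal clock jitter
    (peak tau seconds at rate fj) and return the level of the upper
    sideband at f0 + fj relative to the carrier, in dB."""
    t = np.arange(n) / fs
    jitter = tau * np.sin(2*np.pi*fj*t)     # timing error of each sample instant
    x = np.sin(2*np.pi*f0*(t + jitter))     # what the "jittered DAC" outputs
    spec = np.abs(np.fft.rfft(x))
    k0 = round(f0 * n / fs)                 # carrier bin (1 Hz bins here)
    kj = round((f0 + fj) * n / fs)          # upper sideband bin
    return 20*np.log10(spec[kj] / spec[k0])

print(round(sideband_db(12000, 10e-9), 1))  # about -68.5 dB for 10 ns on 12 kHz
```

Repeating the call with a 10x lower tone frequency moves the sideband down by 20 dB, matching the small-angle prediction, and a -68 dB spur is trivial to resolve in an FFT even on modest hardware, which is why the spectral approach is so much more sensitive than timing the edges directly.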

Did I answer your question? :)