
SINAD Measurements

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,317
Location
Albany Western Australia
Are we being fair/slightly misleading (to the less technically aware) with the SINAD measurements? Thought I would get some feedback on this.

Below is the table we are compiling for SINAD performance. However, we are comparing components that have disparate output voltages. The typical output voltage at 0 dBFS is 2 V rms and seems to be a sort of unofficial standard; however, some of the DACs here are producing anything up to 20 V rms. This will obviously bring an advantage in the measurement, but not necessarily in actual use. Whilst it is perfectly correct to measure SINAD at full output voltage, should we also perform a nominal 2 V rms measurement for a fairer comparison? Thoughts?

[attached: SINAD comparison table]


Example
Chord Mojo fed with a 0 dBFS signal. Output volume set to 2 V rms. SINAD = 100 dB

[attached measurement screenshot]


Volume set just prior to clipping at 5 V rms. SINAD = 106.6 dB

[attached measurement screenshot]
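(For the less technically aware, a minimal sketch of the arithmetic in Python, using a hypothetical residual noise+distortion figure chosen only to land near the Mojo numbers above: if the residual stays roughly constant in absolute volts, SINAD rises dB-for-dB with output voltage. In practice distortion grows near clipping, which is why the measured gain is a bit less than the full ~8 dB.)

```python
import math

def sinad_db(signal_vrms, residual_vrms):
    """SINAD in dB: signal relative to total noise + distortion, both in Vrms."""
    return 20 * math.log10(signal_vrms / residual_vrms)

residual = 20e-6  # hypothetical ~20 uVrms of noise + distortion, assumed constant

print(sinad_db(2.0, residual))  # ~100 dB at 2 Vrms
print(sinad_db(5.0, residual))  # ~108 dB at 5 Vrms, ~8 dB better from level alone
```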
 

Soniclife

Major Contributor
Forum Donor
Joined
Apr 13, 2017
Messages
4,500
Likes
5,417
Location
UK
I think best case for the DUT is the right thing for a single table; if you want to know more, you should read the full review.

Adding the voltage it was tested at for the given rating could be an enhancement.
 

mindbomb

Active Member
Joined
Nov 11, 2017
Messages
284
Likes
175
I think the distortion is typically higher with higher voltages, so it doesn't have the same scale of problems that just a plain SNR chart would have.
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,835
Likes
16,497
Location
Monument, CO
Sometimes, but higher output does usually provide greater SNR, so arguably there's an advantage that most consumers, who have gear with 2 Vrms nominal inputs, can't realize. And attenuating the signal usually results in a higher noise floor relative to the signal (in the RF world, attenuation always degrades the noise figure and thus SNR, though distortion is improved -- always a compromise...)
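(A small illustration of that trade-off, with purely hypothetical numbers: pad a 20 Vrms source down to 2 Vrms into a following stage whose own input noise is fixed, and the SNR at that stage drops, though not by the full 20 dB, because the source's own noise is attenuated along with the signal.)

```python
import math

def db(ratio):
    return 20 * math.log10(ratio)

# Hypothetical values for illustration only
source_vrms = 20.0    # high-output DAC
source_noise = 20e-6  # its output noise, Vrms
stage_noise = 10e-6   # input-referred noise of the next stage, Vrms (fixed)

for pad_db in (0, 20):  # no pad vs. a 20 dB pad (20 V -> 2 V)
    a = 10 ** (-pad_db / 20)
    signal = source_vrms * a
    # RSS of the attenuated source noise and the fixed stage noise
    noise = math.hypot(source_noise * a, stage_noise)
    print(f"{pad_db:2d} dB pad: SNR = {db(signal / noise):.1f} dB")
```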
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,392
Location
Seattle Area
This is a problem. As of late I have started to think that in the Dashboard view I should show the best case scenario even if it means lowering the volume to get around extreme clipping and such. Open to feedback on this.

Fortunately, a lot of the time either the levels are the same or I have some easy way of compensating/getting close, such as changing jumpers or a slight volume adjustment.

As noted, people do need to look at the graphs and I always say to not worry about small dB differences between SINADs.
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,399
There's obviously no perfect way to do this. It would make sense to base all comparison measurements on a single reference point, either a particular dBFS level or a particular V(out).

It seems that most of the DACs you measure operate at or near their best at -6 dBFS, regardless of output voltage.

There are certainly DACs that seem to clip horribly at 0 dBFS but measure ok at -6 dBFS, but rarely do there seem to be DACs that measure significantly better at e.g. 0 dBFS or -20 dBFS than they do at -6 dBFS.

Perhaps this would be a good reference point?
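(A quick sketch, with illustrative full-scale values, of how the two candidate reference points relate: the same -6 dBFS still corresponds to very different absolute output voltages depending on the DAC's full-scale output.)

```python
def dbfs_to_vrms(dbfs, full_scale_vrms):
    """Analog output voltage for a given digital level, assuming a linear volume path."""
    return full_scale_vrms * 10 ** (dbfs / 20)

for full_scale in (2.0, 5.0, 20.0):  # illustrative full-scale outputs
    print(f"{full_scale:5.1f} Vrms FS -> {dbfs_to_vrms(-6, full_scale):5.2f} Vrms at -6 dBFS")
```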
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
There is no perfect solution as noted above.

2 volts is something of a standard, or with powered speakers it is often higher. I'm for first taking measurements showing the best the device can do. With DACs that is often -1 dBFS instead of 0 dBFS. Gain staging a total playback system is the key to getting the best performance, and without info on the best a given device can do, you can't do that. I wouldn't want to lose that in the measurements.

Now, showing the 2 volt level, if it isn't a huge bother for Amir, is worthwhile I think. I do measurements without an AP, so I have a file of test signals I play after setting gain levels. One of the things I do is duplicate the 1 kHz THD and IMD at -6 dB, -12 dB and -18 dB, to see if a device dramatically improves just below where I tested it.

With a device capable of 20 volts output, you have the problem that at 2 volts you are using it at -20 dB. That certainly should give a very low level of distortion, though SNR may be impacted.

I've in the past put a pair of voltage divider resistors in systems that had excessive preamp output, or an unusually low power amp input voltage requirement for maximum power. Using low resistor values didn't impact noise floors, as their thermal noise was below the gear's noise levels. It did help with the useful dynamic range available because, instead of just attenuating the signal the way a DAC's volume control does, you are dropping both signal and noise levels, maintaining your maximum SNR and DR.
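(A rough sketch of such a pad, with made-up values: a 9k/1k divider drops 20 Vrms to 2 Vrms, and the thermal noise of the roughly 900 ohms it presents to the next stage is well under a microvolt over 20 kHz, i.e. below typical gear noise floors, so signal and source noise are attenuated together.)

```python
import math

def johnson_noise_vrms(r_ohms, bandwidth_hz=20e3, temp_k=300):
    """Thermal (Johnson) noise of a resistance over the given bandwidth."""
    k_boltzmann = 1.380649e-23
    return math.sqrt(4 * k_boltzmann * temp_k * r_ohms * bandwidth_hz)

r_top, r_bottom = 9_000, 1_000                    # made-up 10:1 pad
ratio = r_bottom / (r_top + r_bottom)             # 0.1, i.e. -20 dB
source_r = r_top * r_bottom / (r_top + r_bottom)  # ~900 ohms seen by the next stage
print(f"division ratio: {ratio}")
print(f"pad thermal noise: {johnson_noise_vrms(source_r) * 1e6:.2f} uVrms")
```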

So overall, providing a second look at SINAD at a standard voltage level is probably useful info to many people.
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,579
Likes
38,280
Location
Gold Coast, Queensland, Australia
I have brought this up several times previously.

https://www.audiosciencereview.com/...ts-of-benchmark-dac1-usb-dac.3708/#post-88831

The S/N+D comparison bar graph is certainly not a fair or level playing field due to the widely different output levels. In short, it is rather meaningless. The commentary on SINAD numbers in reviews should take note of the relative level and call it for what it is, not just focus on the absolute number.

We also have a whole slew of D/A converters that appear inherently faulty and, it would seem, need help tweaking their output levels to maximize S/N and minimize THD/clipping near 0 dBFS.
With DACs that is often -1 dBFS instead of 0 dBFS.

IMO, D/As with poor performance at or around 0 dBFS should be called out for their poor design and their numbers published as-is. After all, it was only a few decades ago that 0 dBFS in all D/As gave the absolute best-case figures.

This is a problem. As of late I have started to think that in the Dashboard view I should show the best case scenario even if it means lowering the volume to get around extreme clipping and such. Open to feedback on this.

The concept of helping a product achieve its best S/N and THD figures seems ridiculous to me. What's next, redesigning Schiit products before testing them? ;)

Where line-level devices put out obscenely high levels that are essentially unusable in normal operation, in an attempt to get industry-leading S/N numbers, those tactics should also be called out for what they are.

2.0V* has been a consumer-level standard for digital since the introduction of the Compact Disc in 1983. At the time, there were various reasons for that number (minimum residual noise in line stages, the current-output LSBs and, of course, maximizing S/N). Prior to that, 'line' level for amplifier inputs and most devices was 150 mV for full power.

Pro level is a different story, with its own standards for level, and S/N should be referenced to those standards.

Headphone amplifier S/N should be referenced to their specified rated output level, not their maximum output achieved.

Amplifiers have always been referenced to 1 watt (2.83V@8ohm) and their specified rated power.

The most important number is the residual noise in µV, wideband and weighted. This is rarely specified. With more and more gear running SMPS supplies, the 22 kHz bandwidth limit hides a lot of sins. I work back from Amir's S/N numbers to the approximate total residual, as it is always the limiting factor. It's what gets amplified right through the chain and becomes the hiss you hear with your efficient speakers and high-gain amplification. That noise is what buries your bits.
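(That back-calculation is just the reference level divided by the S/N ratio; a one-liner in Python, with hypothetical figures, shows why the reference matters: the same 120 dB S/N hides a ten-fold difference in absolute residual noise when the reference level differs by 20 dB.)

```python
def residual_noise_uvrms(snr_db, ref_vrms):
    """Work back from a quoted S/N figure to the absolute residual noise."""
    return ref_vrms / 10 ** (snr_db / 20) * 1e6

print(residual_noise_uvrms(120, 2.0))   # ~2 uVrms referenced to 2 Vrms
print(residual_noise_uvrms(120, 20.0))  # ~20 uVrms referenced to 20 Vrms
```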

* Incidentally, a last-minute change to the consortium's specs called for an increase from 1.4V to 2.0V. The reasons for this are not in any of the early CD documentation and books that I have; however, I believe it was an attempt to let the already-designed Toshiba 14-bit budget D/A converters** hit the minimum S/N spec the consortium (the Japanese manufacturers) wanted for worldwide release, and not let Sony gain (pun intended) the upper hand in published specs. Consider that Sony had already released the CDP-101, with a 2.0V rated output, in the Japanese home market six months prior...

**TD6710N, which appeared in several early budget machines.

Here is that last minute change to production for the Hitachi DA-1000, dated March 1, 1983.

[attached: scan of the Hitachi production change notice]


Basically, output buffer gain change and ~3dB of S/N & DR improvement for free, just in time for the worldwide debut of CD...

So, as you can see, output level and S/N games have been going on for many decades.
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,835
Likes
16,497
Location
Monument, CO
ADCs are typically tested at -1 dBFS to provide a little headroom for noise and such. For DACs -1 or 0 dBFS are common; since the input is digital, clipping due to (analog) noise and such should not be a concern (digital clipping is another matter, but the assumption is a digital test signal is well-known and controlled).

I semi-agree with John that comparisons among converters at different output levels require more information to allow apples-to-apples comparisons. If you know the linearity and noise curves versus output level and frequency, you can predict how the converter will perform at lower output, but in general you can't tell that from a one-point analysis. And since the distortion curves in particular are not linear near clipping, there's no guarantee you can extrapolate correctly. Picking 2 Vrms seems reasonable to me, but now you (actually Amir) have the problem of additional tests and of trying to judge and scale prior data that were not tested at 2 Vrms output.

IME/IMO - Don
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,835
Likes
16,497
Location
Monument, CO
Actually, since you said "fairly meaningless", I withdraw the "semi-" part. How's that? :D

I really appreciate the depth of knowledge you bring, John, hope you didn't take offense.
 

derp1n

Senior Member
Joined
May 28, 2018
Messages
479
Likes
629
The concept of helping a product achieve its best S/N and THD figures seems ridiculous to me. What's next, redesigning Schiit products before testing them? ;)
Or testing them with USB and power filtering, stacking them on mousepads, and using a rubidium clock source.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
Well, I think it makes sense to test them in a way that shows the very best they are capable of. We all know plenty of ways to corrupt that, but it tells you the best that is possible if the device is used with care. The Emotiva pre/pro I tested is a good example. In many systems it could be pretty horrible over the USB input. In others, used via HDMI (which is the norm for AV use) and where it need not exceed 2.2 volts, it's not bad. Not commendable, but very usable. And a friend has one which is used just that way: HDMI only, and he needs at most 1.5 volts for his gear that follows. I could show him my terrible results and he'd wonder what was going on, because it works fine for his needs in that context. Again not exemplary, but not the horrid beast it could be. Unfortunately, not knowing that, I purchased one and it is marginal for my use over HDMI. After measuring it I know why it wasn't performing like it did at his place. (BTW, if anyone could use such a thing, I have it and would part with it.)

Another for instance: MSB says their gear puts out a higher-than-20-volt signal because that is part and parcel of the near-SOTA SNR numbers they claim. They internally use divider resistors to drop it to a more normal voltage for their customers, which preserves nearly all of those extraordinary SNR results. If some particular set of conditions far from the norm is necessary for best results, it behooves the maker to make that clear. But I'd like to know how good a given device can be. If getting the best results requires something unusual, that is easy enough to note in a review.

Plenty of active speakers use the pro audio reference of +4 dBu, which is expected to have 20 dB of headroom up to +24 dBu (which is 12.28 volts). For someone using such speakers, 2 volts is meaningless. Some speakers are switchable to consumer -10 dBV, which puts maximum input at 3.16 volts, 20 dB over the nominal rating. Again, 2 volts isn't the right level. I'm okay with using 2 volts as a standard point of comparison, but not by dropping the attempt to get the best results available at other signal levels.
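(For reference, the conversions behind those numbers as a small sketch: 0 dBu is 0.775 Vrms and 0 dBV is 1 Vrms.)

```python
def dbu_to_vrms(dbu):
    return 0.7746 * 10 ** (dbu / 20)  # 0 dBu = 0.7746 Vrms (1 mW into 600 ohms)

def dbv_to_vrms(dbv):
    return 1.0 * 10 ** (dbv / 20)     # 0 dBV = 1 Vrms

print(dbu_to_vrms(4))    # ~1.23 Vrms, nominal pro level
print(dbu_to_vrms(24))   # ~12.28 Vrms, +20 dB of headroom
print(dbv_to_vrms(-10))  # ~0.316 Vrms, nominal consumer level
print(dbv_to_vrms(10))   # ~3.16 Vrms, 20 dB over -10 dBV
```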

I don't think that should involve modding the device or any such, just testing it. The line between what to do is drawn with a crayon, not a mechanical pencil with 0.2 mm lead in it. The Modi 2, I believe, is pretty horrid just plugged into USB with some computers. Plugged into some sort of USB-powered buffer, it is not so afflicted. Is that going too far? I sort of think it is, and yet it was interesting to see Amir test that. I wouldn't suggest it happen with every single device. In a good device it shouldn't be necessary. But things are too varied for overly rigid rules. Standards simply aren't followed that rigidly in audio, pro or consumer. In lots of ways it would be very nice if they were.
 

chauct

Member
Joined
Dec 3, 2018
Messages
28
Likes
22
From what I read here, SINAD is not generally considered a reliable metric for SQ. For typical human ears, how much of an improvement would it be if we were to go from 90 to 105? I posed a similar question in the CCA thread but it didn't seem to generate much response. Apologies if it is duplicated.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,524
Likes
37,057
From what I read here, SINAD is not generally considered a reliable metric for SQ. For typical human ears, how much of an improvement would it be if we were to go from 90 to 105? I posed a similar question in the CCA thread but it didn't seem to generate much response. Apologies if it is duplicated.
My prediction would be no improvement. But using only one number is not a good approach.
 

daftcombo

Major Contributor
Forum Donor
Joined
Feb 5, 2019
Messages
3,687
Likes
4,068
Hi,
A few brands display the THD over the range 20 Hz to 20 kHz for their amplifiers. How can that be compared to the THD measured at 1 kHz?

Also, for those products, why is the THD value quoted @1W lower than the THD value @50W? On Amir's charts, the curve goes down with the watts before clipping.
 

sergeauckland

Major Contributor
Forum Donor
Joined
Mar 16, 2016
Messages
3,440
Likes
9,100
Location
Suffolk UK
ADCs are typically tested at -1 dBFS to provide a little headroom for noise and such. For DACs -1 or 0 dBFS are common; since the input is digital, clipping due to (analog) noise and such should not be a concern (digital clipping is another matter, but the assumption is a digital test signal is well-known and controlled).

I fully understand and agree that measuring DAC distortion at -1dBFS makes sense, as the SINAD shouldn't be that different from 0dBFS. However, many (most?) CDs these days are mastered to 0dBFS, and I've come across a few DACs that were clipping at 0dBFS, so the performance at 0dBFS is very relevant today.

Consequently, I would be suspicious of a DAC that was specced at -1dBFS, and would like to see its measurements for 0dBFS, just to make sure it wasn't clipping.

S.
 