
Proposal: New SINAD Ranking Design (Histogram)

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,494
Voice of reason

No no, don't change it, just throw it out the window
Yeah, maybe if someone lost their mind they'd do that. You're more than free to come up with a metric that is more informative in the same amount of space and brevity (meaning a 2-3 digit integer). But like most things, if such a metric existed, I doubt this many people would still be resisting such a good suggestion.
 

JSmith

Master Contributor
Joined
Feb 8, 2021
Messages
5,153
Likes
13,211
Location
Algol Perseus
That's why many rate their products in A-weighted THD+N
Nah... it's for the improved numbers most of the time.
Sadly, ASR has no weighted SINAD or noise measurement
Which is good, as it's a direct measurement of the device. A-weighting is a pretty rough way to estimate the frequency sensitivity of every person's ears, as it doesn't take into account the non-linear behavior of the ear.


JSmith
 

Ra1zel

Addicted to Fun and Learning
Joined
Jul 6, 2021
Messages
531
Likes
1,048
Location
Poland
Yeah, maybe if someone lost their mind they'd do that. You're more than free to come up with a metric that is more informative in the same amount of space and brevity (meaning a 2-3 digit integer).
That's not even a challenge then, since THD+N at some arbitrary frequency is as meaningless a metric as it gets. What information does 120 dB SINAD vs. 100 dB SINAD give me? Only that both are a few orders of magnitude past what is needed for music reproduction in a domestic setting in a normal (noisy) room. For an amplifier, give me FR, power, and output impedance and it's infinitely more useful.

For headphone amps, THD+N at 4 V output is even funnier, since you will go deaf very quickly if that is how you listen.

Feel free to create any equation that includes more metrics than just THD+N to calculate "points", and it would be better.
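A toy sketch of what such a composite "points" equation could look like; every weight, threshold, and input below is invented purely for illustration, not a proposed scheme:

```python
# Hypothetical composite amplifier score; every weight and
# threshold below is made up for illustration only.

def composite_score(thd_n_db, fr_dev_db, z_out_ohm, power_w):
    """Combine several measurements into a 0-100 'points' value.

    thd_n_db  : THD+N in dB (more negative is better), e.g. -100
    fr_dev_db : max frequency-response deviation 20 Hz-20 kHz, in dB
    z_out_ohm : output impedance in ohms
    power_w   : rated power into 8 ohms, in watts
    """
    # Each sub-score is clipped to the range 0..1, then weighted.
    s_thd = min(max((-thd_n_db - 60) / 60, 0.0), 1.0)  # -60 dB -> 0, -120 dB -> 1
    s_fr = min(max(1.0 - fr_dev_db / 3.0, 0.0), 1.0)   # flat FR -> 1, >=3 dB dev -> 0
    s_z = min(max(1.0 - z_out_ohm / 2.0, 0.0), 1.0)    # 0 ohm -> 1, >=2 ohm -> 0
    s_pwr = min(max(power_w / 100.0, 0.0), 1.0)        # 100 W or more -> full marks
    weights = (0.4, 0.3, 0.15, 0.15)                   # arbitrary choice
    return round(100 * sum(w * s for w, s in zip(weights, (s_thd, s_fr, s_z, s_pwr))))
```

The point of the sketch is only that folding more metrics into one number forces exactly the arbitrary weighting choices this thread is arguing about.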
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,690
Likes
6,013
Location
Berlin, Germany
You're more than free to come up with a metric that is more informative in the same amount of space and brevity (meaning a 2-3 digit integer)
Such a simple metric does not exist and never can. You simply cannot reduce complex, multifaceted behavior to a single scalar number, even under unrealistically benign lab-bench conditions.

I do not understand why people feel a need for a condensed one-dimensional metric, nor do I understand what they would need it for. What other items in your life do you buy based on such overly reduced simple ratings?

IMHO, the current 1 kHz SINAD has only two useful regions of information one can infer from the number:
- so low that chances are high the device is also flawed elsewhere, so that in total it may not be audibly transparent even if that SINAD alone would still be transparent
- so high that chances are very low the device is flawed elsewhere badly enough to not be audibly transparent, in keeping with the high SINAD
For the whole range in between, the correlation between SINAD and transparency is low and unreliable.

And one thing is paramount here: the SINAD value is only applicable for the exact same, and benign(!), operating conditions that were used for the test, which more often than not are not what you are going to have in practice.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,384
Location
Seattle Area
I do not understand why people feel a need for a condensed one-dimensional metric, nor do I understand what they would need it for. What other items in your life do you buy based on such overly reduced simple ratings?
A ton. 0-60 times for cars. US EPA mileage numbers for cars and power-usage ratings for appliances. Color fidelity index for LED lights. Gasoline octane. Air-filter particle capture. Printer speed ratings. True Lumens for displays/projectors. Percentage fat in milk. Amount of saturated fat in servings of processed food. The list goes on.

These are all examples of strong single metrics for products. I am amazed that you don't run into any of them yourself.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,384
Location
Seattle Area
And one thing is paramount here: the SINAD value is only applicable for the exact same, and benign(!), operating conditions that were used for the test, which more often than not are not what you are going to have in practice.
In the case of DACs as we are discussing, they absolutely are tested as they are used.
 

Digby

Major Contributor
Joined
Mar 12, 2021
Messages
1,632
Likes
1,555
I agree that it does look a little ungainly. Why not just show, say, 5 products on either side of wherever the reviewed product falls? Not the entire graph, just a few choice selections. Then provide a link to the whole graph elsewhere.
 

MCH

Major Contributor
Joined
Apr 10, 2021
Messages
2,580
Likes
2,197
Then why not set a SINAD value above which all devices are recommended and below which none are?
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,384
Location
Seattle Area
I agree that it does look a little ungainly. Why not just show, say, 5 products on either side of wherever the reviewed product falls? Not the entire graph, just a few choice selections. Then provide a link to the whole graph elsewhere.
That requires Photoshop work. Will what sits next to it really add value? Seems to me the most useful info is what is already there: where in the long list it sits. If you don't zoom into the image, that is what you get.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,384
Location
Seattle Area
Then why not set a sinad value above which all devices are recommended and below which all are not recommended?
Because SINAD and my recommendations are not tied together. They are correlated but not equivalent. My recommendation, for example, takes into account looks, functionality, and a broader set of measurements.
 

MCH

Major Contributor
Joined
Apr 10, 2021
Messages
2,580
Likes
2,197
Because SINAD and my recommendations are not tied together. They are correlated but not equivalent. My recommendation, for example, takes into account looks, functionality, and a broader set of measurements.
Well, I think that is what some people are trying to say.
 

Digby

Major Contributor
Joined
Mar 12, 2021
Messages
1,632
Likes
1,555
Seems to me the most useful info is what is already there: where in the long list it sits
It is of value in relation to other factors, perhaps most importantly cost. The Chord DAVE performs very well, inaudibly different from many of the best DACs, yet it costs $14,000 and there are others that perform similarly for $300 or so. For most, I think this is the information that needs to be front and centre.

Perhaps a value graph, SINAD performance per $, would be helpful.
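A crude sketch of that value ranking, with made-up devices and prices (and the caveat that dB per dollar is a rough yardstick, since dB is a logarithmic quantity):

```python
# Hypothetical (device, SINAD in dB, price in USD) tuples -- illustrative only.
dacs = [
    ("Budget DAC", 112, 300),
    ("Chord DAVE", 115, 14000),
    ("Mid DAC", 110, 900),
]

# Rank by SINAD per dollar, the "value" axis suggested above.
ranked = sorted(dacs, key=lambda d: d[1] / d[2], reverse=True)
for name, sinad, price in ranked:
    print(f"{name}: {sinad / price:.3f} dB/$")
```

With these placeholder numbers the $300 unit tops the value ranking and the DAVE lands last, which is exactly the kind of cost context the plain SINAD chart hides.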
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,690
Likes
6,013
Location
Berlin, Germany
These are all examples of strong single metrics for products. I am amazed that you don't run into any of them yourself.
I have to admit that my brain seems to be wired completely differently. I look at life in a holistic way, not a reductionist one. I base my decisions on an informed and careful weighting of many, many properties.
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,690
Likes
6,013
Location
Berlin, Germany
In the case of DACs as we are discussing, they absolutely are tested as they are used.
Experts would beg to differ, but let's leave it at that and agree to disagree.
 

Sokel

Master Contributor
Joined
Sep 8, 2021
Messages
5,838
Likes
5,765
In the case of DACs as we are discussing, they absolutely are tested as they are used.
Amir, I apologize in advance as I have zero experience in measurements, but over the last 15 days I have measured my gear in many ways and have managed to replicate (multiple times, with repeatable results) that even the length of my test cable or the placement of the DAC I measure makes a big difference (10 dB or more!).

In real-life placement, with meters of cables and all kinds of interference, it must be a lot worse.

I apologize again; I know I'm an ant speaking to an elephant in this matter.
(I also know I'm a newbie enthusiast, and that is a bad combination, like every fanatic.)
 

Lambda

Major Contributor
Joined
Mar 22, 2020
Messages
1,785
Likes
1,519
Which is good, as it's a direct measurement of the device. A-weighting is a pretty rough way to estimate the frequency sensitivity of every person's ears, as it doesn't take into account the non-linear behavior of the ear.
A-weighting is surely not the perfect weighting, but it's way better for comparing/estimating how noticeable hiss/hum is than having no weighting at all.

Sure, it also makes the numbers look better, but they also become more comparable.
It gives an advantage to devices whose noise or hum sits at levels that are less noticeable (as it should), and conversely it penalizes devices with noise at the frequencies where most ears are most sensitive (as it should).

Of course it doesn't capture all the psycho- and physioacoustics, but it's way better than nothing, and it's the industry standard.
An amp/DAC with noise/hum at 20 Hz is less of a distraction than one with noise at 1-4 kHz at the same absolute level.
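For reference, the standard A-weighting curve can be evaluated directly from the IEC 61672 analog definition; a minimal sketch:

```python
import math

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f (Hz), per the IEC 61672
    analog definition; approximately 0 dB at 1 kHz by construction."""
    f2 = f * f
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.00
```

This bears out the comparison above: hum at 20 Hz is attenuated by roughly 50 dB, while noise in the 1-4 kHz region passes essentially unweighted or slightly boosted, so the 20 Hz hum contributes far less to an A-weighted noise figure at the same absolute level.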

In the case of DACs as we are discussing, they absolutely are tested as they are used.
Most users don't adjust DAC volume to exactly 2 V / 4 V.
As you said yourself, small voltage-potential differences between grounded devices are normal, and so is a common-mode current between ungrounded devices.
In particular, a lot of common-mode noise comes from many PCs over USB.

None of this immunity to common-mode current or voltage on input or output is tested or simulated in the test.
 

mhardy6647

Grand Contributor
Joined
Dec 12, 2019
Messages
11,212
Likes
24,171
The histogram is misleading; at a quick glance it implies that the Chord Mojo 2 is the best when it is not. Thus it does not serve the intended purpose.

Cheers



No it doesn't. It shows that the Chord Mojo falls into the same bucket as a bunch of other (borderline) excellent DACs.


If one looks at a histogram of score distribution on an examination, does one think that the highest bar is the best grade?
I sure hope not! :)

(It is the most common grade -- rarely the best unless the test is really easy or the instructor was really good!)

[attached image: histogram of exam score distribution]

source (randomly chosen ;) ): https://www.researchgate.net/figure...a-histogram-for-the-eight-item_fig1_266945541
 

mhardy6647

Grand Contributor
Joined
Dec 12, 2019
Messages
11,212
Likes
24,171
Actually, reflecting on distribution of a sample (or of an entire population), perhaps the simplest metric isn't even graphical.
One could just calculate the percentile into which the test sample falls (and also specify n, the number of samples tested to date). (Speaking of single-number, dimensionless surrogates! ;) )
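That percentile takes only a couple of lines to compute; the SINAD values below are placeholders standing in for the real review dataset:

```python
def percentile_rank(values, x):
    """Percentage of tested samples with a value <= x."""
    return 100.0 * sum(v <= x for v in values) / len(values)

# Placeholder SINAD results (dB), not actual review data:
tested = [87, 92, 98, 101, 105, 108, 110, 110, 111, 113, 116, 120]
print(f"n = {len(tested)}, percentile = {percentile_rank(tested, 110):.0f}")
# prints: n = 12, percentile = 67
```

Reporting the pair (percentile, n) would stay meaningful as the dataset grows, though the number attached to any one review would drift as later devices are added.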
110 dB is the mode for this dataset -- more DACs fall into this bucket than into any other. It's a very common SINAD (in this case) among all of the DACs tested.
There are many possible explanations, e.g.
  • It is easy to achieve with readily available parts.
  • It is preferred based on market research.
  • It is a breakpoint in terms of performance vs. price (or manufacturing cost or... whatever metrics matter to a for-profit company).
  • It just is (i.e., it is a random outcome).
  • etc. (?!)
My eyeballs suggest to me that the distribution of DACs tested here to date might (!) be well described as a Poisson distribution. But I am too lazy to take the dataset, try a few models, and compare the goodness-of-fit objectively. ;)
At least today. :cool:
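A quick way to try such a fit (with placeholder bucket counts, not the actual ASR data): estimate lambda as the mean bucket index, then compare observed against expected counts with a chi-square statistic:

```python
import math

# Placeholder counts per 10 dB SINAD bucket (e.g. 70-79 dB ... 120-129 dB),
# standing in for the real review data.
observed = [2, 5, 11, 18, 12, 6]
n = sum(observed)

# The maximum-likelihood estimate of a Poisson rate over bucket
# indices 0..k is simply the mean index.
lam = sum(i * c for i, c in enumerate(observed)) / n

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

# Expected counts under the fitted Poisson (ignoring tail truncation
# for simplicity), and a chi-square goodness-of-fit statistic.
expected = [n * poisson_pmf(i, lam) for i in range(len(observed))]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(f"lambda = {lam:.2f}, chi-square = {chi2:.2f}")
```

A small chi-square relative to the degrees of freedom would support the Poisson impression; for proper p-values one would reach for a library goodness-of-fit test rather than this hand-rolled sketch.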
 

Chromatischism

Major Contributor
Forum Donor
Joined
Jun 5, 2020
Messages
4,765
Likes
3,703
Actually, reflecting on distribution of a sample (or of an entire population), perhaps the simplest metric isn't even graphical.
One could just calculate the percentile into which the test sample falls (and also specify n, the number of samples tested to date). (speaking of single number, dimensionless surrogates! ;) )
I had this thought a couple of days ago, but quickly discarded it when I realized that every one of those metrics goes out of date with subsequent reviews. If Amir can't have it update automatically, it's a no-go.
 

mhardy6647

Grand Contributor
Joined
Dec 12, 2019
Messages
11,212
Likes
24,171
I had this thought a couple of days ago, but quickly discarded it when I realized that every one of those metrics goes out of date with subsequent reviews. If Amir can't have it update automatically, it's a no-go.
Good point. They are (all) moving targets.
One could pick a reference point (e.g., 110 dB) and express the performance of each test item relative to the reference (as a fraction or a percentage)... but there's not much value added by that.

It's just unfortunate that the current graph has (heh-heh-heh) such a low signal-to-noise ratio. :facepalm:
Not much added information in the graphical representation.

Arbitrarily (from my perspective) color-coding into four groups doesn't help much, either, I'd opine.
At the very least: in real life, isn't there something between very good and fair? I mean, I sure hope so, since that's where I fall! :)

One of my many mottos is Strive for mediocrity
(it's a stretch for some of us)
 