If someone with admin rights to the Tableau dashboard can export the flat file of the underlying data as a .csv/.txt/.xlsx, I'll play around with the SINAD graph and come back with some samples for members to review. Should I reach out to Amir for this?
I capture the data for ASR. Amir's position is that it should be kept private. You can PM me though and we can talk about what you have in mind if the below doesn't address it.
I actually made a series of SINAD charts around a year ago which had five divisions:
- █ >120dB (>20 bits): Provably inaudible noise and distortion.
- █ 109dB–119dB (18 bits–20 bits): Well beyond 16-bit (CD) resolution.
- █ 96dB–108dB (16 bits–18 bits): Meets and exceeds 16-bit (CD) resolution.
- █ 86dB–95dB (14.3 bits–16 bits): Does not meet 16-bit (CD) resolution.
- █ ≤85dB (<14.3 bits): Performance is below the minimum lenient threshold defined in this thread.
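For anyone curious how the dB and bit figures line up, here's a quick sketch of the band logic using the common ~6.02 dB-per-bit rule of thumb (the function and its return format are mine, not anything from the dashboard):

```python
def sinad_band(sinad_db):
    """Classify a SINAD figure (dB) into the five chart bands above.

    Uses the ~6.02 dB-per-bit rule of thumb, so 96 dB corresponds to
    roughly 16 bits of effective resolution.
    """
    bits = sinad_db / 6.02  # approximate effective bits
    if sinad_db >= 120:
        band = "provably inaudible noise and distortion"
    elif sinad_db >= 109:
        band = "well beyond 16-bit (CD) resolution"
    elif sinad_db >= 96:
        band = "meets and exceeds 16-bit (CD) resolution"
    elif sinad_db >= 86:
        band = "does not meet 16-bit (CD) resolution"
    else:
        band = "below the minimum lenient threshold"
    return round(bits, 1), band

print(sinad_band(98))  # a DAC a touch above CD resolution
```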
This was based on a proposal by @martijn86. 96dB sat in the middle because it's the theoretical dynamic range of 16-bit CD audio, not counting dither. Besides CD being a 30-year-old format, there's a study I can't recall at the moment which found that 96dB was generally enough for transparent reproduction (other studies push for higher figures for absolute transparency). There was a bunch of discussion, and that was the approach for a few months before the data upkeep forced a new approach and highlighted some problems.
The first problem is that psychoacoustic concerns spanning the entire 120dB range of human hearing are pertinent only for DACs and preamps. Every other device type will fall short on some metric, not least SINAD. This means you already have to take engineering ability and feasibility into account.
The next problem is that we assign a ranking to a single device, not to the chain, even though noise and distortion are cumulative. The common assumption is that upgrading one device in the chain will improve some aspect, but no commentary is possible about the effect of a single device on an undefined chain. This means we can't address specific use cases.
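To make "noise and distortion are cumulative" concrete, here's a sketch of my own (not how the dashboard works) that combines per-device SINAD figures by summing their noise-plus-distortion power, assuming every figure is referenced to the same full-scale level and the stages are uncorrelated:

```python
import math

def chain_sinad(sinad_dbs):
    """Combine per-device SINAD values (dB) into one chain figure.

    Each SINAD is converted to a noise+distortion power relative to
    full scale; the powers add, then we convert back to dB.
    """
    total_power = sum(10 ** (-s / 10) for s in sinad_dbs)
    return -10 * math.log10(total_power)

# Two identical 100 dB devices in series land near 97 dB...
print(round(chain_sinad([100, 100]), 1))  # -> 97.0
# ...and the weakest link dominates: a 115 dB DAC into an 85 dB amp
# is still roughly an 85 dB chain.
print(round(chain_sinad([115, 85]), 1))   # -> 85.0
```

This is why a ranking of isolated devices can't say much about an undefined chain: the same DAC upgrade matters a lot in one system and not at all in another.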
Related is that people want precise expectations. The complexity of audio means we can only make basic, partially helpful comments. For combo DAC/headphone amps where you just plug in a headphone, you would still have to understand, at least to some degree, what kind of headphone you're using and its electrical requirements (like impedance and sensitivity) before buying. But then on top of that, buyers want specifics beyond electrical compatibility: what subjective qualities will be experienced? This is where the talk of blacker backgrounds, greater separation between instruments and so on comes in, which is an immensely mistaken form of description and creates an enormous industry problem.
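As an example of the "electrical requirements" part, the usual spec-sheet numbers (sensitivity in dB SPL per mW, nominal impedance in ohms) are enough to estimate what an amp has to deliver. The headphone figures and target level below are made up for illustration:

```python
import math

def required_drive(target_spl_db, sensitivity_db_mw, impedance_ohm):
    """Estimate the power (mW) and voltage (Vrms) needed to reach a
    target SPL, given sensitivity in dB SPL at 1 mW and impedance."""
    power_mw = 10 ** ((target_spl_db - sensitivity_db_mw) / 10)
    volts = math.sqrt(power_mw / 1000 * impedance_ohm)
    return power_mw, volts

# Hypothetical 300-ohm, 97 dB/mW headphone driven to 110 dB peaks:
p, v = required_drive(110, 97, 300)
print(f"{p:.0f} mW, {v:.2f} Vrms")  # ~20 mW, ~2.45 Vrms
```

Even this only settles compatibility; it says nothing about the subjective qualities buyers keep asking about.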
Some may consider this last point less important. There's a lot of attention on this one metric, SINAD, but there are a bunch more in the database (power under different loads, SNR, headphone amp output impedance). Consistency would require us to figure out concrete thresholds for all of them, and that in turn requires some specificity about the use case and device type. It is probably underappreciated just how many device types there are once you consider their construction and purposes; there are over 70 categories in the review index, for example.
The conclusion I've come to is that we should emphasize, and help other communities understand, that these charts are not a replacement for reading reviews and having some background knowledge. We should resist the impulse to make these charts more meaningful or apply audibility thresholds to them. They are simply a list of results.
I think the reason this is unsatisfying is that the drive towards meaning is part of the day-to-day discourse in audio, despite it being a profound problem to assume that every design decision will alter what you hear. We should not add to that by allowing measurement results to be interpreted in a way that fetishizes them the same way materials and other aspects are fetishized.
In that sense ASR's reviews represent something else entirely, and can't be lumped in with those elsewhere that promise and deliver immediately meaningful conclusions. At ASR the conclusion is subordinate to the measurement information, and the ranked measurements are a general shorthand for the work that's been done up to that point.
If there is one thing to take away, it's that it is a fallacy to extrapolate subjective sound expectations from a single-number ranked chart.
For the ranking itself, the most recent work by @RickSanchez, @Koeitje and me is to have user-selectable divisions: quartile colouring (a statistical division of the sample and nothing more), yes/no recommendation colouring, and a single-tone "off" option with no divisions, with colour-blind versions of each. This is still a work in progress, but expect to see it.
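For the quartile option, the statistical division really is just percentile binning over whatever sample has been measured so far. A sketch with made-up numbers (the actual dashboard logic may differ):

```python
import statistics

def quartile_bins(sinad_values):
    """Split a sample of SINAD results into quartile buckets (1-4).

    This is purely a statistical division of the measured sample --
    the cut points shift as new devices are added to the database.
    """
    q1, q2, q3 = statistics.quantiles(sinad_values, n=4)

    def bucket(v):
        if v <= q1:
            return 1  # bottom quartile
        if v <= q2:
            return 2
        if v <= q3:
            return 3
        return 4      # top quartile

    return [(v, bucket(v)) for v in sinad_values]

sample = [72, 85, 90, 96, 101, 105, 110, 118]
print(quartile_bins(sample))
```

Note that a device's colour under this scheme says nothing about audibility, only about where it sits in the current sample.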
What remains is the buyer's guide aspect. What's good enough? What's great? This is largely determined by what's available on the market at what prices, and the collection of measured products. The main reason I wouldn't buy a DAC that shows 80dB SINAD at max output is because I can buy one that shows over 110dB for cheap, with no downsides if I do the research right. The figures are similar for headphone amps and lower for power amps. There are still plenty of details to consider outside of the chart itself.
Having this conversation over and over is probably the best way for it to take hold.