
Is SINAD important? - "Myths" about measurements! [Video YT]

It is, it must be noted, the only measured parameter by which the master index can be sorted. It would be a strawman of ASR to assert that people only care about SINAD here, but surely we can accept that it is something people on here are commonly ranking by?
It is a convenient single number for a ranking of expected good engineering - probably the only one generic enough to be suitable for that purpose, and useful as a proxy for likely performance in other measurements. Perhaps we look at it first. If it is bad, we move on. If it is good, we check the other measurements to confirm they are all in the right ballpark.

It is only one of a large number of measurements and data points that are taken in a typical review. These are ALL discussed at length here by the regulars. Take the recent thread on the poor measurement of a particular amp, in which IMD was a particular focus because it fell badly short of specification. That thread is currently running to 37 pages.

So the focus here is not on SINAD. It is on all measurements - and the regulars will pick apart any of them that show the weakness of a particular device.
 
Has Amir measured a component that has excellent Sinad but measures poorly in other respects?
Keith

Only a few in the first 7 pages (not counting cables) had high SINAD but didn't get Amir's recommendation.
122 dB - Schiit Freya S - one channel had worse distortion (QC issue); a second sample was fixed and recommended
117 dB - Weiss DAC205 - IMD, jitter
116 dB - Auralic Vega G2 - one channel had worse distortion; price to performance

Only Weiss would fit your criteria - Schiit was a QC issue, and Auralic's non-recommendation was due to price.
 
I assumed the labels meant "excellent engineering" versus "fair" or "poor engineering" because SINAD at a certain point becomes more of an engineering issue than an audible/practical one and labels are there for convenience.

Well, then I don't understand why the chart is not labeled with "excellent engineering" ... "poor engineering".

The chart could easily mean "excellent" ... "poor" SINAD since it is a SINAD chart. I took it as such.

Same chart ... different perspective/understanding.

The way the chart is labelled could easily also mean "excellent" ... "poor" performance/audio quality/etc.

I don't know why it is left open to interpretation?

---

I really like the way spinorama puts a huge warning at the top of the ranking page. Their main ranking metric is "Tonality".

[Screenshot: spinorama "Ranking table of speaker's measurements" page]


And it has a very clear/concise explanation of its "Tonality" metric.

  • Tonality: this is a value between -10 and 10. It is defined in the CEA2034 standard and is computed from the spinorama data.
    • Higher is better
    • It makes sense for tower, bookshelf or center speakers but not for surround, in-wall or column speakers; if you see *** instead of a number, it is to remind you that the score is not valid for some shapes of speakers. If you go to the page of a speaker you will still see the computed value. It needs to be taken with care. It may "work" for some speakers but not for others. The predicted in-room response assumes a rectangular room with standard reflection and a dipole speaker. All bets are off for a stadium or an omnidirectional speaker.
    • Note that a difference between two scores is only significant if the difference is greater than 0.6.
    • Also be mindful that smoother or less precise data can yield a higher score than a Klippel-generated set of data, for example. You can easily remove 0.5 to 1.5 from the score for highly smoothed data.
    • The score does not tell you about maximum output or about distortion. You can get the same score for a portable Bluetooth speaker and a large tower.
    • If you know how to read code, you can find my implementation in Python here. There is also a version in Cython which is faster if need be in the same repository.
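
For anyone curious what goes into a score like that, here is a minimal sketch in the spirit of the Olive (2004) preference-rating formula that this kind of "Tonality" number is based on. The coefficients follow the published model, but the sub-metric definitions in the comments are simplified assumptions of mine; the linked spinorama Python code is the authoritative implementation.

# Illustrative sketch only - coefficients follow the published Olive (2004)
# preference model, but the sub-metric descriptions are simplified assumptions,
# not the spinorama site's exact implementation.
def tonality_score(nbd_on: float, nbd_pir: float, lfx: float, sm_pir: float) -> float:
    """Combine sub-metrics derived from CEA2034 curves into one number.
    nbd_on  - narrow-band deviation of the on-axis response (dB)
    nbd_pir - narrow-band deviation of the predicted in-room response (dB)
    lfx     - low-frequency extension term (log10 of the -6 dB point in Hz)
    sm_pir  - smoothness (regression r^2) of the predicted in-room response
    """
    return 12.69 - 2.49 * nbd_on - 2.99 * nbd_pir - 4.31 * lfx + 2.32 * sm_pir

# A smooth, well-extended speaker scores higher than a rough, bass-limited one:
print(round(tonality_score(0.35, 0.30, 1.70, 0.90), 2))  # roughly 5.7
print(round(tonality_score(0.80, 0.70, 2.20, 0.60), 2))  # roughly 0.5

The point is simply that the single number collapses several curve-derived quantities (deviation, extension, smoothness), which is why the caveats above about smoothing and speaker type matter.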
 
I don't know why it is left open to interpretation?
It's not really TBH... as SINAD is really an engineering metric that measures the quality of an audio or electrical signal (i.e. ratio between the desired signal and N & D). It’s a technical measurement and not directly related to perception or audibility.
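
To put that definition in concrete terms, here is a minimal sketch using a synthetic single-tone example (not any analyzer's actual procedure): SINAD is simply the power of the test tone divided by the power of everything else - noise plus harmonics - expressed in dB.

import numpy as np

# Illustrative only: synthesize a 1 kHz tone with a small harmonic and some
# noise, then compute signal / (noise + distortion) in dB. Real analyzers use
# notch filters or windowed FFTs, but the principle is the same.
fs = 48_000
n = fs                                        # 1 second of samples -> 1 Hz FFT bins
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 1000 * t)              # fundamental
x += 1e-3 * np.sin(2 * np.pi * 3000 * t)      # 3rd-harmonic distortion at -60 dB
x += 1e-4 * np.random.randn(n)                # broadband noise

spectrum = np.abs(np.fft.rfft(x)) ** 2
p_fund = spectrum[998:1003].sum()             # the 1 kHz bin plus a little leakage
p_rest = spectrum[1:].sum() - p_fund          # everything else, excluding DC

print(f"SINAD ~ {10 * np.log10(p_fund / p_rest):.1f} dB")   # about 60 dB here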

Our hearing is influenced by many factors apart from just SNR and distortion, like the type of distortion, artefacts and psychoacoustic aspects.

A device with a very high SINAD may not necessarily sound "better", or even different, to a listener compared to one with lower SINAD. Audibility also depends on the specific listening environment, the equipment used (i.e. headphones, speakers), and the listener's ability to hear distortions.

While a higher SINAD often indicates a cleaner signal, it doesn't necessarily guarantee that a listener will perceive a noticeable difference in sound quality.

That said, if one had a choice between 2 devices with the same functions/connectivity, decent aesthetics and the same price range, yet one was 110dB SINAD and the other was 55 SINAD... what would be the reason to pick the lower SINAD device?


JSmith
 
"That said, if one had a choice between 2 devices with the same functions/connectivity, decent aesthetics and the same price range, yet one was 110dB SINAD and the other was 55 SINAD... what would be the reason to pick the lower SINAD device?"

A preference for masking detail!

well,

It's rather silly to me that tube amp buyers often trade up to higher-fidelity models within the same company, where noise and distortion become progressively lower as they move up to the more expensive models. The better models have more detail retrieval, and the buyers do notice and appreciate the lower distortion and the extra detail that comes with it. Why don't they just stick with the lower-SINAD model? It's all about the glowing tubes, sexy aesthetic and maybe some nostalgia.
 
It is, it must be noted, the only measured parameter by which the master index can be sorted. It would be a strawman of ASR to assert that people only care about SINAD here, but surely we can accept that it is something people on here are commonly ranking by?
For the sake of argument, let's accept that people here are commonly ranking by SINAD.

How is that worse than Resolve equating SINAD with snake oil?

For context:
There may be a disqualifying aspect to SINAD if the results are quite bad, but even then… you may want to disqualify products that score much better if you have more sensitive IEMs for example. And at worst, SINAD is effectively a different kind of snake oil, where it causes people to make purchase decisions thinking they’re getting better performance when in fact they aren’t.

Why not just explain what SINAD is and what it isn't, and not confuse the issue? Why spin a technical measurement? And lastly, why is a rare, most-sensitive IEM the go-to example?
 
For the sake of argument, let's accept that people here are commonly ranking by SINAD.

How is that worse than Resolve equating SINAD with snake oil?
I mean...objectively, the ranking used here is typically SINAD. It's kind of weird to me that this is in contention at all, because a hierarchically ranked list sorted by SINAD is present in every review of DACs, amplifiers, etc.

It's worse in the scenario where it encourages someone to overspend on a product, IMO - precisely as with any other technically-true aspect of audio which doesn't relate to sound quality. I'm equally ornery with somebody saying "don't buy a MiniDSP 2x4HD, it has poor IMD (which is almost assuredly below the threshold of audibility with music)" and someone saying "don't buy a Topping L30, it doesn't use gold-based solder and sounds worse as a result". In that respect, I see commonality between an obsession with a measurable-but-not-audibility-correlated metric and every other form of snake oil in the industry. That isn't to say that this is the default pattern on ASR, or of Amir - but objectively I do observe people being told that they should buy new equipment or pay more than they would otherwise in order to get something better-scoring (more often than not on sites other than ASR, but with reference to the rankings here), and that is indeed quite annoying to me.

For context:
There may be a disqualifying aspect to SINAD if the results are quite bad, but even then… you may want to disqualify products that score much better if you have more sensitive IEMs for example. And at worst, SINAD is effectively a different kind of snake oil, where it causes people to make purchase decisions thinking they’re getting better performance when in fact they aren’t.

Why not just explain what SINAD is and what it isn't, and not confuse the issue? Why spin a technical measurement?

I would pretty happily endorse this sentiment - amplifiers and DACs are and have long been a solved problem. With the exception of a small pool of wrong-on-purpose designers and a few hobbyists-cum-manufacturers with no technical knowledge whatsoever, it is extremely difficult to find a DAC or amplifier which will audibly impact your experience of sound. The cases where this does happen are almost universally due to audible noise, with amplifiers which have audible frequency response variation¹ or distortion outside of clipping being a drastic minority.

You aren't going to get a better experience of sound by buying a JDS Atom if you have an O2, or frankly, most likely even if you have an original Magni, which measured meaningfully worse. In that respect, I don't want people to get the misimpression that something that "measures better" is going to impact the sound inherently, any more than I'd want them to think that a better metallurgy in their cable will impact the sound. So that is to say, the "spin" here is - at least insofar as I'm putting spin on things - part of my general annoyance with focusing people's attention on the parts of audio that functionally don't matter (DACs, amplifiers, and indeed "source gear" as a whole) over the areas that do matter (headphones, speakers, and digital signal processing). Cameron, of course, believes in that sort of thing, so I'm sure his motivations differ, but he's been long banned from this forum, so he can't speak to his POV here - you are welcome to inquire on our forum, of course.

1: Excepting the effect of Zout, although the number of amplifiers I've seen with high output impedance which were not in the "built wrong on purpose" category is vanishingly small and mostly long out of production

And lastly, why is a rare, most-sensitive IEM the go-to example?
This is mostly as a general "gotcha", although sadly stupidly-sensitive IEMs are not that uncommon - in spite of the complete irrationality of their design, Campfire sold a lot of Andromedas, and the Andromeda is something on the order of 143dBSPL/V, meaning that it would produce audible noise connected to the APX555's signal generator through a noiseless buffer. A number of the "more drivers more better" Chinese IEM companies are also following this same trend (presumably in a similar way, by paralleling high sensitivity, low-Z drivers), so one legitimately can end up with an IEM which can hear what should otherwise be a uselessly low noise floor.
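
For anyone wanting to sanity-check that, the arithmetic is simple. This is a hedged back-of-envelope sketch: the -120 dBV source noise figure below is an assumed example of a very quiet output, not a spec of the APx generator or any particular product; the 143 dB SPL/V figure is the rough Andromeda sensitivity quoted above.

# Back-of-envelope sketch: acoustic noise floor (dB SPL) = source noise (dBV) +
# IEM sensitivity (dB SPL/V). All figures here are illustrative assumptions.
def acoustic_noise_floor(source_noise_dbv: float, sensitivity_db_spl_per_v: float) -> float:
    return source_noise_dbv + sensitivity_db_spl_per_v

print(acoustic_noise_floor(-120.0, 143.0))   # ~23 dB SPL: faint but audible hiss in a quiet room
print(acoustic_noise_floor(-120.0, 105.0))   # ~-15 dB SPL: far below audibility on a typical headphone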

To be clear, this is entirely a failure on the part of IEM designers - there is no need whatsoever to have consumer IEMs that produce audible noise from devices that have 120dB SNR referenced to 2V, but sadly it does still happen, and it serves as an ironic reminder of the futility of trying to perfect the audio experience in source gear when the incompetence of the transducer designers will always be the big issue.
 
That said, if one had a choice between 2 devices with the same functions/connectivity, decent aesthetics and the same price range, yet one was 110dB SINAD and the other was 55 SINAD... what would be the reason to pick the lower SINAD device?
Two obvious ones jump out to me here: Cost and stability. Very high SINAD will typically imply very high open loop gain or relatively complex feedback topologies such as @JohnYang1997's nested current feedback system, and such amplifiers can be less stable than designs with lower open loop gain and lower performance. This is not a certainty, and after a couple of teething problems in early models I would hope that Topping's amplifiers are effectively unconditionally stable with headphone loads, and John is a very competent engineer so I would assume they will be. However, a "simple" design can have lower odds of causing problems in the first place, and if the benefits of a design that's more difficult to compensate and isolate from the load are not reflected in sound quality, that's a case for taking the path of least transconductance if you feel me.

On cost, it really depends on the relative scale - you can get a chip from ESS or Cirrus now that'll easily get you 110dB SINAD, and the costs are very reasonable. I would, as said, be quite ornery with suggesting that people should pay for something that uses dual mono ES9039Pros or something over a humble ES9018/what have you if the price gap is significant (>$20, which, given the chip costs, it virtually must be), and I do think that making a ranking that puts the former above the latter does incline people to do that.

This said, it is absolutely possible to have a high-SNR, low-distortion, low-cost, highly-stable amplifier, and that's one of the great joys of the modern era: we can get metrology-grade equipment (for those of us wot like that sort of thing) at slightly above consumer prices, and consumer-priced gear is virtually certain to be audibly transparent.
 
I mean...objectively, the ranking used here is typically SINAD. It's kind of weird to me that this is in contention at all, because a hierarchically ranked list sorted by SINAD is present in every review of DACs, amplifiers, etc.
No. If those were the ultimate ranking, I wouldn't do any other tests. All I do is put the SINAD in a table. It ranks itself! That seems to trigger people. Some hate the fact that SINAD cuts through subjective assessments. Some are jealous of the notoriety that SINAD, and as a result ASR, has. None of this is our problem.

Noise and distortion have been metrics of audio fidelity from the beginning of time. That the industry forgot about this and started to sell products on completely unproven subjective metrics needed a hard reset, and SINAD does that. You can claim all kinds of greatness about the product. But when the SINAD is 60 dB, you have nowhere to hide. So this causes angst for those folks.

Members here read all the measurements and always comment on the full suite. My recommendation is based on the full suite of test results and how I feel about the build, functionality, etc.

These are the facts and they are resonating with the vast majority of audiophiles. None need to be saved, with some argument, by other reviewers who are themselves new to audio technology.
 
No. If those were the ultimate ranking, I wouldn't do any other tests. All I do is put the SINAD in a table. It ranks itself!
I feel like you are tilting at a position I'm not espousing here - my claim is not "ASR values only SINAD", and if I've said that somewhere, please point me to where, because I'd like to correct it. My claim is "products are ranked on ASR by SINAD"...because you have a ranked list of products using SINAD, sometimes SNR, and very few other metrics of the many you do measure (e.g. there is no analogous table for distortion-free range on multitone tests, maximum output power at a given distortion and load, etc).
Noise and distortion have been metrics of audio fidelity from the beginning of time. That the industry forgot about this and started to sell products on completely unproven subjective metrics needed a hard reset, and SINAD does that.
You say this, but in many years of limited objective testing of source equipment - with the exception of the extremely pulled punches of @John Atkinson - we still saw an industry of products that had generally inaudible distortion, and generally low levels of noise. From my POV, SINAD if anything enables people to sell more expensive amplifiers and digital to analog converters to consumers, and I'm really not keen on that trend - laypeople shouldn't get the impression that it's reasonable to pay $300 for a discrete circuit to do a job an NE5532 in an audio interface can do just as well from an audibility standpoint.

Members here read all the measurements and always comment on the full suite. My recommendation is based on the full suite of test results and how I feel about the build, functionality, etc.
This is true - ironically, I think that folks who mostly interact on ASR are somewhat insulated from the people who do annoying things with data from ASR, because I mostly see those dilettantes on Discord, Youtube, Reddit, etc. The median poster on this site understands the measurements being discussed, and isn't rushing out to buy (or tell others to buy) a new product just because you liked it - but there is a meaningful following of surface-level readers who do misuse the data you're putting out there. That's not to say that this is your fault, of course.
 
I do think that making a ranking that puts the former above the latter does incline people to do that
Well if they do that is on them TBH... they need to read and understand the entire suite of measurements posted. I'd imagine they would be more inclined to be sold on manufacturer claims and implications that more DAC ICs are better than one, for example, than on one simple metric noted here. If people want the best measuring devices, then a ranking on SINAD with links to the whole review is pretty appropriate.


JSmith
 
Well if they do that is on them TBH... they need to read and understand the entire suite of measurements posted. I'd imagine they would be more inclined to be sold on manufacturer claims and implications that more DAC ICs are better than one, for example, than on one simple metric noted here. If people want the best measuring devices, then a ranking on SINAD with links to the whole review is pretty appropriate.


JSmith
Oh, it is on them - my statements here aren't an indictment of ASR, Amir, the reviews here, what have you - they're a complaint about a set of behaviors I see from consumers and folks in the community at large which is partially driven by a cargo cult around data from this website. That cargo cult isn't necessarily anyone here's responsibility, but it's still annoying, and it's still something that those of us who keep getting bothered by it do wish to deflate.

I would agree that "snake oil classic" is more common than "SINAD oil" in terms of selling people on overpriced audio products - probably by at least a factor of 5, based on a few sales figures I'm familiar with - and that's something I spend a great deal of time complaining about as well, but nobody ever posts about it on ASR when I'm on another site lambasting the idea that DAC chip companies have different "sound signatures" or what have you, so that doesn't get much exposure here :D
 
but objectively I do observe people being told that they should buy new equipment or pay more than they would otherwise in order to get something better-scoring (more often than not on sites other than ASR, but with reference to the rankings here), and that is indeed quite annoying to me.

Just look at any online subjective reviews of the Eversolo amp-6 vs the eversolo amp-6 master edition streamer.

Every single reviewer praised them both but talked of the ability of the master edition to extract more detail, grabs your attention, more nuanced, greater finesse, a definite upgrade but only you can decide if it is worth the extra over the standard but you will not be disappointed. etc….etc.

Only real audiophiles buy master editions

:facepalm:
 
Oh, it is on them - my statements here aren't an indictment of ASR, Amir, the reviews here, what have you
Fair enough, all good. :)
they're a complaint about a set of behaviors I see from consumers and folks in the community at large which is partially driven by a cargo cult around data from this website. That cargo cult isn't necessarily anyone here's responsibility, but it's still annoying, and it's still something that those of us who keep getting bothered by it do wish to deflate.
This may be too harsh a term, but for want of a better descriptor I do see some out there "weaponising" measurements, citing them out of context or stating they mean something they don't. That said there is a certain poster (not you) that does this here all the time... so not something we can really control. So, how do you propose we deflate that? Further, a SINAD ranking being available or not isn't going to change that either. It's a little bit like kids' sport, where some parents don't want scores kept... the kids and all the spectators are keeping score in their minds anyway. The SINAD chart is just an easy way for people to refer to a key engineering measurement metric across all reviews.


JSmith
 
If someone were to rely solely on the SINAD ranking chart provided on this site, disregarding all other information, they would almost certainly identify a device that offers the best performance and cost-effectiveness. Of course, anyone is free to take their chances with other approaches, such as additional research or listening tests.
 
Just look at any online subjective reviews of the Eversolo amp-6 vs the eversolo amp-6 master edition streamer.

Every single reviewer praised them both but talked of the ability of the master edition to extract more detail, grabs your attention, more nuanced, greater finesse, a definite upgrade but only you can decide if it is worth the extra over the standard but you will not be disappointed. etc….etc.

Only real audiophiles buy master editions

:facepalm:
I'm very grateful that I'm not required to think about source gear enough to recognize those names or know how large the difference in price there is, frankly. Apparently one of them has an inter-IC sound output, which is just a truly stupid idea, however.

This may be too harsh a term, but for want of a better descriptor I do see some out there "weaponising" measurements, citing them out of context or stating they mean something they don't. That said there is a certain poster (not you) that does this here all the time... so not something we can really control. So, how do you propose we deflate that?
Yes, it's tragically common IMO. It's been borderline heartbreaking to me to watch the widespread rise of measurements with ASR, Crinacle, Erin, etc precipitate this new problem. I remember being absolutely starstruck in the late twenty teens that we suddenly had so much data available to the public, and thinking how it would make everything so much better. I suppose that not everything comes without unintended side effects.

From my POV the angle of attack is and must be about emphasizing skepticism as a core tenet of scientifically minded audio. Ironically, the people who reject measurements have, atypically, managed to muster the skepticism about one particular factor predicting their subjective experience - I personally try to use that as an outreach opportunity for "let's apply that same lens to the claim that Au-Ag conductors improve the sound quality too" with...mixed success. For those of us not inclined to buy into the fairy dust, I mostly just try to hold that line that we shouldn't be celebrating something that isn't actually useful from an audibility standpoint: I tell people to get cheap gear, pay more if they want bling, and not expect that a DAC or amp is going to "reveal new details in the music". It goes without saying for those of us who are deep in it, but a lot of people can follow a ranking chart without understanding what's actually being quantified or how it relates to perception.

Obviously, that won't stop the assholes, but the best thing we can do with them is to stigmatize and shun that sort of behavior IMO.

Further, a SINAD ranking being available or not isn't going to change that either. It's a little bit like kids' sport, where some parents don't want scores kept... the kids and all the spectators are keeping score in their minds anyway. The SINAD chart is just an easy way for people to refer to a key engineering measurement metric across all reviews.
Here I outright disagree with ya - for the people who only scan a couple images in a review, having any kind of a ranking is going to have some impact on their perception of the world. It's their fault, and it's down to them being intellectually lazy, but it is something that happens. It's something we ponder internally at headphones.com whenever we think about making that kind of content: the easier it is to understand "product A > product B in this respect", the more "economical" it is for people to substitute that for actually understanding the comparison.

Obviously, reviews are comparative by their nature - in a world where Johnson noise didn't exist, why would we accept -118dBV noise when we don't have to, even if it only matters on IEMs designed by people of questionable competence? - so there must be some reference to the "state of the field". I personally would much prefer a "pass/fail" kind of ranking of products (e.g. "nonlinear distortion below a reasonable threshold of audibility in music: check. Noise below the threshold of audibility with 'normally sensitive' devices of its intended product class: check. Output power sufficient to drive a normally sensitive device of its intended product class: check", etc) and then ranking anything that "passes" inversely by price. From my POV, it would be better to encourage there being more SINAD 100 $50 DACs that had DSP vs. creating an incentive for more SINAD 122 $300 DACs.
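
To make that concrete, here is a hypothetical sketch of such a pass/fail ranking. The threshold values, field names, and the example "catalog" entries are mine for illustration, not anything ASR publishes or measures.

from dataclasses import dataclass

# Hypothetical sketch: gate products on audibility-oriented thresholds, then
# rank everything that passes by price. All thresholds and data are assumptions.
@dataclass
class Dac:
    name: str
    price_usd: float
    sinad_db: float        # single-tone SINAD
    noise_dbv: float       # output-referred noise
    max_out_vrms: float    # maximum output level

def passes(d: Dac) -> bool:
    return (d.sinad_db >= 90            # distortion + noise comfortably inaudible with music
            and d.noise_dbv <= -105     # quiet enough for normally sensitive loads
            and d.max_out_vrms >= 1.8)  # enough swing for its product class

def rank(dacs):
    return sorted((d for d in dacs if passes(d)), key=lambda d: d.price_usd)

catalog = [
    Dac("Budget dongle", 15, 98, -110, 2.0),
    Dac("Midrange desktop DAC", 150, 115, -118, 4.0),
    Dac("Flagship DAC", 900, 122, -123, 5.2),
    Dac("Boutique R-2R", 1200, 75, -95, 2.5),
]
for d in rank(catalog):
    print(d.name, d.price_usd)   # cheapest passing product listed first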

ASR is important. The rankings that exist here have a real impact on the market, and I reckon there are ways that they could benefit consumers more.
 
If someone were to rely solely on the SINAD ranking chart provided on this site, disregarding all other information, they would almost certainly identify a device that offers the best performance and cost-effectiveness. Of course, anyone is free to take their chances with other approaches, such as additional research or listening tests.
So let me explain why I disagree: If you come upon that ranking chart without properly understanding the nature of audio playback - that your transducers will have meaningfully more than -90dB distortion products, that noise can only matter if it is audible with your playback device, etc - you can easily get the impression that a Topping D90III or SMSL SU-10, both of which are well north of $600, is a "better choice" than a $15 apple dongle DAC....even if both of them are gonna sound exactly the same for you. Yes, the D90 or SU10 are superior in their fidelity...but not in ways that actually matter from a sound quality standpoint, and they do command a premium price proportionate to the cost of achieving those sorts of measured results.

Lay users are, generally, pretty bad at identifying what's audible based on data - hell, on this very site I've seen arbitrarily drawn flat lines vs. frequency for what constituted "acceptable" distortion levels in audio products. I'm confident that someone who buys something based on the SINAD chart isn't going to get a product that's audibly distorting, and if it has audible noise that's almost certainly going to be a fault of their IEM selection/the designers thereof, but I am not confident that they'll get the most cost-effective solution that would sound equivalent.
 
I feel like you are tilting at a position I'm not espousing here - my claim is not "ASR values only SINAD", and if I've said that somewhere, please point me to where, because I'd like to correct it. My claim is "products are ranked on ASR by SINAD"...because you have a ranked list of products using SINAD, sometimes SNR, and very few other metrics of the many you do measure (e.g. there is no analogous table for distortion-free range on multitone tests, maximum output power at a given distortion and load, etc).
You are not hearing me and repeating exactly what I was responding to. The SINAD is just a number in the dashboard. I simply put it in a table so that members know how that number compares to other numbers. That is all it is. A way to organize presentation of a number. I organize other numbers like this such as sensitivity of headphones. Are you going to say just because I do that, we rate headphones by their sensitivity???

To be sure, the presentation is powerful because it exposes how bad the audio industry has gotten in letting simple performance metrics lag for no good reason. No product is made cheaper by making higher distortion and noise. Heck, it is the other way around, with cost going up. In other words, the number telegraphs itself to the industry. I had nothing to do with the situation getting this way. But I am part of a solution to it.

Companies need to measure and optimize for lowest distortion and noise unless they can demonstrate, through controlled listening tests, that excess distortion & noise has a positive subjective value.

Until then, pointing the arrow at us as if we have created a problem is totally out of line. Your favorite product has too low of a SINAD? Go ahead and prove to me its efficacy regardless. Don't complain about SINAD itself. And certainly don't create FUD around how it is measured, etc.
 
So let me explain why I disagree: If you come upon that ranking chart without properly understanding the nature of audio playback - that your transducers will have meaningfully more than -90dB distortion products, that noise can only matter if it is audible with your playback device, etc - you can easily get the impression that a Topping D90III or SMSL SU-10, both of which are well north of $600, is a "better choice" than a $15 apple dongle DAC....even if both of them are gonna sound exactly the same for you.
How did you jump from SINAD to "distortion?" Nearly all high SINAD ranked products are dominated by noise. And noise has an easy threshold of detection/audibility.

I am reminded of a very technical reviewer contacting me, surprised that even a Purifi amp had audible idle noise on the high sensitivity active speaker he was working with. I had to explain to him that this was expected and that if he wanted a dead silent system, he had to use the likes of a Topping amp with 120 dB of SINAD.

So no, the $15 dongle won't do in all cases. And there are considerations such as losing headroom through equalization, etc. On the other hand, the $15 dongle will outperform multi-thousand dollar DACs. We know that based on SINAD and the dashboard it comes out of.

Fielder, a Dolby and AES Fellow, wrote a number of papers a decade or two back on how our end-to-end electronics were not good enough to present a noise-free channel with the full dynamic range of a live concert. We are finally at a point where we can get that. And at very reasonable cost. To dismiss that as you are doing is improper.
 
You are not hearing me and repeating exactly what I was responding to. The SINAD is just a number in the dashboard. I simply put it in a table so that members know how that number compares to other numbers. That is all it is. A way to organize presentation of a number. I organize other numbers like this such as sensitivity of headphones. Are you going to say just because I do that, we rate headphones by their sensitivity???

To be sure, the presentation is powerful because it exposes how bad the audio industry has gotten in letting simple performance metrics lag for no good reason. No product is made cheaper by making higher distortion and noise. Heck, it is the other way around, with cost going up. In other words, the number telegraphs itself to the industry. I had nothing to do with the situation getting this way. But I am part of a solution to it.

Companies need to measure and optimize for lowest distortion and noise unless they can demonstrate, through controlled listening tests, that excess distortion & noise has a positive subjective value.
So I'm trying to reconcile these three paragraphs - in and of itself, paragraph one makes sense as an argument: one can show an ordinal list without implying that falling to one end of said list is good or bad. This becomes slightly confusing to me in the context of the successive paragraphs, which seem to imply that, indeed, you are stating that falling towards one end of the list is explicitly bad. Like, what am I missing here? It seems like a pretty reasonable characterization of your position to say that you're saying that lower noise and distortion is better performance, and you are making a list showing the relative performance according to that metric.

Until then, pointing the arrow at us as if we have created a problem is totally out of line. Your favorite product has too low of a SINAD? Go ahead and prove to me its efficacy regardless. Don't complain about SINAD itself. And certainly don't create FUD around how it is measured, etc.
I should note, this really isn't a burden of proof we - being those of us against the snake oiliness - likely want to endorse. Like, I am not going to go out and try to prove that a 99.999% Cu cable is equal to a 99.9999% one; the burden of proof is on the folks asserting an audible improvement from any given metric. And, outside of audible improvement, what is there in audio?

How did you jump from SINAD to "distortion?" Nearly all high SINAD ranked products are dominated by noise. And noise has an easy threshold of detection/audibility.
You will note, if you re-read what you quoted, that I referenced both noise and distortion there...

So no, the $15 dongle won't do in all cases. And there are considerations such as losing headroom through equalization, etc. On the other hand, the $15 dongle will outperform multi-thousand dollar DACs. We know that based on SINAD and the dashboard it comes out of.
Can you tell me a scenario where you do not believe that a $15 apple dongle will be audibly sufficient as a digital-to-analog converter, assuming sufficient gain on the headphone/power amplifier it is connected to? Depending on the contention here, I might be willing to place a bet on a blind listening test.

Fielder, a Dolby and AES Fellow, wrote a number of papers a decade or two back on how our end-to-end electronics were not good enough to present a noise-free channel with the full dynamic range of a live concert. We are finally at a point where we can get that. And at very reasonable cost. To dismiss that as you are doing is improper.
The full dynamic range of a live concert is objectively not something we should be recommending to end users, because it implies peak and rms levels exceeding safe recommendations for human exposure. In order to use more than approximately 100dB of DNR - assuming that the content which we played back had such dynamic range to begin with, I should note - we would require either extremely high crest factors or RMS levels unsuitable for sustained exposure without hearing damage.
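
For a rough sense of the numbers (both figures below are assumptions for illustration): if the playback system's noise is to sit at or below a quiet room's ambient floor, then exercising the full dynamic range of the channel pushes the peaks to levels that are unsafe for more than momentary exposure.

# Rough arithmetic behind the exposure concern; both inputs are assumed figures.
room_floor_db_spl = 20       # a quiet domestic listening room
channel_dnr_db = 120         # assumed end-to-end dynamic range of the playback chain

peak_db_spl = room_floor_db_spl + channel_dnr_db
print(f"Peaks needed to exercise the full range: {peak_db_spl} dB SPL")
# 140 dB SPL is past the commonly cited threshold of pain and far beyond
# levels considered safe for anything but momentary exposure.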
 