
Is SINAD important? - "Myths" about measurements! [Video YT]

But as I said, it is orthogonal to the mission and charter of this site.
lol I’m not stupid enough to argue with the site’s founder about what the site’s mission is. Or am I?

Nah. But I am curious. Is this because it is something that you aren’t interested in, or because you don’t think it’s something that you have the right tools for making accurate determinations about? The latter makes a ton of sense to me; the former not so much.
Reliability is very important but not something that can be checked in his reviews, apart from obvious issues during testing or inspection of the device under test.
Yes, but for me this is like searching for your lost car keys under a lamp post, not because that’s where you dropped them, but because that’s where the light is good.

When a $10 dongle is perfectly transparent for 98-100% of users (and certainly myself), features and reliability become far, far more important. But Amir and I have disagreed about this before and it’s fine. We’re allowed to have different opinions and I’m not asking him to do anything different. And I very much admire and respect his work!
 
I've been involved in reliability testing for product development. We would test around 100 units (or more for a higher-volume product) for around 6 months of constant load cycling, 24/7, to get anywhere near any sort of confidence. And that is not really even scratching the surface of a statistically valid result.

I can't think of any mechanism at all which would allow anyone other than the manufacturer to do this sort of testing.
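To put rough numbers on why even that barely scratches the surface (these are my illustrative figures, not the poster's): under the textbook constant-failure-rate (exponential) model, a zero-failure test only puts an upper bound on the failure rate. A minimal Python sketch:

```python
import math

def mtbf_lower_bound(units, hours_per_unit, confidence=0.95):
    """One-sided lower bound on MTBF after a zero-failure test,
    assuming a constant failure rate (exponential model):
    lambda_upper = -ln(1 - confidence) / total_device_hours."""
    total_hours = units * hours_per_unit
    lam_upper = -math.log(1.0 - confidence) / total_hours
    return 1.0 / lam_upper

# 100 units cycled 24/7 for ~6 months (~4,380 h each), zero failures:
print(f"MTBF >= {mtbf_lower_bound(100, 4380):,.0f} h")  # ~146,000 h at 95% confidence
```

And even that demonstrated bound says nothing about wear-out mechanisms (drying capacitors, fan bearings), which the exponential model ignores entirely.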
 
There must be some established principles to it, mustn't there?
Like keeping heat in check, using known durable components, a little over-engineering, etc.
We're not splitting hairs here, nor can we predict unusual designs, etc.

A nicely made 15-0-15 classical linear PSU using 7815/7915 (or double 7815s, the old DIY way :p ), for example, can be bomb-proof, judging by its gazillion applications.
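The "keeping heat in check" principle is at least easy to estimate for a linear regulator like the 7815. A back-of-envelope sketch; the input voltage, load current, and thermal resistance below are assumed illustrative values, not from any particular product:

```python
def regulator_dissipation_w(v_in, v_out, i_load_a, i_quiescent_a=0.005):
    """Power burned in a linear regulator such as a 7815/7915."""
    return (v_in - v_out) * i_load_a + v_in * i_quiescent_a

def junction_temp_c(p_diss_w, t_ambient_c=25.0, theta_ja_c_per_w=19.0):
    """Junction temperature for a given junction-to-ambient thermal
    resistance; 19 C/W is a ballpark for a TO-220 on a small heatsink."""
    return t_ambient_c + p_diss_w * theta_ja_c_per_w

# e.g. a 15-0-15 transformer rectified to ~21 V DC, 15 V out, 300 mA load:
p = regulator_dissipation_w(21.0, 15.0, 0.3)
print(f"{p:.1f} W dissipated -> Tj ~ {junction_temp_c(p):.0f} degC")
```

Under 2 W and a junction around 60 °C leaves a big margin against the part's limits, which is part of why that topology has such a reputation for robustness.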
 
You'd be making way too many assumptions. You could be using the exact same components, but then the supplier of one of them makes a change to their process, or one of their input materials starts to come from a different supplier, and that previously bulletproof component is now a major source of failures.

Besides, what are we expecting Amir to do here, exactly? Open up every single item he gets for review, check that all the components are from a reputable supplier and have the correct specifications, eyeball the design, and declare a totally made up guess about the potential reliability? A huge waste of time, IMO. Not to mention that you are now making Amir into a pro-bono QC department for all these companies. Have you even considered that maybe this is something that Amir just does not want to do, even if it could theoretically be done in a somewhat useful way?
 
We obviously don't ask Amir to do that; he did it in the past, probably because he had less work to do, but we don't expect it anymore.
What I wrote is exactly that: an eyeballing, some temperature measurements, etc., not necessarily by Amir, and it will probably happen down in the review thread if the device is interesting enough, or if it fails, as we have seen.

The comment was a general one.
 
And even that won't help. As many, or more, reliability problems come from design or manufacturing process failures as from component quality. You can't find those by inspection of one unit.

Take the biggest disaster we've seen here - the PA5. No inspection of component quality would have caught that.
 
Yep, that's why the burden is on the manufacturer's side.
Both for safety first, and for reliability after, if they are advertised as such.

Cost is a major issue, of course; we don't expect a low-budget device to last 40 years. It very well may, but that's not its selling point; price is.
For a pricier one, that's expected though, with some service, as with everything else.
 
I've been involved in reliability testing for product development. We would test around 100 units (or more for a higher-volume product) for around 6 months of constant load cycling, 24/7, to get anywhere near any sort of confidence. And that is not really even scratching the surface of a statistically valid result.

I can't think of any mechanism at all which would allow anyone other than the manufacturer to do this sort of testing.
Consumer Reports does it for cars by polling their readers in a systematic way. Perhaps a similar thing could be done here. Note, I’m NOT asking Amir to do this.

I’ve been looking for a web app to create as I attempt to transition away from legal code to computer code. Maybe I’ll throw together a prototype and then see if it gets any traction.
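For what it's worth, the core statistic such a prototype would report is simple; the hard parts are self-selection bias and getting an honest denominator. A minimal sketch of what a hypothetical poll-aggregation feature might compute (names and numbers are mine):

```python
from math import sqrt

def wilson_interval(failures, owners, z=1.96):
    """95% Wilson score interval for the true failure proportion,
    given self-reported owner-poll counts."""
    if owners == 0:
        return (0.0, 1.0)
    p = failures / owners
    denom = 1 + z ** 2 / owners
    centre = (p + z ** 2 / (2 * owners)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / owners + z ** 2 / (4 * owners ** 2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# e.g. 7 reported failures among 180 responding owners of some model:
low, high = wilson_interval(7, 180)
print(f"failure rate somewhere in {low:.1%} .. {high:.1%}")  # ~1.9% .. ~7.8%
```

Consumer Reports' trick is the systematic polling; without that, angry owners answer more often than happy ones, and the interval is biased no matter how it's computed.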
 
Nah. But I am curious. Is this because it is something that you aren’t interested in, or because you don’t think it’s something that you have the right tools for making accurate determinations about? The latter makes a ton of sense to me; the former not so much.
There are two types of failures:

1. Design failures. Fixtures can be built to torture review samples to potentially find a percentage of these. Failures that take a while to show up are not going to be found this way.

2. Manufacturing/build problems. This requires random sampling of products to have a remote chance of having statistically significant data with respect to such issues. I don't see this at all being in scope of what someone could do. You all collectively are responsible for pooling your data and pointing at the problem.

To do #1 requires a fair bit of money, some design resources, and a dedicated person who enjoys watching paint dry. I am only mildly interested in building the fixtures but personally have no time to dedicate to it.

If these devices cost thousands of dollars and had a high percentage of failures, the ROI on doing these things would be high. But we are talking about very integrated, simple boxes with a low failure rate. You could set up the above, run it all year, and catch one or two instances of failure. Is that worth it? Would you all fund a full-time person and tools just to do this? Or would you rather have more reviews of products, as I am doing?
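Amir's point #2 can be put in numbers. To have a 95% chance of catching even one bad unit by random sampling, the required sample size grows fast as the true defect rate falls; a quick sketch under a simple binomial model (the rates are illustrative):

```python
import math

def units_needed(defect_rate, confidence=0.95):
    """Randomly sampled units needed to see at least one defect with
    the given confidence, if the true defect rate is `defect_rate`."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - defect_rate))

for rate in (0.10, 0.02, 0.005):
    print(f"{rate:.1%} defect rate -> test {units_needed(rate)} units")
# 10.0% -> 29 units, 2.0% -> 149 units, 0.5% -> 598 units
```

A reviewer with one sample on the bench is nowhere near any of those numbers, which is the point.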
 
There are two types of failures:

1. Design failures. Fixtures can be built to torture review samples to potentially find a percentage of these. Failures that take a while to show up are not going to be found this way.
True.
However, such a test provides valuable insights into potential future behavior. For example, does the amplifier shut down when it reaches high temperatures? If not, how hot does it get? Are components like capacitors rated for temperatures that could suggest future issues or reduced lifespan due to heat?

The results of this type of test can highlight shortcomings, reveal limit behavior, and evaluate protection efficiency, serving as an indicator of possible future problems.
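The capacitor-rating point is easy to make concrete with the common rule of thumb that electrolytic capacitor life roughly doubles for every 10 °C below the rated temperature (an Arrhenius approximation; the example figures are mine):

```python
def cap_life_hours(rated_hours, rated_temp_c, actual_temp_c):
    """Electrolytic capacitor life via the '10 degC doubling' rule:
    L = L_rated * 2 ** ((T_rated - T_actual) / 10)."""
    return rated_hours * 2 ** ((rated_temp_c - actual_temp_c) / 10)

# A 2,000 h / 105 degC part running at 75 degC in a warm chassis:
print(f"{cap_life_hours(2000, 105, 75):,.0f} h")  # ~16,000 h
# The same part at 95 degC next to a hot heatsink:
print(f"{cap_life_hours(2000, 105, 95):,.0f} h")  # ~4,000 h
```

So a thermal snapshot during a review really can hint at lifespan, even if it proves nothing about a production run.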

To do #1 requires a fair bit of money, some design resources, and a dedicated person who enjoys watching paint dry. I am only mildly interested in building the fixtures but personally have no time to dedicate to it.

If these devices cost thousands of dollars and had a high percentage of failures, the ROI on doing these things would be high. But we are talking about very integrated, simple boxes with a low failure rate. You could set up the above, run it all year, and catch one or two instances of failure. Is that worth it? Would you all fund a full-time person and tools just to do this? Or would you rather have more reviews of products, as I am doing?
Several proposals and ideas were shared and discussed in the FTC monster thread, and it doesn’t necessarily have to be a costly or time-consuming process.

That said, if someone is already short on time, every minute matters. And why spend your free time on something you’re not fully committed to if it takes away from something you enjoy?
 
Are you volunteering?
 
Agreed: longevity testing is not possible, relying on customer feedback is not the way to go either, and the number of complaints is not an indicator.
Only when, for instance, a reviewed device fails or has issues and the replacement does the same would it point to a real problem.
Excessive heat could be an indicator, but checking for it may require opening up the gear and using a FLIR camera under (sensible) load conditions. That is not possible when the unit is a loaner, and is only a possibility when something pops up during testing.

Also, 'subjective impressions' are rather pointless. I know readers want/demand them, but they are personal, and room-dependent for transducers. Quick remarks are fine with me; after all, Amir is a trained listener and has references, which is quite important.

Another issue is immunity to RF and ground loops, as well as possible ESD issues. This can't be tested (properly) as it would require an EMC test lab... winging it is not Amir's style, and he would be against it anyway without real test equipment and conditions.

I might volunteer to write a page about what SINAD means and doesn't mean (I would do so for my own website, but it could be copied here and Amir could edit whatever he wants :) )

Otherwise I think what is done now suffices. Sometimes more than 'standard' measurements might be called for.

Viable suggestions for 'improvement' of reporting (IMHO) that would benefit ASR could be:

1: Links under each report to explanations of those specific measurements (no disclaimers, for God's sake, that suggests the wrong thing), so they only need to be written once and are not open for comments.
2: A summary of measured specs (could the AP spit that out?)
 
I strongly agree that YouTube videos that aren't music (and maybe some that are), and books, should come with a summary. It is not a good use of time to sit through them without a strong reason. The message-board software does a good job of summarizing links. Posters can run a YouTube video through an AI summarizer and post the summary along with the link.

Personally, since I have worked on wireless standards and in optical communications, I'm very used to a spectrum plot. So in audio electronics, for a 1 kHz test, I look at the noise floor, how far down the harmonics are, and for power-supply noise. Distortion vs. output level is another graph I look at. I enjoy Amir's ranked bar charts when making a buying decision, but I always read the full review.
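For readers who haven't computed one, here is roughly how a SINAD number falls out of that 1 kHz spectrum: the power in the fundamental bin(s) divided by the power of everything else in the audio band. A simplified numpy sketch; the windowing and fundamental-bin selection are much cruder than what a real analyzer does:

```python
import numpy as np

def sinad_db(signal, fs, f0=1000.0, bw=20000.0):
    """SINAD: power at the fundamental vs. everything else
    (noise + distortion) within the audio band."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = freqs <= bw
    # a few bins around the fundamental count as "signal" (window leakage)
    fund = np.abs(freqs - f0) < 4.0 * fs / n
    return 10.0 * np.log10(spec[band & fund].sum() / spec[band & ~fund].sum())

# Synthetic 1 kHz tone with a -90 dB third harmonic plus a little noise:
fs = 48000
t = np.arange(fs) / fs
x = (np.sin(2 * np.pi * 1000 * t)
     + 10 ** (-90 / 20) * np.sin(2 * np.pi * 3000 * t)
     + 1e-5 * np.random.randn(fs))
print(f"SINAD ~ {sinad_db(x, fs):.1f} dB")  # roughly 89 dB
```

The single number deliberately collapses the shape of the spectrum, which is exactly why it works as a quick screen and fails as a complete description.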

For speakers, I look at all the graphs published on the standard ASR report. Erin's reviews are good too. I'm pretty obsessive looking at microphone polar and frequency curves.

I have studied perceptual psychology and the ear, but I'm not as deep as JJ.

So of course it is possible to have a different harmonic distortion or noise spectrum for the same single SINAD summary number. But that doesn't mean we have calibrated ears for noise and distortion unless it is really bad. As many have said in this thread, the harmonics and noise below a certain level are not going to be audible.

Where you will hear differences are in guitar pedals, analog equipment, and plugins used in the recording studio which are designed to produce a lot of distortion, and analog gear pushed out of its linear range, like guitar amplifiers. There are entire branches of electronic music based on noise, and adding noise in the lo-fi sound artistic style. Recording engineers and many musicians have calibrated ears at high distortion.
 
GoldenSound talks through some myths (according to them) and misconceptions about audio/hi-fi measurements.

[Video: The Headphone Show]

NB: The following summary was added by the mods, copied from the OP's post further down the thread.

The video is disappointing, in reality, but I thought it could open up a debate. In fact, the conclusion is "don't rely only on measurements" and "don't rely only on the ears" :facepalm:

That video makes me feel like this:

[GIF: talking-about-minecraft]
 
I accidentally clicked on this thread and then skimmed the last page or two. Whether SINAD matters always devolves into some sort of subjective debate, but I hope most ASR regulars understand its pros and cons, advantages and limitations. I use it as a quick pass/fail or go/no-go sort of assessment, then dig deeper if need be.

Reliability has also been debated ad nauseam, and I agree it is not within ASR's scope. Proper reliability testing is challenging, time-consuming, and too often a test to destruction. Not at all practical for ASR, especially considering equipment is usually donated by owners who likely expect (or at least hope for) its return in working condition. The FTC power test is not a reliability test (unless the device fails, I suppose) but is potentially an indicator if the amp overheats or whatever. But I explicitly excluded ASR from performing the 5-minute FTC power test because of the potential for damage to a member's device and the length of time required for testing (plus the rule and test conditions are poorly defined, IMO).

These past few months, all too often the "fun" part of the original mission statement seems to be missing.
 