
Standards for Audio Electronics Measurements?

Mihalis

Member
Forum Donor
Joined
Dec 19, 2020
Messages
84
Likes
81
It’s fair to publish minimum expectations.

a) What about multitone? You know the whole argument: music is not a 1 kHz test tone.

b) For amps: what about power-cube "stability"? And strongly frequency-dependent SINAD?
What is the argument against multitone, and against assessing different signals into different loads? Or what is the argument that one tone at 1 kHz tells us everything? The way I read these measurements is: so far so good, on these measurements there is no problem. And not: if these measurements are OK, this is an amazing-sounding product. No?
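For anyone unfamiliar with the stimulus being discussed, here is a minimal numpy sketch of a multitone test signal. The tone count, log spacing, and random-phase scheme are illustrative assumptions, not any analyzer's standard:

```python
import numpy as np

fs = 48000            # sample rate, Hz
n = 1 << 16           # FFT-friendly length
# ~32 log-spaced tones from 20 Hz to 20 kHz, snapped to exact FFT bins
bins = np.unique((np.geomspace(20, 20000, 32) * n / fs).astype(int))
phases = np.random.default_rng(0).uniform(0, 2 * np.pi, bins.size)

t = np.arange(n)
x = np.sin(2 * np.pi * bins[:, None] * t / n + phases[:, None]).sum(axis=0)
x /= np.abs(x).max()  # normalize to digital full scale

crest_db = 20 * np.log10(np.abs(x).max() / np.sqrt(np.mean(x**2)))
print(f"{bins.size} tones, crest factor {crest_db:.1f} dB")
```

Unlike a lone 1 kHz tone, this exercises the whole band at once, and intermodulation products land between the tone bins where they are easy to see.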
 

daniboun

Major Contributor
Joined
May 2, 2020
Messages
1,884
Likes
2,212
Location
France (Lyon)
No, I am asking if we should announce what our expectations are with respect to measured performance. What those specs are would be the second phase after we answer that question.

Since it's not a question of defining a standard (as we've established), I think it would be more useful to attest that the product complies with its own specifications.
After the product has been measured, you could simply summarize things in a table showing whether the measured characteristics are reasonably consistent with the manufacturer's claims.


Here, two examples:


[attached images: two example tables comparing measured results against the manufacturer's published specifications]
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,996
Likes
6,866
Location
UK
If you only test or review stuff you already know satisfies your parameters, then 'reviewing' is simply a matter of verification?

It would seem redundant, other than reporting that these devices measure according to your own notions.

It sounds like you are already having some electronic ennui. Knowing the results in advance and merely putting on a verification sticker would seem mind-bogglingly boring.

Creating guidelines seems much the same. This might not be a hobby requiring hegemony.

I don't know if that would raise happiness levels.
I agree. Now that I've been thinking about it, I don't think it's necessary to let audio companies know "the standards" before products are reviewed here. Hasn't there already been a change in the industry since Amir started reviewing all these audio products? I think there has! So, thinking about it, I reckon it's fine that some shocking products get reviewed here on ASR (though I'm not sure why Amir hasn't published some of the bad ones he's measured, which he has admitted to). These audio companies can already tell from visiting this website what kind of measurements they need to aim for; after all, they're professionals, and if they check in on sites like ASR they will see what they need to hit in order to get good reviews. I now think that creating some kind of standards guideline is redundant: the industry is already changing, it's their responsibility to make the changes, and some of them are already doing it! I think we're just overthinking the whole thing in this thread now. It's unnecessary. (There are also plenty of good-measuring products to choose from, stimulated by this site too.)
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,679
Likes
241,096
Location
Seattle Area
I'm not sure why Amir hasn't published some of the bad ones he's measured, which he has admitted to
I have explained this before. Every product a member sends is reviewed, without exception.

If a company sends me something and I find issues in measured performance, I run it by them. If there is a solution, I deploy it. Otherwise, I give them the choice to give up on the review. This is a courtesy I provide: I hate having someone volunteer to send me something, only for me to then tell the world not to buy it. People who do subjective reviewing can get away with writing positive things even when the device is no good. I don't have that luxury.

Keep in mind that the majority of gear is sent to me by members. Of the gear that is sent by companies, almost all of it has excellent performance. So the number of reviews you don't see is very small, but enough to be extra work for me.

The proposal here would avoid this issue altogether. If a company reaches out to me to test something, I can point them to these recommendations. That way, even if they didn't know about them before, they would before a review.

So again, every product sent by a member (or bought by me) gets reviewed. So you will continue to see plenty of reviews of products that don't perform.
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,996
Likes
6,866
Location
UK
I have explained this before. Every product a member sends is reviewed, without exception.

If a company sends me something and I find issues in measured performance, I run it by them. If there is a solution, I deploy it. Otherwise, I give them the choice to give up on the review. This is a courtesy I provide: I hate having someone volunteer to send me something, only for me to then tell the world not to buy it. People who do subjective reviewing can get away with writing positive things even when the device is no good. I don't have that luxury.

Keep in mind that the majority of gear is sent to me by members. Of the gear that is sent by companies, almost all of it has excellent performance. So the number of reviews you don't see is very small, but enough to be extra work for me.

The proposal here would avoid this issue altogether. If a company reaches out to me to test something, I can point them to these recommendations. That way, even if they didn't know about them before, they would before a review.

So again, every product sent by a member (or bought by me) gets reviewed. So you will continue to see plenty of reviews of products that don't perform.
Ah, well, I see: that is a good courtesy you extend to the companies that reach out to you by sending a product, so I can understand the distinction you're making. You could just e-mail them roughly what you're looking for in the measurements (you already know what that is) when they contact you about sending in a product; you probably don't even need to reach a consensus in this thread.
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,727
Likes
38,928
Location
Gold Coast, Queensland, Australia
After the product has been measured, you could simply summarize things in a table showing whether the measured characteristics are reasonably consistent with the manufacturer's claims.

This goes to holding the manufacturer to account for their published minimum specifications, something I and others have been advocating for many years.
 
Joined
Sep 12, 2023
Messages
13
Likes
34
Manufacturer-published TS parameters vs. properly measured ones will, with some woofers, require double the box volume for a QB3 alignment.
Manufacturers want you to believe that a smaller box is suitable, which it is, if a high box Qt is desirable.
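To make that concrete, here is a minimal sketch using the familiar vented-box cookbook approximations (Vb ≈ 15·Qts^2.87·Vas etc.); treat the constants as rough curve fits valid around 0.2 < Qts < 0.5, and the driver numbers as made up:

```python
# Cookbook QB3-ish vented-box approximations (rough curve fits).
def qb3_box(qts, vas_litres, fs_hz):
    vb = 15.0 * qts**2.87 * vas_litres   # box volume, litres
    fb = 0.42 * qts**-0.9 * fs_hz        # port tuning, Hz
    f3 = 0.26 * qts**-1.4 * fs_hz        # -3 dB point, Hz
    return vb, fb, f3

# Hypothetical woofer: datasheet claims Qts = 0.28, proper measurement says 0.36.
for label, qts in [("published", 0.28), ("measured ", 0.36)]:
    vb, fb, f3 = qb3_box(qts, vas_litres=60.0, fs_hz=30.0)
    print(f"{label} Qts={qts}: Vb={vb:.0f} L, fb={fb:.1f} Hz, f3={f3:.1f} Hz")
```

Because Vb scales as Qts^2.87, a ~27% optimistic error in the published Qts is enough to halve the apparent box requirement, exactly the doubling described above.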
 

HarmonicTHD

Major Contributor
Joined
Mar 18, 2022
Messages
3,326
Likes
4,836
What is the argument against multitone, and against assessing different signals into different loads? Or what is the argument that one tone at 1 kHz tells us everything? The way I read these measurements is: so far so good, on these measurements there is no problem. And not: if these measurements are OK, this is an amazing-sounding product. No?
I think you misunderstood.
I am all for adding multitone and load dependency. Amir does that in today's tests anyhow, so why not also specify the expected values at which he would consider a product OK/good.
 

dtaylo1066

Addicted to Fun and Learning
Joined
Jan 12, 2019
Messages
660
Likes
827
I think this is more valuable than SINAD.

In a way, having something tested by a nationally recognized testing lab is something we should be able to see from a photo or spec sheet. But it would be nice to say that, to get a golfing panther in 2024, an AC-mains-powered device (or its power supply) should carry UL, CSA, ETL, CQC certification, or whatever other NRTL mark makes sense for the country of manufacture.

While flawed, the EPA tests all cars for efficiency and MPG, and lists what the annual fuel cost for an average driver would be. A tremendous amount of car purchasing relates to those numbers.

Others care more about 0-60 performance, or braking distance, or hauling power, or a car's interior cubic feet of space. All measurable, all available, and easy to compare between models.

While the car companies put out specs and information about their models, we are far more reliant on, and believe more in, the veracity of independent testing by the car rags when it comes to car performance.

Publish bad performance numbers in Car and Driver or Road and Track, and a certain number of car enthusiasts are not buying.
 

daverosenthal

Member
Forum Donor
Joined
Jan 5, 2020
Messages
40
Likes
104
To answer the original question: yeah, I think it is a good idea for ASR to provide basic "prerequisites for recommendation" to companies (and not just SINAD, but also safe grounding, meeting claimed specs, etc.). Those companies could comb through past reviews and figure it out themselves, but ASR might as well make it easy. The main upside, as I see it, is that manufacturers would be more willing to submit devices if they could mitigate the chance of some "gotcha" measurement catching them out. If this led to more mainstream (e.g., top-selling on Crutchfield) hi-fi brands/components going through the review process here, I think that would be a big win.

I think the main challenge is that manufacturers would still feel at risk, since clearing these bars would presumably be necessary, but not sufficient, for a recommendation.
 

nagster

Senior Member
Joined
Jan 24, 2021
Messages
368
Likes
602
So I keep running into companies who send me products with the best intentions but miss key aspects that cause me not to recommend them. An example is a very nice dongle I recently received that had balanced output but was limited to just 2 volts out. If you don't know, I like to see a minimum of 4 volts out from such a port; otherwise, you can find unbalanced dongles at lower cost that do the same.

Another example is the channel balance issue I had with an AIYIMA amp, where there was almost 1 dB of differential.

Yet another is expected SINAD for an amplifier. Yes, we don't rate amps on that one number, but if SINAD is, say, 60 dB, the rest are going to follow.

In many cases the decision makers in these companies are not knowledgeable, so they proudly show me the gear, only then realizing they could have built something better, but didn't.

Note that my focus here is on electronics only. And I am thinking about the fewest key criteria that pass the "acceptable" mark from me, and by implication, from you all. They can do better, of course, to get higher praise, but I want to establish what we like to see.

An example for a dongle:

Output voltage: at least 2 volts on unbalanced, 4 volts on balanced.
SINAD: 100 dB or better, 1 kHz, 22.4 kHz bandwidth
SNR at 50 mV: 85 dB (?)
SNR at full 2/4 volt output: 110 dB (?)
Output impedance < 1 ohm

Example for Amplifier:
SINAD >= 80 dB
SNR >= 110 dB (?)
Channel balance < 0.5 dB
Crosstalk > 70 dB @20 kHz


This would be presented as general guidelines for companies to adopt (or not). The point of this thread is not to discuss the specifics, although you can, but to determine if it is time for us to do this. I'd hate to have companies ready to produce performant products based on objective measurements, but not knowing clearly what those measurements should be.

What say you?
・Recommendation vs. non-recommendation
・amirm panther
・ASR member survey
Regarding the above items from each review: how about publishing the aggregate results, for example the outcomes for products with SINAD below 100 dB, or with 50 mV SNR below 85 dB, as reference information, or conveying them to the manufacturers?
Although this is different from presenting guidelines, there is a good chance manufacturers would come to understand the thresholds naturally.
The disadvantage is that the initial aggregation takes some effort.
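If thresholds like the ones in the quoted post were ever published, they would be trivial for companies (or reviewers) to check mechanically. A minimal sketch, with the limits copied from the dongle example above; the field names and pass/fail logic are my own invention, not any ASR format:

```python
# Hypothetical machine-readable version of the quoted dongle guidelines.
# Second tuple element: True if higher is better, False if lower is better.
DONGLE_GUIDELINES = {
    "balanced_output_v":    (4.0, True),
    "sinad_1khz_db":        (100.0, True),
    "snr_50mv_db":          (85.0, True),
    "snr_full_output_db":   (110.0, True),
    "output_impedance_ohm": (1.0, False),
}

def check(measured):
    for key, (limit, higher_is_better) in DONGLE_GUIDELINES.items():
        value = measured[key]
        ok = value >= limit if higher_is_better else value <= limit
        print(f"{key}: {value} (limit {limit}) -> {'pass' if ok else 'FAIL'}")

# The 2 V balanced dongle from the post (other numbers made up):
check({"balanced_output_v": 2.0, "sinad_1khz_db": 104.0, "snr_50mv_db": 88.0,
       "snr_full_output_db": 113.0, "output_impedance_ohm": 0.4})
```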
 

JohnBooty

Addicted to Fun and Learning
Forum Donor
Joined
Jul 24, 2018
Messages
637
Likes
1,595
Location
Philadelphia area
I have explained this before. Every product a member sends is reviewed, without exception.

If a company sends me something and I find issues in measured performance, I run it by them. If there is a solution, I deploy it. Otherwise, I give them the choice to give up on the review. This is a courtesy I provide: I hate having someone volunteer to send me something, only for me to then tell the world not to buy it. People who do subjective reviewing can get away with writing positive things even when the device is no good. I don't have that luxury.

Keep in mind that the majority of gear is sent to me by members. Of the gear that is sent by companies, almost all of it has excellent performance. So the number of reviews you don't see is very small, but enough to be extra work for me.

The proposal here would avoid this issue altogether. If a company reaches out to me to test something, I can point them to these recommendations. That way, even if they didn't know about them before, they would before a review.

So again, every product sent by a member (or bought by me) gets reviewed. So you will continue to see plenty of reviews of products that don't perform.
Thanks for elaborating on (or, I guess, repeating) this - I had the same concern as others and you 100% addressed that.

I agree with your policies and I think that establishing "ASR baseline standards" is a good move.

At first I thought "ASR baseline standards" might greatly reduce the number of products sent to you by manufacturers. But now that I think about it, it might actually increase the number: rather than wondering about your minimum standards, they can now be sure.
 

Mihalis

Member
Forum Donor
Joined
Dec 19, 2020
Messages
84
Likes
81
I think you misunderstood.
I am all for adding multitone and load dependency. Armir does that in today’s tests anyhow so why not also specify the expected value when he would consider a product of being ok/good.
agreed.
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,784
Likes
6,227
Location
Berlin, Germany
Balanced automatically gives you double the output voltage. If it's below 4 V the circuit either has a problem, or unbalanced also is below 2 V - so requirement not met.
It's not that easy. Doubled voltage for balanced vs. unbalanced is only achieved when one leg of the balanced output is derived from the other leg by an inverter... which of course is done all the time, as it's the cheapest trick to obtain a balanced output.

But when you have a transformer- or servo-balanced output, the voltage stays the same no matter whether you short one leg to ground or not. Lots of RME interfaces have servo-balanced outs (the natural choice when the output connector type is 1/4" TRS).

@amirm
The definition of a balanced output vs. unbalanced output is not a voltage spec or how it is distributed between pins. The only required feature of line-level balanced output is impedance balance and that should be checked for. The receiver does the job by subtracting the pin voltages and re-referencing them to the local ground which is the whole idea of balanced connections (and which must be checked as well because some devices with balanced inputs don't even do the proper subtraction). The common-mode voltage shall not matter.

Bottom line: There is absolutely no reason to insist on a balanced output to have exactly double (or at least more) voltage between pins than unbalanced. Same thing for balanced inputs, there is no reason to insist they must have only half of the voltage sensitivity.

What matters is whether the voltage levels, balanced or not, are adequate for the use case / device class. Consumer-market items should reach 2 Vrms on output and shall not clip with 2 Vrms input (no clipping before or at any ADC or volume control). Pro gear is traditionally spec'd for a +4 dBu "nominal" level with ample headroom for peaks (20 dB, so +24 dBu max). For consumer gear, there's the -10 dBV spec for the "nominal" level, thus only 16 dB of headroom.
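For reference, the level arithmetic in that last paragraph, using the standard reference points 0 dBu = 0.775 Vrms and 0 dBV = 1 Vrms:

```python
# Convert the nominal/peak levels mentioned above into volts.
def dbu_to_vrms(dbu):
    return 0.7746 * 10 ** (dbu / 20)   # 0 dBu = 0.7746 Vrms (1 mW into 600 ohm)

def dbv_to_vrms(dbv):
    return 10 ** (dbv / 20)            # 0 dBV = 1 Vrms

print(f"pro nominal   +4 dBu          = {dbu_to_vrms(4):.2f} Vrms")
print(f"pro peak     +24 dBu          = {dbu_to_vrms(24):.1f} Vrms")
print(f"consumer nom -10 dBV          = {dbv_to_vrms(-10):.3f} Vrms")
print(f"consumer nom + 16 dB headroom = {dbv_to_vrms(6):.2f} Vrms")  # ~2 Vrms
```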
 

KSTR

Major Contributor
Joined
Sep 6, 2018
Messages
2,784
Likes
6,227
Location
Berlin, Germany
How about a simple “Meets or exceeds minimum ASR performance thresholds” or “Does not meet minimum ASR performance thresholds”. No Pass or Fail language and no certification or approval endorsement.
Seconded. ASR shall not play God.
We've already seen the side effects of that: low-priced gear that's been purposely optimized to pass the local 1 kHz SINAD expectations while neglecting almost everything else, like build quality, product safety, customer support...
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,679
Likes
241,096
Location
Seattle Area
@amirm
The definition of a balanced output vs. unbalanced output is not a voltage spec or how it is distributed between pins. The only required feature of line-level balanced output is impedance balance and that should be checked for. The receiver does the job by subtracting the pin voltages and re-referencing them to the local ground which is the whole idea of balanced connections (and which must be checked as well because some devices with balanced inputs don't even do the proper subtraction). The common-mode voltage shall not matter.
For heaven's sake... I know all this. I have even done a video on all of it. The industry calls this balanced. If I call it differential, etc., they are not going to understand. This was a shorthand example, not something to pick apart this way. :( And I certainly was not talking about inputs on a dongle.
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,679
Likes
241,096
Location
Seattle Area
We've already seen the side effects of that: low-priced gear that's been purposely optimized to pass the local 1 kHz SINAD expectations while neglecting almost everything else, like build quality, product safety, customer support...
My job is to judge performance. With no goalposts, this industry has sunk as low as it can possibly go. Through numerous reviews, we have managed to move this Titanic of a ship enough to correct its heading a few degrees. Guidelines for performance may act as an autopilot for the industry to correct itself, and give targets to engineers and marketing people to follow when they currently have none. Once the whole industry is engineering proper products, then you can sort them based on other factors. Right now you have all of those problems and no idea how something performs.
 

BobbyTimmons

Senior Member
Joined
Dec 26, 2019
Messages
355
Likes
403
My job is to judge performance. With no goalposts, this industry has sunk as low as it can possibly go. Through numerous reviews, we have managed to move this Titanic of a ship enough to correct its heading a few degrees. Guidelines for performance may act as an autopilot for the industry to correct itself, and give targets to engineers and marketing people to follow when they currently have none. Once the whole industry is engineering proper products, then you can sort them based on other factors. Right now you have all of those problems and no idea how something performs.
I feel we also need to do consumer advocacy for improvements in warranty policies and mean time between failures. For electronics, those are probably the two most important things aside from performance.
 

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
3,923
Likes
6,058
My job is to judge performance. With no goalposts, this industry has sunk as low as it can possibly go. Through numerous reviews, we have managed to move this Titanic of a ship enough to correct its heading a few degrees. Guidelines for performance may act as an autopilot for the industry to correct itself, and give targets to engineers and marketing people to follow when they currently have none. Once the whole industry is engineering proper products, then you can sort them based on other factors. Right now you have all of those problems and no idea how something performs.

I feel we also need to do consumer advocacy for improvements in warranty policies and mean time between failures. For electronics, those are probably the two most important things aside from performance.

Wildcard idea: because this is not certification, just guidelines, what if you had guidelines that are not part of the objective testing?

This actually helps prevent misuse of the guidelines as certification, because "obviously" these things are not tested. But it also helps the industry know what makes sense.

1) We believe a __DAC/amp/turntable___ should have an MTBF of x years, or an AFR of < 1.5%. (See the sketch after this list for how the two relate.)

Hard drives break all the time, but we don't think of that as negligent or consumer-unfriendly; it's the nature of the product. There's no way to test this, but having a number will help new companies understand what they should be targeting if they don't want consumer opinion to sway negative.

2) Components operating at greater than 5 V should use AC power supplies that have been certified by a Nationally Recognized Testing Laboratory.

Maybe 5 V is too low a threshold, but it's a reasonable thing to ask. It may not affect the "panther", but you do mention it, and if certification ends up costing the consumer $5 more, it's probably a good idea to prevent scenarios like PS Audio's power thingy having the full 117 V on the chassis.

3) User replaceable batteries are preferred (?)

4) Products that are updated regularly (like AVRs) should maintain the measured audio performance within +/- x%, if pricing follows inflation.

We hate measurement regression, even if the newer product still meets the minimum guidelines. This effectively says that if you are going to charge more, it should perform better.

5) Components should be finished to sufficient standards to prevent the user from bleeding or cutting themselves on sharp edges.

I recall you talking about products that had sharp edges or burrs that could cut you?

6) Products greater than ___ lbs should consider handles for safe movement.

7) Products that carry tipping hazard warnings should have provisions for wall tethering without damaging the finish.

This is for you, Harman. The JBL Studio 590's actual safety pamphlet says the 590 can be a tipping hazard. Not every Harman speaker has that warning (so it's not like a California Prop 65 boilerplate). They recommend tethering, but that involves drilling into the cabinet. It would have been so much better if they had included a small M8 thread to allow easy mounting. Alternatively, they could add some weight to the cabinet base, etc.

8) Products requiring warranty service should have turnaround times of 30 days or less, not including the shipping times. If repairs are expected to be greater than 30 days and the product is a currently shipping model, companies should strongly consider supplying a replacement unit.

30 days? 60 days?
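On item 1: AFR and MTBF are two views of the same failure rate. A minimal sketch of the conversion, assuming a constant failure rate and 24/7 operation (the 1.5% figure is just the example from the list):

```python
import math

HOURS_PER_YEAR = 8766  # 365.25 days

def afr_from_mtbf(mtbf_hours):
    # Constant failure rate: AFR = 1 - exp(-hours_per_year / MTBF)
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

def mtbf_from_afr(afr):
    return -HOURS_PER_YEAR / math.log(1 - afr)

mtbf = mtbf_from_afr(0.015)  # the AFR < 1.5% example above
print(f"AFR 1.5% -> MTBF ~ {mtbf:,.0f} h (~{mtbf / HOURS_PER_YEAR:.0f} years)")
print(f"MTBF 100,000 h -> AFR {afr_from_mtbf(100_000):.1%}")
```

It also shows why AFR is the friendlier consumer number: "MTBF 66 years" sounds absurd, while "1.5% fail per year" is immediately meaningful.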
 

li’l ‘lectric

Member
Forum Donor
Joined
Jan 13, 2021
Messages
8
Likes
34
I wonder if this problem couldn't be tackled from the other direction: instead of minimum expectations for manufacturers, perhaps we could produce a testing manual that targets enthusiasts. For example: here's what you can measure with basic tools; here's a modest investment that'll save money by avoiding bad purchases; here's a kit that costs rather a lot but will help you spend money on audible (or at least measurable) improvements.

Amir's experience could help us identify low-hanging fruit: if the manufacturer had just done this and that with a $20 multimeter, everyone would now be much happier. Or: here's how you use a $200 interface to test DAC noise; no, it might not tell you whether the noise sits at -100 or -130 dB, but it will show problems at -70.
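In that spirit, a minimal sketch of the kind of recipe such a manual might contain: play silence on the DAC under test, record its output through the interface's line input, and report the residual level. This assumes the python-sounddevice package, and the interface's own noise floor limits what you can resolve:

```python
import numpy as np
import sounddevice as sd  # pip install sounddevice

FS = 48000
SECONDS = 5

# Record the DAC's "silent" output via the interface's line input.
rec = sd.rec(int(FS * SECONDS), samplerate=FS, channels=1, dtype="float32")
sd.wait()
samples = rec[FS:, 0]  # drop the first second (settling, clicks)

rms = np.sqrt(np.mean(samples**2))
print(f"noise floor: {20 * np.log10(rms):.1f} dBFS")
# A $200 interface won't resolve a true -120 dB floor, but hum or a
# ground loop sitting at -70 dB will stand out immediately.
```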

That way, we'd have more people performing audio science reviews, maybe even getting some people engaged in real science for the first time. We could also have multiple people testing multiple samples, addressing to some extent the problem of variation. But we'd also winnow out a lot of commodity items that aren't worth Amir's thorough evaluation. When a start-up reaches out to ASR for a review, Amir could outsource initial testing and the community would still have some decent idea of what those test results represent. And manufacturers, it seems, could also stand to learn some basic principles and methods of testing.

So, everybody wins?
 