
Standards for Audio Electronics Measurements?

Red_Red

Member
Joined
Oct 10, 2018
Messages
45
Likes
64
Location
Europe
Sorry Amir, I'm getting confused by what I am reading. Are you saying what spec a product needs to meet so that ASR can perform a standardized measurement, or are you saying what minimum measured performance ASR expects for a product to count as an adequate, competently designed product?
Ideally, in fact, there should be an international standard at least on output/input voltages; ASR can push for the establishment of such a standard, but it can in no way replace it.

So it seems that Amir is talking about the minimum required to have a chance of being "recommended".

Understand that:
- if this minimum requirement is not met, the product will in any case not be recommended (except possibly for a niche use).
- if this minimum is met, the product may still not be recommended if the other measurements reveal obvious design errors.
- if no errors appear, it will in all cases be recommended.


But clearly, if no standard yet exists for hi-fi, it is obvious that both manufacturers and users would benefit from the opportunity to define one, which is also a proposal Amir suggested in his AES paper.
 

CleanSound

Major Contributor
Forum Donor
Joined
Apr 30, 2023
Messages
1,654
Likes
2,517
So it seems that Amir is talking about the minimum required to have a chance of being "recommended".

I suspect most manufacturers who send anything in to be measured:

1) Know very well that Amir stack-ranks the gear into segments of excellent, good, etc. If you send something in that is average or below, well, it doesn't take a genius to know that it is unlikely to pass muster.

2) Manufacturers who send stuff in to be reviewed already know how it measures and know whether or not it will be recommended by ASR. There is a reason why Topping and SMSL are the most prolific manufacturers in terms of sending products to ASR for review: they already know their products will be recommended.

On the other hand, if the equipment is sent in by members, they don't know how it measures; otherwise they would not send it in in the first place.

So I personally don't see the value of this.
 

Thomas_A

Major Contributor
Forum Donor
Joined
Jun 20, 2019
Messages
3,501
Likes
2,540
Location
Sweden
So what would the standards be based on? If audibility is one criterion, distortion thresholds would depend on both level and distribution. And for noise: do we assume headphone use, or loudspeakers in an ITU standard room?
 

CedarX

Addicted to Fun and Learning
Forum Donor
Joined
Jul 1, 2021
Messages
544
Likes
910
Location
USA
What we want: good overall audio products...
[image]

Amir's reviews & tests today: indicators of good overall design...
[image]

Let's instead define key criteria that pass the "acceptable" mark...
[image]

What we'll get...
[image]

Mmm ...... It passes ...... But ain't good ...... Let's add a few additional criteria...
[image]

Man !!!??? ...... That's still bad !!! ...... We need to add more criteria...
 

Lambda

Major Contributor
Joined
Mar 22, 2020
Messages
1,798
Likes
1,535
It should also be tested whether an output is truly balanced and symmetrical. Some DACs only drive one leg, outputting, say, 4 V on hot instead of ±2 V on hot and cold.
For inputs, it should be tested whether and how they deal with this. It is often assumed that XLR will cancel out noise as long as it's on both lines, but that's not true.

As a hypothetical worst case, you could have a DAC outputting 4 V RMS on the XLR hot pin only, and an amp listening only to the XLR cold pin. Each could measure perfectly fine on its own, but together they would not work at all.

So input common-mode rejection as well as output balance should be measured.
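
A minimal sketch of that worst case, in Python (all function names and numbers here are illustrative, not taken from any particular device): each unit measures fine on its own, but a hot-only source feeding a cold-only receiver yields no signal at all.

```python
import math

# Illustrative only: a pseudo-balanced source that drives just the hot pin,
# versus a true balanced source, feeding either a proper differential input
# or a broken one that only reads the cold pin.

def source_hot_only(v):
    """Full swing on hot, cold tied to ground."""
    return v, 0.0              # (hot, cold)

def source_balanced(v):
    """Anti-phase halves on hot and cold."""
    return v / 2, -v / 2       # (hot, cold)

def receiver_differential(hot, cold):
    """Proper balanced input: hot minus cold."""
    return hot - cold

def receiver_cold_only(hot, cold):
    """Broken input stage that only looks at the (inverted) cold pin."""
    return 0.0 - cold

peak = 4.0 * math.sqrt(2)      # peak of a 4 V RMS sine; one sample is enough here

for name, source in [("hot-only source", source_hot_only),
                     ("balanced source", source_balanced)]:
    hot, cold = source(peak)
    print(f"{name}: differential input = {receiver_differential(hot, cold):.2f} V peak, "
          f"cold-only input = {receiver_cold_only(hot, cold):.2f} V peak")

# hot-only source: differential input = 5.66 V peak, cold-only input = 0.00 V peak
# balanced source: differential input = 5.66 V peak, cold-only input = 2.83 V peak
```

With a proper differential receiver both sources deliver the same level, which is why each unit can look fine in isolation; only the mismatched combination collapses to silence.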
 

GXAlan

Major Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
3,954
Likes
6,113
That's the risk so we need to weigh the benefit against issues like that.

I think the risk is pretty high. What about magazines like Stereophile or HiFi News? Their recommendations often have zero correlation with measurements, and companies sending gear in may not be able to predict whether it will be recommended or not. Admittedly, these advertiser-supported magazines tend to be more forgiving, but we do see Stereophile calling out problems, as does HiFi News, when they arise.

My bias is that when manufacturers don’t know what the benchmark is, sometimes it leads to confusion or disappointment, but the trend would be for the manufacturer to try extra hard for an A+.

That is, think about a high school student in a pass/fail class versus a student receiving regular grades. In a pass/fail scenario, the incentive to go for the A is lower if the student doesn't think they are going to be valedictorian.
 

Red_Red

Member
Joined
Oct 10, 2018
Messages
45
Likes
64
Location
Europe
I sincerely hope that this does not happen for a long, long time. Given that ASR has now set the standard for excellence in audio journalism, I suspect that they do have a plan of succession, either formal or informal. Your post reminds me of an old French saying: "The graveyards of the world are littered with the bones of irreplaceable men."
It seems to me that Amir has been thinking about this for a while now. It's obvious that with the growing influence of ASR on the industry and the audio community, being "alone" in steering the ship must be problematic. The time and energy devoted to all this should obviously be distributed among several reviewers rather than resting (almost entirely) on the shoulders of one man. But this raises obvious problems, such as the material resources and skills required to carry out such analyses (admittedly it is not that complicated, but training would be necessary), and then there is the financial support that all this implies. Some people have the skills and the time to do this kind of thing but not the means to afford measurement equipment whose results would be beyond dispute.
 

mns3dhm

Member
Joined
Apr 26, 2022
Messages
14
Likes
60
Setting performance standards for various types of audio gear to use as a reference is probably overdue. Using the reference as a standard sets the stage for scoring or grading the relative performance of individual products under review. As a next step, using price relative to performance would allow the creation of a value metric that would estimate how much bang you're getting for your buck. Amir indicated this would be applied to electronics, but it would be great to see standards or references for loudspeakers as well. Finally, I'd suggest more work be done at ASR to make it more straightforward for users to query any database that would be established as a result of this effort.

There will be manufacturers that will push back on this idea, and they should be encouraged to provide their input regarding standards, the measurement processes, etc.
 

anphex

Addicted to Fun and Learning
Forum Donor
Joined
May 14, 2021
Messages
709
Likes
1,013
Location
Berlin, Germany
I have by no means the expertise to really contribute to this, but besides all the usual metrics, I'd love to see a completeness rule: if a manufacturer publishes one metric, all the other metrics must also be published to be ASR-certified. I've seen a few gadgets here that perform well overall but then show completely weird behaviour on other metrics, which the manufacturers conveniently don't publish. If all the data is published, you can be sure it's a well-working product all round that performs well in all categories.

Conversely, if all the data is published and one measurement goes complete circus mode because of a very bad implementation, the product should also not get a pass, even if all the other metrics are great. I remember MOTU was an example of this with their UltraLite MK5 before the firmware fix.

This would greatly serve the overall trust in a product.
 

Timcognito

Major Contributor
Forum Donor
Joined
Jun 28, 2021
Messages
3,612
Likes
13,628
Location
NorCal
Lazy me, I didn't read all the responses, but just in case nobody said it: look at all the current ASR tests and assign levels to them (excellent, good, average, fair, poor, or something like that), or a range and an average. Not sure if it should be graded on a curve against tested-device statistics or on an absolute scale.
 

asov87

Member
Joined
Jan 6, 2022
Messages
13
Likes
52
Amir is one person reviewing lots of devices. My personal opinion/2 cents on this: there should be a bare minimum threshold that devices should meet; otherwise Amir's time is wasted on "junk" instead of actually being invested in reviewing devices that would benefit the community/consumer.

What should those bare minimum thresholds be? That's for the technical engineers to come to an agreement on, and that's to everyone's benefit :)
 

Gorgonzola

Major Contributor
Forum Donor
Joined
Jan 27, 2021
Messages
1,044
Likes
1,428
Location
Southern Ontario
So I keep running into companies who send me products with the best intentions but miss key aspects that cause me not to recommend them. An example is a very nice dongle I recently received that had balanced output but was limited to just 2 volts out. If you don't know, I like to see a minimum of 4 volts out from such a port; otherwise, you can find unbalanced dongles at lower cost that do that.

Another example is a channel balance issue I had with an AIYIMA amp where there was almost 1 dB of differential.

Yet another is expected SINAD for an amplifier. Yes, we don't rate amps on that one number, but if SINAD is, say, 60 dB, the rest are going to follow.

In many cases the decision makers in these companies are not knowledgeable, so they proudly show me the gear, only then realizing they could have built something better, but didn't.

Note that my focus here is on electronics only. And I am thinking about the fewest key criteria that pass the "acceptable" mark from me, and by implication, from you all. They can of course do better to get higher praise, but I want to establish what we like to see.

An example for a dongle:

Output voltage: at least 2 volts on unbalanced, 4 volts on balanced.
SINAD: 100 dB or better, 1 kHz, 22.4 kHz bandwidth
SNR at 50 mV: 85 dB (?)
SNR at full 2/4 volt output: 110 dB (?)
Output impedance < 1 ohm

Example for Amplifier:
SINAD >= 80 dB
SNR >= 110 dB (?)
Channel balance < 0.5 dB
Crosstalk > 70 dB @ 20 kHz



This would be presented as general guidelines for companies to adopt (or not). The point of this thread is not to discuss the specifics, although you can, but to determine whether it is time for us to do this. I hate to have companies ready to produce performant products based on objective measurements but not knowing clearly what those measurements should be.

What say you?

I think a standard set of measurement parameters would be a good idea, but minimum standards maybe not. For example, I wouldn't want to buy an amplifier based on knowing only that it was "better than" the minimum standard for each of those parameters (as highlighted above).

I personally believe that the harmonic distortion spectra of an amp are important. I think you've got to eyeball each spectrum; there is no single number that could work.
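
For illustration, here is a minimal sketch in Python of how draft minimums like the dongle list quoted above could be checked mechanically, which is roughly what an announced minimum spec would amount to. The numeric limits are copied from Amir's example list; the data structure, metric names, and the sample device are hypothetical.

```python
# Sketch of a pass/fail check against the draft dongle criteria quoted above.
# The numeric limits come from Amir's example list; everything else here
# (dict layout, metric names, the sample device) is made up for illustration.

DONGLE_CRITERIA = {
    # metric: (comparison, limit)
    "output_v_balanced":    (">=", 4.0),    # volts
    "output_v_unbalanced":  (">=", 2.0),    # volts
    "sinad_db":             (">=", 100.0),  # 1 kHz, 22.4 kHz bandwidth
    "snr_50mv_db":          (">=", 85.0),
    "snr_full_out_db":      (">=", 110.0),
    "output_impedance_ohm": ("<=", 1.0),
}

def check(measured, criteria):
    """Return a list of failed (or unreported) criteria; an empty list means pass."""
    failures = []
    for metric, (op, limit) in criteria.items():
        value = measured.get(metric)
        if value is None:
            failures.append(f"{metric}: not reported")
        elif op == ">=" and value < limit:
            failures.append(f"{metric}: {value} < {limit}")
        elif op == "<=" and value > limit:
            failures.append(f"{metric}: {value} > {limit}")
    return failures

# A hypothetical dongle like the one described above: balanced out, but only 2 V.
sample_dongle = {
    "output_v_balanced": 2.0,
    "output_v_unbalanced": 2.0,
    "sinad_db": 112.0,
    "snr_50mv_db": 90.0,
    "snr_full_out_db": 115.0,
    "output_impedance_ohm": 0.4,
}
print(check(sample_dongle, DONGLE_CRITERIA))  # -> ['output_v_balanced: 2.0 < 4.0']
```

Note that a missing metric shows up as a failure too, which ties in with anphex's completeness point earlier in the thread.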
 

AudioSceptic

Major Contributor
Joined
Jul 31, 2019
Messages
2,741
Likes
2,643
Location
Northampton, UK
No, I am asking if we should announce what our expectations are with respect to measured performance. What those specs are would be the second phase after we answer that question.
So, what happens in practice?

A. You post these basic specs here on this website and expect anyone submitting a product to check here first,
or
B. They contact you first, asking if they can submit a product; you send them the specs and tell them, "Please read these specs and only submit your product if you expect to meet or exceed them"?

I have to say that I do get a guilty pleasure from seeing overpriced and overrated products tested by you and shown to be poor performers. We wouldn't see that, or would see very few such cases, if A or B were followed.
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
7,052
Likes
6,916
Location
UK
No, I am asking if we should announce what our expectations are with respect to measured performance. What those specs are would be the second phase after we answer that question.
It's a bit of a complicated one, I reckon. First, would we base expectations on what is proven to be an audible difference? I don't think we have the data for that. Or should it instead be based on what we often call "good engineering", the view that it's nice to have some overkill in the specs to ensure everything is transparent, which is useful if you don't really know where the boundaries of transparency sit? Leading on from that, is it wise to have an "ASR approved" spec if we're shaky on the reasoning & data behind it, given that we tout ourselves as "scientific & provable" a lot of the time? If there is indeed data & research out there showing where the boundaries could sit for the various sound quality variables, then I imagine different studies will show slightly different boundaries of audibility, so "ASR approved" would have to go a bit further on each variable than all current studies to ensure transparency. If the data is out there, then it's worth setting a standard; if there's no good data on audibility, then it's a bit more shaky.

EDIT: and my last point is how you would "[present them] as general guidelines for companies to adopt", to quote you. I mean, in what way do they receive or see the guidelines?
 

Thomas_A

Major Contributor
Forum Donor
Joined
Jun 20, 2019
Messages
3,501
Likes
2,540
Location
Sweden
Thinking more about this, I am not sure that setting standards, whatever they relate to, would do anything meaningful. You can have an engineering view of a measured parameter (what is possible to achieve) and an audibility perspective (what is needed for transparency). These will come up with very different numbers - so which number is the most important, and to whom?
 

Sokel

Master Contributor
Joined
Sep 8, 2021
Messages
6,275
Likes
6,405
It's a bit of a complicated one, I reckon. First, would we base expectations on what is proven to be an audible difference? I don't think we have the data for that.
Yes, I have asked for data like this lots of times, to no avail.
But a very good hint is the Klippel listening test, since one would think it is taken by people who have a good relationship with audio, and I count about 19,000 tests already done.

[image: Klippel listening test results]
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
7,052
Likes
6,916
Location
UK
Thinking more about this, I am not sure that setting standards, whatever they relate to, would do anything meaningful. You can have an engineering view of a measured parameter (what is possible to achieve) and an audibility perspective (what is needed for transparency). These will come up with very different numbers - so which number is the most important, and to whom?
I'm thinking it would be based around audibility, as that has the most practical implications.
 

dtaylo1066

Addicted to Fun and Learning
Joined
Jan 12, 2019
Messages
663
Likes
833
Nothing wrong with the concept. Just remember that ASR readers represent a very small percentage of audio consumers. Most consumers do not care whether what they listen to meets an ASR consensus on the threshold of "hi-fi."

The majority of consumers don't even care if they are listening to lossless music, and a lot of their listening is just background or in the car.

Narrow down the audience, and an ASR-approved sticker or certification may have merit among some producers and consumers.

Given that most of the products reviewed here do not meet a recommended status, you will not motivate many producers to submit products, and hence will not affect or reach their consumers.

Finally, if such a system gained traction, Amir would be swamped and we would need to raise money for 3 or 4 staffers to test products. Amir has tested a lot of stuff, but it is a drop in the bucket of the amount of gear out there.
 