
Why you can't trust audio measurements by GoldenSound

Status
Not open for further replies.

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,785
Likes
242,537
Location
Seattle Area
So he should be using your settings? How about, both him and you (and everybody else who posts measurements) publish details of the test setup, DUT configuration and analyser settings, so that the results can be replicated, compared and trusted?
That is probably the right solution, and you see Wolf doing that. I have documented my tests well enough that people should easily be able to replicate them even without further documentation. As I noted above, I plan to do a detailed video on all the tests, so if they want to fully clone them, they can.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,785
Likes
242,537
Location
Seattle Area
He should not be making the changes to your way of doing things?!
There is more than one valid way to do things.
He can make any changes he wants. But then he shouldn't do a video to say, "measurements are not reliable because on purpose I have chosen to do things differently to produce the same data!" He has created a problem by deviating from what I do and then complains about said variations. My suggestion to them is to recreate the same core measurements as I do and then expand and do whatever else they want to show. This way SINAD is SINAD. Him picking a 20 kHz bandwidth instead of the 22.4 kHz default in the AP is a bad idea. It creates a difference for the sake of difference.
 

theREALdotnet

Major Contributor
Joined
Mar 11, 2022
Messages
1,209
Likes
2,084
??? What choices? I use the exact same template for each device of that type so operator choice does not enter the equation.

You made a choice of configuration settings in the analyser; other reviewers may have made different choices. Even if you use a standard set of settings every time, without knowing what they are, the measurement results across different review sites can look confusingly different to readers. This is grist to the mill of those who think measurements are nonsense and folly in the first place.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,785
Likes
242,537
Location
Seattle Area
Sure this is what we want to know.
But SINAD at 1 kHz, 4 V, 44.1 kHz with a 20 kHz bandwidth is not the only way to determine this.
What is "this"? The goal of the measurements is to see if a product is well engineered. A 1 kHz tone has been used since the beginning of time and places harmonics where our hearing is most sensitive. I see no reason to abandon this and pick a random other frequency like 16 kHz or whatever. 1 kHz harmonics can also be almost fully captured within the default measurement bandwidth. Increase the test frequency and more of the harmonics fall outside the measurement bandwidth, producing false data.
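The bandwidth arithmetic behind this point can be sketched in a few lines of Python (my own illustration using the figures from this thread, not Amir's actual analyzer configuration):

```python
# Sketch: which harmonics of a test tone land inside a given measurement
# bandwidth, and therefore get counted toward THD/SINAD.

def harmonics_captured(fundamental_hz, bandwidth_hz, max_order=30):
    """Harmonic orders (2nd, 3rd, ...) that fall at or below the bandwidth limit."""
    return [n for n in range(2, max_order + 1) if n * fundamental_hz <= bandwidth_hz]

# A 1 kHz tone keeps harmonics 2 through 22 inside the AP's 22.4 kHz default bandwidth:
print(len(harmonics_captured(1_000, 22_400)))   # 21
# A 16 kHz tone's 2nd harmonic is already at 32 kHz, outside the bandwidth:
print(len(harmonics_captured(16_000, 22_400)))  # 0
```

Which is the arithmetic behind "increase the test frequency and more of the harmonics fall outside of the measurement bandwidth."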
 

Vacceo

Major Contributor
Joined
Mar 9, 2022
Messages
2,705
Likes
2,864
Is it a coincidence that you describe the sound heard at every live rock concert I have attended?
Since those descriptions are not favored by most audiophiles as having anything to do with accurate reproduction, it may be confusing.
I wouldn't have it any other way.
That describes every record mastered by Scott Burns. And it will sound like that on a Genelec or an old and cheap Aiwa speaker.

What matters here is not how the source sounds, but how the gear reproduces the content of the source. The gear does not care if what I feed it is some random hipster pop, a long-dead pothead from the '70s, Chopin on the piano, or Nasum growling.

That is precisely why what Amir provides is like steel. Because in no man, no woman, no beast can you trust. Steel you can trust.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,785
Likes
242,537
Location
Seattle Area
You made a choice of configuration settings in the analyser; other reviewers may have made different choices. Even if you use a standard set of settings every time, without knowing what they are, the measurement results across different review sites can look confusingly different to readers. This is grist to the mill of those who think measurements are nonsense and folly in the first place.
I had to create a lot of standards on my own since they didn't exist otherwise. I have been tormented to death over them across nearly 1,200 reviews/measurements now. Nothing has stood the test of time better than this set, which, by the way, has evolved based on input from the membership. What little other measurements from others show at this point is in the noise, pun intended. If they want to add value, they should not mess with the measurement of core values (SINAD, SNR, IMD, etc.). But if they do mess with them, then I'd rather not hear them complaining. And certainly not expanding that to the clickbait level of saying measurements in general are unreliable. That is just false.
 

theREALdotnet

Major Contributor
Joined
Mar 11, 2022
Messages
1,209
Likes
2,084
I had to create a lot of standards on my own since they didn't exist otherwise. I have been tormented to death over them across nearly 1,200 reviews/measurements now. Nothing has stood the test of time better than this set, which, by the way, has evolved based on input from the membership. What little other measurements from others show at this point is in the noise, pun intended. If they want to add value, they should not mess with the measurement of core values (SINAD, SNR, IMD, etc.). But if they do mess with them, then I'd rather not hear them complaining. And certainly not expanding that to the clickbait level of saying measurements in general are unreliable. That is just false.

I don’t think he meant to accuse your measurements of being unreliable. It’s just generally the case that any published measurement (or experimental result) is nothing but hearsay, unless it is documented well enough for someone else to replicate it.

Perhaps the few review sites trying to enhance the audio review landscape by conducting actual measurements can band together and come up with a set of standards for doing them? I’m sure there is nothing Paul McG loves to see more than infighting between objective audio review sites.
 
OP

Lambda

Major Contributor
Joined
Mar 22, 2020
Messages
1,797
Likes
1,535
There's no need to demonize anyone here
I don’t think this is done by anyone?
It is certainly not my intention.

IIRC it will bury the cogent stuff in stuff that makes no difference.
It will bury nothing, since you have to specifically look for it: download it, unpack the ZIP, open the PDF...

"stuff that makes no difference"
is also a straw-man argument: how do we know it makes no difference if it is not there?

We know SINAD over 120 dB makes no difference, but still DACs get ranked by this number only.
There is no parametric search to list them by anything else; it is only SINAD.
If someone searches for a DAC, he is likely to go down this list (manually) until he finds something in his budget and with the features he wants.

More Data is always better.

You don't know what I'm looking for, nor does Amir.
Maybe I want to compare a DAC reviewed by ASR to some other DAC reviewed only by a third party.
Maybe they use other measurements. Maybe the measurements are directly comparable, or become more comparable with more data.


I am hugely against dumping a whole confusing PDF auto generated by AP software.
I know, but I don't get why.
It is only for those who are interested, and 99% of users can ignore it.

Go ahead and download the ones Schiit produces and tell me if you can figure out what is what. Tons of noise is in there that has nothing to do with performance.
I see no problem with the data they present!
This alone is worth a lot and is actually a must-have:
[attached screenshot of a Schiit test-report PDF]

Just press Ctrl+F and type in what you want to know...

But then he shouldn't do a video to say, "measurements are not reliable because on purpose I have chosen to do things differently to produce the same data!"
The whole point of this was to show that measurements are only valid if you provide all the necessary detail about how they were captured,
like Schiit does (see picture above).

He has created a problem by deviating from what I do and then complains about said variations.
He (only?) deviates in ways not mentioned/specified in the published review.

My suggestion to them is to recreate the same core measurements as I do and then expand and do whatever else they want to show.
Sure, but why not also recreate the same measurements as he does? Surely he would share his project files;
they can be easily loaded to create an automated report in "GoldenSound" style.
For example, testing DACs at full volume is a totally valid test,
or CCIF,
as well as testing for intersample-over clipping.
[attached graph]
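For readers unfamiliar with intersample overs, a small Python sketch (my own illustration, not GoldenSound's actual test signal) shows the classic worst case: a sine at fs/4 with a 45-degree phase offset has every sample at about 0.707 of full scale, so normalizing the samples to 0 dBFS puts the reconstructed waveform roughly 3 dB above full scale, which a DAC without headroom will clip.

```python
import numpy as np

fs = 44_100
n = 4096
t = np.arange(n)
x = np.sin(2 * np.pi * (fs / 4) * t / fs + np.pi / 4)  # samples all near +/-0.7071
x /= np.max(np.abs(x))                                 # normalize sample peak to 0 dBFS

# Estimate the true (intersample) peak by 8x oversampling via a zero-padded FFT.
X = np.fft.rfft(x)
X_pad = np.zeros(8 * n // 2 + 1, dtype=complex)
X_pad[: len(X)] = X
y = np.fft.irfft(X_pad) * 8                            # interpolated waveform

true_peak_db = 20 * np.log10(np.max(np.abs(y)))
print(round(true_peak_db, 2))                          # 3.01 (dB above 0 dBFS)
```

A real true-peak meter does essentially this oversampling trick; the point here is only that "0 dBFS samples" and "0 dBFS waveform" are not the same thing.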



What is "this?" The goal of the measurements is to see if a product is well engineered.
Sure.
But is a DAC with a SINAD of 123 dB that does this to the signal at 0 dBFS:
[attached graph: clipping at 0 dBFS]

Better engineered than a DAC with 120 dB SINAD that never clips?

Is a DAC with USB and signal ground connected better engineered than one with isolated USB ground, just because it maybe has 3 dB more SINAD? (No.)


Which is better engineered:
a DAC with 4.5 V rms at 0 dBFS and 120 dB SINAD at 4 V rms (and maybe clipping at 0 dBFS),
or
a DAC with 7.5 V rms at 0 dBFS and 115 dB SINAD at 4 V rms (and no clipping at 0 dBFS)?
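A back-of-envelope Python sketch of the arithmetic behind that last comparison (the voltages and SINAD figures are the hypotheticals above, not measurements of real DACs). SINAD is a ratio, so the same dB figure implies a different absolute noise+distortion voltage depending on the reference level:

```python
def nd_rms_volts(signal_vrms, sinad_db):
    """Absolute noise+distortion in volts rms implied by a SINAD figure."""
    return signal_vrms / 10 ** (sinad_db / 20)

dac_a = nd_rms_volts(4.0, 120)  # 120 dB SINAD at 4 V rms -> 4.0 uV of N+D
dac_b = nd_rms_volts(4.0, 115)  # 115 dB SINAD at 4 V rms -> ~7.1 uV of N+D
print(f"{dac_a * 1e6:.1f} uV vs {dac_b * 1e6:.1f} uV")  # 4.0 uV vs 7.1 uV
```

On these hypothetical numbers, the 115 dB device carries under twice the absolute noise+distortion voltage while offering considerably more output headroom; whether that is "better engineered" is exactly the judgment call being raised.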
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
16,141
Likes
36,785
Location
The Neitherlands
I searched, but didn't find anything about GoldenSound opting out of a blind test.

Does anyone have a link?

The best part is he had a blind test device built and received it in Oct. 2021.

One could ask themselves why it never got mentioned after that.

Good video from GoldenSound explaining how measurements can be altered intentionally and unintentionally and how comparable they are.

The whole 'you can't trust measurements' is based on SBAF accusations (he is a member there) about measurements being borked on purpose by a forum member well known to us. Both Marv and GoldenSound have an AP, and the goal was to prove a certain individual's measurements wrong. I don't see much of it. Marv tried to discredit the L30 but made rookie measurement mistakes and failed to correctly interpret the measured results.

Of course, everyone can make mistakes with measurements, use different filters, use different references and standards.
That is the most difficult part. For someone to know what was done correctly or not actually requires substantial knowledge of measurement (protocols), interpretation of results and a complete description of all test parameters and setup.
 
OP

Lambda

Major Contributor
Joined
Mar 22, 2020
Messages
1,797
Likes
1,535
The whole 'you can't trust measurements'
Is clickbait and bullshit to make content appealing to a specific audience...
Of course, everyone can make mistakes with measurements, use different filters, use different references and standards.
That is the most difficult part. For someone to know what was done correctly or not actually requires substantial knowledge of measurement (protocols), interpretation of results and a complete description of all test parameters and setup.
This is the relevant part of the video/ the discussion.

Sure, you can make your own references, standards, and measurement protocols.
But then they are only directly comparable to measurements made to the same standards and protocols.
This is why you have to specify standards and protocols, publish them, and state alongside the measurements that they were done to this specific protocol.
How else do we know you're not changing them?
The protocol must be specified tightly enough that even with bad intentions you can't use it to get misleading results.

and a complete description of all test parameters and setup.
And this is not always provided with every review.
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,570
Likes
25,449
Location
Alfred, NY
Project files.

Not only do they standardize a measurement suite, but they allow ANYONE to check/replicate. If they're available, this is transparency. If they're not, you're dealing with incompetence or dishonesty. Amir has provided them on request. And when we have on a couple occasions measured the same specific units, we have gotten essentially identical results. I note again that I have no financial or professional connection to this site, I am only beholden to the magazine I write for.

The pdf "reports" give less information than the actual data. They're really aimed at record keeping for production QC.
 

Mart68

Major Contributor
Forum Donor
Joined
Mar 22, 2021
Messages
2,719
Likes
5,184
Location
England
The best part is he had a blind test device built and received it in Oct. 2021.

One could ask themselves why it never got mentioned after that.
My suspicion is that he tried it and realised he could not perceive the unmeasurable differences he thought he could. Since admitting that would wreck his whole (very shrewdly judged) business model he decided instead to shelve it.

Maybe he will prove me wrong on that (I would love him to), but I'm not hopeful.
 

Blockader

Senior Member
Joined
Nov 17, 2021
Messages
324
Likes
811
Location
Denmark
He thinks he can hear those differences even under volume-matched blind testing conditions. For example, he claimed that he did volume-matched blind testing to hear differences between the X18 and X26 Pro and was able to hear some minuscule differences (and he said that he struggled to pick them apart unless a couple of specific tracks were on). And he can hear those differences because *human hearing beats the Fourier uncertainty principle*.

Human Fourier Uncertainty
 
OP

Lambda

Major Contributor
Joined
Mar 22, 2020
Messages
1,797
Likes
1,535
Project files.

Not only do they standardize a measurement suite, but they allow ANYONE to check/replicate.
Aren't Project files proprietary?

The pdf "reports" give less information than the actual data. They're really aimed at record keeping for production QC.

I don't advocate replacing what we have now with only a PDF report! It would be an addition.

The "useful" information density would go down, but the total amount of information would go up.
The review contains the "useful" selected data in a condensed form.

And as far as I understand it, what the automated report contains is very customizable.
So it would be no problem to add whatever the industry or other reviewers do to the automation, to keep comparability with their data?

For example, (component) manufacturers (almost always) specify spectral noise density, because it is directly comparable.
Not interesting for 99%*, but I see no harm in including it in an automated test.
Or: manufacturers often specify THD and noise separately, and A-weighted. Why not have an A-weighted number in the PDF as well, so it can be compared to the manufacturer's specification?

*I also estimate 98% only look at the SINAD number and read the conclusion.
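For reference, the A-weighting mentioned above is a fixed standard curve, not a reviewer's choice; a minimal Python sketch of the IEC 61672 formula, normalized so the gain at 1 kHz is roughly 0 dB:

```python
import math

def a_weight_db(f):
    """A-weighting gain in dB at frequency f (Hz), per the IEC 61672 formula."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.00  # +2.00 dB offset normalizes 1 kHz to ~0 dB

print(round(a_weight_db(1000), 2))  # 0.0
print(round(a_weight_db(100), 1))   # -19.1 (low frequencies are discounted)
```

Because the curve is fully specified by the standard, an A-weighted THD+N number is directly comparable across labs, which is exactly the appeal for a manufacturer's datasheet.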
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,570
Likes
25,449
Location
Alfred, NY
Aren't Project files proprietary?
No, anyone with APx software can read them. And when I've asked Amir for his, he provided them and they contained exactly what was expected and allowed me to replicate his test.

I have the same policy: if anyone questions my measurements and wants to examine the project files, I will provide them. Any honest reviewer would do the same thing. The PDF report is redundant and can be much more misleading because it doesn't contain the vital test parameters or the actual graphs. In my case, I use Bench Mode, so there's no report generated anyway, just actual data.
 
OP

Lambda

Major Contributor
Joined
Mar 22, 2020
Messages
1,797
Likes
1,535
No, anyone with APx software can read them
This is what I mean by proprietary: it only works with one vendor's specific software.

The pdf report is redundant
But PDF is a somewhat "open" standard that can be opened by anyone.

much more misleading because it doesn't contain the vital test parameters or the actual graphs.
Sure, it would need to be set up in the right way to capture all the relevant information.

In my case, I use the Bench Mode, so there's no report generated anyway, just actual data.
This is just an assumption:
If you review, for example, a DAC, you would first test it in Bench Mode to see if it behaves as expected.
You would maybe change the analyzer input range, the DAC volume, and maybe the dBFS output level.
But I assume for the final "screenshot" you keep the bandwidth, FFT size, window, and everything else the same
(unless you have found a specific reason not to, and mention it explicitly).

So my assumption is that you can then load some sort of proprietary "automation file" that contains all your normal settings for a DAC test, and it generates a PDF report with all the graphs you want and all the measurements you usually do in Bench Mode?
 

theREALdotnet

Major Contributor
Joined
Mar 11, 2022
Messages
1,209
Likes
2,084