
Master Thread: Are measurements Everything or Nothing?

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,217
Likes
3,813
@Thorsten Loesch

As the subject is often subjective here, I will write my little experience.
I'm not going to comment on numbers, initially they were very important.
I bought DACs from Gustard, SMSL and Oppo.

They are great devices, very detailed, but something is missing when listening to the music. It gets tiring after listening to it for a while.
When I bought an old Musical Fidelity DAC (I still managed to find a new one) the M1SDAC and then the iFi ones, when I heard it there was something magical for me.
It probably can't be measured, but it's what I really want to hear.

I'm very sure it can be 'measured' using the very sorts of sensory analysis methods that Mr. Thorsten selectively endorses.

Step one would be to see if it persists when you don't know whether you are listening to the MF DAC or one of the others.
 

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,217
Likes
3,813
I was stuck on the idea of refuting the central point, when as far as I have been able to make out, there hasn’t been one.

Yes it's been rather a Gish Gallop, or maybe a howitzer firing 'contrarian' bullets, or maybe a cloud of thought balloons. I haven't settled on a metaphor.

The obvious Ad Hominem attacks against @Thorsten Loesch in this thread suggest a total lack of refutation against his central point. Speaks volumes...
Am I mistaking you for someone else, or are you the guy who pops up regularly in ASR threads reporting truly *outstanding* claims of hearing differences that, by all boring old normie audio-science thinking, should be well-nigh impossible? If you are that guy, I would hardly be amazed that you'd find Mr. T a cool glass of water in the desert.


Here's a thing. When a "contrarian's" Day 1 go-to -- in any number of fields of science -- is to corral Richard Feynman to his side and start dishing out 'cargo cult' accusations, I kinda know what I'm dealing with, after decades of witnessing the play.
 

Reynaldo

Active Member
Joined
Mar 17, 2021
Messages
232
Likes
101
Location
Brazil, Blumenau SC
I'm very sure it can be 'measured' using the very sorts of sensory analysis methods that Mr. Thorsten selectively endorses.

Step one would be to see if it persists when you don't know whether you are listening to the MF DAC or one of the others.
Honestly, this is a discussion of tests and measurements that I don't take very seriously.
There is always a commercial factor behind everything. As you wrote, the show must go on.

For most of the devices I bought, I did some quick research, talked to a few friends in the hobby, and that's it.

What I believe is that the quality of any stereo goes beyond what is measured.
That's why I found Thorsten's statements very interesting, because he looks at several aspects.
 

krabapple

Major Contributor
Forum Donor
Joined
Apr 15, 2016
Messages
3,217
Likes
3,813
'Quality' can mean any number of things, and in the hands of marketers, it usually does.

'Customer satisfaction' is something that certainly 'goes beyond' mere audio performance measurements, but it too can be 'measured'. We can, using science, even explore how well CS tracks measured audio performance. (As regards loudspeakers, ask Sean Olive about doing that.) That might be something a company cares about. Or not so much, e.g., in the case of some bling-y 'high end' brands.

Mr. T seems to be, at various times, talking about customer satisfaction. At other times, not. It's presented as some mysterious secret element beyond our current ken.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,796
Likes
242,745
Location
Seattle Area
Also notable, with 100 trials and 58 correct we have a lower than 1% risk of all statistical errors and can be confident that the g*ds of statistics have been suitably mollified.
It is, but any audiophile who claims to hear a "night and day difference" in sighted tests and then gets only 58 right in a controlled test should be downright embarrassed! When taking ABX tests, I strive for nearly 100% correct answers with a vanishingly small p.

foo_abx 1.3.4 report
foobar2000 v1.3.2
2014/07/10 18:50:44

File A: C:\Users\Amir\Music\AIX AVS Test files\On_The_Street_Where_You_Live_A2.wav
File B: C:\Users\Amir\Music\AIX AVS Test files\On_The_Street_Where_You_Live_B2.wav

18:50:44 : Test started.
18:51:25 : 00/01 100.0%
18:51:38 : 01/02 75.0%
18:51:47 : 02/03 50.0%
18:51:55 : 03/04 31.3%
18:52:05 : 04/05 18.8%
18:52:21 : 05/06 10.9%
18:52:32 : 06/07 6.3%
18:52:43 : 07/08 3.5%
18:52:59 : 08/09 2.0%
18:53:10 : 09/10 1.1%
18:53:19 : 10/11 0.6%
18:53:23 : Test finished.

----------
Total: 10/11 (0.6%)

And this was in a situation where the difference, even sighted, was extremely small. If the difference is large, the person should ace such tests, so there is no need to perform any statistical analysis!

Now, if we had a random group of listeners with no claims in this regard, then sure, statistical analysis would play a strong role together with allowance of a more reasonable p value.

So, do you have such an outcome for the difference in fuses with speakers that you mentioned?
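For reference, the percentages that foo_abx prints after each trial are one-sided binomial p-values: the chance of doing at least that well by pure guessing. A minimal sketch (the function name is my own, not part of foobar2000):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: the chance of getting at least
    `correct` out of `trials` right by coin-flip guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"{abx_p_value(10, 11):.1%}")   # 0.6%, matching the foo_abx log above
print(f"{abx_p_value(58, 100):.1%}")  # a 58-of-100 result, for comparison
```

Note that 58/100 comes out well above the usual 5% threshold, which is the point about a "night and day" claimant needing to do much better than that.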
 

Rottmannash

Major Contributor
Forum Donor
Joined
Nov 11, 2020
Messages
2,996
Likes
2,647
Location
Nashville
It is but any audiophile claiming to hear "night and day difference" in sighted tests but in a controlled test, only gets 58 right should be downright embarrassed! When taking ABX tests, I strive for nearly 100% correct answer with vanishingly small p. ...
I took that test and scored terribly, basically guessing. I'm impressed you scored that high. It makes me wonder how discriminating my hearing really is...
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,796
Likes
242,745
Location
Seattle Area
BUT, FWIW, in my home system I can do that and there is silence, relative to the background noise in my relatively quiet house (quiet if the pool pump is not running).
And I have heard the opposite even when standing a few feet away from the speaker. The fact that in your one instance you can't hear something just means your speakers may be insensitive enough that it is not an issue. It is not remotely evidence of inaudibility, as plenty of people complain about noise at the tweeter. Indeed, almost all active studio monitors have hiss/noise near the tweeter. So much so that the major companies spec that noise level in dB SPL.

You also made some mistakes stating noise levels in homes. I suggest reading this article I wrote on how to properly analyze noise in rooms: https://www.audiosciencereview.com/forum/index.php?threads/dynamic-range-how-quiet-is-quiet.14/

That survey showed that home listening spaces exist that have inaudible noise despite what an SPL meter may read:

[chart attachment from the linked article]


An SPL meter is dumb: it shows levels dominated by low frequencies, which are largely inaudible. Unless you perform a spectrum analysis per the above research, you don't know how noisy your room really is as far as audibility goes.

In addition, peer reviewed research shows that directional noise from a speaker is more audible than ambient noise. And that we hear below the noise floor of the room.

All of this points to getting electronics that are as quiet as we can. Fortunately, this doesn't cost anything. Heck, it is an inverted equation right now, with "high-end" gear having far more noise than "low-end." Great for us audiophiles who follow science and engineering together with measurements. Bad news for folks who promote non-performant high-end gear, or low-priced gear that doesn't come close to the performance of the state-of-the-art gear we test, which these days doesn't cost more than a dinner out for two.
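To illustrate the point about SPL meters and low frequencies, the standard A-weighting curve (IEC 61672) discounts bass heavily; this sketch just evaluates that published curve, and is not taken from the linked article:

```python
import math

def a_weight_db(f: float) -> float:
    """A-weighting gain in dB at frequency f (Hz), per the IEC 61672 formula."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00  # +2.00 dB normalizes to ~0 at 1 kHz

# Bass that dominates an unweighted SPL reading barely registers to the ear:
for f in (31.5, 63, 125, 1000, 4000):
    print(f"{f:7.1f} Hz: {a_weight_db(f):+6.1f} dB")
```

A 31.5 Hz component is discounted by almost 40 dB, which is why a C- or Z-weighted meter reading says little about how audible a room's noise actually is.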
 

Killingbeans

Major Contributor
Joined
Oct 23, 2018
Messages
4,100
Likes
7,597
Location
Bjerringbro, Denmark.
When Chinese DAC makers release a product every few months to get a higher number on SINAD, is it good faith practice?
Many users associated this with better quality devices.

Supply and demand. The same way snake oil and "voiced" products supply other demands.

Ultra-high-SINAD products make people with FOMO happy. Snake oil makes people who love the idea of magic in the world happy. And voiced products make people happy who think of DSP and EQ as "impure" and would rather mix 'n' match in the hope of stumbling upon the fabled "synergy".

One thing I really like about these SINAD-junkie products is that they do what much of the high-end industry has been claiming to do for decades: they actually move closer to functioning as "wire with gain". Does it really matter? Not if you ask me. I believe DAC performance reached a point where human hearing couldn't appreciate further improvement a long time ago.

But I like the honesty. "So, you want products with actual high performance? Well, here you go." It's not all perfect, though. If you look closer, the new breed of DACs usually also includes a lot of the same catering to myths that Western artisan products do. But who can blame them? People want that stuff, so they deliver.
 
Last edited:
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,796
Likes
242,745
Location
Seattle Area
Another important aspect of noise is equalization. To the extent you boost a range, especially the bass, and then pull down the overall level to make sure there is no clipping, you have just lost a good few dB of noise margin. You have to turn up your amplifier volume to compensate, hence the loss.
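The bookkeeping here is simple worst-case arithmetic; a minimal sketch (the helper `preamp_for_eq` is my own illustration, not from any particular EQ tool):

```python
def preamp_for_eq(band_gains_db: list[float]) -> float:
    """Worst-case preamp offset (dB) so no EQ band can push a
    full-scale signal into clipping: back off by the largest boost."""
    return -max(0.0, max(band_gains_db))

# A +6 dB bass shelf costs 6 dB of level, and hence roughly 6 dB of noise
# margin, since the amplifier volume must come up by 6 dB to compensate:
gains = [6.0, 3.0, 0.0, -1.5, -2.0]
print(preamp_for_eq(gains))  # -6.0
```

This is conservative: real program material rarely has full-scale energy in the boosted band, so some tools apply less attenuation, but the worst-case rule guarantees no clipping.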
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,796
Likes
242,745
Location
Seattle Area
When Chinese DAC makers release a product every few months to get a higher number on SINAD, is it good faith practice?
Many users associated this with better quality devices.
They have many reasons to release new DACs:

1. The factory fire at AKM forced a complete redesign of every DAC in their lineup. Many had just switched over from ESS, only to have to go back to that well again.

2. Their retailers demand fresh new products at every price point. And they want both combo DAC+HP amp and stand-alone DAC + amp.

3. While they are at it, they work on optimizing noise and distortion. Do we want to penalize them for this? Why, oh why?
 

pma

Major Contributor
Joined
Feb 23, 2019
Messages
4,626
Likes
10,829
Location
Prague
foo_abx 1.3.4 report
Have you also tried the test with a newer Foobar version? There are some important differences:

1) you do not see the result until you finish all trials, i.e. you see your probability score no sooner than after the last trial,
2) the test gives you a checksum that can be verified by anyone else for validity.

Due to (1), it is much more difficult than it was with the old version you used in your test, speaking from experience.
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,583
Likes
25,472
Location
Alfred, NY
Yes, or looked at another way, it can be a question that should be answered.
If there's anything there, bring evidence. If there's not, it's your brain fart and really not up to anyone else to lift a finger about.

Evidence. That's where it starts. Not ignorant handwaving or playing make-believe.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,796
Likes
242,745
Location
Seattle Area
Have you also tried the test with a newer Foobar version? There are some important differences:

1) you do not see the result until you finish all trials, i.e. you see your probability score no sooner than after the last trial,

Yes, and the change they made is completely improper. It came about after I passed these tests. They ran off and took out the interim reports, which is totally wrong.

When differences shrink, they routinely become very specific to a small section of the music, often one second or even less! Finding these segments in a full 3+ minute track is impossible without feedback from the tool that you are on the right track. Our goal with blind tests should not be to make it hard to get to the truth. The fact that I found a segment and passed the ABX test is a good thing, not a bad thing. Impeding my ability to do so goes against being unbiased in finding whether these differences exist.

For a constant difference in the entire clip, the new version is fine. But otherwise, totally wrong.

2) the test gives you a checksum that might be verified by anyone else for validity.
This one I don't mind and have published tests using it.

foo_abx 2.0 beta 4 report
foobar2000 v1.3.5
2015-01-05 20:26:27

File A: On_The_Street_Where_You_Live_A2.mp3
SHA1: 21f894d14e89d7176732d1bd4170e4aa39d289a3
File B: On_The_Street_Where_You_Live_A2.wav
SHA1: 3f060f9eb94eb20fc673987c631e6c57c8e7892f

Output:
DS : Primary Sound Driver

20:26:27 : Test started.
20:27:01 : 01/01
20:27:09 : 02/02
20:27:16 : 03/03
20:27:22 : 04/04
20:27:28 : 05/05
20:27:34 : 06/06
20:27:40 : 06/07
20:27:51 : 07/08
20:28:01 : 08/09
20:28:09 : 09/10
20:28:09 : Test finished.

----------
Total: 9/10
Probability that you were guessing: 1.1%

-- signature --
7a3d0c1aaaf8321306ff6cfdd1f91ff68f828a54
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,483
Likes
18,547
Location
Netherlands
When differences shrink, they routinely become very specific to small section of the music, often one second or even less! To find these segments in a full 3+ minute track is impossible without feedback from the tool that you are on the right track. Our goal with blind tests should not be to make it hard to get to the truth. The fact that I found a segment and passed the ABX test is a good thing, not a bad thing.
It's supposed to be a double-blind test. The fact that you can see how well you did during the test seems like cheating.

In school, the teacher also doesn't grade you while you're taking the test, which would let you anticipate the next answers based on knowing whether you were right or wrong before. Obviously that makes it easier.
To impede my ability to do so goes against us being unbiased in finding whether these differences exist.
You're supposed to do this on your own, just like you had to take your tests all on your own. You claim audible differences; you'll pick them out without any help. Worlds of difference should be no problem, right? Obviously, I'm not talking about you specifically ;). And yes, it will be harder when differences get smaller. But give it a few more rounds, and you'll still figure it out.

I can also understand your point of view: you come from the world of codecs, and there it is vital to identify where the audible differences are to be found. The old way would make that easier for sure. As a working principle, it's more efficient as well.
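For what it's worth, the statistical worry behind interim feedback has a standard name: optional stopping. A small simulation (an illustrative sketch with parameters I chose myself, not a claim about any specific test in this thread) shows that a pure guesser who treats any interim p < 0.05 as a pass gets flagged "significant" far more often than the nominal 5%:

```python
import random
from math import comb

def p_at_least(correct: int, trials: int) -> float:
    """One-sided binomial p-value under pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def guessing_rates(max_trials: int = 16, alpha: float = 0.05,
                   sims: int = 10_000, seed: int = 1):
    """Fraction of pure-guessing runs flagged significant when the listener
    peeks after every trial, vs. only looking at the final score."""
    rng = random.Random(seed)
    peek_hits = final_hits = 0
    for _ in range(sims):
        correct = 0
        peeked = False
        for t in range(1, max_trials + 1):
            correct += rng.random() < 0.5
            if p_at_least(correct, t) < alpha:
                peeked = True   # an interim look would have shown p < alpha
        peek_hits += peeked
        final_hits += p_at_least(correct, max_trials) < alpha
    return peek_hits / sims, final_hits / sims

peek, final = guessing_rates()
print(f"peeking: {peek:.1%}, final-only: {final:.1%}")
```

This models a listener who stops when the interim score looks good, which is distinct from amirm's stated use of interim feedback (locating a revealing segment); the simulation only quantifies the former concern.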
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,483
Likes
18,547
Location
Netherlands
Why? The test is still double blind.
Is it? Doesn’t knowing the results bias you? Because next time you know what to look for. You have been influenced by the test itself.
 

SIY

Grand Contributor
Technical Expert
Joined
Apr 6, 2018
Messages
10,583
Likes
25,472
Location
Alfred, NY
Is it? Doesn’t knowing the results bias you? Because next time you know what to look for. You have been influenced by the test itself.
Knowing what to look for is a feature, not a bug. You're still just using your ears, you just know what to listen for.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,796
Likes
242,745
Location
Seattle Area
Is it? Doesn’t knowing the results bias you? Because next time you know what to look for. You have been influenced by the test itself.
That is a constant in either case. Getting the results at the end only serves to substantially slow me down compared to getting interim results.
 