
Can Amir, as a trained listener, spot an audible difference between DACs?

Are you interested in whether Amir, as a trained listener, can spot an audible difference between DACs?

  • Yes

    Votes: 32 36.8%
  • No

    Votes: 40 46.0%
  • Maybe

    Votes: 15 17.2%

  • Total voters
    87
OP
Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
Walk in the park.... PS Audio PerfectWave DirectStream DAC versus Matrix Audio Sabre-X MQA Pro. Outputs captured using AP analyzer:

======================

WASAPI (push) : Speakers (DX3 Pro), 24-bit
Crossfading: NO

09:37:27 : Test started.
09:37:44 : 01/01
09:37:54 : 02/02
09:38:03 : 03/03
09:38:12 : 04/04
09:38:18 : 05/05
09:38:24 : 06/06
09:38:33 : 07/07
09:38:38 : 08/08
09:38:43 : 09/09
09:38:50 : 10/10
09:38:55 : 11/11
09:39:01 : 12/12
09:39:09 : 13/13
09:39:14 : 14/14
09:39:24 : 15/15
09:39:32 : 16/16
09:39:32 : Test finished.

----------
Total: 16/16
p-value: 0 (0%)

-- signature --
bbaf4e8d400e7e010aecfa8a9da2a61e16fa2025

================

Zero chance of guessing.

Next! :D

You are probably aware that you now have to do at least one more test so this doesn't look like a vendetta against the PS (crap)DAC. :D
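
As a side note, the log above rounds the p-value display down to 0%; a minimal sketch of the exact figure for 16/16 under pure guessing, in Python:

Code:
# Each ABX trial is a 50/50 guess if you truly can't hear a difference,
# so the chance of going 16/16 by luck alone is:
p = 0.5 ** 16
print(p)  # 1.52587890625e-05, i.e. about 0.0015% -- very small, but not literally zero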
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
In a recent thread J_J was asked what SINAD would ensure equivalent performance. Going from memory, I hope I'm not misquoting, but he said something at or better than -110 dB might do it. The person questioning wanted to know about lesser levels. J_J's opinion seemed to be that, to be sure without knowing more, -110 dB was a good number, and that only if you properly characterized the error spectrum of the two bits of gear could you say so with lesser specs. I don't know that he has tested that in particular, but his seems an informed opinion.
I remember that thread too.

I'll have to do more reading to understand the background. The main thing that comes to mind is equal-loudness contours (comparative sensitivity around 3-4 kHz and the comparative quiet of homes in that range).
 
OP
Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
I remember that thread too.

I'll have to do more reading to understand the background. The main thing that comes to mind is equal-loudness contours (comparative sensitivity around 3-4 kHz and the comparative quiet of homes in that range).

How quiet, actually? Wouldn't you need to crank music to 110 dB and have practically 0 dB of home noise to notice a SINAD of -110 dB? With a single sine tone of course, as music would do the masking...

Oh yes, amp SINAD wouldn't make it any easier either.
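
A rough back-of-the-envelope sketch of that arithmetic in Python (the playback level and room-noise figures are illustrative assumptions only, and masking is ignored):

Code:
def residue_spl(peak_spl_db: float, sinad_db: float) -> float:
    """SPL of the combined noise+distortion residue for a given peak SPL and SINAD."""
    return peak_spl_db - sinad_db

peak_spl = 110.0   # assumed very loud playback peaks, dB SPL
room_noise = 35.0  # assumed quiet domestic room noise floor, dB SPL

for sinad in (90.0, 100.0, 110.0, 120.0):
    r = residue_spl(peak_spl, sinad)
    relation = "above" if r > room_noise else "below"
    print(f"SINAD {sinad:.0f} dB -> residue at {r:.0f} dB SPL ({relation} the room noise floor)")

Even at 110 dB peaks, a -110 dB residue sits near 0 dB SPL, well under the assumed room noise floor, which is the point being made above.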
 

pozz

Слава Україні
Forum Donor
Editor
Joined
May 21, 2019
Messages
4,036
Likes
6,827
How quiet, actually? Wouldn't you need to crank music to 110 dB and have practically 0 dB of home noise to notice a SINAD of -110 dB? With a single sine tone of course, as music would do the masking...

Oh yes, amp SINAD wouldn't make it any easier either.
I don't know. There are likely obvious things I'm overlooking.
 

direstraitsfan98

Addicted to Fun and Learning
Forum Donor
Joined
Oct 1, 2018
Messages
826
Likes
1,226
Now I am curious what Amir listened to in his test...
I am surprised nobody asked this.
It will make me laugh if the test signal turns out to be the one he posted earlier.
I would not be surprised if it were real music, though.
He listened to something he trained for, of course. I believe Amir said that this test is what he trained for when working at Microsoft. Well, guess what: I learned how to listen for slight variations in tempo, tone, frequency, etc. just from playing in a band for years. If I had spent hundreds of hours in front of a computer memorizing tones, I could pass this test 16/16 as well. A monkey can learn sign language. Doesn't prove a thing. That's why I voted 'No' in the topic poll, as this test is completely pointless. Too many variables at play for this test to work.

All of this is missing the point anyway, which is that people with damaged hearing should not be including subjective opinions on how something sounds in their equipment measurement reports. I've seen quite a few measurement threads of gear done by Amir where he makes his final conclusions on how something sounds based on some listening tests.

In actuality, when we really get down to it, it's less to do with Amir (or anyone his age who has age-related hearing loss) and more to do with your own cranium, your own head, your own ears and hearing things the way you do. Everyone hears things differently. So focus on the measurements. Amir does a great job on that. He doesn't need to prove or disprove that he can pass a hearing test for us to read measurements.
 

BDWoody

Chief Cat Herder
Moderator
Forum Donor
Joined
Jan 9, 2019
Messages
7,039
Likes
23,179
Location
Mid-Atlantic, USA. (Maryland)
Walk in the park.... PS Audio PerfectWave DirectStream DAC versus Matrix Audio Sabre-X MQA Pro. Outputs captured using AP analyzer:

======================

WASAPI (push) : Speakers (DX3 Pro), 24-bit
Crossfading: NO

09:37:27 : Test started.
09:37:44 : 01/01
09:37:54 : 02/02
09:38:03 : 03/03
09:38:12 : 04/04
09:38:18 : 05/05
09:38:24 : 06/06
09:38:33 : 07/07
09:38:38 : 08/08
09:38:43 : 09/09
09:38:50 : 10/10
09:38:55 : 11/11
09:39:01 : 12/12
09:39:09 : 13/13
09:39:14 : 14/14
09:39:24 : 15/15
09:39:32 : 16/16
09:39:32 : Test finished.

----------
Total: 16/16
p-value: 0 (0%)

-- signature --
bbaf4e8d400e7e010aecfa8a9da2a61e16fa2025

================

Zero chance of guessing.

Next! :D

Nor any equivocation...

Simply results.
 

BDWoody

Chief Cat Herder
Moderator
Forum Donor
Joined
Jan 9, 2019
Messages
7,039
Likes
23,179
Location
Mid-Atlantic, USA. (Maryland)
He listened to something he trained for, of course. I believe Amir said that this test is what he trained for when working at Microsoft. Well, guess what: I learned how to listen for slight variations in tempo, tone, frequency, etc. just from playing in a band for years. If I had spent hundreds of hours in front of a computer memorizing tones, I could pass this test 16/16 as well. A monkey can learn sign language. Doesn't prove a thing. That's why I voted 'No' in the topic poll, as this test is completely pointless. Too many variables at play for this test to work.

All of this is missing the point anyway, which is that people with damaged hearing should not be including subjective opinions on how something sounds in their equipment measurement reports. I've seen quite a few measurement threads of gear done by Amir where he makes his final conclusions on how something sounds based on some listening tests.

In actuality, when we really get down to it, it's less to do with Amir (or anyone his age who has age-related hearing loss) and more to do with your own cranium, your own head, your own ears and hearing things the way you do. Everyone hears things differently. So focus on the measurements. Amir does a great job on that. He doesn't need to prove or disprove that he can pass a hearing test for us to read measurements.

You do it then.
Let's see your prowess as you continue to denigrate his...

I would bet a fair amount he would beat you in any test you could come up with.

It isn't about preference...once again. That seems to be a sticking point...
 

audimus

Senior Member
Joined
Jul 4, 2019
Messages
458
Likes
462
So, would you be interested to know whether our host, as a trained listener, can spot a difference between one of the best-measured DACs (whichever is available to him) and a DAC at the low end of the green tier in a properly conducted blind test?

Or maybe even between one of the best-measured DACs and one at the low end of the orange tier? :)

Rather than asking about an arbitrary pairing, a reasonable and formal listening study done in the interests of science would compare the topmost DAC to each one in the bottom tier. Only this kind of test would establish the validity and usefulness of the SINAD table for audibility. Until then, it is only an engineering quality rating, and the buckets are quite arbitrary.

There are several types of tests:
1. Can any such pairs (topmost vs each of the bottom tier) be distinguished from each other in a statistically valid fashion?
2. If the answer to test 1 is yes, are there describable sonic quality differences that can be established in a statistically valid fashion?
3. If the answer to test 2 is yes, do the devices in each zone show up with the same description between them when tested against each other in a statistically valid fashion?
4. If the answer to test 1 is yes, how far up the rating table would you have to go before the answer is no longer yes, in a statistically valid fashion?

The default until proven otherwise is NO to test 1, and so the rest are moot.
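
As a rough illustration of the "statistically valid fashion" part of test 1, a minimal Python sketch of how many correct ABX responses out of N trials would be needed to reject pure guessing (the 0.05 threshold is just a conventional choice, not something specified in the post):

Code:
from math import comb

def p_at_least(correct: int, trials: int) -> float:
    """One-sided binomial p-value: chance of scoring `correct` or more by guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def required_correct(trials: int, alpha: float = 0.05) -> int:
    """Smallest score out of `trials` that gives p < alpha."""
    for correct in range(trials + 1):
        if p_at_least(correct, trials) < alpha:
            return correct
    return trials + 1  # alpha too small to reach with this many trials

for n in (10, 16, 20, 30):
    print(f"{n} trials: need {required_correct(n)} correct for p < 0.05")
# prints: 10 -> 9, 16 -> 12, 20 -> 15, 30 -> 20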
 

BDWoody

Chief Cat Herder
Moderator
Forum Donor
Joined
Jan 9, 2019
Messages
7,039
Likes
23,179
Location
Mid-Atlantic, USA. (Maryland)
You know what, forget I said anything. I’ll never be able to do anything good enough to satisfy you so I’m out. Have a nice life.

That's one option...

The other is to recognize that it isn't just him that it's not good enough for... it doesn't tell you anything either.
Yes... measuring levels and, yes, unsighted testing are both critical for meaningful results.
 
OP
Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
Rather than asking about an arbitrary pairing, a reasonable and formal listening study done in the interests of science would compare the topmost DAC to each one in the bottom tier. Only this kind of test would establish the validity and usefulness of the SINAD table for audibility. Until then, it is only an engineering quality rating, and the buckets are quite arbitrary.

There are several types of tests:
1. Can any such pairs (topmost vs each of the bottom tier) be distinguished from each other in a statistically valid fashion?
2. If the answer to test 1 is yes, are there describable sonic quality differences that can be established in a statistically valid fashion?
3. If the answer to test 2 is yes, do the devices in each zone show up with the same description between them when tested against each other in a statistically valid fashion?
4. If the answer to test 1 is yes, how far up the rating table would you have to go before the answer is no longer yes, in a statistically valid fashion?

The default until proven otherwise is NO to test 1, and so the rest are moot.

I agree, assuming of course that Amir accepts the idea of performing such tests.

But I also think that an arbitrary pairing, although not systematic, is equally valid and useful. Also entertaining...
 

audimus

Senior Member
Joined
Jul 4, 2019
Messages
458
Likes
462
I agree, assuming of course that Amir accepts the idea of performing such tests.

Just to clarify, I didn't mean to aim it at Amir in particular. I'm just saying such testing is necessary before the SINAD table can be validly used in relation to audibility, and until such tests are done, any ad hoc pairing listening test is not very meaningful.
 

FrantzM

Major Contributor
Forum Donor
Joined
Mar 12, 2016
Messages
4,372
Likes
7,863
It is about Science for better Audio reproduction in one's home. It is about debunking myths that have come to define the hobby of music reproduction through electronic means. It is about attracting more people to the hobby.
Amir is at the forefront. He is the leader at ASR and one of the leaders of the fight against the high-priced, low-performance BS products of many if not most High End Audio companies. That he is a trained listener is not the issue. It is his understanding and his mastery of standard measuring protocols that are his shield against the BSers. They can hate him, but they cannot invalidate his findings. That is the underlying, implicit but silent challenge of peer review: can anyone conduct the same tests, under the same or similar conditions, and show different results that invalidate his findings? You can hate him if you will, but can you show flaws in the tests, in the conclusions? You can? More power to you... and us... You can't? Then STFU!! Same goes the other way: you can reliably tell, under blind conditions, the differences between competently designed DACs? Then you are exceptional... Else? Please don't come with the lame excuse of listener fatigue. Just STFU and enjoy your overpriced stereo (most of the time) system. The real beauty of this, for a person who follows ASR, is that she/he can build an entire audio system, able to wipe the floor of a New York 42nd Street peep-show :) with your audio system, at the price of one of your power cords... :cool:.

Most people can be trained to become better listeners... some can become so-called "trained" listeners. Some may have more ability than others; e.g. a 20-year-old is likely to do better at tests with >16 kHz content than the vast majority of the geezers on ASR and the subjectivist fora... Becoming a trained listener does not make anyone knowledgeable and able to conduct standard audio tests the proper way. That is a different area.

Thus I voted: No!!
 

garbulky

Major Contributor
Joined
Feb 14, 2018
Messages
1,510
Likes
827
He listened to something he trained for, of course. I believe Amir said that this test is what he trained for when working at Microsoft. Well, guess what: I learned how to listen for slight variations in tempo, tone, frequency, etc. just from playing in a band for years. If I had spent hundreds of hours in front of a computer memorizing tones, I could pass this test 16/16 as well. A monkey can learn sign language. Doesn't prove a thing.
It does, though! It means that the two DACs are distinguishable in level-matched blind testing. It's hard to find more convincing proof that it is possible for somebody to do so - and that these DACs sound different to a human.
I'm not sure if @amirm used a piece of music or a tone. But it would be even stronger if he were able to differentiate them with a piece of music rather than a tone, as that would have implications for real-world use. If @amirm wants to let us know what he used, that would be great.

Note: the DBT cannot tell us which one sounds better. But it does provably show that they sound different in some way.
 

Speedskater

Major Contributor
Joined
Mar 5, 2016
Messages
1,639
Likes
1,360
Location
Cleveland, Ohio USA
In the past, j.j. (J.J. Johnston) has mentioned in his writings, posts, and lectures that a skilled/trained ABX tester can hear extremely small differences in volume, frequency response, or latency - differences so small that they cannot explain how A and B differ. Often these differences are caused by uncontrolled variables in the test setup rather than by differences in the units under test (UUT).

http://www.aes.org/sections/pnw/jj.htm
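
As a small, concrete illustration of the "uncontrolled variables" point, a minimal sketch (hypothetical data, assuming two equal-length mono captures) of checking how closely two files are level-matched before an ABX run, since even a fraction of a dB of mismatch can drive a positive result on its own:

Code:
import numpy as np

def _rms(x: np.ndarray) -> float:
    """Overall RMS of a capture, computed in float64."""
    return float(np.sqrt(np.mean(x.astype(np.float64) ** 2)))

def rms_level_difference_db(a: np.ndarray, b: np.ndarray) -> float:
    """Difference in overall RMS level between two equal-length captures, in dB."""
    return 20.0 * np.log10(_rms(a) / _rms(b))

# Hypothetical stand-in data: the second "capture" is just a 3% quieter copy of the first.
rng = np.random.default_rng(0)
capture_a = rng.standard_normal(48_000)
capture_b = 0.97 * capture_a

print(f"Level difference: {rms_level_difference_db(capture_a, capture_b):+.2f} dB")  # about +0.26 dB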
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,624
Location
Seattle Area
What a strange topic.
My impression too. It is like a director of a movie waking up one morning and seeing someone say, "let's see if he can act on screen!"
 

beefkabob

Major Contributor
Forum Donor
Joined
Apr 18, 2019
Messages
1,652
Likes
2,093
My impression too. It is like a director of a movie waking up one morning and seeing someone say, "let's see if he can act on screen!"

Usually it's actors who say, "I can direct!"

Or Pharrell saying, "I can do more than produce!"

But really, there's all this research about what people can and can't hear, and some of it disagrees with other bits. It's also done by a bunch of eggheads we don't know or trust personally. You, sir, have that trust. You have convinced us through your technical wizardry that you understand the technical side. You have convinced us through your various posted trials that you can hear pretty well. So we want to know: can you tell the greens apart? A mid green from a mid blue? I don't think many of us actually care whether YOU can tell them apart because we're interested in YOU personally. It's more that we don't quite have a clear picture of this research, but we trust you. If you cannot tell any 110 SINAD DAC from a 120 SINAD one, or cannot tell a 105 SINAD from a 120 SINAD, then we have a more personal sense of where the line of inaudible differences lies. Also, we don't have a shit ton of DACs lying around to compare, nor the pull with these companies to compare them.
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
16,004
Likes
36,218
Location
The Netherlands
Guys....
look at the presented data of the ABX test in post #31
and then ask yourself which test files he was alluding to.
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,678
Likes
38,775
Location
Gold Coast, Queensland, Australia
It's really easy to pick a signal or piece of music to highlight audible differences and then show how clever we are by being able to accurately distinguish one piece of gear from another.

That has been done since HiFi was invented. It's done every day in audio stores all over the world.

I'll be perfectly honest. I cannot reliably hear the difference between the world's first CD player (the Sony CDP-101) and a TOTL AU$6000 two-piece statement Marantz CD12/DA12LE player on normal musical content. And a whole bunch of my other players all sound the same. Not using test discs, not super-low-level test tones, not listening for some key artefact or noise in between tracks, or some other atypical musical content designed to tease out one flaw from another - that's just being a smart-ass and proves nothing.
 