
Reality CheckMate

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,736
Likes
241,876
Location
Seattle Area
What nonsense. Do these people not spend 5 minutes researching what a controlled test is? Not knowing which is which and then declaring you liked this and that is completely worthless. I guess I need to do another video on how you do a controlled test. :(
 

JSmith

Master Contributor
Joined
Feb 8, 2021
Messages
5,235
Likes
13,549
Location
Algol Perseus
It's funny that this comparison is within the same DAC line, the expensive line of course, and not against their cheaper non-multibit models.

Further, each DAC in this line uses their "proprietary digital filter" and, unlike some DACs, there is no user-selectable filter choice.

Their product site makes it clear there are three "flavours" of this DAC:
Yggdrasil is available in three different “flavors,”
  • Yggdrasil Less is More. Even better performance for lower cost. The most affordable Yggdrasil uses four TI DAC8812 16-bit D/A converters. Many think this is the best sounding flavor, hence less bits, more better…less is more.
  • Yggdrasil More is Less. The best-measuring integrated multibit DAC, ever. This Yggy uses four TI DAC11001 20-bit D/A converters. If you’re one who thinks multibit DACs can’t measure well, this one’s for you—approaching -120dB THD+N.
  • Yggdrasil OG. The Yggdrasil you’ve loved for years, same as it ever was. The original Yggdrasil with four AD5791 20-bit D/A converters remains in the line, because it provides an exceptionally engaging performance.
Besides one being a 16-bit and the other two 20-bit, they each employ a different TI DAC chip.

I believe this is for marketing purposes only and these are deliberately designed to be slightly different, each using different "filters":
"...a time- and frequency-domain optimized digital filter with a true closed-form solution"

The "blind test" is nothing but pure marketing for their pricey DAC line, hence why it's on that site.

Some of their cheaper DACs measure quite well, so if one is interested in this brand I would recommend those and ignore the fluff.



JSmith
 
OP
rebbiputzmaker

Major Contributor
Joined
Jan 28, 2018
Messages
1,099
Likes
463
What nonsense. Do these people not spend 5 minutes researching what a controlled test is? Not knowing which is which and then declaring you liked this and that is completely worthless. I guess I need to do another video on how you do a controlled test. :(
How does that invalidate the listener's opinion?
 

JeffS7444

Major Contributor
Forum Donor
Joined
Jul 21, 2019
Messages
2,371
Likes
3,559
How does that invalidate the listener's opinion?
Big problems:
  • Switch box settings may have been unknown, but they were consistent, there was no "X" option in which the source doesn't actually switch, and listeners could see which setting was being used.
  • Initial listening was done as part of a group (what is this, a social club?).
  • DUTs left in plain sight (do LEDs provide visual cues?)
Barring significant changes to the setup, they could have kept listeners isolated until their results were submitted, and randomized the switch-box input assignment between listeners.
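A minimal sketch of what that per-listener randomization could look like (Python; the three DAC labels are taken from the product blurb quoted above, everything else is assumed for illustration):

import random

DACS = ["Yggdrasil LIM", "Yggdrasil MIL", "Yggdrasil OG"]  # labels assumed for illustration

def blind_assignment():
    # Shuffle which DAC sits behind each switch-box input for one listener.
    # The administrator records the key; the listener never sees it.
    order = random.sample(DACS, k=len(DACS))
    return {f"Input {i + 1}": dac for i, dac in enumerate(order)}

# A fresh key per listener, recorded but not shown, e.g. {'Input 1': 'Yggdrasil OG', ...}
for listener in ("A", "B", "C"):
    print(listener, blind_assignment())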
 

MarkS

Major Contributor
Joined
Apr 3, 2021
Messages
1,079
Likes
1,516
How does that invalidate the listener's opinion?
Because it does not show that the actual sound heard was in any way different among the three DACs. We only know that the listener THOUGHT it was different. To SHOW that it was different, the listener would need to be able to identify which DAC was in use by sound alone. This was not done.
 

restorer-john

Grand Contributor
Joined
Mar 1, 2018
Messages
12,758
Likes
39,083
Location
Gold Coast, Queensland, Australia
How does that invalidate the listener's opinion?

It doesn't. And neither does it invalidate his preference. But what value is one person's preference? No different to any other individual's preference. Put a whole bunch of people together, who have similar preferences and maybe you get somewhere. Maybe.
 

danadam

Addicted to Fun and Learning
Joined
Jan 20, 2017
Messages
997
Likes
1,564
Do these people not spend 5 minutes researching what a controlled test is?
They would have to care. And they don't:
[attached screenshot: shiit.png]
 
OP
rebbiputzmaker

Major Contributor
Joined
Jan 28, 2018
Messages
1,099
Likes
463
It doesn't. And neither does it invalidate his preference. But what value is one person's preference? No different to any other individual's preference. Put a whole bunch of people together, who have similar preferences and maybe you get somewhere. Maybe.
Of course, in the thread they actually talk about different people having different preferences. He’s just one listener with his opinion, but it is valid. Even if the test was done differently it would still end up being one person's opinion.
 

Chrispy

Master Contributor
Forum Donor
Joined
Feb 7, 2020
Messages
7,955
Likes
6,103
Location
PNW
How does that invalidate the listener's opinion?

An individual's opinion is only so meaningful with poor comparison methods.....but he's still entitled to an opinion, albeit not particularly useful to others. If one person swears a particular food item or wine is the best and you try it do you always agree?
 
OP
rebbiputzmaker

Major Contributor
Joined
Jan 28, 2018
Messages
1,099
Likes
463
An individual's opinion is only so meaningful with poor comparison methods.....but he's still entitled to an opinion, albeit not particularly useful to others. If one person swears a particular food item or wine is the best and you try it do you always agree?
Depends, after you try it yourself you might see if your opinions align with the other person's. Someone you're familiar with might have similar sensibilities, so it could be a good guide.
 

Chrispy

Master Contributor
Forum Donor
Joined
Feb 7, 2020
Messages
7,955
Likes
6,103
Location
PNW
Depends, after you try it yourself you might see if your opinions align with the other person's. Someone you're familiar with might have similar sensibilities, so it could be a good guide.

It's a very time consuming way of going about things in audio, tho, and not particularly productive. Opinions are like assholes, everyone's got one....
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,736
Likes
241,876
Location
Seattle Area
How does that invalidate the listener's opinion?
Because it is trivial to have an opinion about anything that plays music. It is no different than you saying it will rain here tomorrow. That you have an opinion is not of any value. What has value is whether you are right the vast majority of the time. Otherwise it is a guess, and who cares about a guess?

A test like this needs to be administered by a third party, not the person scoring himself. You play A and B and see which one the tester says sounds better. You repeat this a dozen times with the sequence randomized. Then you perform a statistical analysis to see if the outcome is random or has a better than 95% chance of being correct. This is why every controlled test that is published has a ton of statistical analysis. If that is missing, you need to run, run far away from anything the person says.

Here is me passing an MP3 ABX test:

foo_abx 1.3.4 report
foobar2000 v1.3.2
2014/07/19 19:45:33

File A: C:\Users\Amir\Music\Arnys Filter Test\keys jangling 16 44.wav
File B: C:\Users\Amir\Music\Arnys Filter Test\keys jangling 16 44_01.mp3

19:45:33 : Test started.
19:46:21 : 01/01 50.0%
19:46:35 : 02/02 25.0%
19:46:49 : 02/03 50.0%
19:47:03 : 03/04 31.3%
19:47:13 : 04/05 18.8%
19:47:27 : 05/06 10.9%
19:47:38 : 06/07 6.3%
19:47:46 : 07/08 3.5%
19:48:01 : 08/09 2.0%
19:48:19 : 09/10 1.1%
19:48:31 : 10/11 0.6%
19:48:45 : 11/12 0.3%
19:48:58 : 12/13 0.2%
19:49:11 : 13/14 0.1%
19:49:28 : 14/15 0.0%
19:49:52 : 15/16 0.0%
19:49:56 : Test finished.

----------
Total: 15/16 (0.0%)

See the statistical analysis showing a 0.0% probability of being wrong? That tells you my outcome has a very high chance of being reliable.

Here is a counter example of me seeing if I can reliably detect a "grounding box" being attached to the system:

foo_abx 1.3.4 report
foobar2000 v1.3.2
2016/02/14 08:50:25

File A: C:\Users\Amir\Documents\Test Music\Entreq 2 digital\test_4_output_entreq.wav
File B: C:\Users\Amir\Documents\Test Music\Entreq 2 digital\test_4_output_no_entreq.wav

08:50:25 : Test started.
08:52:22 : 01/01 50.0%
08:52:30 : 01/02 75.0%
08:52:43 : 02/03 50.0%
08:52:51 : 02/04 68.8%
08:53:03 : 02/05 81.3%
08:53:32 : 02/06 89.1%
08:53:58 : 03/07 77.3%
08:54:12 : 03/08 85.5%
08:54:27 : 03/09 91.0%
08:54:31 : Test finished.

----------
Total: 3/9 (91.0%)

I got 3 answers right, so you may think I actually "heard" what the device did. Quick statistical analysis shows that, by missing the other 6 trials, there is a 91% chance that I was guessing. In this instance, your conclusion should be that I did not detect this box being there. Not that "I have an opinion that should be listened to."
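For reference, the percentage foobar2000 prints after each trial is just the one-sided binomial probability of scoring at least that well by coin-flip guessing. A minimal sketch (Python, not foobar2000's actual code) that reproduces the two totals above:

from math import comb

def guess_probability(correct, trials):
    # Probability of getting at least `correct` of `trials` right purely by
    # guessing (p = 0.5 per trial), i.e. the one-sided binomial tail.
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"15/16: {guess_probability(15, 16):.1%}")  # ~0.0%  -> very unlikely to be guessing
print(f"3/9:   {guess_probability(3, 9):.1%}")    # ~91.0% -> indistinguishable from guessing

Anything well above the usual 5% threshold means the result cannot be distinguished from chance.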

This type of test should have had two phases: phase 1 would be if any difference is reliably detected per above. Once that is established, then a second test for preference would then be performed.

There is just nothing here that is remotely reliable. Audiophiles routinely think A sounds better than B even though we can prove there is no difference in the sound waves coming out of a device. That he didn't know A's and B's identities is not important in this context.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,736
Likes
241,876
Location
Seattle Area
Depends, after you try it yourself you might see if your opinions align with the other person's.
Whether you do or do not, per my last post, is not material in the least. You can both be wrong, or both be right, or some other alternative. It is all chance unless you follow a protocol where reliable insight can be extracted from the testing.

The sin you are committing here is why all this subjectivist nonsense continues to exist.
 

MarkS

Major Contributor
Joined
Apr 3, 2021
Messages
1,079
Likes
1,516
I argue that non-level-matched blind tests are fine as long as the listener must adjust volume to preference starting from zero after each switch, with a control that has no visual or tactile indication of volume level. This type of test is far easier to set up, and far closer to normal "audiophile" listening evaluations. Any audiophile who claims to hear "obvious" differences between components should be able to pass such a test.
 
OP
rebbiputzmaker

Major Contributor
Joined
Jan 28, 2018
Messages
1,099
Likes
463
It might have been a blind test but not controlled.
Once I read "the steel drum was really clear at low volume" I knew it wasn't level matched and stopped reading.
That does not say anything about whether the levels were matched or not. This discussion contains way too much EEIS.
 
OP
rebbiputzmaker

Major Contributor
Joined
Jan 28, 2018
Messages
1,099
Likes
463
@rebbiputzmaker :
You're confusing two different things. One is opinion. The other is scientific fact. The nature of the two is not the same. Opinions are emotional. Facts are not. Facts are derived from rational (non-emotional) processes. Opinions are derived from emotional (non-scientific) processes.

READ THAT AGAIN: OPINIONS ARE DERIVED FROM EMOTIONAL (NON-SCIENTIFIC) PROCESSES.


Opinions are wonderful things. They help humans function efficiently in this world. Everyone has opinions. But they have no place in the scientific method.

The reason the scientific method was created was to arrive at conclusions that were accurate, devoid of emotion and devoid of the variances of opinion.

Jim Taylor.
Are we still talking about comparing DACs playing music?
 