
Bait and switch issue? [POLL]

Is there a bait and switch issue? (you can select up to 2 choices)

  • Yes

  • No

  • Maybe Yes

  • Maybe No

  • Others

  • I would like Amir to do an ASR gathering to further investigate


Status: Not open for further replies.
Pdxwayne (OP, Major Contributor)
What filter settings are you using on these DACs... the same, comparable, or different?


JSmith
The detailed settings, including filters, are available in the table in this post:

 

RayDunzl (Grand Contributor)
Yes, but with speakers placed at different distances from the listener, say a 5-foot difference, the result is that one speaker is louder than the other. You are not going to hear a delay where one speaker is audible and then the other right after it. The two speakers are still sensed as "in time," but at different volumes/sound pressures. Does that make sense?

Sometimes something "makes sense" but doesn't hold up so well when tried.

Put on a pair of headphones and delay one side.

Play music - the stereo image moves toward the un-delayed side, just as it does with speakers, here, for me.

Play a mono click track - the phantom center moves toward the un-delayed side; more delay makes it seem as if the delayed side is not making sound at all, until finally you begin to hear a doubling of the click. The SPL at both ears remains the same.

Panning a sound with delay is much more effective than panning with a volume difference. It makes a "richer" sound in my head, as opposed to just louder on one side. My ear/brain hears cancellation of source frequencies to "locate" the sound with delay; no cancellation occurs with a volume pan.

That's the result of my experimentation.

YMMV, of course.

PS: This reference mentions a 1.1 ms delay creating a 30-degree off-center hearing impression.
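The experiment above is easy to reproduce. As a minimal sketch (the file name, tone frequency, and duration are my own choices, not from the thread), the following Python writes a stereo WAV whose right channel lags the left by the 1.1 ms the reference associates with a roughly 30-degree image shift:

```python
import math
import struct
import wave

RATE = 44100                         # CD-rate output
DELAY_MS = 1.1                       # delay the reference ties to ~30 degrees
delay = int(RATE * DELAY_MS / 1000)  # ~48 samples

n = RATE // 2                        # half a second of test signal
tone = [0.5 * math.sin(2 * math.pi * 1000 * i / RATE) for i in range(n)]

left = tone + [0.0] * delay          # undelayed channel
right = [0.0] * delay + tone         # delayed channel, same level

with wave.open("delay_pan.wav", "w") as w:
    w.setnchannels(2)
    w.setsampwidth(2)                # 16-bit samples
    w.setframerate(RATE)
    w.writeframes(b"".join(
        struct.pack("<hh", int(l * 32767), int(r * 32767))
        for l, r in zip(left, right)))
```

Played over headphones, the image should pull toward the left (undelayed) channel even though both channels carry identical levels.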
 

Jimbob54 (Master Contributor)
I strongly suspect that, in the case of that online timing test, there is some interaction between the browser, Windows audio settings, and the different DACs used that creates the different results observed. If I try to play the files using Chrome on my phone, they glitch as often as they play properly.

I don't think that's a sound footing on which to base any conclusions about playback equipment.
 

captainbeefheart (Active Member)
I strongly suspect that, in the case of that online timing test, there is some interaction between the browser, Windows audio settings, and the different DACs used that creates the different results observed. If I try to play the files using Chrome on my phone, they glitch as often as they play properly.

I don't think that's a sound footing on which to base any conclusions about playback equipment.

I was doing the same thing and noticed that too; I just thought it was my particular computer/network. This is also a very likely explanation; often the simplest answer is the correct one.
 

danadam (Addicted to Fun and Learning)
I strongly suspect that, in the case of that online timing test, there is some interaction between the browser, Windows audio settings, and the different DACs used that creates the different results observed. If I try to play the files using Chrome on my phone, they glitch as often as they play properly.

I don't think that's a sound footing on which to base any conclusions about playback equipment.
AFAICT, the sync file has a 44,100 Hz sampling rate and the delayed ones have 48,000 Hz. I suspect this can cause glitches when switching between them.

I already asked about it on the page; I'll let you know when I get an answer.
 
Pdxwayne (OP, Major Contributor)
All, logically, if @Jimbob54's suspicion is correct and it is only the interaction of the browser, Windows audio settings, and different DACs, then pairing the DAC with any HPA would give the same reaction, right?

You can check my table at
to see that this is not the case.

Using a different HPA, especially the H16, gives different clues too. I wonder why?
 
Pdxwayne (OP, Major Contributor)
I strongly suspect that, in the case of that online timing test, there is some interaction between the browser, Windows audio settings, and the different DACs used that creates the different results observed. If I try to play the files using Chrome on my phone, they glitch as often as they play properly.

I don't think that's a sound footing on which to base any conclusions about playback equipment.
I wonder about the glitches when using a phone too.

Using Chrome on my Windows 10 laptop is a lot more stable in terms of giving consistent clues.

As you can see in my video in

I could do 20/20 in a single take; there were no retries due to glitches.
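For context, a 20/20 run is essentially impossible by guessing alone: the chance is 0.5^20, about one in a million. A quick binomial sketch:

```python
from math import comb  # Python 3.8+

def p_at_least(k, n, p=0.5):
    """Probability of k or more correct out of n trials by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

print(p_at_least(20, 20))  # 0.5**20, roughly 9.5e-07
```

So the real question in this thread is not whether the score happened, but what caused the audible cue.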
 

Jimbob54 (Master Contributor)
I wonder about the glitches when using a phone too.

Using Chrome on my Windows 10 laptop is a lot more stable in terms of giving consistent clues.

As you can see in my video in

I could do 20/20 in a single take; there were no retries due to glitches.
I actually don't doubt you can discern what you say. I've never doubted your hearing. It's just a bit of a stretch to conclude what you want to conclude. And I don't know how we got from your various tests to a suggestion that we are dealing with golden samples and parts swapping.

And I take your point that, logically, even if there is some oddness from the interplay between the DAC, Windows, and the browser, that SHOULDN'T change if the only component changed between runs is the headphone amp, provided volumes are matched.
 
Pdxwayne (OP, Major Contributor)
I actually don't doubt you can discern what you say. I've never doubted your hearing. It's just a bit of a stretch to conclude what you want to conclude. And I don't know how we got from your various tests to a suggestion that we are dealing with golden samples and parts swapping.

And I take your point that, logically, even if there is some oddness from the interplay between the DAC, Windows, and the browser, that SHOULDN'T change if the only component changed between runs is the headphone amp, provided volumes are matched.
Hmm, I wonder what else I could conclude...

Assuming that all my devices still measure as transparent per Amir, and assuming that the manufacturers did not bait and switch, why are there differences in at least 6 of the 9 devices I bought?

The d30pro issue is still unsolved.
The SMSL HO200 difference is still a mystery.
The timing-test clue differences are still unsolved...

Can I conclude instead that Amir's measurements are incomplete, and thus can't be used to claim the devices are truly transparent? But what other important measurements are missing?
 

solderdude (Grand Contributor)
Can I conclude instead that Amir's measurements are incomplete, and thus can't be used to claim the devices are truly transparent? But what other important measurements are missing?

Most measurements/reviews are incomplete in the sense that more aspects could be measured.
Some measurements are missing, which can be for lots of reasons.
A full characterization of a device cannot be summed up in one number or a few measurements.
What those measurements can do is show technical performance: something measures well or it doesn't.
Drawing conclusions based on measurements requires certain knowledge.

When you claim there are differences, it is not up to someone else to prove you wrong or right. You can only 'prove' things to yourself, and even then it depends on how 'correctly' those tests are done.
I mean, even magicians seem to pull off the impossible with people standing right in front of them, so if one wants to, one can make others believe anything.

Don't worry, I think you are very serious about this... a bit too serious.

If you want a meet with Amir I would not do it publicly but PM him instead.
 

Jimbob54 (Master Contributor)
Hmm, I wonder what else I could conclude...

Assuming that all my devices still measure as transparent per Amir, and assuming that the manufacturers did not bait and switch, why are there differences in at least 6 of the 9 devices I bought?

The d30pro issue is still unsolved.
The SMSL HO200 difference is still a mystery.
The timing-test clue differences are still unsolved...

Can I conclude instead that Amir's measurements are incomplete, and thus can't be used to claim the devices are truly transparent? But what other important measurements are missing?
I have no idea. I don't think you can conclude anything based on what you have to date. I would have thought that getting reliable measurements from each of your chains and establishing whether they show anything would be a start. You seem to be in the unenviable position of having a perceived itch you can't scratch.

Well, actually, you can conclude anything you like, but I think you want to prove something to the wider world.

EDIT: @solderdude said it better above.
 
Pdxwayne (OP, Major Contributor)
Most measurements/reviews are incomplete in the sense that more aspects could be measured.
Some measurements are missing, which can be for lots of reasons.
A full characterization of a device cannot be summed up in one number or a few measurements.
What those measurements can do is show technical performance: something measures well or it doesn't.
Drawing conclusions based on measurements requires certain knowledge.

When you claim there are differences, it is not up to someone else to prove you wrong or right. You can only 'prove' things to yourself, and even then it depends on how 'correctly' those tests are done.
I mean, even magicians seem to pull off the impossible with people standing right in front of them, so if one wants to, one can make others believe anything.

Don't worry, I think you are very serious about this... a bit too serious.

If you want a meet with Amir I would not do it publicly but PM him instead.
I can prove it to others too. Amir just needs to be my helper when I do the blind test.
; )

Amir already publicly said that what I found is not important enough for him to spend time on, as he is too busy... Hmm... So 6 out of 9 items have issues/differences, all bought because of his recommendations, and he is too busy to investigate or comment further?

Also publicly, he said that if I pay him $200 per device, he can measure my devices. So, if I want to check that my Topping and Gustard combos are working the same as his golden samples, I will need to pay him $800... Well...
 

solderdude (Grand Contributor)
I would have PM'ed Amir about what you want. Instead, you forced his hand openly, which was what I was hinting at.
Openly daring or challenging Amir never works out well. I am sure you know that.

I am sure you don't need Amir to do a proper blind test. The keyword being proper.
Personally, I have bought nothing on anyone's recommendation.
I look at the available data here and elsewhere, factor in the price, and buy what I need.
Measurements of the E30 made me buy one, though.
I can't hear any differences from the other DACs I own when tested blind. My ears are old, though, and I have no 'exotic' DACs.
 

RHO (Major Contributor)
I can prove it to others too. Amir just needs to be my helper when I do the blind test.
; )

Amir already publicly said that what I found is not important enough for him to spend time on, as he is too busy... Hmm... So 6 out of 9 items have issues/differences, all bought because of his recommendations, and he is too busy to investigate or comment further?

Also publicly, he said that if I pay him $200 per device, he can measure my devices. So, if I want to check that my Topping and Gustard combos are working the same as his golden samples, I will need to pay him $800... Well...
Maybe if someone is willing to lend you their X16-H16 combo, you could check whether you can find a difference between your units and theirs?
Run the same tests with both devices, blind (not knowing if it is your X16/H16 or theirs) and see if the results differ.
You could also run a blind comparison between both stacks if you do find differences during your timing tests.
If you find differences during the timing tests but cannot tell the stacks apart, do those timing tests even matter?
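A simple way to run the blind protocol RHO describes is to have a helper follow a randomized schedule while the listener only logs guesses. A minimal sketch (the function names and trial count are my own, not from the thread):

```python
import random

def make_trial_schedule(n_trials, labels=("A", "B"), seed=None):
    """Randomized presentation order for a blind A/B test.
    The helper keeps this key hidden; the listener only logs guesses."""
    rng = random.Random(seed)
    return [rng.choice(labels) for _ in range(n_trials)]

def score(key, guesses):
    """Count correct identifications."""
    return sum(k == g for k, g in zip(key, guesses))

# The helper plays whichever stack key[i] names on trial i;
# afterwards, compare the listener's written guesses against the key.
key = make_trial_schedule(20, seed=42)
```

With 20 trials, 15 or more correct has roughly a 2% chance under pure guessing, which is the usual significance threshold people here will accept.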
 
Pdxwayne (OP, Major Contributor)
Maybe if someone is willing to lend you their X16-H16 combo you could check if you can find a difference between your units and the other ones?
Run the same tests with both devices, blind (not knowing if it is your X16/H16 or theirs) and see if the results differ.
You could also run a blind comparison between both stacks if you do find differences during your timing tests.
If you find differences during the timing tests but cannot tell the stacks apart, do those timing tests even matter?
Some people are already doubting that I can sense a difference between the Topping and Gustard combos. They want blind tests between those combos, which unfortunately I can't do.

Sending me another pair of Gustards won't really convince the same crowd unless I can do a proper blind test, which I can't do anytime soon.

In addition, people will still doubt even if I manage to do a proper blind test at home. One member already told me he doesn't trust anything done at home.
: (

That was why I suggested doing blind tests at Amir's place. That way, I hope, people doubt less.
; )
 

RHO

Major Contributor
Joined
Nov 20, 2020
Messages
1,040
Likes
905
Location
Belgium
Some people are already doubting that I can sense a difference between the Topping and Gustard combos. They want blind tests between those combos, which unfortunately I can't do.

Sending me another pair of Gustards won't really convince the same crowd unless I can do a proper blind test, which I can't do anytime soon.

In addition, people will still doubt even if I manage to do a proper blind test at home. One member already told me he doesn't trust anything done at home.
: (

That was why I suggested doing blind tests at Amir's place. That way, I hope, people doubt less.
; )
I would try to do a blind test between the Topping and Gustard first. I assumed you already did that. That could already eliminate the usefulness of those timing tests... or validate them a little more.

You won't be able to convince everyone. We can't even convince some that measurements are meaningful. Why would you think you could get 100% of ASR on board with your experiments and findings?

It is clear these things are important to you, and I can see why. I don't doubt you can pass these tests you conduct. I think it's a good thing you are trying to find out why there are differences. But don't get frustrated over them. Take your time and conduct more tests if/when the opportunity arises.

Why is it hard to do a blind test at home?
 
Pdxwayne (OP, Major Contributor)
.....
Why is it hard to do a blind test at home?
I already did blind tests between DACs twice at home, with help from my wife and daughter...

My wife gave a firm NO to another blind test when I asked her a short while back. So she can't be bothered.

My daughter is busy with her high-school-senior end-of-year stuff, including preparing for IB exams, etc., so I won't bother her. Maybe sometime in the summer is possible, if she doesn't balk at all the steps needed to switch between combos...

; )
 

captainbeefheart (Active Member)
There were never any step-response tests done in the review. We are dealing with percussive instruments in the tracks, so it's possible you are hearing an issue with gear that cannot respond well to very small rise times and/or is marginally stable. Testing step response into a purely resistive load, then adding some reactance, can tell a lot about how an amplifier will behave.

The testing done by Amir is very thorough, but I feel that not doing step-response testing misses a pretty important property of any control system: its stability and its ability to reach one steady state when starting from another.

So the testing is actually incomplete, and there are more tests that could tell us very important information.
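To illustrate what step-response testing reveals, here is a sketch (a textbook second-order model, not Amir's procedure or any measured device) showing how a system's damping alone sets its overshoot and ringing:

```python
import math

def step_response(zeta, wn, t):
    """Unit step response of H(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2),
    valid for the underdamped case 0 < zeta < 1."""
    wd = wn * math.sqrt(1 - zeta ** 2)          # ringing frequency
    decay = math.exp(-zeta * wn * t)
    return 1 - decay * (math.cos(wd * t)
                        + zeta / math.sqrt(1 - zeta ** 2) * math.sin(wd * t))

def overshoot(zeta):
    """Fractional peak overshoot; it depends only on the damping ratio."""
    return math.exp(-math.pi * zeta / math.sqrt(1 - zeta ** 2))

print(round(100 * overshoot(0.2), 1))  # lightly damped: 52.7 % overshoot
print(round(100 * overshoot(0.7), 1))  # well damped: 4.6 % overshoot
```

A marginally stable amplifier behaves like the lightly damped case: a sharp transient provokes visible overshoot and ringing that a sine-based measurement can easily miss.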
 
Pdxwayne (OP, Major Contributor)
There were never any step-response tests done in the review. We are dealing with percussive instruments in the tracks, so it's possible you are hearing an issue with gear that cannot respond well to very small rise times and/or is marginally stable. Testing step response into a purely resistive load, then adding some reactance, can tell a lot about how an amplifier will behave.

The testing done by Amir is very thorough, but I feel that not doing step-response testing misses a pretty important property of any control system: its stability and its ability to reach one steady state when starting from another.

So the testing is actually incomplete, and there are more tests that could tell us very important information.
Regarding the timing-test clues, I wonder which kind of clue would tell me that there is a step-response issue...

Use the chart here as reference:

Should I assume the Gustard H16 is actually doing what it should and the Topping L30 and Sabaj A10h are not?
 

captainbeefheart (Active Member)
Should I assume the Gustard H16 is actually doing what it should and the Topping L30 and Sabaj A10h are not?

Tough to say without any evidence. The best way is to compare input/output step responses with square waves of different frequencies and amplitudes, and measure the difference in rise times to see which is faster and whether any anomalies like overshoot or ringing are present.
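As a sketch of the measurement being described (the sampling rate and the RC example below are my assumptions, not thread data), here is a 10%-90% rise-time estimate from a captured step:

```python
import math

def rise_time(samples, dt, lo=0.1, hi=0.9):
    """10%-90% rise time of a captured step; the final value is taken
    from the last sample, so capture well past settling."""
    final = samples[-1]
    t_lo = t_hi = None
    for i, v in enumerate(samples):
        if t_lo is None and v >= lo * final:
            t_lo = i * dt
        if v >= hi * final:
            t_hi = i * dt
            break
    return t_hi - t_lo

# Sanity check on a first-order RC step sampled at 1 MHz (values assumed):
dt = 1e-6                # 1 us per sample
tau = 1e-4               # 100 us time constant
cap = [1 - math.exp(-i * dt / tau) for i in range(2000)]
print(rise_time(cap, dt))  # theory says ln(9)*tau, about 2.2e-4 s
```

Running the same routine on input and output captures of the amplifier under test would show both the rise-time difference and (from the raw samples) any overshoot or ringing.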
 