
Blind test: we have a volunteer!!!

Status
Not open for further replies.

BrEpBrEpBrEpBrEp

Active Member
Joined
May 3, 2021
Messages
201
Likes
245
:rolleyes:

Of all the peoples I've visited, on six continents, the ones who reminded me the most of Americans, *by far*, are Aussies.
It's 2121, and the world is ash. After WW3 erupted between the USA and Australia in the great ASR blind test off-topic controversy of 2021, there's nothing left... But now, we have the technology. Using only the highest quality audiophile silver cables, we've developed a method to travel in time. We must go back, and prevent this great tragedy from ever occurring.
 

Blaspheme

Senior Member
Joined
Apr 14, 2021
Messages
461
Likes
515
Going back in time then ... [TARDIS noises] ... back at post #298 GO asked for clarification on the test idea:
Beyond correctly identifying the devices I'm not sure what you'd want me to do differently to identify something you don't believe exists.
For two dozen pages now people have been discussing blind test details, procedures and gotchas. The 'objectionable' observations in GO's video can't be proven directly via the methods discussed: GO can use them in a blind experiment, but can't prove they were the reason he detected difference (if he does so) to a third party. Logically, I assumed detecting difference would be a proxy for the observations GO described in the subjective vernacular. I assume I'm not the only one. Because what would be the point of this discussion otherwise?

If the headline challenge is non-rhetorical, the question needs to be addressed.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,597
Likes
239,670
Location
Seattle Area
Going back in time then ... [TARDIS noises] ... back at post #298 GO asked for clarification on the test idea:

For two dozen pages now people have been discussing blind test details, procedures and gotchas. The 'objectionable' observations in GO's video can't be proven directly via the methods discussed: GO can use them in a blind experiment, but can't prove they were the reason he detected difference (if he does so) to a third party. Logically, I assumed detecting difference would be a proxy for the observations GO described in the subjective vernacular. I assume I'm not the only one. Because what would be the point of this discussion otherwise?

If the headline challenge is non-rhetorical, the question needs to be addressed.
They can and have been done in research for speakers.
 

Blaspheme

Senior Member
Joined
Apr 14, 2021
Messages
461
Likes
515
They can and have been done in research for speakers.
Citation please (really, I'm interested, and it could help address this question).

In this context, how is it done for amps? Assuming difference-as-proxy wasn't what you had in mind.
 
Last edited:

DimitryZ

Addicted to Fun and Learning
Forum Donor
Joined
May 30, 2021
Messages
667
Likes
342
Location
Waltham, MA, USA
It seems that the debate has widened from "can similarly measuring devices sound different" to "are these subjective differences similar in a population of observers."

The first question has been debated for decades and is typically tested by DBTs, though those have methodological problems beyond purely technical ones. Since the definition of "similarly measuring" can also be squishy, the general consensus seems to be "sometimes."

The second question has no accepted test methodology, but presumably it would require a statistically significant number of people doing both DBTs and independent subjective reviews, followed by careful linguistic analysis to compare the subjective impressions. By definition, GO can't provide this research, and certainly can't do it alone.

The second question can be rephrased as "does subjective audio review have a right to exist?" As someone with feet in both the objective and subjective camps, I would answer yes. Perhaps on ASR, the more accepted answer is Hell NO!
 

symphara

Addicted to Fun and Learning
Joined
Jan 24, 2021
Messages
632
Likes
592
The second question can be rephrased as "does subjective audio review have a right to exist?" As someone with feet in both the objective and subjective camps, I would answer yes. Perhaps on ASR, the more accepted answer is Hell NO!
I think this is an exaggeration. I often notice in Amir’s reviews a section dedicated to listening tests, which is purely subjective. As in, his opinion.

DBTs are just very hard to set up. It’s much more than one knowledgeable guy with expensive equipment.

It’s not that Amir or ASR are against them, but you need a complex procedure, participants, equipment etc.
 

Blaspheme

Senior Member
Joined
Apr 14, 2021
Messages
461
Likes
515
It seems that the debate has widened from "can similarly measuring devices sound different" to "are these subjective differences similar in a population of observers."

The first question has been debated for decades and is typically tested by DBTs, though those have methodological problems beyond purely technical ones. Since the definition of "similarly measuring" can also be squishy, the general consensus seems to be "sometimes."

The second question has no accepted test methodology, but presumably it would require a statistically significant number of people doing both DBTs and independent subjective reviews, followed by careful linguistic analysis to compare the subjective impressions. By definition, GO can't provide this research, and certainly can't do it alone.

The second question can be rephrased as "does subjective audio review have a right to exist?" As someone with feet in both the objective and subjective camps, I would answer yes. Perhaps on ASR, the more accepted answer is Hell NO!
I did notice that drift appearing. It's outside the scope of the original challenge.
 

DimitryZ

Addicted to Fun and Learning
Forum Donor
Joined
May 30, 2021
Messages
667
Likes
342
Location
Waltham, MA, USA
I think this is an exaggeration. I often notice in Amir’s reviews a section dedicated to listening tests, which is purely subjective. As in, his opinion.

DBTs are just very hard to set up. It’s much more than one knowledgeable guy with expensive equipment.

It’s not that Amir or ASR are against them, but you need a complex procedure, participants, equipment etc.
I understand the difficulty.

However, this thread was started and titled on the premise that such a test is possible.

But with a considerable sum on the line, objections arose that are very difficult to address. Further, the level of proof was specified such that no single person could meet it.

That's why I have been suggesting that a sincere apology be made/accepted and the bet rescinded (you two know who you are).

Then a collegial and fun test framework can be established and two charities named.

The event can be advertised as "Scientific Fundraising" and serious money can be raised for good causes.
 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,335
Likes
6,702
I did not misunderstand. I just wanted to clarify that it is not only a simple A/D conversion as you stated. Amir's proposed online test includes an additional D/A conversion and an amplifier variable.

Gotcha, I probably should have said it differently, but I assumed those extra steps were implied, given that we have to convert back to analog to hear it. At least until we can stream a digital signal directly to our brains ;).
 

richard12511

Major Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
4,335
Likes
6,702
including GO. I am sure they exaggerate these perceived subtle differences to convey a point and make their reviews more entertaining.

If that's true, then I agree that Amir's proposed test has little value. I'm not sure that's true, though. From reading GO's posts and watching his videos on YouTube for a while now, I get the sense that he really believes he hears significant differences between components. Who knows, maybe he really does.
 

Sharur

Senior Member
Joined
Apr 10, 2021
Messages
476
Likes
214
Here is the problem: I need to verify the two choices don't have any objective difference. I don't have an AHB2, but I do have a Topping A90, Atom, etc. Otherwise people will wonder what an AHB2 does to a high-impedance headphone.
What does an AHB2 do to high-impedance headphones besides having loads of power and a relatively flat frequency response?
 

DimitryZ

Addicted to Fun and Learning
Forum Donor
Joined
May 30, 2021
Messages
667
Likes
342
Location
Waltham, MA, USA
If that's true, then I agree that Amir's proposed test has little value. I'm not sure that's true, though. From reading GO's posts and watching his videos on YouTube for a while now, I get the sense that he really believes he hears significant differences between components. Who knows, maybe he really does.
Going by my own example over 3 decades in this hobby, we do tend to exaggerate differences.

Well-designed equipment of similar specifications does sound similar. But in this perfectionist hobby, we are encouraged to pay good money for small benefits. So, as a form of validation, we look for these small differences, and when we do hear small improvements (which *only* cost $X), we are inclined to declare them much larger than they really are.

Having said the above, "proving" the reality of your experience to another human being is a notoriously difficult proposition. Perhaps we should reread Thomas Aquinas to see if this is something he tackled.

:)
 

Thomas_A

Major Contributor
Forum Donor
Joined
Jun 20, 2019
Messages
3,461
Likes
2,448
Location
Sweden
Just do a simple recording, DA-AD, and compare it to the original in an ABX test. If there is a difference, then make further tests to rule out the AD stage.
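A forced-choice ABX run like that is usually scored with a simple one-sided binomial test: how likely is this many correct identifications by pure guessing? A minimal sketch, where the 16-trial count and the 0.05 threshold are illustrative assumptions rather than anything agreed in this thread:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: probability of getting at least
    `correct` out of `trials` right by guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative run: 12 correct identifications in 16 ABX trials.
p = abx_p_value(12, 16)
print(f"p = {p:.4f}")                    # below the usual 0.05 threshold
print("audible difference?", p < 0.05)
```

If the listener beats the threshold against the original, further trials isolating the AD stage (as suggested above) would follow the same scoring.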
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,597
Likes
239,670
Location
Seattle Area
Citation please (really, I'm interested, and it could help address this question).
You set up a blind test, switch speakers, and ask the tester to give an overall score plus one for each aspect: bass, mid-range, and treble. Here is an example from a Sean Olive test of different EQ systems:

[Attached image: Harman EQ Preferences.PNG]


As you can see, listeners rated different frequency bands, and their ratings were then shown as a mean with an error bar (distribution). Our blogger split the performance of the amplifier into bass, mid-range, and treble as well, so it fits this methodology. He would simply repeat his testing, except this time it would be blind. We would then have others in the room take the same test and provide similar scores. A statistical analysis then shows whether the results are significant, what the distribution is, etc.

Ordinarily we don't do this for electronics because differences in timbre, etc. are ruled out. But our blogger has ruled them in, so if we follow his lead, this is the type of data that needs to be collected.
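The per-band analysis described above, a mean rating with an error bar for each frequency range, can be sketched in a few lines. The ratings below are made-up illustrative numbers, and the fixed multiplier of 2 is a rough stand-in for a proper t-critical value at this sample size:

```python
from statistics import mean, stdev
from math import sqrt

def mean_and_ci(scores, t=2.0):
    """Mean and approximate 95% confidence half-width for one
    rating category (t ~ 2 is a rough approximation)."""
    m = mean(scores)
    half = t * stdev(scores) / sqrt(len(scores))
    return m, half

# Hypothetical 1-10 preference ratings from eight blind listeners.
ratings = {
    "bass":      [7, 6, 8, 7, 6, 7, 8, 7],
    "mid-range": [5, 6, 5, 4, 6, 5, 5, 6],
    "treble":    [6, 7, 6, 6, 7, 8, 6, 7],
}
for band, scores in ratings.items():
    m, half = mean_and_ci(scores)
    print(f"{band}: {m:.2f} ± {half:.2f}")
```

Non-overlapping intervals between the two amplifiers in a given band would suggest a real, repeatable preference rather than noise.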
 