
Limitations of blind testing procedures

Status
Not open for further replies.

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
632
You said: "5.) wrt independent research for backup- you shouldn´t ask more from me than you do yourself. You´re often using your own listening test results and quite often there is no independent research to back them up."
And I asked for a link and that is what you show? What independent research is needed for results of a listening test? That is data by itself. I can provide the files and you or anyone else can take the test. It needs no "reference" to back it.


Here you go. In letters with respect to famous Meyer and Moran test of CD vs high res audio, this was sent to AES and published in the Journal: http://www.aes.org/e-lib/browse.cfm?elib=15249


View attachment 7873

[...]

View attachment 7874

View attachment 7875


Meyer and Moran set up multiple testing facilities where listeners could come and do the testing on their own. Professor Dranove objected to that practice calling the test statistics "incorrect."

Meyer and Moran responded with this:

View attachment 7876

See? They agree with it. They go on to say:

View attachment 7877

Again, a nod of agreement.

This is exactly the same scenario as your amplifier test where you sent out samples for listeners to listen in different situations/systems, etc.

You talk big about the hygiene of statistics but are quick to dismiss it all when it comes to conveying your own point of view and the test you ran. If you believe in such things, you had better practice them better than anyone else, not the other way around.

The Meyer-Moran tests, now old in digital-audio years, are frequently resurrected as "proof" of something or other. But, their validity has been pretty thoroughly discredited from a number of standpoints, not just statistical ones, by a number of knowledgeable authors, including you, Amir. I think Meyer himself at some later point retreated and said his tests had not been a scientific study at all.

But, because it was superficially a science-like AES paper and it used ABX, some people still think it is valid. I also read the M&M Boston Audio Society papers that preceded the AES one. I also corresponded directly with Moran in an old sa-cd.net forum and by PM. M&M were indeed very serious at the time. But, cooler heads have since revealed it was really just fake news, and M&M have accepted that. It did not survive ultimate peer review. But, many look no further than the AES rubric, so the paper and its conclusions will not die.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,376
Likes
234,542
Location
Seattle Area
This thread had a most interesting discussion concerning Robert W. (Bob) Carver and his work with amplifiers.

On the Bob Carver Corp website http://www.bobcarvercorp.com/
there is a video in which Bob describes (among other topics) what made his amplifiers cheaper and better.
In the segment between 2.5 and 5.5 minutes, Bob describes how he uses a triac to control the portion of the line voltage reaching the power transformer (probably maximizing the current). By my (limited) understanding, this would give a very rippled AC that would require much work to become smooth DC for the amplifier to work with. Also, that transformer could not be used for the heater voltage.

My understanding of power supplies was that one wanted a smooth DC at constant voltage. When I built power supplies I specifically used a full-wave (bridge) rectifier with multiple chokes and electrolytic caps to smooth the voltage as much as possible.

Am I missing something here on the necessary power/voltage needed for amplifiers? Thanks.
Let's review their block diagram:

[attached image: amplifier block diagram]



The triac mentioned in the video (labeled "regulator" above) chops off the AC waveform. In other words, it is like a light dimmer! Its goal here is to provide voltage regulation for the three rails on the right. Being a "class G" amplifier, three different voltages are used to track the musical peaks, and with that comes much increased efficiency. It is odd that Bob focused so much on the voltage control of the transformer output instead of touting the real hero, which is the class G design.
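To make the class G idea above concrete, here is a minimal sketch in Python (rail voltages, headroom and load are assumed round numbers, not the actual Carver design): the output stage runs from the lowest rail that still clears the instantaneous signal, so far less voltage is dropped across the output devices than with a single high rail.

```python
# Illustrative class G rail selection; all values are assumed, not Carver's.
RAILS = [20.0, 45.0, 90.0]   # low/mid/high supply rails in volts (assumed)
HEADROOM = 3.0               # margin the output devices need above the signal (assumed)
LOAD_OHMS = 8.0

def select_rail(v_out):
    """Pick the smallest rail that can pass this instantaneous output voltage."""
    for rail in RAILS:
        if abs(v_out) + HEADROOM <= rail:
            return rail
    return RAILS[-1]

# Compare device dissipation against a single-rail design that always uses 90 V.
for v_out in (5.0, 15.0, 40.0, 80.0):
    i_out = v_out / LOAD_OHMS
    p_class_g = (select_rail(v_out) - v_out) * i_out   # voltage dropped across devices
    p_single = (RAILS[-1] - v_out) * i_out
    print(f"{v_out:5.1f} V out: class G dissipates {p_class_g:6.1f} W, single rail {p_single:6.1f} W")
```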

As anyone who has heard one can attest, and as I have experienced with my Carver receiver, you are indeed correct that the chopped-off AC waveform creates tons of distortion there. Filtering helps with that, and maybe the actual circuit employs more than just the standard electrolytic filter cap.
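For a rough sense of the filtering involved, here is a back-of-envelope sketch using the standard capacitor-input approximation V_ripple ≈ I_load / (f_rect × C); the current, capacitance and mains frequency are assumed for illustration, and a phase-cut (triac-chopped) waveform, delivering its charge in shorter bursts, will generally need at least this much capacitance.

```python
# Rough reservoir-capacitor ripple estimate; all numbers assumed for illustration.
f_rectified = 120.0      # Hz, full-wave rectified 60 Hz mains
i_load = 2.0             # A, average current drawn from one rail
c_farads = 10_000e-6     # 10,000 uF reservoir capacitor

v_ripple = i_load / (f_rectified * c_farads)     # classic approximation
print(f"~{v_ripple:.2f} V peak-to-peak ripple")  # ~1.7 V with these numbers
```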

Net, net, efficiency is achieved at the expense of noise. I should put my receiver on the bench one of these days and see how it measures!
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
You said: "5.) wrt independent research for backup- you shouldn´t ask more from me than you do yourself. You´re often using your own listening test results and quite often there is no independent research to back them up."
And I asked for a link and that is what you show? What independent research is needed for results of a listening test? That is data by itself. I can provide the files and you or anyone else can take the test. It needs no "reference" to back it.

I´ll cover the external backup thing later....

Here you go. In letters with respect to famous Meyer and Moran test of CD vs high res audio, this was sent to AES and published in the Journal: http://www.aes.org/e-lib/browse.cfm?elib=15249
<snip>
Meyer and Moran set up multiple testing facilities where listeners could come and do the testing on their own. Professor Dranove objected to that practice calling the test statistics "incorrect."

No. Short form: you obviously misunderstood Prof. Dranove´s argument/reasoning. He did not object to the practice of "multiple facilities where...."; instead he objected to Meyer/Moran´s use of a test statistic that assumes _independent_ samples while, as Dranove correctly pointed out, the samples from participants doing more than one trial were most likely _not_ independent.

Longer explanation (maybe we should start another thread about null hypothesis significance tests, NHST): if one wants to do an NHST, one has to choose (leaving aside the philosophical and practical differences between the Fisherian, neo-Fisherian, Neyman/Pearson and Jeffreys approaches to experimental hypothesis testing) a statistical model and a test statistic to evaluate the observed data.
If you choose, according to your statistical model, an exact test like the binomial, then one of the assumptions of that model is the independence of the samples. If the samples are not independent, you have violated the assumptions; hence Dranove´s correct statement that Meyer/Moran should have used, for the assessment of all results together, another model and test statistic that does not assume independent samples.
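For concreteness, here is a minimal sketch of the exact one-sided binomial test being discussed; the counts are invented for illustration, not taken from the paper, and the point is that the computed p-value is only meaningful if all pooled trials really are independent.

```python
# One-sided exact binomial test: P(at least `hits` successes in `trials`
# independent trials at chance level p). Counts below are illustrative only.
from math import comb

def binomial_p_value(hits, trials, p=0.5):
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

print(binomial_p_value(60, 100))   # ~0.028: "significant" at the 0.05 level,
                                   # but only if the 100 trials are independent
```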

Why does it matter?
Because, surprisingly, an NHST does not test the null hypothesis; instead it evaluates the observed data (i.e. the test results) under the assumption that the null hypothesis is true. If the probability of the test results occurring, given that the null hypothesis is true, is lower than a predefined significance level, then the experimenter concludes that the experimental results provide evidence against the null hypothesis.
If the assumptions of the analysis are violated right from the beginning, this reasoning breaks down.
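To see why the violated assumption matters in practice, here is a toy simulation of an intentionally extreme case (my own illustration, not the actual Meyer/Moran data): every listener's trials are perfectly correlated, yet the pooled counts are evaluated with the exact binomial test as if all trials were independent, and the false-positive rate ends up several times the nominal alpha.

```python
# Toy demonstration: pooling perfectly correlated within-subject trials and
# applying the exact binomial test (which assumes independence) inflates the
# false-positive rate far above the chosen significance level.
import random
from math import comb

def binomial_p_value(hits, trials, p=0.5):
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

def false_positive_rate(n_subjects=20, trials_per_subject=10, alpha=0.05, runs=2000):
    false_positives = 0
    for _ in range(runs):
        # Null hypothesis: nobody hears a difference. Extreme correlation model:
        # each subject makes one chance-level judgment and repeats it every trial.
        subject_outcomes = [random.random() < 0.5 for _ in range(n_subjects)]
        pooled_hits = sum(subject_outcomes) * trials_per_subject
        pooled_trials = n_subjects * trials_per_subject
        if binomial_p_value(pooled_hits, pooled_trials) < alpha:
            false_positives += 1
    return false_positives / runs

random.seed(0)
print(false_positive_rate())   # roughly 0.2-0.3 instead of the nominal 0.05
```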

Ironically, the letter by Dranove supports my argument: as you might remember, I already mentioned in this thread that the samples from our participants were independent of each other and independent of my initial test.

Meyer and Moran responded with this:

View attachment 7876

See? They agree with it. They go on to say:

View attachment 7877

Again, a nod of agreement.

According to Prof. Dranove´s comments on the sa-cd.net forum, mentioned by Fitzcarraldo215, Meyer/Moran did seek external expertise from a mathematician, and apparently he confirmed Dranove´s concerns, which included additional critical points like the surprisingly small number of 10-trial runs with >= 7 hits (expected number was 9, observed number was 3).
But overall the data reported by Meyer/Moran (or the reporting of it) was remarkably inconsistent and, as said before, I don´t understand how it could pass any sensible peer review process. But of course peer review is done by humans and can´t guarantee correctness.
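A quick back-of-envelope check of the ">= 7 hits in 10 trials" figures quoted above (the run counts below are assumed round numbers, not the paper's exact breakdown): under pure chance the probability of at least 7 hits in 10 trials is 176/1024 ≈ 0.17, so an expectation of about 9 such runs corresponds to roughly 50 listeners completing 10 trials each.

```python
# Chance probability of >= 7 hits in a 10-trial run, and the expected number
# of such runs for a few assumed group sizes (illustrative, not the paper's data).
from math import comb

p_7_or_more = sum(comb(10, k) for k in range(7, 11)) / 2**10
print(p_7_or_more)                               # 0.171875
for n_runs in (50, 52, 55):
    print(n_runs, n_runs * p_7_or_more)          # expected count ~8.6 to ~9.5
```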

This is exactly the same scenario as your amplifier test where you sent out samples for listeners to listen in different situations/systems, etc.

I hope it is now obvious why it isn´t what you thought it is.

You talk big about hygiene of statistics but quick to dismiss them all when it comes to conveying your own point of view and test you ran. If you believe in such things, you better practice it better than anyone else not the other way around.

Maybe it would be better to drop these personal remarks?!
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,376
Likes
234,542
Location
Seattle Area
According to Prof. Dranove´s comments on the sa-cd.net forum, mentioned by Fitzcarraldo215, Meyer/Moran did seek external expertise from a mathematician, and apparently he confirmed Dranove´s concerns, which included additional critical points like the surprisingly small number of 10-trial runs with >= 7 hits (expected number was 9, observed number was 3).
You asked me for a reference on why results of different tests cannot be combined and I provided a Journal of AES one. For your counter you tell me about some forum discussion???

Nothing in your posts invalidates what I said, which, I might add, continues to be your own words rather than quoted references.

Anyway, here is professor Dranove's first post in that forum:

[attached image: Prof. Dranove's forum post]


In a nutshell, he is saying you can't combine the results of tests of people who potentially can tell the difference in a test, with a mass of people who cannot. It can generate what is known as Simpson's paradox. It is as if I took 100 people to diagnose an engine problem and put one real mechanic in there. The mechanic finds the problem but the rest cannot, and in the combined statistic the data would overwhelmingly show the problem cannot be found.
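Putting that mechanic analogy into numbers (all counts assumed purely for illustration, using the same one-sided binomial calculation as above): the expert's clearly significant result vanishes once it is pooled with the guessers.

```python
# Dilution of one skilled tester among many guessers; counts are illustrative.
from math import comb

def p_at_least(hits, trials, p=0.5):
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

expert_hits, expert_trials = 9, 10            # the one "real mechanic"
guesser_hits, guesser_trials = 495, 990       # 99 guessers at chance, 10 trials each

print(p_at_least(expert_hits, expert_trials))                    # ~0.011: significant on his own
print(p_at_least(expert_hits + guesser_hits,
                 expert_trials + guesser_trials))                # ~0.41: pooled, it looks like chance
```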

So contrary to your post, he did not just complain about lack of independence.

If you read the original paper (have you?) you see the issue as I discussed, i.e. multiple venues and setups were used. Some of those setups may have been less revealing than others. If so, you cannot combine their results with others. That again, is like what professor Dranove is giving an example of.

In your amplifier test, where the listener hooked up the amp after it was shipped to them, each setup could similarly have faults that would result in outcomes unique to that test configuration, different from what others tested.

A proper test would have gathered everyone in one place and tested with an identical setup which, prior to the test, was objectively confirmed to be as it should be. No level differences. No miswiring. No "tells." Etc. That you throw out this rigor with a smile is a sure sign that you are shopping for results. If the outcome of a test is not to your liking, you write a mountain of criticism. But if it is yours (were these tests of your amps?), all of a sudden all that is required is a sticky note on an amp to make the test valid? I don't think so.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,376
Likes
234,542
Location
Seattle Area
Maybe it would be better to drop these personal remarks?!
Personal? No. You questioned my listening test results. I am questioning your beliefs and comments, which are completely contradictory. I didn't call you anything outside the context of the discussion.
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
You asked me for a reference on why results of different tests cannot be combined and I provided a Journal of AES one. For your counter you tell me about some forum discussion???
Are you really resorting to eristics at this point of our discussion? ;)
I mean, I pointed out where you misunderstood Prof. Dranove´s argument, explained why his argument matters with respect to the basic properties of statistical analysis, and you dismiss it all by saying that all I did was "for your counter you tell me about some forum discussion???"

No, that wasn´t all i did.

Nothing in your posts invalidates what I said which I might add, continues to be your words rather than quoted references.
As I was just referring to the basic principles of null hypothesis significance tests, I wasn´t aware that you´d need an external reference; I had assumed you would remember them after reading my post.
But anyway, please tell me which statements you want confirmed by external references:
-) it is a matter of NHST
-) the statistical analysis of any experiment is based on a statistical model and the corresponding test statistic
-) Meyer/Moran used the binomial distribution for their analysis
-) the exact binomial test assumes _independence_ of the samples
-) the participants in the Meyer/Moran experiment did not do only one trial per person, but (mainly) more than five, sometimes even 10 trials per person
-) it presents a problem if the assumptions of the statistical model (used for the analysis of the observed data) were violated right from the beginning

I´m more than happy to supply the references that you´ll request.

Before commenting on the Dranove post that you´ve now mentioned, let´s go back to your scan of Prof. Dranove´s letter to the editor of the JAES, where he wrote (bold emphasis added by me):
"On a related note, it appears that Meyer and Moran treat all 500 listening tests as independent observations for the purpose of statistical testing, when in fact all tests by a given subject have correlated results."

Prof. Dranove clearly pointed to the fact that the assumption of _independent_ samples was violated and he concluded therefore:
"This means that their test statistics are incorrect"

That was the point he objected to; I hope you now realize that your description:

" Meyer and Moran set up multiple testing facilities where listeners could come and do the testing on their own. Professor Dranove objected to that practice calling the test statistics "incorrect."

was incorrect.


Anyway, here is professor Dranove's first post in that forum:

<snip>

In a nutshell, he is saying you can't combine the results of tests of people who potentially can tell the difference in a test, with a mass of people who cannot. .....

No, in a nutshell he just says that (despite the problematic violation of the independence assumption) you can combine the results to assess the group overall, but you can´t conclude from these overall results that _nobody_ could/can hear a difference.
I already mentioned recently the problem of using groupwise results to draw conclusions about single individuals in the group without knowing their individual results.

It can generate what is known as Simpson's paradox. It is as if I took 100 people to diagnose an engine problem and put one real mechanic in there. The mechanic finds the problem but the rest cannot, and in the combined statistic the data would overwhelmingly show the problem cannot be found.

So, that is another (well-known) problem that Prof. Dranove addressed. And of course Simpson´s paradox can occur, but as you might know if you look into the literature, nobody concludes that one cannot combine the results of different independent experiments - which was what you originally stated - only that you have to combine the results correctly.
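For what it is worth, one standard and statistically sound way to combine genuinely independent experiments is Fisher's method applied to the individual p-values; the p-values below are invented for illustration and are not anyone's actual test results.

```python
# Fisher's method for combining p-values from independent tests:
# -2 * sum(ln p_i) follows a chi-square distribution with 2k degrees of freedom
# under the joint null; for even degrees of freedom the survival function has
# a closed form, so no statistics library is needed.
from math import exp, log, factorial

def fisher_combined_p(p_values):
    statistic = -2.0 * sum(log(p) for p in p_values)
    k = len(p_values)
    x = statistic / 2.0
    return exp(-x) * sum(x**i / factorial(i) for i in range(k))

print(fisher_combined_p([0.20, 0.15, 0.30]))   # ~0.15: individually and jointly weak
print(fisher_combined_p([0.04, 0.06, 0.05]))   # ~0.006: consistent weak evidence adds up
```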

So contrary to your post, he did not just complain about lack of independence.
Ah, eristics again? This time it would be a classic strawman, as I did not write that Prof. Dranove "did just complain about lack of independence".
Quite the contrary, I wrote (bold now added):
"According to Prof. Dranove´s comments on the sa-cd.net forum, mentioned by Fitzcarraldo215, Meyer/Moran did seek external expertise from a mathematician, and apparently he confirmed Dranove´s concerns, which included additional critical points like the surprisingly small number of 10-trial runs with >= 7 hits (expected number was 9, observed number was 3)."

If you read the original paper (have you?) you see the issue as I discussed, i.e. multiple venues and setups were used. Some of those setups may have been less revealing than others. If so, you cannot combine their results with others. That again, is like what professor Dranove is giving an example of.

Prof. Dranove only pointed out that one has to be careful when combining results and drawing conclusions; surely nothing I ever expressed disbelief in. It just does not confirm what you stated in your post back then.

In your amplifier test, where the listener hooked up the amp after it was shipped to them, each setup could similarly have faults that would result in outcomes unique to that test configuration, different from what others tested.

As said before, surely anything can happen. But as far as I understand (maybe I was mistaken), you and others, like BE718, objected strongly to using the "everything is possible" approach.
That all of this happened, in the end, in favour of exactly the same preamplifier, and moreover exactly the same preamplifier that I preferred, is not likely, but, as said before, even an unlikely event can happen.

But please, then stick to your argument instead of pointing to imagined basic rules that prohibit the combination of test results from different experiments.
I hope it is obvious at this point that Prof. Dranove´s letter and post do not contain support for your assertion; in fact, rather the contrary.

A proper test would have gathered everyone in one place and tested with an identical setup which, prior to the test, was objectively confirmed to be as it should be. No level differences. No miswiring. No "tells."

First of all, that would not have been a proper test, but a totally different test.
As said before, the rationale behind our test concept was to create no artificial experimental environment, but instead to let the participants (who didn´t know they were taking part in a controlled listening experiment) do what they usually do when comparing two different "boxes", and in exactly the same way they usually do it.

That the experimental control isn´t as strict as in a laboratory situation I clearly pointed out in the description.

Etc. That you throw out this rigor with a smile is a sure sign that you are shopping for results. If the outcome of a test is not to your liking, you write a mountain of criticism. But if it is yours (were these tests of your amps?), all of a sudden all that is required is a sticky note on an amp to make the test valid? I don't think so.

The next attempt at eristics. "...that you are shopping for results" is just an insult, as is the personal remark that follows.

I´m fine with your "i don´t think so", but, as said before, please stick to it, and refrain from using misguided/sloppy/faulty statistical reasoning.
 
Last edited:

Purité Audio

Master Contributor
Industry Insider
Barrowmaster
Forum Donor
Joined
Feb 29, 2016
Messages
9,051
Likes
12,150
Location
London
Who is "Jakob ' in real life, do we know which manufacturer, obviously one with a lot of spare time.
Keith
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,376
Likes
234,542
Location
Seattle Area
I mean, I pointed out where you misunderstood Prof. Dranove´s argument, explained why his argument matters with respect to the basic properties of statistical analysis, and you dismiss it all by saying that all I did was "for your counter you tell me about some forum discussion???"
Your opinion and judgements as a person posting under an alias are of little value (especially since you wear your bias so easily on your sleeve). If you post your real name and formal qualifications then that might be different, but for now what you opine is not important. What you can demonstrate by quoting direct references would be of value. Heaven knows nobody comes here to read some unknown person's opinion about things, and in 12 paragraphs no less.

That is why I am asking you to provide direct quotes and peer reviewed references. Don't say "he said in some forum this and that." Quote him like I did so that we can read and know the full context.

BTW, did you answer me on how many published blind listening tests you have studied? Or whether you have read Meyer and Moran's original paper?

And what is your position in all of this in a nutshell? That any blind test provided has problems with it? Except yours? You know, the one where you put post-it notes on an amp you sent to a few people?

Please be brief and to the point.
 

Thomas savage

Grand Contributor
The Watchman
Forum Donor
Joined
Feb 24, 2016
Messages
10,260
Likes
16,298
Location
uk, taunton
Well, having a lot of spare time to sit on forums all day o_O

@Jakob1863 I'd like you to compose an introduction of yourself, your background and present interests in audio please. We have a thread that new members use to do this:
http://audiosciencereview.com/forum/index.php?threads/introductions-anyone.65/page-10

It helps :)

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,204
Likes
16,986
Location
Riverview FL
If you post your real name and formal qualifications then that might be different

Ah, Sighted vs Blind post reading?

And an objection to being blinded?

---

Let me provide a suitable reply to my comment to save everyone the trouble:

[attached image]
 
Last edited:

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,376
Likes
234,542
Location
Seattle Area
So, that is another (well known problem) that Prof. Dranove addressed. And of course Simpson´s paradox can occur, but as you might know if you look into the literature, nobody concludes that one cannot combine the results of different independent experiments - and that was what you originally stated - but you have to combine results correctly.
Which you did not do, by handing amps to people to hook up and test on their own, with no supervision to make sure they were testing the same way you were. And of course we are putting aside all the other sins of a poster claiming to have run a proper test with zero documentation of any sort to show that any hygiene was provided.

Let's start with you giving us the names/models of the amplifiers, what systems they were plugged into, what music was used, etc.
 
Last edited:

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
632
Pure and simple, I think that anyone here affiliated in any way with any audio manufacturer or any other entity in the audio business, even on a consulting basis, absolutely needs to identify himself and the nature of his affiliation. That is common practice in all other audio forums I know of. Failure to do so should result in immediate suspension, no ifs, ands, or buts.

That is one problem with Jakob1863. The second problem is the delicate question of when an alternative point of view in a thread crosses the line into trolling, as it continues to make the same long-winded, nit-picking, but empty and baseless arguments. In other words, when is enough enough?
 

Thomas savage

Grand Contributor
The Watchman
Forum Donor
Joined
Feb 24, 2016
Messages
10,260
Likes
16,298
Location
uk, taunton
Yes more information and less semantically oriented jousting would be terrific..
A. I'm losing the will to live reading through it all
B. Any coherent narratives to the arguments seem to be getting lost in the tit for tat.
 

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
632
Yes more information and less semantically oriented jousting would be terrific..
A. I'm losing the will to live reading through it all
B. Any coherent narratives to the arguments seems to be getting lost in the tit for tat.
"Seems"? No, it has definitely beyond all doubt gotten totally lost.

And the Internet anonymity disguising our friend Jakob's true identity is a problem. It is clear that if we knew his identity and the products he was affiliated with, he would not dare to offer such opinions in this environment. His credibility is totally shot here. But, as we know, certain folks everywhere like to yank Amir's chain, and they know exactly how to do it. And Amir will always attempt to respond based on facts and objectivism. So the pot can always easily be stirred.
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
Well, having a lot of spare time to sit on forums all day o_O

@Jakob1863 I'd like you to compose an introduction of yourself, your background and present interests in audio please. We have a thread that new members use to do this:
http://audiosciencereview.com/forum/index.php?threads/introductions-anyone.65/page-10

It helps :)
Nice advice, but......

Unfortunately you´ve taken on a moderator job but can´t or don´t want to control the forum founder´s posting habits; I understand the difficulties.
As I have always been interested in social science, observing these group dynamics, belief structures and the willingness to question beloved beliefs has made participating in this forum an interesting experience.

It seems to me that this forum is sailing under false colors, as the members often aren´t so much interested in audio_science_review but more in collective mocking of "audiophiles" (whatever defines that group) and in reciprocal affirmation of their various beliefs.

"Opinions don´t matter without external references" is an interesting variant of belief in authorities; I´d encourage instead critical thinking combined with some comprehensive reading of introductory textbooks on statistical reasoning. Requesting external references for the most basic principles before being willing to start thinking about an argument is, imo, a startling message.

Fitzcarraldo215´s point about anonymity is a valid point of view; if I´ve overlooked a corresponding paragraph in the TOS I do apologize, otherwise it might be useful to add one if it is the forum´s policy. If it´s not the forum´s policy, the moderation should prevent talk about anonymity.
Since I started participating in internet forums I have strictly opted for anonymity (might be partly my German background and age :) ), mainly because I wanted to discuss audio-related topics freely, of course under the premise of refraining from advertising and/or attacking other professionals´ work. And I know other professionals who do it the same way; some forums don´t request the "demasking", some do, so what´s the point?

Yes more information and less semantically oriented jousting would be terrific..
A. I'm losing the will to live reading through it all
B. Any coherent narratives to the arguments seem to be getting lost in the tit for tat.

Point taken; but moderation taking care that "superposters" don´t offer false interpretations, strawmen, red herrings and whatever else the eristic toolbox contains could help too; you should promote posting in the best good-will mode possible.
 

Thomas savage

Grand Contributor
The Watchman
Forum Donor
Joined
Feb 24, 2016
Messages
10,260
Likes
16,298
Location
uk, taunton
Nice advice, but......

Unfortunately you´ve taken on a moderator job but can´t or don´t want to control the forum founder´s posting habits; I understand the difficulties.
As I have always been interested in social science, observing these group dynamics, belief structures and the willingness to question beloved beliefs has made participating in this forum an interesting experience.

It seems to me that this forum is sailing under false colors, as the members often aren´t so much interested in audio_science_review but more in collective mocking of "audiophiles" (whatever defines that group) and in reciprocal affirmation of their various beliefs.

"Opinions don´t matter without external references" is an interesting variant of belief in authorities; I´d encourage instead critical thinking combined with some comprehensive reading of introductory textbooks on statistical reasoning. Requesting external references for the most basic principles before being willing to start thinking about an argument is, imo, a startling message.

Fitzcarraldo215´s point about anonymity is a valid point of view; if I´ve overlooked a corresponding paragraph in the TOS I do apologize, otherwise it might be useful to add one if it is the forum´s policy. If it´s not the forum´s policy, the moderation should prevent talk about anonymity.
Since I started participating in internet forums I have strictly opted for anonymity (might be partly my German background and age :) ), mainly because I wanted to discuss audio-related topics freely, of course under the premise of refraining from advertising and/or attacking other professionals´ work. And I know other professionals who do it the same way; some forums don´t request the "demasking", some do, so what´s the point?



Point taken; but moderation taking care that "superposters" don´t offer false interpretations, strawmen, red herrings and whatever else the eristic toolbox contains could help too; you should promote posting in the best good-will mode possible.
I think I've afforded you fair rein, as I said I would. People will read the arguments (they might actually give up, as these exchanges are hard on the eye) and make up their own minds.

The content of a person's debate will be scrutinised, as well as the spirit it's been delivered in; my job is to leave as open a platform as I can, and I'm happy that's been achieved. Any ill you feel has been delivered in Amir's posts will stand the same scrutiny, the scrutiny of the readership.

You have had an open floor, those reading will make up their own minds, and my request stands; in no way was it a hostile one. It's polite to introduce yourself, and given the nature of these types of discourse (technically orientated, etc.) it is advantageous for folks to know the backgrounds of those they are debating.

Given your persistence you will no doubt court a certain type of response from some of the members, but I'm fair-minded and have no issue with you, so you needn't concern yourself in this regard.

Just introduce yourself, and keep these comments in mind, as should all members.
 

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,317
Location
Albany Western Australia
As said before, surely anything can happen. But as far as I understand (maybe I was mistaken), you and others, like BE718, objected strongly to using the "everything is possible" approach.
That all of this happened, in the end, in favour of exactly the same preamplifier, and moreover exactly the same preamplifier that I preferred, is not likely, but, as said before, even an unlikely event can happen.

No. I have no problem with an open mind. What I object to is you using the possibility that anything can happen as a technique to imply something is happening or has any likelihood of happening.

Thus far you have provided no compelling evidence of...... well...anything at all. Just nebulous statements and semantics.

Do you have any specific points to make with supporting evidence that can be scrutinised?

Whilst having a closed mind is obviously not good, having a mind that accepts anything, so open that your brain falls out onto the floor, is equally unproductive.
 
Last edited:

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,376
Likes
234,542
Location
Seattle Area
Two identical-looking cases with the labeling changed in random order between the participants, used for a preference test. One of the screws was invisibly sealed, but as the looks inside were extremely similar it would probably not have helped to establish a preference for one of the units.

Every participant got the units for a couple of days and was to say afterwards which one (if any) he would prefer for listening.
Due to the ´hiding of the test´ it was not possible to ask the participants for a run of trials, therefore the main problem was to find a group with a consistent preference.

We were only able to find five listeners who would, in my opinion, prefer the same preamplifier as I did (doing a controlled blind experiment, identifying my preferred unit correctly in 5 trials), if they were detecting an audible difference.
Let me go back to this experiment of yours. You say there were two identical boxes? If so, these were not two commercial units or else they would look completely different, right?

Were both of these boxes of your build/design?

You say this is closer to what audiophiles typically do. On what occasion do people get two identical-looking boxes to evaluate for better fidelity? Don't you think they know at all times that they are under a test microscope in such a situation?

Is it correct that each participant only produced one vote/trial?
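For reference, the chance-level arithmetic behind those questions (my own framing, not a description of the actual protocol): with a binary preference and no audible difference, "one listener picking the experimenter's preferred unit in 5 of 5 trials" and "five listeners, one vote each, all picking that unit" have the same probability of 1/32 ≈ 0.031.

```python
# Chance probabilities for the two designs discussed above (illustrative framing).
p_one_listener_5_of_5 = 0.5 ** 5     # one listener, five forced-choice trials
p_five_listeners_agree = 0.5 ** 5    # five listeners, one binary vote each, all matching
print(p_one_listener_5_of_5, p_five_listeners_agree)   # 0.03125 0.03125
```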
 