
Is the entire audio industry a fraud?

Chrispy

Master Contributor
Forum Donor
Joined
Feb 7, 2020
Messages
8,012
Likes
6,158
Location
PNW
Why is it so hard to get my position right?

I criticise Audio ABX as practiced on a number of grounds, all straightforward and all relating to easy-to-understand flaws, including methodology and use of statistics.

The result of these flaws is that the Audio ABX test is very heavily weighted towards returning a false negative (aka a Type II statistical error).

It simply means that if an Audio ABX test returns a null result, we cannot have material confidence that this result is not in error.

Thus using the Audio ABX test without changes, as it was proposed by Clarke et al. over four decades ago, is like using an extremely dull knife to slice ripe tomatoes.

Moreover, all the varied criticisms I mentioned have been raised repeatedly and have not led the ABX proponents to change anything.

This is not science; this is Cargo Cult Science. It is Cargo Cult Science because it does not self-correct in the face of valid and well-meaning criticism, and because "the planes don't land", as Richard Feynman said.

Unmeasurable differences? Not sure who stated that.

I referred to differences not covered by standard testing as performed now.

I remember a time when nobody tested digital products for jitter. In fact there was no way to test jitter at the time.

It took decades before testing for jitter became widely accepted, and for a long time certain groups in audio argued that jitter doesn't matter, citing ABX tests returning null results and so on.

I am sure that everything audible can be subjected to objective quantification using some form of instrumentation.

I am sure that with sufficient research we can determine quite reliably the audibility limits of various types of signal alteration and that doing so is necessary to inform engineering choices.

To do so requires controlled blind testing that has high sensitivity and is designed to yield a greater amount of data and different statistics.

But instead of moving forward with such an endeavour we are stuck with Audio ABX and a rerun of the 1970s "lower THD is always better" cult.

This is running in circles without even leveling up on completing each circle.

Only hamsters keep running while staying in place and enjoy it.

Thor
My first thought is that it's because your writing is as bad as a subjective equipment reviewer's... but then, that's what you are/were. You're just not focused; you're just out there?
 

pablolie

Major Contributor
Forum Donor
Joined
Jul 8, 2021
Messages
2,158
Likes
3,664
Location
bay area, ca
Why is it so hard to get my position right?

I criticise Audio ABX as practiced on a number of grounds, all straightforward and all relating to easy-to-understand flaws, including methodology and use of statistics.

The result of these flaws is that the Audio ABX test is very heavily weighted towards returning a false negative (aka a Type II statistical error).

It simply means that if an Audio ABX test returns a null result, we cannot have material confidence that this result is not in error. ...
It depends on what needs to be proven by said test.

The Archimago test I alluded to doesn't prove there isn't a difference between 192/24 and 44/16, but it shows that those who state that they can reliably tell a difference *fail* to do so in a controlled environment they agreed to participate in. IMO, if I am convinced about something, I check the parameters of the test prior to taking part and making a fool of myself.

We know there *is* a *theoretical* performance envelope advantage of 192/24 vs 44/16... but pretty much every test out there shows there's no *practical* benefit to it. It's a bit like saying a Bugatti Veyron sucks because a 1968 Apollo moon rocket shames it in acceleration.

The ABX tests we always talk about are indeed designed to prove "positive negatives", inasmuch as the test subjects claimed an ability to perceive something that, under controlled circumstances, they weren't able to do. But we *do* know that a higher sampling/digitization rate/encoding does indeed allow capturing more of the spectrum more completely - Shannon/Nyquist didn't lie, nor did the test set out to prove them false.
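The Shannon/Nyquist point can be sketched in a few lines (a toy illustration, not part of any of the tests discussed): a sampling rate fs captures content up to fs/2, and a tone above that limit folds back down as an alias:

```python
def alias_frequency(f: float, fs: float) -> float:
    """Apparent frequency of an f-Hz tone after sampling at fs Hz.

    Tones at or below fs/2 (the Nyquist limit) come through unchanged;
    anything above folds back down into the 0..fs/2 band.
    """
    return abs(f - fs * round(f / fs))
```

So 44.1 kHz covers the audible band with a 22.05 kHz ceiling, while 192 kHz extends the ceiling to 96 kHz; whether anything up there is audible is exactly what the blind tests probe.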

Just because some observatory has the tech that allows us to witness details in the Andromeda Nebula doesn't mean I can see the same thing with my human eyes, and if I claim to be able to do so, agree to a test, and fail... whose fault is it? The clear point here is that just because *technology* is able to do something doesn't mean our human senses can consume it without help. Try soldering a 7nm chipset sometime... :) Our ears clearly are no different.
 

MarkS

Major Contributor
Joined
Apr 3, 2021
Messages
1,089
Likes
1,539
This is running in circles without even leveling up on completing each circle.
Break the circle by listening for yourself.

Just do whatever it is you do to decide that Amp A sounds better than Amp B. But do it without knowing which is which.

Of course, this means that you have to have someone else do the swap. And that person has to hide things well enough to remove visual clues.

It's also necessary that you not have precise visual or tactile info about the volume setting. Volume must be set by ear only, after starting at zero volume with each listening session.

Listening sessions can be as many and as long or short as you need.

Can you hear the difference? Try it for yourself and find out!
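As a toy illustration of the "someone else does the swap" step (all names here are made up for the example), the helper can pre-generate a random schedule, follow it during the sessions, and score the listener's answers only after everything is done:

```python
import random

def blind_trials(n: int = 10, seed=None) -> list[str]:
    """Generate a random amp-swap schedule for the helper doing the swaps.

    The listener never sees this list; the helper follows it, and the
    answers are compared against it only after all sessions are done.
    """
    rng = random.Random(seed)
    return [rng.choice(["Amp A", "Amp B"]) for _ in range(n)]

def score(schedule: list[str], answers: list[str]) -> int:
    """Number of sessions where the listener named the amp actually playing."""
    return sum(s == a for s, a in zip(schedule, answers))
```

Randomizing each session (rather than strictly alternating) matters: it keeps the listener from inferring the pattern instead of listening.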
 

Vacceo

Major Contributor
Joined
Mar 9, 2022
Messages
2,720
Likes
2,877
I think if you found some speakers that you loved even 20 years ago and have been able to make them work in your room, there is very little to gain by buying expensive new speakers today... not that most of the often discussed speakers on this forum are not technically superior, but since even the best speakers are a set of compromises there is not much to gain by swapping out speakers that work for you. Obviously the corollary is that if you bought speakers that don't quite cut it for you, then by all means jump in and upgrade.

To your point though, once you find the speakers that satisfy your needs in your listening space, there is not much to gain by getting newer, technically superior speakers.
Currently I'd go for better subwoofers, because I like infrasonics and my current one hardly digs below 20 Hz. For the speakers, I'd have to try them in-room and measure to actually see what kind of improvement I'd get.
 

Andretti60

Active Member
Joined
Oct 1, 2021
Messages
223
Likes
360
Location
San Francisco Bay
… Colleges and universities found that when they lowered their tuition enrollment went down; increasing it increases enrollment. …
You have this backwards.
College enrollment goes up and down for different reasons, mostly because of the economy. With a strong economy, young people don't see any reason to go to college if they can get a good job right away, with a decent salary.
In this case (low enrollment), colleges have to cut tuition to motivate the youths.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,909
Likes
37,973
Why is it so hard to get my position right?

I criticise Audio ABX as practiced on a number of grounds, all straightforward and all relating to easy-to-understand flaws, including methodology and use of statistics.

The result of these flaws is that the Audio ABX test is very heavily weighted towards returning a false negative (aka a Type II statistical error).

It simply means that if an Audio ABX test returns a null result, we cannot have material confidence that this result is not in error.

Thus using the Audio ABX test without changes, as it was proposed by Clarke et al. over four decades ago, is like using an extremely dull knife to slice ripe tomatoes.

Moreover, all the varied criticisms I mentioned have been raised repeatedly and have not led the ABX proponents to change anything.

This is not science; this is Cargo Cult Science. It is Cargo Cult Science because it does not self-correct in the face of valid and well-meaning criticism, and because "the planes don't land", as Richard Feynman said.

Unmeasurable differences? Not sure who stated that.

I referred to differences not covered by standard testing as performed now.

I remember a time when nobody tested digital products for jitter. In fact there was no way to test jitter at the time.

It took decades before testing for jitter became widely accepted, and for a long time certain groups in audio argued that jitter doesn't matter, citing ABX tests returning null results and so on.

I am sure that everything audible can be subjected to objective quantification using some form of instrumentation.

I am sure that with sufficient research we can determine quite reliably the audibility limits of various types of signal alteration and that doing so is necessary to inform engineering choices.

To do so requires controlled blind testing that has high sensitivity and is designed to yield a greater amount of data and different statistics.

But instead of moving forward with such an endeavour we are stuck with Audio ABX and a rerun of the 1970s "lower THD is always better" cult.

This is running in circles without even leveling up on completing each circle.

Only hamsters keep running while staying in place and enjoy it.

Thor
Why not give us, in a straightforward and detailed manner, an example of the right way to do a blind listening test? Or give us some differences not covered by standard testing as performed now. Hopefully without straw-manning the "standard testing".

Not posting untruths would help as well. Jitter was known prior to the introduction of CD, and was being tested for in some manner within a decade. Besides, how could people cite ABX results for jitter not mattering if they had no way to measure jitter? Seems you left something out there. Oh, and now, decades later, just how important has jitter turned out to be?
 

pablolie

Major Contributor
Forum Donor
Joined
Jul 8, 2021
Messages
2,158
Likes
3,664
Location
bay area, ca
Why not give us, in a straightforward and detailed manner, an example of the right way to do a blind listening test? Or give us some differences not covered by standard testing as performed now. Hopefully without straw-manning the "standard testing".
...

A blind test doesn't prove that there aren't measurable differences between equipment. It does, however, conclusively prove that those who claim to be able to hear such differences with recording technology or ever more over-engineered devices consistently fail to do so in that controlled environment.
 

ahofer

Master Contributor
Forum Donor
Joined
Jun 3, 2019
Messages
5,075
Likes
9,235
Location
New York City
One would think that in such a large group some members would have the scientific curiosity to not sit back and keep saying, "confirmation bias", "marketing hype", etc. over and over and would be curious to search out the root causes. Where do such largely held "misconceptions" come from?

I think this is a much more complex topic, more appropriate to psychology and social science, and outside the expertise of the bulk of the forum. I've been able to add a little bit from my knowledge of behavioral finance.
Why is it so hard to get my position right?
I’ll leave that as an exercise for you, although I think you know perfectly well.
 
Last edited:

Inner Space

Major Contributor
Forum Donor
Joined
May 18, 2020
Messages
1,285
Likes
2,941
I think this is a much more complex topic, more appropriate to psychology and social science ...
Well said. I love threads like this, because they all pose the same basic question: why? Why do people need to think this stuff so desperately? And why don't they change it up from time to time? The same cast of characters shows up on every thread - the evidently young guy who boasts of spending lots of money, and who pities us paupers; the slightly older guy who all but admits he hates audio science simply because it spoils his fun; and now the avuncular chin-stroking sage, wearing an interesting hat, very satisfied with his enormous expertise, finding sciency-sounding new ways of saying the same old, same old.

Surely such behaviors can only be explained via self-image, or self-doubt, or self-worth, or the compulsion to be different, or something. I think a lot of people here are fascinated by it, but as you say, we lack expertise in the field. It would be great to hook up with a psychology or social science website. We could give them examples; they could give us explanations.
 

Cote Dazur

Addicted to Fun and Learning
Joined
Feb 25, 2022
Messages
620
Likes
761
Location
Canada
maybe this industry is just a load of fraud?
It certainly can appear like it is, but there is more to it than just that. All involved are guilty, as it is so easy to convince ourselves that something is there when there is nothing there. The ones who build, the ones who sell and the ones who consume all fall into the same trap. The lucky ones on the consumer side learn to recognize the trap. The fraudulent ones on the builder and seller side are the ones who also know about the trap but decide to exploit that knowledge; but not all builders or sellers are fraudulent.
In the past, my mind made me believe that some cables sounded different, that some amplifiers made a difference, etc. The same mind now helps me make better decisions. Did the minds of the people who were showing me those cables and demonstrating the amplifiers hear the same thing I did, or were they trying to trick me? I do not know, but I do not believe it, so was it fraud? Not if we were all under the same mind trick. All complicit in our search for better music.
I was myself honestly telling all my friends of the benefits of these "expensive" hi-fi items I had (have), not to fool them but to help them get the same pleasure I had listening to my music. And pleasure I had.
So, all fraud? No. Some fraud, yes, but mostly just bad advice from people who believe more in magic than science.
 

Thorsten Loesch

Senior Member
Joined
Dec 20, 2022
Messages
460
Likes
533
Location
Germany, now South East Asia (not China or SAR's)
It depends on what needs to be proven by said test.

Correct. The ABX test was designed to reliably return null results unless the audible differences are "gross".

Now, as a tool to show up subjectivist audiophile reviewers waxing lyrical about how a mains cable causes day & night differences (unless it fixes a ground loop or a missing earth issue, of course), it is excellent.

It is so good at returning null results that if you take someone who believes mains cables CANNOT MAKE ANY DIFFERENCE (not even those I mentioned, which they can), OR someone who believes that mains cables MUST MAKE AN AUDIBLE DIFFERENCE, and instead of changing mains cables you swap speaker cable polarity, neither individual can hear what I would call a quite "gross" audible difference.

The Archimago test I alluded to doesn't prove there isn't a difference between 192/24 and 44/16, but it shows that those that state that they can reliably tell a difference *fail* to do so in a controlled environment they agreed to participate in. IMO, if I am convinced about something I check the parameters of the test prior to taking part and making a fool of myself.

The problem is that it is a challenge. That is what it shares with, and what makes it akin to, the shell game.

Yes, I did a similar test once, ages ago, in a German hifi forum long gone from the net.

There I actually submitted my ripped music, was sent back a CD, and did the test at home. It was CD vs 128 VBR MP3. It was not that easy to tell, but in the end I was convinced I had ID'd the altered tracks correctly. I posted my results and got the reply that I had gotten it wrong and had zero correct IDs.

I then posted my reveal, namely that I had used Reference Recordings tracks with HDCD, and showed that in fact my ID was correct, using the secret HDCD code and a friend's Audio Synthesis DAC with HDCD. It promptly got me banned.

And yes, I identified the tracks by listening, because I knew what to listen for, having processed the same tracks to MP3 myself to be able to "train" myself in identification before I ever sent the tracks off. I did not even have to check my "cheat codes"; I just used this as insurance that my interlocutor would deal honestly, which, as I found, he did not.

We know there *is* a *theoretical* performance envelope advantage of 192/24 vs 44/16... but pretty much every test out there shows there's no *practical* benefit to it. It's a bit like saying a Bugatti Veyron sucks because a 1968 Apollo moon rocket shames it in acceleration.

Well, we may argue that in theory music and microphones exceed the dynamic range capabilities of Red Book CD, but the difference is NOT that large. I would suggest that 18-bit (true 18-bit) at 64 kHz would probably have been good enough.

It was actually discussed, AFAIK, but it was not implemented because it was decided that CD MUST HAVE 74 minutes of playback time (instead of two times 25 minutes - which was hell on concept albums, artistically speaking, BTW), but an 18-bit 64 kHz CD would have been at 45 minutes per CD.

And of course, in ABX tests (yeah, they are like pomodoro in Italian cooking - they get in bleedin' everywhere) there was no difference between 14-bit 32 kHz, 16-bit 44.1 kHz and 18-bit 48 kHz.

So I guess we should be thankful for what we got; it could have been 14-bit/32 kHz (which is what is in use for NICAM and FM radio digital audio distribution, with a log remapping of the data space, so what is transmitted is actually 10-bit/32 kHz plus overhead).
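For reference, the textbook rule of thumb behind these bit-depth comparisons is that an ideal N-bit quantizer driven by a full-scale sine gives an SNR of about 6.02·N + 1.76 dB; a quick sketch:

```python
def quantizer_snr_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit quantizer, full-scale sine (dB)."""
    return 6.02 * bits + 1.76

# Compare the formats mentioned above (sample rate doesn't enter this figure):
for bits in (10, 14, 16, 18, 24):
    print(f"{bits:2d} bit: {quantizer_snr_db(bits):6.2f} dB")
```

That puts 16-bit at about 98 dB and 14-bit at about 86 dB; real converters and dithered masters land below these ideal figures.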



So CD is just a smidgen below.

For 99% of the recordings we are given, it is good enough though.

Thor
 

Thorsten Loesch

Senior Member
Joined
Dec 20, 2022
Messages
460
Likes
533
Location
Germany, now South East Asia (not China or SAR's)
Why not give us, in a straightforward and detailed manner, an example of the right way to do a blind listening test? Or give us some differences not covered by standard testing as performed now. Hopefully without straw-manning the "standard testing".

1) Make sure the test is BLIND. That is, the test subjects should not have any awareness of what is being tested, so they cannot have any bias on the subject. This is specific to audio (though it is also useful in other contexts where strong emotions are attached to views on the subject of the investigation), as we have had five decades or so of extreme polarisation.

2) Make sure the test minimises test-induced stress; this involves the protocol, the environs and the general interactions with the listeners. They are not enemies to be defeated, but resources to be employed in the search for knowledge. Make the listeners comfortable and relaxed; make them feel they are making a real contribution, no matter what your personal view on the matter. If necessary, employ someone to be nice if you cannot be.

3) Use a form of preference/performance ranking; it not only gives more information, but humans are much more consistent in their preferences than in their ability to correctly identify a specific item. Collect as much data as possible. Use questionnaires that assess the emotional/mental state of the subject as well. Test for reliable preference and reliable alteration of mood/emotional state as proxies for the presence of a potential difference, rather than attempting to test the difference directly.

4) Use whatever statistics you like, but be clear to everyone, your listeners as well as the audience of the test, about the limitations and implications UPFRONT.

Is that straightforward and detailed enough?

Thor
 

IPunchCholla

Major Contributor
Forum Donor
Joined
Jan 15, 2022
Messages
1,124
Likes
1,407
Correct. The ABX test was designed to reliably return null results unless the audible differences are "gross".

Now, as a tool to show up subjectivist audiophile reviewers waxing lyrical about how a mains cable causes day & night differences (unless it fixes a ground loop or a missing earth issue, of course), it is excellent.

It is so good at returning null results that if you take someone who believes mains cables CANNOT MAKE ANY DIFFERENCE (not even those I mentioned, which they can), OR someone who believes that mains cables MUST MAKE AN AUDIBLE DIFFERENCE, and instead of changing mains cables you swap speaker cable polarity, neither individual can hear what I would call a quite "gross" audible difference.



The problem is that it is a challenge. That is what it shares with, and what makes it akin to, the shell game.

Yes, I did a similar test once, ages ago, in a German hifi forum long gone from the net.

There I actually submitted my ripped music, was sent back a CD, and did the test at home. It was CD vs 128 VBR MP3. It was not that easy to tell, but in the end I was convinced I had ID'd the altered tracks correctly. I posted my results and got the reply that I had gotten it wrong and had zero correct IDs.

I then posted my reveal, namely that I had used Reference Recordings tracks with HDCD, and showed that in fact my ID was correct, using the secret HDCD code and a friend's Audio Synthesis DAC with HDCD. It promptly got me banned.

And yes, I identified the tracks by listening, because I knew what to listen for, having processed the same tracks to MP3 myself to be able to "train" myself in identification before I ever sent the tracks off. I did not even have to check my "cheat codes"; I just used this as insurance that my interlocutor would deal honestly, which, as I found, he did not.



Well, we may argue that in theory music and microphones exceed the dynamic range capabilities of Red Book CD, but the difference is NOT that large. I would suggest that 18-bit (true 18-bit) at 64 kHz would probably have been good enough.

It was actually discussed, AFAIK, but it was not implemented because it was decided that CD MUST HAVE 74 minutes of playback time (instead of two times 25 minutes - which was hell on concept albums, artistically speaking, BTW), but an 18-bit 64 kHz CD would have been at 45 minutes per CD.

And of course, in ABX tests (yeah, they are like pomodoro in Italian cooking - they get in bleedin' everywhere) there was no difference between 14-bit 32 kHz, 16-bit 44.1 kHz and 18-bit 48 kHz.

So I guess we should be thankful for what we got; it could have been 14-bit/32 kHz (which is what is in use for NICAM and FM radio digital audio distribution, with a log remapping of the data space, so what is transmitted is actually 10-bit/32 kHz plus overhead).



So CD is just a smidgen below.

For 99% of the recordings we are given, it is good enough though.

Thor
Nope. This is just wrong. If you fail an ABX test, it means your answers were likely arrived at by chance for the difference you are seeking to differentiate. Nothing more. If you disagree, please specify, using any agreed-upon scientific units you would like, how gross differences in ABX need to be for it to be a fair test, and the math behind your reasoning.
 

Thorsten Loesch

Senior Member
Joined
Dec 20, 2022
Messages
460
Likes
533
Location
Germany, now South East Asia (not China or SAR's)
Nope. This is just wrong. If you fail an ABX test it means your answers were likely arrived at by chance for the difference you are seeking to differentiate.

No, it means you got a specific number of correct/incorrect trials in the test, and then a specific set of statistical analyses was applied to draw a conclusion.

The conclusion has two confidence numbers attached, namely the risk of false positives and the risk of false negatives.

This, like so many things, is a triangle with three sides where any two exclude the third:

small number (< 100's) of trials
low risk of false positives
low risk of false negatives

Choose any two. Accept that the third item will be out of the picture.
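This trade-off can be made concrete with a small sketch (the 70% "true" hit rate below is an illustrative assumption, not a measured figure): fix the trial count and the false-positive cap, and the false-negative risk follows from the binomial distribution.

```python
from math import comb

def binom_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def abx_plan(n_trials: int, alpha: float = 0.05, p_true: float = 0.7):
    """Smallest pass threshold keeping the false-positive risk <= alpha
    under pure guessing (p = 0.5), plus the power (1 - false-negative
    risk) for a listener who truly hears the difference p_true of the time."""
    k = next(k for k in range(n_trials + 1) if binom_tail(k, n_trials, 0.5) <= alpha)
    return k, binom_tail(k, n_trials, p_true)

threshold, power = abx_plan(16)
```

With 16 trials and a 5% false-positive cap, the exact binomial threshold is 12/16 correct, and a listener who genuinely hears the difference 70% of the time still fails roughly half the time, which is the false-negative risk at issue here.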

Nothing more. If you disagree, please specify, using any agreed-upon scientific units you would like, how gross differences in ABX need to be for it to be a fair test, and the math behind your reasoning.

I already posted that, in the other thread.

Thor
 

Mnyb

Major Contributor
Forum Donor
Joined
Aug 14, 2019
Messages
2,847
Likes
4,010
Location
Sweden, Västerås
We also do not have to prove the same things all over again for every product type, or for a specific brand or model of a product.

All small-signal devices can effectively be lumped together.

If you can't hear any frequency deviations (flat response in the audible range; for HT, a bit into the subsonic range for the LFE), nor any distortion or noise, because these are in the inaudible range...

...then it can be a DAC, a preamp, whatever. If its performance is past audible levels, it is transparent.

For power amps you must also cater for the ability to drive the load, with a low enough output impedance.

For a complete system, the gain structure can't be too weird.

To move the goalposts, one simply has to prove that the thresholds for detecting frequency-response deviations or distortion are lower than previously thought, not that a particular product has magical properties. No one does this.
All we see are ever more claims that this year's product is better than the previous year's.

This is probably why these tests are not done every year, or are old.
We know that adult humans don't hear anything above 20 kHz, for example. Like all scientific facts, it may need confirmation now and then, but well-established facts are not checked very often, because that work is done.
 

ahofer

Master Contributor
Forum Donor
Joined
Jun 3, 2019
Messages
5,075
Likes
9,235
Location
New York City
The problem is that it is a challenge. That is what it shares with, and what makes it akin to, the shell game.

I've never bought this argument. Many tests have allowed listeners to switch back and forth at their leisure (Archimago, the original Clark amplifier test in Stereo Review, NWAvGuy). Others involved an entire system, sitting around with friends (Zipser, Matrix). Still others involved changing equipment without the subject knowing. This is not high-pressure stuff, particularly for those who are interested in either result, nor should it be for those who do it for a living. In addition, it has been shown via proper scientific research that rapid comparisons are more likely to lead to success with subtle sound differences, which also casts some doubt on the "high pressure" thesis, as the test is maximizing the subject's ability to discriminate.

I get your point that differences can be small, but that is generally not the claim I object to; it is the motte to which listen-only audiophiles retreat when they fail an easy test, only to wander back into the bailey to describe night-and-day, wife-running-in-from-the-kitchen, veil-lifting differences.
 
Last edited:

ahofer

Master Contributor
Forum Donor
Joined
Jun 3, 2019
Messages
5,075
Likes
9,235
Location
New York City
The same cast of characters shows up on every thread - the evidently young guy who boasts of spending lots of money, and who pities us paupers; the slightly older guy who all but admits he hates audio science simply because it spoils his fun; and now the avuncular chin-stroking sage, wearing an interesting hat, very satisfied with his enormous expertise, finding sciency-sounding new ways of saying the same old, same old.
A solid taxonomy! But you forgot the vendor rationalizing his living by any means possible.
 

bodhi

Major Contributor
Forum Donor
Joined
Nov 11, 2022
Messages
1,027
Likes
1,484
What the subjectivists often don't understand is that it's not that sceptics won't allow them to enjoy the hobby their own way. I'm sure most sceptics agree that psychoacoustics can cause the brain to perceive music as sounding better, and that perception could be verified by some brain scanner.

There is a huge range of consumer products that don't claim any measurable benefits and still sell like hotcakes, many categories costing much more than high-end audio electronics. They still give their owners the undeniable benefit of feeling better (again, the brain scanner would prove this), and the buyers have no interest in trying to think of any other reasons to own them.

So, the more interesting question is: why is high-end audio different? Why can't subjectivists just say that they think the new power cord made them perceive the music as better sounding and leave it at that? No more arguments.
 

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,738
Likes
6,073
Location
US East
No, it means you got a specific number of correct/incorrect trials in the test, and then a specific set of statistical analyses was applied to draw a conclusion.

The conclusion has two confidence numbers attached, namely the risk of false positives and the risk of false negatives.

This, like so many things, is a triangle with three sides where any two exclude the third:

small number (< 100's) of trials
low risk of false positives
low risk of false negatives

Choose any two. Accept that the third item will be out of the picture.



I already posted that, in the other thread.

Thor
The individual outcomes of a series of ABX trials are definitive. There are no false positives or false negatives, as the outcomes are not hypotheses. The testee either has the X correctly identified in each of the trials, or not. There is no false anything.

The Type 1 or Type 2 errors you talked about are applicable to hypothesis testing, such as a blood test to quickly determine whether a person has a certain (early-stage) disease. The test may come back with a false positive or false negative, which you need to confirm/verify with a different and more definitive test (or when the disease has developed enough to become unmistakable).

An analogy for the ABX test is coin tosses. Each one is definitively either a head or a tail. There is no false head nor false tail. If, when you repeat the coin toss 20 times, you get 10 heads and 10 tails (and you assign 1 to a head and 0 to a tail), you have an "expectation" of 0.5 and a (sample) standard deviation of 0.513. Plug these into a confidence interval calculator and it will tell you that, with the 20 coin tosses, you can say with 95% confidence that the probability of the next toss showing a head is between 0.275 and 0.725 (or, with 95% confidence, that the parameter p in the Bernoulli process is 0.275 < p < 0.725).

For an ABX test with 19/20 correct, the "expectation" is 0.95 and the (sample) standard deviation is 0.224. Therefore you have 95% confidence that, in the next trial, you have a better than 0.85 chance of getting the X correctly identified.
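The numbers above can be reproduced in a few lines using the same normal-approximation interval with the n-1 sample standard deviation (other methods, e.g. the Wilson score interval, give somewhat different bounds for proportions near 1):

```python
from math import sqrt

def abx_ci(correct: int, n: int, z: float = 1.96):
    """Normal-approximation 95% CI for the per-trial success probability,
    using the n-1 sample standard deviation as in the post above."""
    mean = correct / n
    # sample variance of `correct` ones and `n - correct` zeros
    var = (correct * (1 - mean) ** 2 + (n - correct) * mean**2) / (n - 1)
    half = z * sqrt(var / n)
    return mean - half, min(mean + half, 1.0)
```

For 10/20 this gives roughly (0.275, 0.725), and for 19/20 the lower bound comes out at about 0.852, matching the "better than 0.85" figure.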
 
Last edited:

JustJones

Major Contributor
Forum Donor
Joined
Mar 31, 2020
Messages
1,750
Likes
2,472
Not having a background in the sciences, even I have wondered about these night-and-day, you-have-to-be-deaf, veil-lifting differences so often claimed. To me, it shouldn't even take a blind test to pick A or B; it would be like telling the difference between a firecracker and a howitzer.
 