
A USB Cable Test Designed to Satisfy Skeptics AND Believers—Feedback Welcome

Guestuser
Member · Joined Mar 26, 2022 · Seattle
What would it take to end the cable debate? This is the question I keep coming back to as I read cable discussions that inevitably devolve into religious arguments. Although individual hobbyists have run their own experiments and audio scientists have published peer-reviewed research, the current evidence still leaves the debate open. So, what kind of test would it take to satisfy both skeptics and believers?

In my day job, I've worked in medical research for 25+ years, supporting randomized controlled clinical trials to evaluate new treatments for diseases such as cancer, Alzheimer's, and diabetes. Randomized controlled trials are considered the "gold standard" by doctors and regulatory agencies worldwide, and the level of rigor applied to human clinical trials is among the highest in any industry. When you want to put a new drug in a human for the first time, there is no room for error. This is the level of control I have in mind when designing a debate-ending USB cable test.

So suppose we design an audio ABX test conducted with the same (or nearly the same) level of control as a randomized controlled trial for clinical research, AND we design it to avoid the listener fatigue and stress-induced performance anxiety of previous audio research. Over the past day or so, I used some of my clinical trial design tools to draft a modified ABX protocol for testing USB cables along those lines, and this is what I came up with. [full protocol link at the bottom]

Plain Language Summary

What is this study about?

This experiment checks if pricey audiophile USB cables really sound better than regular, cheaper cables when playing digital music.

Why is it tricky to test?

People often get nervous when they know they're being tested, which can make it harder to notice small changes in sound quality. Even a slight difference in volume can fool you into thinking one cable sounds 'better' just because it's louder.

How does this test work?

The study uses two identical digital-to-analog converters (DACs), each connected to a different cable. Both play at the same time, and an electronic switch lets you instantly choose between them. This way, there's no need to unplug or reconnect any cables. Listeners hear the same 90-second music clip before and after a short break. During the break, someone might secretly switch the DACs, or might leave them as they are. The listener just rates how 'easy' and enjoyable the music feels.

The test is done twice: first with basic DACs, then with high-end isolated DACs. This helps show if cable differences only matter when using cheaper digital-to-analog converters.

What makes this test fair?

The volume is matched exactly for each test. The person collecting the ratings doesn't know which cable is being used. Some tests include 'fake' switches to check for bias. Finally, the cables are swapped between the DACs and tested again to make sure the results are accurate.

Providing Feedback

If you are interested in poking holes in the methodology, feel free to read it and post a comment or message me directly. If you have a question or a specific issue you think needs to be addressed, please reference the section number in your comment.

Next Steps

Right now, I'm just focused on drafting a gold-standard protocol and haven't thought through all the next steps of conducting the test. If you're interested in facilitating the test or using this protocol in a publication, please DM me.

Complete Protocol

Full protocol published on Google Docs: https://docs.google.com/document/d/...ltGERprVcUAj83xrGoJh9oLU3h8752Iy5BY2lb7-0/pub
 
..., the current evidence still leaves the debate open.
The fact that people who are less informed about a topic still talk about it doesn't mean that the "debate is still open". There will always be people opposing verified facts, no matter how well established those may be.

There is also no reason to do ABX-trials with USB cables. It's a digital signal. Apart from the possibility of broken or out of spec cables and small differences in shielding and therefore ground noise, they all perform exactly the same. And you can easily measure it: Either digitally directly at the receiving end of the cable, or indirectly out of the DAC using an additional ADC. Results will be infinitely more reliable than using human hearing to guesstimate them.

Listening trials introduce lots of new variables and potential pitfalls which simply muddy the waters in cases like this. There are valid cases for ABX testing, like establishing the average distortion threshold of listeners, or the preference target for a type of headphone. USB cables and digital signals in general - including for example network streams - are easily tested using digital equipment.
 
From a methodology standpoint:
I like the elimination of USB source lock timing. That would be a big tell. You would need to switch DACs over the course of the experiment to eliminate individual variations in the DACs themselves. You have provided for that. However, someone involved with the test will know the DACs have been switched, which is not double blind.

If the tester knows which sequence is being run, even if they don’t know which cable is connected to which DAC, it’s not truly double blind. That’s the advantage of a true ABX test. No one knows what X is until the test is complete. It’s a minor point, but you will get skewered for it by those who talk ABX but have never set one up or participated in one.

Statistically, you need a large number of trials to determine if there is a perceptible difference, especially if the difference is minor. Cable differences will fall into the minor category, or the non-existent category.

What you are proposing would take a long time to generate enough decision samples over a large enough test cohort to be statistically significant.
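To put rough numbers on the trial-count point, here is a quick sketch using only the Python standard library. The exact binomial test below is my own illustration, not taken from the protocol, and the trial counts are arbitrary examples:

```python
# Sketch: how many correct answers does an ABX run need before the result
# is statistically significant? Exact one-sided binomial test against
# chance (p = 0.5). Trial counts below are illustrative only.
from math import comb

def p_value(correct: int, trials: int) -> float:
    """P(at least `correct` successes in `trials` fair coin flips)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# Smallest score that reaches p < 0.05 for each trial count.
for n in (10, 16, 25):
    k = next(k for k in range(n + 1) if p_value(k, n) < 0.05)
    print(f"{n} trials: need {k}/{n} correct (p = {p_value(k, n):.4f})")
```

Short runs leave almost no room for a single lapse of attention (9 of 10 correct is required at n = 10), which is part of why trial counts balloon when the expected difference is small.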

Is there a way to eliminate the stress factors you have identified while keeping the test shorter and double blind?

Others in the literature have suggested long term ABX testing, using people’s own systems, where listeners are free to take as long as they want over as many trials as they want to decide if they prefer A or B. The challenge with this methodology is getting a large enough cohort to participate and the cost of the ABX equipment needed. And the variability of the performance of the systems themselves.
 
USB is a standardized protocol.

So either the cable transmits what it's supposed to, as mentioned above, or it transmits nothing at all.

This is one of those ludicrous debates in subjective forums, or simply a misleading sales tactic to sell at a higher price.

As for me, the USB cable I use with my converter cost €15.

But I'm not stopping anyone from doing comparative tests where there's nothing to compare.

For example, when I download photos from my camera, the colors are more vibrant with a certain USB cable. ;)
 
I'm all-in for controlled blind listening tests, but not in this case...

In my day job, I've worked in medical research for 25+ years, supporting randomized controlled clinical trials to evaluate new treatments
This isn't a perfect analogy, but USB cables aren't a "new treatment". All we have to do is transmit the digital signal accurately, so it's more similar to pharmaceutical production... You don't do controlled clinical trials on the pills coming off the production line. You just have to validate the chemistry.

If you did run trials on production batches, statistics alone would cause some batches to fail even though there's nothing wrong with them.

ABX tests are also statistical and if you repeat the cable test enough times with enough participants, somebody is going to "guess" 10 out of 10 correct (etc.).
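That multiple-comparisons point is easy to quantify. A minimal standard-library Python sketch (the participant counts are illustrative, not from any real study):

```python
# Sketch of the "somebody will guess 10/10" point: with enough participants
# all guessing at chance, at least one perfect score becomes likely.
# Participant counts below are illustrative only.

def p_any_perfect(participants: int, trials: int = 10) -> float:
    """P(at least one participant scores all `trials` correct by pure chance)."""
    p_perfect = 0.5 ** trials          # a single guesser: 1/1024 for 10 trials
    return 1 - (1 - p_perfect) ** participants

for n in (10, 100, 1000):
    print(f"{n} guessers: P(someone gets 10/10) = {p_any_perfect(n):.3f}")
```

With a thousand chance-level guessers, a perfect 10/10 somewhere in the pool is more likely than not, which is why a lone perfect score proves nothing without the full tally.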
 
I’m firmly in the cables-don’t-matter camp, but I believe the argument made by some audiophiles is not whether the cable reliably transmits data as expected, but rather that it picks up and transmits noise into the receiving device. For absolute clarity, I am not claiming this; I'm just chiming in with what I often hear from audiophiles on various forums.
 
Skip the test and just go straight to the manufacturer and ask them how they burn-in test their "omg audiophile better-sound cable" to ensure it isn't broken and performs just like a $2 cable.
 
I’m puzzled by all of the expressions of futility and pointlessness here in response to the idea of rigorous and well-designed tests with the potential to expose and embarrass the audiophile claims of people currently hunkered down in the fog bank of unchallenged subjectivity.

Apathy and bare-ass cynicism are not scientific values.
 
I did my part in clinical trials when I was younger. Why did we bother with clinical trials? Because it changes practice. If new evidence emerges to support a different type of intervention, doctors would adopt it. The stronger the evidence, the faster the adoption.

Maybe you should ask yourself whether doing a gold-standard, double-blind, placebo-controlled test would have any persuasive power. Remember that tests like these show a strong tendency toward the null result, even when a difference exists. You would need to do a statistical power calculation and recruit enough subjects. And you would likely need to test each subject to confirm they have normal hearing, not just self-reported normal hearing. Ask yourself how many people were recruited in landmark medical studies. AFFIRM had to recruit 4,000 patients to show there was no difference between the two study cohorts. How many would you need to recruit? If your study is statistically underpowered, people will dismiss it. And even if it were adequately powered, people would still dismiss it! If they cared about science, they wouldn't believe in silly things like audible differences between USB cables!
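The power question can be sketched in a few lines of standard-library Python. The true-detection rate p_true = 0.7 below is an illustrative assumption, not a measured figure:

```python
# Sketch: if a listener truly hears a difference p_true of the time, how
# often does an n-trial ABX test actually reach significance (alpha = 0.05)?
# p_true is an assumed, illustrative effect size.
from math import comb

def tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def power(n: int, p_true: float, alpha: float = 0.05) -> float:
    # Smallest score whose chance-level tail falls below alpha, then the
    # probability a real (p_true) listener clears that bar.
    k_crit = next(k for k in range(n + 1) if tail(n, k, 0.5) < alpha)
    return tail(n, k_crit, p_true)

for n in (10, 25, 50):
    print(f"{n} trials, p_true = 0.7: power = {power(n, 0.7):.2f}")
```

Under this sketch, ten trials per listener at that effect size gives only about 15% power, which is exactly how underpowered listening tests end up "proving" the null.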

I put it to you that you are wasting your time. If you want to publish and make a name for yourself in the audio world, there are plenty of things you could study.
 
The fact that people who are less informed about a topic still talk about it doesn't mean that the "debate is still open". There will always be people opposing verified facts, no matter how well established those may be.

There is also no reason to do ABX-trials with USB cables. It's a digital signal. Apart from the possibility of broken or out of spec cables and small differences in shielding and therefore ground noise, they all perform exactly the same. And you can easily measure it: Either digitally directly at the receiving end of the cable, or indirectly out of the DAC using an additional ADC. Results will be infinitely more reliable than using human hearing to guesstimate them.

Listening trials introduce lots of new variables and potential pitfalls which simply muddy the waters in cases like this. There are valid cases for ABX testing, like establishing the average distortion threshold of listeners, or the preference target for a type of headphone. USB cables and digital signals in general - including for example network streams - are easily tested using digital equipment.
Some people believe USB cables transmit only binary digital data (0s and 1s), and others assert that USB cables can still affect the output stage of a DAC (antenna effect, etc.). This test is designed to control for both scenarios by using shielded and unshielded cables, as well as two different DACs: one bus-powered DAC with no galvanic or chassis isolation, and another DAC with a separate power supply, galvanic isolation, and chassis isolation. If there is no effect with any combination of cables and DACs, the "digital only" part of the debate will now have data to support or disprove it.

Edit: The problem with audio analysis of the DAC output using an audio analyzer is that the analyzers themselves are extremely well shielded and resilient against RFI/EMI. The objection I've seen is that not all DACs have this type of shielding, and therefore the antenna effect could affect the DAC output stage regardless of whether the data stream is stable. This protocol directly addresses that objection by including a bus-powered DAC with no shielding.
 
From a methodology standpoint:
I like the elimination of USB source lock timing. That would be a big tell. You would need to switch DACs over the course of the experiment to eliminate individual variations in the DACs themselves. You have provided for that. However, someone involved with the test will know the DACs have been switched, which is not double blind.

If the tester knows which sequence is being run, even if they don’t know which cable is connected to which DAC, it’s not truly double blind. That’s the advantage of a true ABX test. No one knows what X is until the test is complete. It’s a minor point, but you will get skewered for it by those who talk ABX but have never set one up or participated in one.

Statistically, you need a large number of trials to determine if there is a perceptible difference, especially if the difference is minor. Cable differences will fall into the minor category, or the non-existent category.

What you are proposing would take a long time to generate enough decision samples over a large enough test cohort to be statistically significant.

Is there a way to eliminate the stress factors you have identified while keeping the test shorter and double blind?

Others in the literature have suggested long term ABX testing, using people’s own systems, where listeners are free to take as long as they want over as many trials as they want to decide if they prefer A or B. The challenge with this methodology is getting a large enough cohort to participate and the cost of the ABX equipment needed. And the variability of the performance of the systems themselves.
Thanks for reading the protocol! Much appreciated, and so are your comments. I agree; the statistical power in the draft is currently 80% with 95% confidence, and to reach 90% power, we'll need about 50% more participants (17-22). 95% power would require about 25-30.

This would definitely take a while, but far more time has already been spent on less rigorous studies, and people continue to point out their methodological flaws. To draft this protocol, I scoured this and other forums for the methodological flaws people had pointed out and tried to correct each one.

My current thinking is that we can focus on drafting the best protocol, and then optimize it for study conduct as a separate effort.
 
That’s also what the vaccine deniers say…

You’ll never convince those people, never mind the audiophiles :facepalm:
I agree there is a subset of people who won't be convinced. I actually deal with some anti-vax people in my day job, so that is partly why I think transparency and input at the test design stage are so important. If we use a public comment period to allow people to have input on the test before we conduct it, some portion of the non-believers will be more open to the results.
 
Some people believe USB cables transmit only binary digital data (0s and 1s), and others assert that USB cables can still affect the output stage of a DAC (antenna effect, etc.). This test is designed to control for both scenarios by using shielded and unshielded cables, as well as two different DACs: one bus-powered DAC with no galvanic or chassis isolation, and another DAC with a separate power supply, galvanic isolation, and chassis isolation. If there is no effect with any combination of cables and DACs, the "digital only" part of the debate will now have data to support or disprove it.
If you are testing for susceptibility to interference (since it is one of your hypotheses), then your tests have to specify and control the sources of that interference, the environment in which it occurs, how sensitive the devices are to it (and you can expect different DACs to behave differently), and so on.

This is many orders of magnitude more complicated than simple input-signal and output-level matching.
 
There are some deeper issues, too. A blind test, no matter how sophisticated and scientifically sound, can never prove there is no difference. This is the straw the subjectivist crowd will clutch at. They will quickly invent a huge number of ad hoc theories as to why the test in question did not reveal the differences that anybody with a "revealing" enough system is able to hear. They simply do not understand the concept of burden of proof.
 
Thanks for reading the protocol! Much appreciated, and so are your comments. I agree; the statistical power in the draft is currently 80% with 95% confidence, and to reach 90% power, we'll need about 50% more participants (17-22). 95% power would require about 25-30.

This would definitely take a while, but far more time has already been spent on less rigorous studies, and people continue to point out their methodological flaws. To draft this protocol, I scoured this and other forums for the methodological flaws people had pointed out and tried to correct each one.

My current thinking is that we can focus on drafting the best protocol, and then optimize it for study conduct as a separate effort.

Having done dozens of tests on USB cables, including ones designed for experimenting with shields and geometry, I found there was sometimes a measurable difference between cables in the noise floor, but normally below any thresholds of hearing. And yes, I've done multiple blind tests on USB cables, as well, although obviously the results can't be generalized to anyone else.

The problem with the induced-noise hypothesis is that it depends on too many variables, including ambient EMF noise levels and spectrum, the specific DAC or device topology and design, the grounding scheme, and what other circuits are connected besides the USB cable.

If you're really concerned with induced noise, use a USB galvanic isolator that will not let through any of the induced noise. These are available for under $30 and are extremely effective. I use such devices in all my measurements, as it eliminates one source of error that is external to the device I'm measuring.
 