
Do USB Audio Cables Make A Difference?

Kane1972

Active Member
Joined
Dec 11, 2018
Messages
298
Likes
103
Noise is likely the problem. Jitter on SPDIF can affect the output timing directly, since the clock is derived from the SPDIF stream. USB data is sent in packets and stored in memory before being clocked out using an independent oscillator. Jitter within USB spec should have no effect on DAC output.
Yes, I thought that was probably the case but I’m sure AC3 is also sent in packets and I read that that did not make it immune to jitter.
 

mansr

Major Contributor
Joined
Oct 5, 2018
Messages
4,685
Likes
10,703
Location
Hampshire
Here is an interesting old post from "Mal" on another forum about cable-induced jitter. Not sure if USB audio is the same, though?

Let's take a look at the timing issue. According to the theory of digital transmission of analogue signals, in order to reproduce the original analogue signal perfectly, samples are supposed to be separated by exactly 1/f seconds, where f is the sampling rate in Hz. Now, in the real world, no system can be designed to present samples at an absolutely exact interval. So there will be an error in the timing of each sample; that is a simple fact. As long as we can design systems where this error is low enough for the data to still be transmitted intact, we at least have a working system. However, unless the samples are perfectly timed, the resulting reconstructed analogue waveform will not be a perfect replica of the originally encoded one.
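A rough worked example of the scale involved (my own numbers, not from the post): for a full-scale sine A·sin(2πft), the steepest slope is 2πfA, so a sample taken Δt seconds off its ideal instant is wrong by at most about 2πfA·Δt:

```python
import math

# Back-of-the-envelope sketch (my own numbers, not from the post):
# worst-case amplitude error caused by sampling a full-scale sine at
# the wrong instant. Max slope of A*sin(2*pi*f*t) is 2*pi*f*A, so the
# error is roughly 2*pi*f*A*dt for a timing error of dt seconds.

def jitter_error_dbfs(f_hz, dt_s, amplitude=1.0):
    """Worst-case error, in dB relative to full scale, for timing error dt_s."""
    err = 2 * math.pi * f_hz * amplitude * dt_s
    return 20 * math.log10(err)

# 1 kHz full-scale tone with 1 ns of timing error:
print(round(jitter_error_dbfs(1000, 1e-9), 1))  # about -104 dBFS
```

So a nanosecond of timing error on a 1 kHz tone sits around -104 dBFS, which is why the audibility question below is genuinely debatable rather than obvious.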

When you send a digital signal down a cable then the waveform you get out at the other end doesn't look the same. The digital signal is a series of pulses:

[attached image: idealized square pulse train]


It is impossible to generate pulses with a perfectly vertical rise, a right angle at the top, and a perfectly vertical drop, since those instantaneous changes in amplitude represent infinite frequency. Rather, you will have a slightly rounded-looking version of these pulses.

Once you send these down any length of cable, they will be further changed due to energy loss. Any cable has a transfer function which tells you how well it passes different frequencies. No cable can pass infinite frequency; most can't even pass more than a few MHz. If you are wondering why, for digital transmission, plastic TOSLINK (able to pass 30 MHz) is not considered as good as glass (which can pass 60 MHz or more), the answer lies in how accurately it delivers the pulse train: the lower the rated frequency capability, the more rounded the signal becomes.

Furthermore, other distortions occur to the pulse train due to reflections, electrical interference (in the case of metal cables), and so on.

Now, in order to reconstruct the data transmitted by the cable, all we need to know is where the 1s and 0s are, so it may be tempting to assume that as long as you can still recognise the pulse train, however rounded and distorted it has become, you are still in business. The problem is that once a pulse no longer looks like it did originally, it is no longer as easy to define where it is in time. In the original signal you could say that the beginning of each pulse is the sample timing reference point, but once that has been rounded off, where do you set the reference point? Maybe we could choose the middle of the pulse? Again, once the corners have been rounded off and the general shape of each pulse is distorted by random effects such as electrical interference, you can no longer define the timing of sample points so accurately.
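A small sketch of that last point (my own illustration, not from the post): a receiver typically decides where a pulse is by detecting when the waveform crosses a voltage threshold, and on a rounded (slower) edge the same amount of amplitude noise shifts that crossing further in time:

```python
# Sketch (my own illustration, not from the post): how amplitude noise on a
# rounded edge turns into timing error at a threshold detector. A slower
# edge has a lower slew rate, so the same noise voltage moves the
# threshold-crossing instant by more: dt ≈ v_noise / slew_rate.

def crossing_jitter(rise_time_s, swing_v, noise_v):
    """Timing shift caused by noise_v volts of noise on a linear edge."""
    slew = swing_v / rise_time_s  # volts per second on the edge
    return noise_v / slew         # seconds of timing error

# Clean, fast edge: 3.3 V swing in 2 ns, with 50 mV of noise
fast = crossing_jitter(2e-9, 3.3, 0.05)
# Rounded, slow edge: same swing over 20 ns, same noise
slow = crossing_jitter(20e-9, 3.3, 0.05)

print(f"fast edge: {fast*1e12:.0f} ps, slow edge: {slow*1e12:.0f} ps")
```

The tenfold-slower edge gives tenfold more timing uncertainty for the same noise, which is exactly why the rounding described above matters to systems that recover timing from the signal itself.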

Now, since many DACs derive their clock signal from the incoming data stream this is clearly going to affect the resultant analogue output.

Whether the effects of these time-based errors (i.e. jitter) on the reconstructed analogue output are audible is a matter for debate, but that the errors exist is simply a fact of digital audio.
Whoever wrote that doesn't understand how digital data transmission actually works.

Yes, I thought that was probably the case but I’m sure AC3 is also sent in packets and I read that that did not make it immune to jitter.
Are you referring to AC3 over S/PDIF? The data there has a block structure, but it's still a synchronous interface where the receiver has to recover the clock from the incoming bitstream.
 

Julf

Major Contributor
Forum Donor
Joined
Mar 1, 2016
Messages
3,028
Likes
4,035
Location
Amsterdam, The Netherlands
Yes, I thought that was probably the case but I’m sure AC3 is also sent in packets and I read that that did not make it immune to jitter.

AC3 as in the codec used by Dolby Digital? It is a codec, not a physical interface or transmission protocol.
 

Eirikur

Senior Member
Joined
Jul 9, 2019
Messages
318
Likes
510
... since many DACs derive their clock signal from the incoming data stream this is clearly going to affect the resultant analogue output.
Clock recovery with a PLL (phase-locked loop) is a typical S/PDIF artifact and does not apply to modern USB audio.

Any modern DAC with a USB interface works asynchronously with respect to the D/A clock: it collects enough data to keep its internal buffers filled, while generating the analogue output based on its internal (and hopefully accurate) clock. Hence no interface-induced jitter.
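The buffering idea described here can be sketched in a few lines (my own simplified illustration, not any real driver): packets arrive at irregular times and land in a buffer, while the DAC side drains the buffer on ticks of its own local oscillator, so output timing depends only on the local clock:

```python
# Sketch (my own simplified illustration, not a real USB audio driver) of
# the async-USB idea: arrival timing is irregular, but samples are clocked
# out of the buffer by the DAC's own oscillator.
from collections import deque

buffer = deque()

def usb_packet_arrived(samples):
    """Host side: packets arrive in bursts at irregular times; just enqueue."""
    buffer.extend(samples)

def dac_clock_tick():
    """DAC side: called at exactly the local sample rate; emits one sample."""
    if buffer:
        return buffer.popleft()
    return 0  # underrun: buffer starved, play silence

# Two packets arrive in a burst, then the DAC clocks samples out evenly.
usb_packet_arrived([0.1, 0.2, 0.3])
usb_packet_arrived([0.4, 0.5])
out = [dac_clock_tick() for _ in range(6)]
print(out)  # the sixth tick finds the buffer empty and emits 0
```

As long as the buffer never starves or overflows, when a packet arrived has no influence on when a sample leaves the DAC.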
 
Last edited:
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,596
Likes
239,659
Location
Seattle Area
To the extent the USB cable is also used to provide power, its impedance can impact the DAC at the tiny levels we are measuring. Insufficient decoupling could certainly impact the analog output.

On the digital stream itself, a sharper, cleaner square wave has more high-frequency energy and can cause more ground bounce, etc., in the DAC. So, paradoxically, you don't want too clean a USB signal!

These are analog aspects of the USB connection which have nothing to do with delivering bits. And they are certainly measurable on less-than-well-engineered DACs.
 

Kane1972

Active Member
Joined
Dec 11, 2018
Messages
298
Likes
103
To the extent the USB cable is also used to provide power, its impedance can impact the DAC at the tiny levels we are measuring. Insufficient decoupling could certainly impact the analog output.

On the digital stream itself, a sharper, cleaner square wave has more high-frequency energy and can cause more ground bounce, etc., in the DAC. So, paradoxically, you don't want too clean a USB signal!

These are analog aspects of the USB connection which have nothing to do with delivering bits. And they are certainly measurable on less-than-well-engineered DACs.
Interesting.
 

DeepSpace57

Senior Member
Joined
Jun 6, 2019
Messages
312
Likes
125
To the extent the USB cable is also used to provide power, its impedance can impact the DAC at the tiny levels we are measuring. Insufficient decoupling could certainly impact the analog output.

On the digital stream itself, a sharper, cleaner square wave has more high-frequency energy and can cause more ground bounce, etc., in the DAC. So, paradoxically, you don't want too clean a USB signal!

These are analog aspects of the USB connection which have nothing to do with delivering bits. And they are certainly measurable on less-than-well-engineered DACs.

Sorry for my ignorance, but why are so many companies in a race to reduce the noise level of their USB digital outputs/transports?
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,596
Likes
239,659
Location
Seattle Area
Sorry for my ignorance, but why are so many companies in a race to reduce the noise level of their USB digital outputs/transports?
They think or want potential customers to think that such noise goes right to the output of the DAC. In reality, engineers get paid to design DACs that are not sensitive to such input noise. So in practice there is no difference.

Lay intuition about electronics is a dangerous thing in audio. :)
 

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,670
Likes
10,301
Location
North-East
Here's a USB cable comparison showing some differences :)

Lush2 audiophile USB cable vs. a generic (came with one of the non-audio USB devices). Lush2 has some jumpers to allow for various shield/ground combinations. It was used in stock configuration, as shipped. Red is Lush2, blue is generic cable:

View attachment 31828

The test was using Holo Spring DAC in NOS configuration, 100Hz sine wave captured using Apogee Element 24 at 24/96k. Nothing was moved or touched between the tests, other than switching USB cables. I switched them a few times to see that the pattern is repeated.

Why the Lush2 USB cable causes higher-level harmonics around 4 kHz is hard for me to explain, although these are low enough in level. The 60 Hz mains component is obviously there only when using the Lush2, so its default shielding configuration seems to be not as good as the generic USB cable's. The generic cable does have a built-in ferrite bead near both ends. The Lush2 is 0.75 m and the generic 1 m in length.

Ran another set of tests (3 each with each cable, unplugging the cables in between runs). Same exact configuration, none of the components moved between tests except for the cables. Still some differences, but less prominent this time. 60Hz spike is now visible with the generic cable also, but slightly lower than Lush. Probably has to do with slight variations in how the cable is routed between the PC and the DAC. Generic in blue, Lush in red:

lush2-compare2.png


Conclusions? Since there are some changes in levels between the two nights, I have to assume these are related to noise or interference picked up by one or both cables. In both tests, Lush appears to be a little more susceptible to whatever noise is around the test bench, or possibly to the noise on the power line.
 

Julf

Major Contributor
Forum Donor
Joined
Mar 1, 2016
Messages
3,028
Likes
4,035
Location
Amsterdam, The Netherlands
Would be interesting to see comparisons with just a few feet of bare wire (acting as an antenna) attached to the ground of the DAC.
 

pkane

Master Contributor
Forum Donor
Joined
Aug 18, 2017
Messages
5,670
Likes
10,301
Location
North-East
Ran another set of tests (3 each with each cable, unplugging the cables in between runs). Same exact configuration, none of the components moved between tests except for the cables. Still some differences, but less prominent this time. 60Hz spike is now visible with the generic cable also, but slightly lower than Lush. Probably has to do with slight variations in how the cable is routed between the PC and the DAC. Generic in blue, Lush in red:

View attachment 31859

Conclusions? Since there are some changes in levels between the two nights, I have to assume these are related to noise or interference picked up by one or both cables. In both tests, Lush appears to be a little more susceptible to whatever noise is around the test bench, or possibly to the noise on the power line.

Wanted to see if a different DAC would produce similar differences. I used the two USB cables feeding a Phiree U2S USB-to-S/PDIF converter, connected to the Apogee Element 24 DAC input using a Toslink cable, with a Mogami Gold XLR balanced cable feeding the DAC output back to the ADC input on the Element 24. The resulting THD+N was around -105 dB, THD -114 dB, about 5-6 dB better than with the Holo Spring.

This time there appeared to be even fewer differences between the generic and Lush^2 cables. Try as I might, I couldn't get the 60Hz spike to show up (moving cables around, repositioning the Element and Phiree relative to each other, turning on and off various devices in my office):

toslink-e24-96k.png
 
jeremya

Member
Joined
Oct 1, 2019
Messages
20
Likes
9
I do not know about frequency or harmonic distortion, but I know about noise: a picture transferred from my Pentax K-1 via my esoteric audio-homeopathic USB cable ...

View attachment 31841

The same, now by means of my cheap 1999 USB printer cable (Canon-branded, though) ...

View attachment 31843

;)

<meme>Not sure if serious... ;-)</meme>

But just in case you were... Your scenario isn’t as relevant to USB Audio as you may suppose. ;)

USB has multiple protocols that the sender and receiver can engage in. One is called “bulk transfer” and it is both error-detecting and error-correcting by means of cyclic redundancy checks (CRC16) and features guaranteed delivery (including retry of failed messages). This means that if an error in transmission is detected, the recipient can request a re-send of the bad data as many times as needed until it gets it right. All atomic file transfers (and print jobs, etc.) in day to day PC life use Bulk Transfer mode. Unless the cable is broken or the memory on the other end is corrupt, it pretty much can’t get the bits wrong when all is said and done.

Audio, on the other hand, being a real-time process, uses Isochronous Transfer. Think of it like a streaming TV signal — it is essentially a “fire and forget” protocol where you either received the message or you didn’t. A missed or malformed chunk of data is just that — and as the recipient it is up to you to smooth those over somehow (or just insert gaps or hiccups into the playback, your call). This mode features error detection (the recipient can know that something is wrong), but there is no guarantee of delivery and no retry is available.

You can’t rewind time and try again! :)

more here:
https://www.beyondlogic.org/usbnutshell/usb4.shtml
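The bulk-versus-isochronous contrast above can be sketched in miniature (my own toy model, not a real USB stack; I use zlib's CRC-32 as a stand-in for USB's CRC16):

```python
# Toy model (my own illustration, not a real USB stack) of the two transfer
# modes described above: bulk detects a bad checksum and retries until the
# data is intact; isochronous detects the error but cannot retry, so the
# receiver conceals the loss (here: substitutes silence).
import zlib  # zlib.crc32 stands in for USB's CRC16

def send(payload, corrupt=False):
    """Simulated wire: returns (data, checksum); may corrupt the data."""
    data = bytearray(payload)
    if corrupt:
        data[0] ^= 0xFF  # flip bits in the first byte
    return bytes(data), zlib.crc32(payload)

def bulk_transfer(payload, failures):
    """Bulk mode: retry until the checksum matches; delivery is guaranteed."""
    attempts = 0
    while True:
        attempts += 1
        data, crc = send(payload, corrupt=attempts <= failures)
        if zlib.crc32(data) == crc:
            return data, attempts

def isochronous_transfer(payload, corrupt):
    """Isochronous mode: no retry; a bad packet becomes a gap to conceal."""
    data, crc = send(payload, corrupt)
    if zlib.crc32(data) != crc:
        return b"\x00" * len(payload)  # conceal the loss with silence
    return data

data, attempts = bulk_transfer(b"audio", failures=2)
print(data, attempts)                                # intact, after retries
print(isochronous_transfer(b"audio", corrupt=True))  # silence, no retry
```

The bulk path always converges on the correct bytes however many attempts it takes; the isochronous path returns immediately but the bad packet is simply gone.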

In practice, errors are pretty rare (and most of them are heinously audible, and therefore eminently measurable)... so I posit, along with many others, that the real problem with USB is noise induced via the power leg, drawn in from the surroundings (EMI/RFI/antenna effect), or passed in via the ground plane...

Another theory I’ve heard is that errors in the bit stream cause the receiving end to engage error recovery pathways that aren’t required when everything is kosher. That extra circuitry generates extra heat, extra noise, extra EM, and thus affects the purity of the resulting sound vs no errors at all. True in practice? I dunno. Sounds possible, but not everything that is possible actually happens in practice. XD
 
Last edited:

Julf

Major Contributor
Forum Donor
Joined
Mar 1, 2016
Messages
3,028
Likes
4,035
Location
Amsterdam, The Netherlands
Audio, on the other hand, being a real-time process, uses Isochronous Transfer.

That is true for the older isochronous audio mode, but how about the asynchronous mode that modern DACs use?
 

Eirikur

Senior Member
Joined
Jul 9, 2019
Messages
318
Likes
510
That is true for the older isochronous audio mode, but how about the asynchronous mode that modern DACs use?
Still true: the async part can request adjustments to the input packet size and decouples the clocks completely, but isochronous transfers always carry the audio data, and that will remain the standard for some time. This is not much of a worry: based on MTBI calculations you'd lose a bit or so per week of continuous use, as the transmitted signal is differential, making it really robust.
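The "request adjustments" part can be pictured with a toy feedback loop (my own simplified model, with an exaggerated clock offset; the real USB mechanism uses feedback endpoints and is more involved): the device reports how many samples it wants per interval based on its buffer fill, so the host's packet size tracks the DAC's local clock rather than the other way round:

```python
# Toy model (my own simplification, not the USB spec mechanism) of the
# async feedback loop: the device nudges the host's packet size up or down
# to keep its buffer near a target level while draining at its own clock.

NOMINAL = 48       # samples per 1 ms frame at a nominal 48 kHz
TARGET_FILL = 480  # desired buffer level (about 10 ms of audio)

def feedback(buffer_fill):
    """Device -> host: ask for fewer samples when the buffer runs high."""
    if buffer_fill > TARGET_FILL + NOMINAL:
        return NOMINAL - 1
    if buffer_fill < TARGET_FILL - NOMINAL:
        return NOMINAL + 1
    return NOMINAL

# Simulate a DAC whose local clock is fast (exaggerated offset: it drains
# 48.5 samples per frame instead of 48).
fill, history = float(TARGET_FILL), []
for _ in range(1000):
    fill += feedback(fill)  # host sends what the device asked for
    fill -= 48.5            # DAC drains at its own local rate
    history.append(fill)

print(min(history), max(history))  # fill stays bounded; no over/underrun
```

The buffer level settles into a narrow band instead of drifting away, which is the whole point: the clocks never have to agree exactly, they only have to stay loosely coupled through the buffer.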
 
Last edited:

Julf

Major Contributor
Forum Donor
Joined
Mar 1, 2016
Messages
3,028
Likes
4,035
Location
Amsterdam, The Netherlands
Still true, the async part can request adjustments for input packet size and decouples the clocks completely, but isochronous is the standard, and will remain so for some time.

What modern DACs still use isochronous?
 