audio_tony
Addicted to Fun and Learning
My server uses a fairly high-spec Intel network card with a large heatsink, and I never see any noise created by network activity, even when I run network stress tests. I get what you're saying here: network activity can create electrical noise. But I see two issues with your description, and both are important for answering the question you're addressing.
One is jitter. My understanding is that network data gets reclocked at the destination. So you either get audible dropouts and clicks/ticks from a defective, unreliable connection, or you get zero audible impact - there’s nothing in between those two extremes when it comes to jitter over the transmission medium.
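The "dropouts or nothing" behaviour follows from how buffered playback works, and it can be sketched in a few lines. This is a toy simulation with made-up numbers (packet interval, jitter range, buffer depth are all hypothetical), not a model of any real DAC: packets arrive with random timing jitter, but playback consumes them on the receiver's own fixed clock, so arrival jitter never reaches the output; only an empty buffer would be audible.

```python
import random

random.seed(0)
PACKET_INTERVAL_MS = 10.0   # nominal packet spacing (hypothetical)
JITTER_MS = 4.0             # +/- network arrival jitter (hypothetical)
PREBUFFER_PACKETS = 5       # packets buffered before playback starts
N_PACKETS = 1000

# Jittered arrival times at the receiver.
arrivals = [i * PACKET_INTERVAL_MS + random.uniform(-JITTER_MS, JITTER_MS)
            for i in range(N_PACKETS)]
arrivals.sort()

# Playback begins once the pre-buffer is filled, then consumes one packet
# every PACKET_INTERVAL_MS on the local clock, regardless of arrival timing.
start = arrivals[PREBUFFER_PACKETS - 1]
underruns = 0
for i in range(N_PACKETS):
    playback_time = start + i * PACKET_INTERVAL_MS
    if arrivals[i] > playback_time:   # packet not there yet -> dropout
        underruns += 1

print(f"underruns: {underruns}")
```

With any sensible pre-buffer, the underrun count is zero: the output timing is set entirely by the local playback clock, exactly the "reclocked at the destination" point above. Starve the buffer badly enough and you get dropouts; in between, there is nothing for jitter to do.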
The other is your final sentence. You would only wish you had a quieter PSU if the noise injected into the data stream by your current PSU actually made its way out of the DAC. And as noted by several folks above, and demonstrated by tests run here by Amir and by many others online, this is not a problem with almost any DAC.
If there were audible dropouts, there would be some serious packet loss.
On a LAN, packet loss usually only occurs when there is a media fault, e.g. failing switch, damaged cable or a poor connection.
However, packet loss can also occur if the network is very heavily loaded, a scenario that is extremely rare (effectively non-existent) on a home LAN.
At a company I used to work for, there were 1500 users connected via a multitude of switches and a very busy network, to our central comms. room.
We used to stream music from a server in the comms. room via fibre uplinks to switches, which then connected via Ethernet to various desktops.
We never experienced dropouts.
Audio is not treated any differently to 'ordinary' data - when audio is sent across a network, it is data.
The operating system's network stack (Google the 'OSI model') handles network I/O; it doesn't care about, or even differentiate between, the kinds of data it's passing.
With regard to jitter (which seems to be the last bastion of "believers", as everything else has been comprehensively debunked): networks simply don't corrupt data or contribute jitter.
A quick search for "Ethernet clock" threw up this comment: https://forums.anandtech.com/threads/how-does-ethernet-get-its-clock.2094902/
Quoted below:
Well, you could think the transmitter and receiver would have their own internal clocks and just sample based on that? That wouldn't work, as they could never be guaranteed to be synchronized. Thankfully, Ethernet frames have a Frame Check Sequence (FCS), which is a CRC checksum of the entire frame; that way the receiver knows if the frame has been modified. The receiver runs the same calculation on the entire frame: if the FCS matches, it knows the frame is intact. If not, the frame is tossed and considered "bad". This is why "the network" can never corrupt data - there is built-in error checking.
So about that clock... on the front end of every single Ethernet frame is what's called the preamble. It's a sequence of 64 bits of ones and zeros (high/low voltage) in a specific pattern that allows the receiver to synchronize its clock for that specific frame. 10101010..., something like that; I forget the exact sequence, but you get the idea.
So the preamble is what sets the receiver's clock, and every single Ethernet frame is preceded by these 64 bits. When folks talk about a normal Ethernet frame size of 1514 bytes, that does NOT include the preamble. This is why you can never achieve 100% utilization on Ethernet: the preamble is always there and is not part of the frame. At best, using maximum frame sizes, you can only get 98.5 or so percent, and it's even worse with smaller frames, because the preamble then becomes a larger percentage of each transmission.
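Both points in the quote can be checked with a few lines of Python. The real FCS is a CRC-32, computed here with `zlib.crc32`; the frame and overhead sizes are the standard Ethernet figures (8 bytes of preamble plus start-frame delimiter, and a 12-byte-time inter-frame gap), which lands in the same ballpark as the quote's ~98.5% figure.

```python
import zlib

# 1) Error detection: the receiver recomputes the CRC and discards any
#    frame whose checksum doesn't match, so corrupted frames never reach
#    the operating system.
frame = bytes(range(256))
fcs = zlib.crc32(frame)

corrupted = bytearray(frame)
corrupted[10] ^= 0x01                       # flip a single bit "in transit"
print(zlib.crc32(frame) == fcs)             # intact frame -> accepted
print(zlib.crc32(bytes(corrupted)) == fcs)  # corrupted frame -> dropped

# 2) Utilization: preamble/SFD and the inter-frame gap accompany every
#    frame, so the wire can never be 100% occupied by frame data.
frame_bytes = 1518                 # max standard frame incl. header and FCS
overhead = 8 + 12                  # preamble/SFD + inter-frame gap
efficiency = frame_bytes / (frame_bytes + overhead)
print(f"max utilization: {efficiency:.1%}")
```

A single flipped bit is always caught by CRC-32, which is why a frame either arrives intact or doesn't arrive at all - there is no "slightly degraded" state for the bits to be in.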
In summary:
Audiophile Ethernet cables make no difference to the sound.
Audiophile switches make no difference to the sound.
Audiophile routers make no difference to the sound.
To an Ethernet network, data is data, bits are bits.
I believe @amirm has comprehensively debunked all this on this very forum anyway....