- Oct 4, 2022
The IP protocol only provides error detection (a checksum) for its header, not the payload. On top of it sit the two main transport protocols, TCP and UDP. TCP provides reliable, error-free delivery via checksums, acknowledgements, retransmission and a handshake protocol. UDP does not: it just delivers whatever payload arrives and leaves it to the layers above to implement whatever reliability they need.
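To make the "error detection" part concrete: the IPv4 header checksum (and the checksums in TCP and UDP) all use the same RFC 1071 ones'-complement sum. A minimal Python sketch:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum, as used for the IPv4 header
    (and, over a pseudo-header plus payload, for TCP and UDP)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]  # add next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF  # ones' complement of the sum
```

A receiver runs the same sum over the header it got, checksum field included; an intact header sums to zero. Note this only *detects* corruption; it's TCP's acknowledgements and retransmissions that then *repair* it.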
Early on, UDP was used for a lot of "real time" applications. I think pretty much every streaming platform now leverages TCP; I know Spotify does, as does Tidal. That said, TCP is in no way inherently superior: you could implement error-free delivery on top of UDP as well, and in some ways do a better job with "application aware" optimizations. But the streaming providers have, over time, defaulted to TCP because its built-in error recovery for payloads means they don't have to re-invent anything, just use it: that's faster and cheaper.
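As a rough illustration of what "implementing it yourself on UDP" means: the application prepends its own small header to each datagram so the receiver can detect loss and reordering. This sketch uses a made-up 4-byte sequence-number header; it's not any real streaming protocol's format:

```python
import struct

# Hypothetical application-level header: one big-endian 32-bit
# sequence number prepended to each UDP payload.
SEQ_HEADER = struct.Struct("!I")

def frame(seq: int, payload: bytes) -> bytes:
    """Build one datagram: sequence number followed by opaque payload."""
    return SEQ_HEADER.pack(seq) + payload

def unframe(datagram: bytes) -> tuple[int, bytes]:
    """Split a received datagram back into (sequence number, payload)."""
    (seq,) = SEQ_HEADER.unpack_from(datagram)
    return seq, datagram[SEQ_HEADER.size:]
```

With sequence numbers on every datagram, the receiver can spot missing or out-of-order data and decide what to do about it: an audio player might re-request it, conceal the glitch, or just skip ahead, which is exactly the "application aware" flexibility TCP doesn't offer.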
That also means that the whole talk about "audiophile grade" network equipment and Ethernet cables is utter and complete mental diarrhea. If banks, hospitals and the military run their mission-critical apps on networks, we know that the network is only relevant because of availability considerations, and *never* because you need "better, cleaner bits". If your music system is as important as a heart-rate monitor, a missile detection system or billion-dollar financial transactions, then go invest in *reliability* via equipment and interface *redundancy* with quality network equipment, but avoid any "audiophile" claims, which immediately highlight the fact that the vendor has no clue what they are doing other than milking your wallet shamelessly.
An excellent summary.
On the subject of UDP and its reliability, I would also note that it's still the L4 protocol of choice in lots of one-to-many applications because it underlies multicast. For example, real-time market data is distributed by the financial trading exchanges over multicast. This is not only to reduce load on the network devices distributing the data, but also because regulatory requirements state that, for "fairness", all market data recipients must have the opportunity to get the data at the same time.

At the application layer, the payload data is encapsulated in a proprietary header*, which includes information the application's reliability mechanisms can use to track whether they have received all the data expected. This is, of course, crucial to ensure traders are not using "stale" data, and thus trading on prices that have changed without them knowing, with the risk of increased exposure and/or immediate losses.

If the application detects gaps in the data, it will request (via a long-lived TCP connection) re-transmits of the missing data from the exchange, delivered via another, lower-load multicast feed dedicated to that purpose. If there's still missing data, the application will request a full refresh via the long-lived TCP connection. If there are still gaps, tough! You likely have stale data (and a big problem to fix). And if the exchange detects you're seeing lots of gaps, and you're a major trader who can set prices (a "market maker"), eventually its systems will just automatically kick you out of trading for the rest of the day, to avoid you contaminating the market.
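A toy sketch of the receiver-side gap tracking described above (the class and names are mine for illustration, not any exchange's actual protocol): the application watches the sequence numbers carried in the feed's header and records any missing ranges, which it would then request as re-transmits:

```python
class GapDetector:
    """Track sequence numbers on a (hypothetical) multicast market data
    feed and record gaps the application should request re-transmits for."""

    def __init__(self, first_seq: int = 1):
        self.expected = first_seq          # next sequence number we want
        self.gaps: list[tuple[int, int]] = []  # inclusive (start, end) ranges

    def on_message(self, seq: int) -> None:
        if seq > self.expected:
            # We missed expected .. seq-1: queue that range for re-transmit.
            self.gaps.append((self.expected, seq - 1))
        if seq >= self.expected:
            self.expected = seq + 1
        # seq < self.expected: a duplicate or late re-transmit; ignored here.
```

Real implementations are far more involved (per-channel sequencing, timers before declaring a gap, re-transmit budgets), but the core idea is just this: UDP gives you nothing, so the sequence numbers in the application header give you everything.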
I'd also mention that, of course, there are several buffers used in the trading server's network stack, including at the application layer, so network segment-level considerations are not a problem unless something's badly broken.
The point of all this is that, as you say, UDP can be made extremely reliable, investment banks trust it utterly, and they absolutely must have their network shift the data without risk of error at the application level, or potentially lose literally millions. But, strangely enough, they don't use these magical, wonderful audiophile network switches that can transport the data so much more "cleanly" and "precisely". I wonder why not? They've got the money to buy the best there is. In fact, some of the kit I manage at work that handles mission-critical market data is actually cheaper than some of the high-end "audiophile" switches, and has much greater port density and better functionality and management capabilities, of course. On that topic, I've not so far seen a single audiophile switch that is managed: no QoS configs, no VLAN capabilities, no link aggregation, etc., etc. And these charlatans want thousands for these things, when my little D-Link 8-port cost me £40 and is a managed switch with all of those capabilities.
People are entitled to spend their money on what they like, but it's saddening that it appears many refuse to listen to the facts about this junk. I suppose it's all part of the increasingly anti-expertise, anti-intellect, anti-education world we seem to be inhabiting now...
* And if you're wondering if this makes troubleshooting multiple exchange feeds a pain in the backside, all with their own headers, yes, it does!