do you want to get to right answer here or are you giving up?
I’ve been in the trenches of digital media my entire career - plenty of right answers have already been given.
My reason for listening to CDs and watching Blu-rays is mostly that I get a better compression rate, lossless compression or no compression at all.

I suppose the fuel that audiophools have is that, no, streamed, downloaded or really any "remote" audio is asynchronous and... it's not perfect.
OK, you gave up. Sad.
What exactly am I giving up on?
1. You're inferring SRC to slave a clock when there is no clock to slave it to.
2. Same as above. The local clock is the only clock in the universe from the DAC's perspective, so the error can't be known. The buffer won't overflow as the application isn't going to fetch segments it doesn't need, and if it underflows playback will pause until it's recovered.
Before you attempt to add underflow to your list of non-existent problems to solve, do the math on how long it'd take to consume 1s of buffer with the clock tolerances we're dealing with today.
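To actually do that math, here's a quick back-of-the-envelope sketch. The 100 ppm figure is my assumption (roughly +/-50 ppm crystals on both ends, worst case), not a number from the thread:

```python
# Rough drain-time calculation for a playback buffer when the DAC clock
# runs slightly fast relative to the rate the source delivers samples.
# Assumption: 100 ppm worst-case relative clock error.

def seconds_to_drain(buffer_seconds: float, clock_error_ppm: float) -> float:
    """Time until a buffer of `buffer_seconds` is consumed by clock drift."""
    return buffer_seconds / (clock_error_ppm * 1e-6)

for buf in (1.0, 10.0):
    t = seconds_to_drain(buf, clock_error_ppm=100)
    print(f"{buf:>4.0f} s of buffer lasts {t:,.0f} s (~{t / 3600:.1f} h)")

# 1 s of buffer  -> ~10,000 s (~2.8 hours) before an underrun
# 10 s of buffer -> ~100,000 s (~28 hours, roughly a day or more)
```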
All of this has been explained to you multiple times.
Ok, so we agree... pfff
No, what you say is that both clocks differ and at some point in time you have to insert zeros ('pause') to get things right again... that's what I said.

No, not really. What you've been saying is quite different.
Great write-up! The only case I would like to add is broadcasting of live streams (internet radio/TV). In this case the rendering device needs to reconstruct the sample clock with some kind of (digital) PLL or do sample rate conversion. This process is well understood but could potentially be degraded by packet jitter/delays if not implemented perfectly.
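To make that a bit more concrete, here's a minimal sketch of the usual approach (my illustration, not from the post): the renderer watches its jitter-buffer fill level and nudges the resampling ratio so the buffer neither drains nor grows, which behaves like a software PLL slaved to the incoming packet rate. The target fill and gain values are made up:

```python
# Toy feedback loop for reconstructing the sender's sample clock:
# the resampling ratio is steered by how far the jitter buffer sits
# from its target fill level. Gains and sizes are illustrative only.

TARGET_FILL = 0.5      # keep the buffer half full
KP = 1e-4              # proportional gain (ppm-scale corrections)

def update_ratio(current_fill: float, ratio: float) -> float:
    """Return a new output/input resampling ratio.

    current_fill: buffer occupancy in [0, 1]
    ratio: previous ratio (1.0 means no correction)
    """
    error = current_fill - TARGET_FILL
    # Buffer filling up -> consume slightly faster (ratio > 1);
    # buffer draining   -> consume slightly slower (ratio < 1).
    return ratio + KP * error

# Example: buffer is at 52% -> ratio nudged up by ~2 ppm.
print(update_ratio(0.52, 1.0))   # 1.000002
```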
With live internet radio this is not the case: the source 'pushes' the data at a fixed rate.
If indeed 'live' comes with a minimum 10-second delay, then these xx ppm clock differences do not create problems anymore. It would take days before such buffering could run out.

In terms of "Live" as in a person in a room with a mic, online, with listeners:
The "Live" audio is recorded to a file. A period of time later, the HTTP service is primed to begin accepting connections at the start of the file.
So, the "Live" studio is ahead of the listeners by maybe 10 seconds, but a minute is probably more likely.
When anything goes wrong, or someone drops the F bomb on the stream, they press a button on the server and everyone's stream immediately switches to an "Alt" stream, like ads, jingles or just back to the looping playlist.
So the "live" listeners are probably receiving up to a minute of audio as a local buffer, while being served from a longer buffer.
I think one minor, but missed, bit of information thus far on "how all this works" might be to remind folks that the VAST majority of internet traffic is "peer to peer", uni-cast. There are extremely few "broadcast" or even "multi-cast" services on the internet, and those that do exist are usually things like Netflix backbone routing agreements that allow them to send wide-spanning multi-cast packets on IPv6 instead of sending a separate stream for every user.
In internet radio station terms... uni-cast. If 1000 people are listening, there are 1000 open individual HTTP connections and each of them is at a different position in the stream. The internet radio software is nothing more than an HTTP server, a playlist manager and a little bit of logic to keep the clients from running off the end of the buffer.
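Just to show how little is really needed, here's a toy push-style sketch of that "HTTP server plus playlist" idea. File names, bitrate and chunk sizes are assumptions; real servers keep a rolling buffer that each client reads from at its own offset rather than pushing the same chunk to everyone:

```python
# Minimal sketch of an "internet radio" source: one paced reader pushes
# MP3 chunks to every connected listener over plain HTTP (unicast).

import threading, time, queue
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

PLAYLIST = ["track1.mp3", "track2.mp3"]   # hypothetical files
BITRATE = 128_000                          # bits per second
CHUNK = BITRATE // 8                       # roughly one second of audio

listeners = set()                          # one queue per connected client
listeners_lock = threading.Lock()

def feeder():
    """Read the playlist at real-time rate and fan chunks out to listeners."""
    while True:
        for path in PLAYLIST:
            with open(path, "rb") as f:
                while chunk := f.read(CHUNK):
                    with listeners_lock:
                        for q in listeners:
                            # Cap per-client backlog so slow clients don't
                            # make the server buffer forever.
                            if q.qsize() < 10:
                                q.put(chunk)
                    time.sleep(1.0)        # pace at ~1 chunk per second

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        q = queue.Queue()
        with listeners_lock:
            listeners.add(q)
        self.send_response(200)
        self.send_header("Content-Type", "audio/mpeg")
        self.end_headers()
        try:
            while True:
                self.wfile.write(q.get())  # blocks until the feeder supplies data
        except (BrokenPipeError, ConnectionResetError):
            pass
        finally:
            with listeners_lock:
                listeners.discard(q)

threading.Thread(target=feeder, daemon=True).start()
ThreadingHTTPServer(("", 8000), StreamHandler).serve_forever()
```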
xx ppm clock differences do not create problems anymore
The stream/s we generate are forwarded to a service provider that handles the unicast connections.
When you get into those kinds of service providers, you would be amazed at the kind of high-scale hardware they can roll out for things like this. Hardware that most people wouldn't even consider exists, or that there's a need for.
Where the real magic happens in today's era is in how the big international media giants manage "in-rush" and the release of large titles. The next big series they drop an Ep. 1 of, you could have maybe 5 million people in the UK alone streaming it. Maybe 100 million worldwide isn't completely out of scope.
How do you manage to keep all of those uni-cast connections all reading the same file?
You don't. You pre-ship the high-demand items, once, to hundreds of "edge nodes" around the world. When a stream is requested, a "mirror" is selected which is closer to the client. When an edge node reaches a limit, another one is spun up; when a datacentre reaches the company's bandwidth limit, they spin an edge node up in another data centre.
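In pseudo-code terms, the "mirror selection" part is roughly this kind of logic (node names, regions and capacities are entirely made up for illustration):

```python
# Toy "mirror selection": pick an edge node in the client's region that
# still has capacity, falling back to any node with headroom. In reality
# the fallback path is where an orchestrator spins up a new node.

EDGE_NODES = [
    {"name": "lon-1", "region": "eu-west", "load": 0.92, "capacity": 1.0},
    {"name": "ams-1", "region": "eu-west", "load": 0.40, "capacity": 1.0},
    {"name": "nyc-1", "region": "us-east", "load": 0.10, "capacity": 1.0},
]

def pick_mirror(client_region: str) -> dict:
    candidates = [n for n in EDGE_NODES
                  if n["region"] == client_region and n["load"] < n["capacity"]]
    if not candidates:
        candidates = [n for n in EDGE_NODES if n["load"] < n["capacity"]]
    return min(candidates, key=lambda n: n["load"])

print(pick_mirror("eu-west")["name"])   # ams-1: same region, most headroom
```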
Same challenge, but slightly more fragile (and why, still to this day, they are fraught with tech troubles), are actual LIVE multi-million-viewer streams. For example, I was watching a NASA LIVE broadcast on YouTube that claimed to have over 5 million concurrent viewers. The principle is the same: "fan out". But this time it's a tree of stream forwarders. The master streams it to a multiplexer, which sends it to 100 main servers, which each stream it on to 100 localised servers.
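The arithmetic of that fan-out tree is what makes 5 million concurrent viewers possible. The per-server viewer capacity below is an assumption, just to show the shape of it:

```python
# Rough fan-out arithmetic for the tree described above: one origin ->
# 100 main servers -> 100 local servers each. Capacities are illustrative.

main_servers = 100
local_per_main = 100
viewers_per_local = 500          # assumed per-server unicast capacity

leaf_servers = main_servers * local_per_main          # 10,000 edge servers
total_viewers = leaf_servers * viewers_per_local      # 5,000,000 concurrent
origin_streams = main_servers                         # origin only serves 100

print(leaf_servers, total_viewers, origin_streams)
```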
These servers don't need to be "real" either. Ultimately they WILL have physical hardware assigned, but media companies can literally "spawn and spin up" a new edge node in any datacentre (they have their stuff in) in the world in seconds and destroy it again when it's not needed.
It's off topic, but those "virtualisation" layers, these days, go very, very deep. To the point that most networks beyond the home are not what they appear to be at all and are just a "logical" layer over or within something else. The 'actual' hardware has been desperately abstracted away behind "cloud" and IaaS, Infrastructure as a Service.
Without it, big media giants could not scale to the size they need for high demand and they couldn't afford to keep it all running when it's quiet.
That's an ongoing significant issue for which there is no current fix when using scaled unicast. The higher the quality, the deeper the buffer, so UHD watchers saw the goal nearly 60 seconds after those watching on DTT.

That's an impressive way of working! But also the end of 'real' live streams?
Or will the performance of these systems evolve to support at least lower latencies, so you don't find out the winning goal has been scored when your neighbor, who is watching cable, starts cheering?
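Purely as an illustration of why the delay scales with quality: if delivery is segmented (HLS/DASH-style), the latency is roughly encode time plus CDN hops plus segment length times the number of segments the player buffers. All the numbers here are assumptions, not measurements:

```python
# Back-of-the-envelope "glass to glass" latency for segmented unicast
# streaming. Component values are assumed for illustration only.

def stream_latency(segment_s, buffered_segments, encode_s=5, cdn_s=2):
    """Seconds behind the live event for a player that buffers
    `buffered_segments` segments of `segment_s` seconds each."""
    return encode_s + cdn_s + segment_s * buffered_segments

print(stream_latency(segment_s=6, buffered_segments=3))   # ~25 s
print(stream_latency(segment_s=10, buffered_segments=5))  # ~57 s, UHD-ish
```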