Addicted to Fun and Learning
- Sep 2, 2018
- Harrow, UK
> It could become an issue with video lip-sync if you’re using a lot of DSP

No, because in the post-production world everything would be compensated (largely automatically) to accommodate any latencies involved. To put it another way, if you know that the sound has, say, 3 s of latency because of processing, the vision is run 3 s later to compensate. Decent post-production systems handle this kind of thing without the user being the tiniest bit aware of what's going on; actually, in the above situation the user might be aware, as there would be a 3 s delay on pressing 'play' while the various buffers are built.
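The compensation described above amounts to delaying every stream to match the slowest one. A minimal sketch (stream names and latency values are illustrative, not from any particular product):

```python
def compensation_offsets(latencies):
    """Return, per stream, the extra delay (in seconds) needed so that
    all streams come out aligned despite different processing latencies.

    latencies: dict mapping stream name -> processing latency in seconds.
    """
    worst = max(latencies.values())
    return {name: worst - lat for name, lat in latencies.items()}

# Hypothetical case from the text: audio DSP adds 3 s of latency,
# video has none, so video playback is started 3 s later.
offsets = compensation_offsets({"audio": 3.0, "video": 0.0})
print(offsets)  # {'audio': 0.0, 'video': 3.0}
```

The 3 s pause on pressing 'play' is then just the largest latency in the set: nothing can come out until the slowest stream's buffer has filled.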
> Why shouldn’t we expect stable performance from a modern device with 5 ms or smaller buffer?

Maybe you can, maybe you can't. But what difference does it actually make? Are we in a buffer race? “Mine's smaller than yours!”
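For context on what a "5 ms buffer" means: a buffer's contribution to latency is simply its length in frames divided by the sample rate. A quick sketch (the frame count and sample rate here are just illustrative values):

```python
def buffer_latency_ms(frames, sample_rate):
    """One-way latency, in milliseconds, of an audio buffer of the
    given length (frames) at the given sample rate (Hz)."""
    return 1000.0 * frames / sample_rate

# e.g. a 256-frame buffer at 48 kHz is roughly 5.3 ms
print(round(buffer_latency_ms(256, 48000), 2))  # 5.33
```

So "5 ms or smaller" at 48 kHz corresponds to a buffer of about 240 frames or fewer per processing block.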