It sounds like a belief built on a misused misconception. When a cable does not have enough bandwidth for a square pulse, it spreads the pulses, which then interfere with the adjacent pulses, and that can cause bit flips. This effect is sometimes loosely called jitter, but in communications theory it is called inter-symbol interference (ISI); jitter properly refers to the timing variation.
If you heed the original Nyquist criterion (the one dealing with telegraphic transmission), your cable needs a bandwidth of at least half your signaling rate to avoid interference and to let you treat the cable as a "flat" channel modeled by a simple gain loss. But even when ISI is present, if your signal-to-noise ratio is good there are plenty of digital techniques (equalization, for instance) that will work better than a thousand-dollar cable.
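To make the ISI picture concrete, here is a small toy model of my own (not anything from the video): rectangular pulses pushed through a first-order low-pass filter standing in for a bandwidth-limited cable. With bandwidth well above the symbol rate the sampled levels stay clean; with bandwidth below it, the preceding ones bleed into a zero and can push it over the decision threshold, i.e. a bit flip.

```python
import math

def rc_filter(samples, dt, fc):
    """First-order RC low-pass: a crude stand-in for a bandwidth-limited cable."""
    alpha = dt / (dt + 1.0 / (2 * math.pi * fc))
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def eye_samples(bits, fc, symbol_rate=1.0, oversample=100):
    """Send rectangular pulses through the filter, sample at symbol centers."""
    dt = 1.0 / (symbol_rate * oversample)
    wave = [float(b) for b in bits for _ in range(oversample)]
    filtered = rc_filter(wave, dt, fc)
    return [filtered[i * oversample + oversample // 2] for i in range(len(bits))]

bits = [1, 1, 1, 0, 1]
wide = eye_samples(bits, fc=10.0)   # cable bandwidth >> symbol rate
narrow = eye_samples(bits, fc=0.2)  # cable bandwidth below the symbol rate

# With plenty of bandwidth the '0' sample stays near 0; with too little,
# energy from the preceding '1's bleeds into it (inter-symbol interference)
# and can pull it above a 0.5 decision threshold.
print(round(wide[3], 3), round(narrow[3], 3))
```

The cutoff values and the single-pole filter are arbitrary choices just to show the mechanism; a real cable is a more complicated channel, but the qualitative effect is the same.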
A digital audio signal is, in the grand world of communications, a low-bandwidth signal though. Cables should still follow the shielding and length standards, but the phenomenon you describe won't happen in audio. There might be bit flips if noise is too high or if the run is too long before a repeating amp and the amplitude collapses, but those are extreme cases. There is no cable used in digital audio that lacks the bandwidth for a 384 kHz stream, and no other digital techniques are needed beyond respecting the protocol standards.
I admit I didn't watch the video, but more generally, an Audio Precision analyser is not the right tool to explain jitter; it can show its effect. An eye diagram on a scope shows jitter. In any case, in audio you don't need bit flips for jitter to have an effect: all the decoded samples can be correct, but if they are converted at a fluctuating rate, the resulting analog signal is distorted.
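That last point can be sketched numerically. In this toy model (my own, with made-up jitter figures), every sample value is decoded perfectly, but the DAC clock fires at slightly wrong instants; the output is then the right value at the wrong time, which for a tone is equivalent to an amplitude error of roughly slope times timing error:

```python
import math, random

def jitter_error_rms(freq_hz, fs_hz, jitter_s, n=10000, seed=0):
    """RMS error when correct samples are converted at jittered instants.

    The sample values themselves are exact; only the conversion clock
    wanders by up to +/- jitter_s around each nominal tick.
    """
    rng = random.Random(seed)
    err2 = 0.0
    for i in range(n):
        t_nominal = i / fs_hz
        t_actual = t_nominal + rng.uniform(-jitter_s, jitter_s)
        # what the waveform should be at the instant the DAC actually fires
        ideal = math.sin(2 * math.pi * freq_hz * t_actual)
        # the (correct) sample value, output at that wrong instant
        held = math.sin(2 * math.pi * freq_hz * t_nominal)
        err2 += (held - ideal) ** 2
    return math.sqrt(err2 / n)

# 10 kHz tone at 48 kHz sampling: no bit flips anywhere, yet more clock
# jitter means more distortion in the reconstructed analog signal.
clean = jitter_error_rms(10_000, 48_000, jitter_s=1e-10)
dirty = jitter_error_rms(10_000, 48_000, jitter_s=1e-7)
print(clean, dirty)
```

The 100 ps and 100 ns jitter amplitudes are illustrative, not measurements of any real DAC; the point is only that the error scales with the timing wander even though every sample is bit-perfect.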