Yes, I'm sure of it.
I worked for over 10 years as a hardware expert in the field of high-availability systems, data backup, and archiving. Data transfer in these areas also uses LVDS at the physical layer, and such systems log every error. Guess how often errors occurred in operation? Exactly: practically never. A handful of errors per month, with daily transfers of 30-150 TB.
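Just to put that into numbers, here is a rough, hypothetical back-of-envelope calculation (my own assumptions, not logged figures: ~100 TB per day on average, five errors per month):

```python
# Rough, hypothetical back-of-envelope: what error rate does
# "a handful of errors per month at 30-150 TB per day" imply?
# Assumptions: ~100 TB/day average, 5 errors/month, 30-day month.
bits_per_day = 100e12 * 8            # 100 TB/day ~ 8e14 bits/day
bits_per_month = bits_per_day * 30   # ~ 2.4e16 bits/month
errors_per_month = 5
ber = errors_per_month / bits_per_month
print(f"implied bit error rate ~ {ber:.1e}")   # ~ 2.1e-16
```

In other words, an implied error rate somewhere in the 10^-16 range, which is as close to error-free as it gets in practice.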
Before that, but that was 25 years ago, I worked on the programming and development of CD jukebox systems for data backup and burning audio CDs.
Even back then, error-free transmission and format conversion were a solved problem.
Around the same time, 2001/2002, we measured the readout and error rates of audio CD players. That, too, was no longer a problem back then, as the current measurements by @NTTY here in the forum confirm.
As part of a project a few years ago, we tested whether I2S offered any advantages over USB. While USB by itself is an incredibly poor solution for audio, interface solutions like XMOS, Amanero, and Xing Audio have raised it to a level where compromises are no longer necessary.
We also tested conversion using I2S, SPDIF, AES, etc., and even after five round trips, not a single bit error occurred.
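For anyone who wants to reproduce that kind of check, here is a minimal sketch of the idea, assuming the stream after the conversion chain is captured back into a file at the same sample rate and bit depth as the source and is time-aligned (filenames and the soundfile/numpy choice are just my placeholders, not the tooling we used back then):

```python
# Compare a source file with a capture taken after the conversion chain,
# sample by sample. Zero differing samples means bit-perfect transmission.
import numpy as np
import soundfile as sf

src, src_rate = sf.read("source.wav", dtype="int32")
cap, cap_rate = sf.read("captured_after_roundtrips.wav", dtype="int32")

assert src_rate == cap_rate, "sample rates differ"
assert src.shape == cap.shape, "length/channel mismatch (check alignment)"

differing = np.count_nonzero(src != cap)
print("differing samples:", differing)   # 0 means not a single bit error
```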
In fact, the use of external I2S devices is decreasing year by year, and it's becoming increasingly irrelevant. The widespread adoption of I2S was triggered by DDCs, back when USB interfaces in DACs were problematic and not very high-quality. Even then, I2S wasn't essential, but the higher transfer/sampling rates were certainly a selling point.
However, for several years now, that interface technology has been built directly into DACs, rendering external DDCs and their I2S connections obsolete.
And so, in every current, good DAC with a USB connection, you have exactly this setup: USB cable -> XMOS interface -> I2S transmission to the DAC chip.
What advantage is a jitter-prone external I2S transmission supposed to offer at this point?
If all these manufacturers weren't just focused on raking in customers' money with the next nonsensical product (DDCs and I2S), they could have integrated I2S, or something else, directly into computers as a high-quality audio interface and established it on the market. But of course, this kind of nonsense apparently makes more sense.
The whole "reduced jitter" thing is just another myth, but this expensive nonsense has to be justified somehow.
There are plenty of measurements, both here in the forum and independent ones, that have clearly demonstrated the jitter susceptibility of the external I2S interface. It takes a really significant effort to reduce it enough to even come close to the performance of DACs fed via USB or SPDIF.
The claim of "direct transmission" is absolute rubbish. Direct means USB cable -> XMOS interface -> I2S transmission to the DAC chip, not:
USB cable -> XMOS interface -> I2S -> LVDS -> external I2S over LVDS transmission via HDMI cable -> LVDS -> I2S -> transmission to the DAC chip.
As for the higher bandwidth and sampling rates: download the free Hi-Res files from Sound Liaison and compare them yourself, completely blind and without any prior knowledge.
Incidentally, I only joined this forum well after our own I2S projects and measurements, meaning my experience was completely independent of ASR and Amir.
And most importantly, every changed bit can be measured or detected; that's a fact. An absolutely bit-perfect data stream cannot sound different, and it doesn't.
If there were actually bit changes caused by anything in the data stream, it would be very easy to prove.
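A hypothetical sketch of how simple that proof would be: a checksum over the decoded PCM payload is all it takes, since identical hashes mean an identical data stream regardless of container or metadata (filenames below are placeholders):

```python
# Hash the decoded audio samples of two files; if the hashes match,
# not a single bit of the audio payload has changed anywhere in the chain.
import hashlib
import soundfile as sf

def pcm_hash(path):
    data, _ = sf.read(path, dtype="int32")            # decode to raw samples
    return hashlib.sha256(data.tobytes()).hexdigest()

print(pcm_hash("before_chain.flac"))
print(pcm_hash("after_chain.wav"))                    # same hash => no bit changed
```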
Could you imagine better advertising for anything? I can't. That would be the first thing I would do as a manufacturer if I wanted to sell something.
But none of these manufacturers have been able to prove that yet.
Strange, isn't it?
In the recording studio and in production, the signal is converted dozens or hundreds of times: I2S, SPDIF, AES, every time it passes through a device, at every DSP chip, via various USB/FireWire interfaces, and so on. How bad and compromised must all the music we hear be if a single conversion or interface supposedly makes such a big difference?