
Study: Is I²S interface better for DACs than S/PDIF or USB?

Marantz A-10 and the newest Yamaha. The Marantz is expensive but well made, and it has 8K passthrough, HDMI 2.1, Dirac, and 16 channel outputs. It would be ideal if:
1. The digital volume control operated at 32 bits (I don't know if it does), in order not to decrease bit depth.
2. There was a way to bypass the digital-to-analog conversion, bringing the digital channels directly to the XLRs (analog from the manufacturer).

I think this should be an option offered by the manufacturer, as it is on the Datasat AP-25 and Trinnov.
However, waiting for this to happen would mean that I would be deaf by then...
So hopefully there is a way to do it yourself (if you are good at these things) or to have a tech do it for you.
But I don't know how to do that, so I am asking this forum whether there is already a solution to this vexing problem, as I can't afford a Trinnov/Storm/Datasat given the low value of the Canadian dollar and Canadian taxes on goods (17% in Quebec, Canada).
Best, R
 
Marantz A-10
My Google search does not turn up an A-10, only the AV-10, AMP-10, or SA-10.
The digital volume control operated at 32 bits (I don't know if it does), in order not to decrease bit depth
Real-world DACs cannot utilize the full 24 bits anyway; volume control at 24 bits yields the same result as volume control at a higher bit depth (a quick numeric check follows at the end of this post).
There was a way to bypass the digital-to-analog conversion, bringing the digital channels directly to the XLRs (analog from the manufacturer)
I am afraid I do not understand what you mean by bringing the digital channels to the XLRs - do you mean AES/EBU balanced digital outputs?
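As a quick numeric check of the 24-bit volume point, here is a minimal numpy sketch (assumed parameters: a -20 dB volume setting and a 997 Hz test tone):

import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 997 * t)             # 0 dBFS test tone
y = x * 10 ** (-20 / 20)                    # -20 dB digital volume
q = np.round(y * 2**23) / 2**23             # requantize the result to 24 bits
err_dbfs = 20 * np.log10(np.std(q - y))
print(f"requantization error: {err_dbfs:.1f} dBFS")   # about -149 dBFS

The error added by requantizing the volume-scaled signal to 24 bits sits near -149 dBFS, while even the best measured DACs have noise floors around -120 to -130 dBFS, so carrying the volume math at 32 bits changes nothing audible.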
 
My Google search does not turn up an A-10, only the AV-10, AMP-10, or SA-10.

Real-world DACs cannot utilize the full 24 bits anyway; volume control at 24 bits yields the same result as volume control at a higher bit depth.

I am afraid I do not understand what you mean by bringing the digital channels to the XLRs - do you mean AES/EBU balanced digital outputs?
Yes, AES/EBU digital outputs with volume control.
Also, sorry about the Marantz A-10; you are right, I meant the AV-10. I thought that most AV receivers or pre-processors would take the HDMI audio stream, apply volume in the digital domain before converting it to analog, and send it to the AES channels as analog. I was wondering if it was possible (and hopefully not too expensive) to take the digital stream (before its conversion to analog) and route it directly to the AES channels, making them output digital channels with volume control?
That is because, in the near future, I would want all of my speakers (16 or 22) to be digital, like Genelec, or Neumann KH 150 and KH 120 with AES67. The computer would run Dirac Live for correction and output to the Marantz (or any cheaper suitable product), which would do the digital AES/EBU volume control.
However, the ideal would be a computer with Dirac plus a Dolby Atmos license, able to receive HDMI 2.1 (maybe 2.2 by then) and pass it to a sound card that could then output 16 to 32 channels with digital volume control to the speakers, maybe via DB-25 or something else. I don't know much about the pro audio world, except that some cards can go up to 128 channels and more. I would likely need a lot of time to learn how to use such a device and its software properly.
Alternatively, there are two devices that may make this simpler: the Arvus H2-4D and the Audient Oria.


With the H2-4D and the Oria, one needs a computer with an Atmos license, Dirac if desired, and maybe a video card if one wants to tinker with the video stream.
I would prefer that pre-processors came with the choice of analog or digital outputs. That would likely be simpler, cheaper, and more robust.

So... what is one to do?
 
I thought the benefit of I2S was its ability to pass the DSD layer of an SACD to an external DAC of the user’s choosing, rather than being limited to the internal DAC of the SACD player itself. In that regard, users can take advantage of modern DAC technologies to convert the DSD music layer. SPDIF and Toslink only pass the Red Book CD (44.1 kHz / 16-bit) layer. So yes, I2S has the potential to sound a great deal better, as Paul at PS Audio has said time and time again.

Lastly, we listen with our ears, yet this forum seems to focus primarily on technical data and measurements—often bashing audiophiles, audiophile companies, and for some reason PS Audio in particular. This is not a very kind way of treating others. We all have our opinions, and we should try to take away the good from what others say. Life is too precious to be so vengeful toward others.

I feel like leaving this online forum because it makes me sad reading your posts, Amir. Please reconsider your stance on how you criticize people and companies who are trying to do admirable things for the Hi-Fi community.
And it's precisely this kind of information, taken out of context and false, that causes certain people to lose credibility and then be attacked.
Does that surprise you?

I2S is the original data-transmission format inside CD players, from the reading drive to the internal DAC, so it initially has nothing to do with SACD. It has also been around for 40 years.
Externally, I2S is transmitted via LVDS signaling (the hardware layer), repurposing existing connectors and cables such as HDMI, RJ45, etc.

As long as the SPDIF transmission limits aren't exceeded, the conversion from I2S to SPDIF and back is absolutely lossless; this has been tested often enough.
Even in DACs, the USB, SPDIF, etc., signal is converted back to I2S to be processed in the DAC chip.
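To make the lossless point concrete, here is a simplified sketch of how a 24-bit audio word rides inside an AES3/S-PDIF subframe (preambles and the biphase-mark line coding are omitted; samples are treated as raw 24-bit words):

def pack_subframe(sample, v=0, u=0, c=0):
    bits = [0] * 32                          # slots 0-3 would hold the preamble
    for i in range(24):
        bits[4 + i] = (sample >> i) & 1      # slots 4-27: audio word, LSB first
    bits[28], bits[29], bits[30] = v, u, c   # validity, user, channel status
    bits[31] = sum(bits[4:31]) & 1           # parity: slots 4-31 carry even ones
    return bits

def unpack_subframe(bits):
    assert sum(bits[4:32]) % 2 == 0          # parity intact
    return sum(bits[4 + i] << i for i in range(24))

sample = 0x123456                            # arbitrary 24-bit word
assert unpack_subframe(pack_subframe(sample)) == sample   # bit-exact round trip

The audio word is copied in and copied out verbatim; as long as the sample fits within the format's limits, nothing in the conversion can alter it.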

But now comes the crux of I2S. This data stream is passed directly to the DAC chip without reclocking or jitter reduction. It cannot be processed by the usual routines.
Therefore, with modern DACs, both USB and SPDIF are significantly superior to I2S transmission due to the processing of the digital signals and jitter reduction.

And up to this point, none of this has anything to do with SACD. You can't access the DSD data stream of a SACD at all; it's locked. You need a modified device that outputs the DSD data stream.
For 2-channel stereo, the bandwidth of SPDIF is perfectly adequate, so there would be no advantage to using I2S, nor could there be.
I'm speaking from experience with many tests using different devices. I've never found an advantage to I2S, but often disadvantages.
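To put rough numbers on the bandwidth point (payload only, ignoring framing and line coding):

spdif_max = 2 * 24 * 192_000       # S/PDIF ceiling: 2 ch x 24 bit x 192 kHz
i2s_max   = 2 * 32 * 768_000       # I2S in some implementations
cd        = 2 * 16 * 44_100        # Red Book CD
print(spdif_max / 1e6, "Mbit/s")   # 9.216
print(i2s_max / 1e6, "Mbit/s")     # 49.152
print(cd / 1e6, "Mbit/s")          # 1.4112

S/PDIF's ceiling is already more than six times the CD rate; for 2-channel PCM, the extra I2S headroom buys nothing.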
 
Wouldn't the multiplexing of I2S to SPDIF and back to I2S by an external DAC be prone to data-integrity issues from hardware design, or poor design? Why take that extra step if you don't have to?
@Roland68 explained perfectly well why. So why ask again?
We are always saying the path of least resistance is always preferable.
We are not...
It seems like more and more manufacturers are incorporating I2S into their components for this very reason: because it's a "better" way. I think they are on to something here, but I may also be wrong, I will admit that much. I don't know for sure. Are you 100% certain?
If it were such a good way, it would be a proper standard by now. Sadly, it's not. The pinouts are different, and there are who knows how many topics here on ASR alone about people not understanding that this socket on their DAC is not HDMI. Lots of things became common, even though they weren't better... Remember Betamax vs VHS?
 
Let's allow @Roland68 to answer for himself please.
He did ;)
Yes we are--all over online audio component forums... and should be for good reason. By "we" I meant everyone in the world and not just this thread.
We are not all other audio component forums. We do not follow the dogma that this industry has brainwashed into its customers for decades. We follow science, engineering, logic, and sense.
Reasons why I believe what Paul McGowan says is correct:
  • Lower jitter: Because the clock and data are on separate lines, there's no need for a Phase-Locked Loop (PLL) to recover the clock from the data stream. This can significantly reduce jitter, which is a major factor in sound quality.
In fact, the opposite is true. Because the clock is not recovered or refreshed at the receiver, jitter is usually worse, not better.
  • Better detail: The lack of a PLL often leads to a cleaner signal, which can result in a more detailed and "3D" soundstage.
Pure nonsense. Is there any actual evidence?
  • Higher bandwidth: I2S supports higher bit depths and sample rates than S/PDIF, with some implementations reaching up to 32-bit depth and 768 kHz sample rates, compared to S/PDIF's 24-bit and 192 kHz.
True, but you don't need the bandwidth, really.
  • Direct transmission: The signal is transmitted directly from the source to the DAC, without needing to be decoded and re-timed, which can improve performance.
How, in what way? Is there proof?
Lastly, we all have our opinions, and I for one am not going to bash others for not agreeing with me. The behavior I am reading, not just in this post but especially in other posts, is that Amir seems to have an agenda of bashing PS Audio, and people on this site seem to "follow" him like a leader, posting comments that gang up on Paul at PS Audio even when he is doing the right things and saying the right things.
This is not about opinions; this is about facts.
 
In fact, the opposite is true. Because the clock is not recovered or refreshed at the receiver, jitter is usually worse, not better.
I wonder if there are any jitter measurements of these external LVDS I2S links.
 
Like I said, we listen with our ears, not scientific equipment and measurements. There are many sites proving what I am saying.
None of them listen with their ears only. That is the major issue here.
I know I am right and that is what is important to me. You can go on believing whatever you like, I really don't care.
What the hell are you even doing here?
The point I was trying to make is express your opinions without bashing others. I am leaving this thread and this site as I believe the content here is wrong and I have wasted enough of my time here.
You've had at least 3 posts removed because you were bashing members here... Go look in the mirror..
 
Wouldn't the multiplexing of I2S to SPDIF and back to I2S by an external DAC be prone to data-integrity issues from hardware design, or poor design? Why take that extra step if you don't have to? We are always saying the path of least resistance is always preferable. It seems like more and more manufacturers are incorporating I2S into their components for this very reason: because it's a "better" way. I think they are on to something here, but I may also be wrong, I will admit that much. I don't know for sure. Are you 100% certain?
Yes, I'm sure of it.
I worked for over 10 years as a hardware expert in the field of high-availability systems, data backup, and archiving. Data transfer in these areas also uses LVDS as the hardware layer, and such systems log every error. Guess how often errors occurred in operation? Exactly, absolutely negligible. A handful of errors per month, with daily transfers of 30-150 TB.
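As a back-of-the-envelope sketch of what those logs imply (illustrative figures: 100 TB/day and 5 errors per month):

bits_per_day = 100e12 * 8                     # ~100 TB transferred daily
errors_per_month = 5                          # "a handful"
ber = errors_per_month / (bits_per_day * 30)
print(f"implied bit error rate: {ber:.1e}")   # ~2e-16

At that rate, a CD-quality stream (about 1.4 Mbit/s) would see roughly one flipped bit per century of continuous playback.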

Before that, but that was 25 years ago, I worked on the programming and development of CD jukebox systems for data backup and burning audio CDs.
Even back then, error-free transmission and format conversion were a solved problem.
Almost simultaneously, around 2001/2002, we conducted measurements of the readout and error rates of audio CD players. That, too, was no longer a problem back then, as the current measurements by @NTTY here in the forum prove.

A few years ago, we tested whether I2S offered any advantages over USB as part of a project. While USB is an incredibly poor solution for audio, solutions like XMOS, Amanero, and Xing Audio have raised it to a level where compromises are no longer necessary.
We also tested conversion using I2S, SPDIF, AES, etc., and even after five round trips, not a single bit error occurred.

In fact, the use of external I2S devices is decreasing year by year, and it's becoming increasingly irrelevant. The widespread adoption of I2S was triggered by DDCs, back when USB interfaces in DACs were problematic and not very high-quality. Even then, I2S wasn't essential, but the higher transfer/sampling rates were certainly a selling point.
However, for several years now, this technology has been integrated directly into DACs, rendering it obsolete.

And so, in every current, good DAC with a USB connection, you have exactly this setup: USB cable -> XMOS interface -> I2S transmission to the DAC chip.
What advantage is a jitter-prone external I2S transmission supposed to offer at this point?


If all these manufacturers weren't just focused on raking in customers' money with the next nonsensical product, DDCs and I2S, they could have integrated I2S, or something else, directly into computers as a high-quality audio interface and established it on the market. But of course, this kind of nonsense makes more sense.

Reasons why I believe what Paul McGowan says is correct:
  • Lower jitter: Because the clock and data are on separate lines, there's no need for a Phase-Locked Loop (PLL) to recover the clock from the data stream. This can significantly reduce jitter, which is a major factor in sound quality.
  • Better detail: The lack of a PLL often leads to a cleaner signal, which can result in a more detailed and "3D" soundstage.
  • Higher bandwidth: I2S supports higher bit depths and sample rates than S/PDIF, with some implementations reaching up to 32-bit depth and 768 kHz sample rates, compared to S/PDIF's 24-bit and 192 kHz.
  • Direct transmission: The signal is transmitted directly from the source to the DAC, without needing to be decoded and re-timed, which can improve performance.
Lastly, we all have our opinions, and I for one am not going to bash others for not agreeing with me. The behavior I am reading, not just in this post but especially in other posts, is that Amir seems to have an agenda of bashing PS Audio, and people on this site seem to "follow" him like a leader, posting comments that gang up on Paul at PS Audio even when he is doing the right things and saying the right things.
The whole "reduced jitter" thing is just another myth, but this expensive nonsense has to be justified somehow.
There are plenty of measurements, both here in the forum and independent ones, that have clearly demonstrated the jitter susceptibility of the external I2S interface. It takes a really significant effort to reduce it enough to even come close to the performance of DACs fed via USB or SPDIF.

Direct transmission is absolute rubbish. Direct means USB cable -> XMOS interface -> I2S transmission to the DAC chip, not:
USB cable -> XMOS interface -> I2S -> LVDS -> external I2S over LVDS transmission via HDMI cable -> LVDS -> I2S -> transmission to the DAC chip.


For higher bandwidth, download the free Hi-Res files from Sound Liaison and compare them yourself, completely blind and without any prior knowledge.

Incidentally, I only joined this forum well after our own I2S projects and measurements, meaning my experience was completely independent of ASR and Amir.

And most importantly, every changed bit can be measured or detected; that's a fact. An absolutely bit-perfect data stream cannot sound different, and it doesn't.
If there were actually bit changes caused by anything in the data stream, it would be very easy to prove.
Could you imagine better advertising for anything? I can't. That would be the first thing I would do as a manufacturer if I wanted to sell something.
But none of these manufacturers have been able to prove that yet.
Strange, isn't it?
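And the check really is trivial; a minimal sketch (file names are hypothetical, and a real test needs a sample-aligned capture with any leading silence trimmed):

import wave

def pcm(path):
    with wave.open(path, "rb") as w:
        return w.readframes(w.getnframes())

src = pcm("source.wav")               # what was sent
cap = pcm("loopback_capture.wav")     # what arrived after the conversions
print("bit-perfect" if src == cap else "streams differ")

Any manufacturer claiming their interface preserves the bits "better" could publish exactly this comparison. None do.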

In the recording studio and production, the signal is converted dozens or hundreds of times. I2S, SPDIF, AES, every time it passes through a device, at every DSP chip, via various USB/FireWire interfaces, etc. How bad and compromised must all the music we hear be if a single conversion or interface is supposed to make such a big difference?
 
...

This is not about opinions; this is about facts.
Exactly - we all know what opinions are like - and how many people have one. :-)
 
I wonder if there are any jitter measurements of these external LVDS I2S links.

I have two or three of these lying around somewhere. Maybe I'll dig one up and remeasure. After I didn't find any audible improvement using them over USB, I measured one with a Holo Audio Spring DAC. I recall the noise floor was marginally higher with the I2S input and a DDC versus USB. I didn't bother with jitter measurements at that time. Somewhere earlier in this thread I posted Sonore's spreadsheet showing just how non-standard all the I2S connectors really are. Nearly all manufacturers have done something different with it.
 
@SeanometerE was consulted and determined he wanted his account removed btw.

May clean up more later, but you should not expect any further interaction with him.
 
I am just finding out about this whole I2S mess after reading a good bit of the thread and after watching this video

The link will take you directly to the relevant part in the video. I was intrigued. The video gave an extremely favorable review of this external I2S interface using HDMI or other cables. I believe his claim was primarily that it would improve jitter and at one point in the video he does show some jitter graphs.

Following my review and analysis, I find your response to be the most accurate explanation of what is going on here. It's also the simplest explanation, which generally tends to be the right one.

In my now informed opinion of this external I2S thing, at least at this point in time, it's literally the definition of a gimmick, and one of the most obviously useless and potentially harmful ones in an industry NOTORIOUSLY PLAGUED with a litany of gimmicks bordering on scams. I dare say the industry is even commercially driven by quite a few things that border on gimmicks. Here we're talking about digital, and as you said, digital doesn't really error in a way that matters. Additionally, it seems that performing the additional step of using a proprietary interface just to convert the signal back and forth could only make things worse. I don't buy the whole "lack of a PLL makes it better" thing, and I haven't seen any real evidence that it improves jitter. You're claiming it makes it worse based on tests you've seen or done, and unless I see something different, which I doubt I ever will, I have to agree with you that this is the more likely outcome.

As far as gimmicks go, with any particular feature in this industry, some people may or may not be able to actually tell a difference (depending on the feature; not all are money grabs), while even some audiophiles cannot hear the difference. And while measurements are objective, a given individual may or may not be able to hear the difference a given measurement describes, even assuming it's accurate. I personally value something like signal-to-noise ratio extremely highly, so not everything is a scam. These gimmicks particularly target audiophiles and people like most of us who prefer high-quality sound for music and media. I myself fall victim to these "money grabs" regularly, which is a better way to characterize most of them (although not all of them; someone pushing external I2S in 2025 without providing evidence is a scammer, IMHO). Although I do my best to avoid them where possible, as an audiophile I've shelled out a lot of money for a lot of nothing. For instance, I prefer to use high-quality audio cables and even balanced XLR cables, even though I'm pretty sure I wouldn't be able to tell the difference between slightly shittier cables and Mogami Gold or something, and I can't really imagine hearing a difference between balanced and unbalanced (I haven't done a test). Maybe I would hear a difference, but I just enjoy using XLR and pretending it makes things sound better.

Lesson: be hyper-aware of the money grabs in this industry.
 
I am just finding out about this whole I2S mess after reading a good bit of the thread and after watching this video
The link will take you directly to the relevant part in the video. I was intrigued. The video gave an extremely favorable review of this external I2S interface using HDMI or other cables. I believe his claim was primarily that it would improve jitter and at one point in the video he does show some jitter graphs.
Why test this on a DAC that has an internal PLL on all inputs, and then turn it off? Especially for SPDIF? This already invalidates the whole video :facepalm:

Fun fact: most ESS DACs already are asynchronous, having internal resampling PLL and reclocking all built into the chip.

And if you really care about better jitter, why not just buy a DAC with lower jitter in the first place?

As for it being audible… It’s generally not. See here what it sounds like:


Note that these examples have multiple orders of magnitude more jitter than any halfway decent DAC.
 
At this point, I must also defend DDCs.
DDCs emerged when USB was far from being a good interface for audio, and jitter reduction in DACs wasn't very effective.
At that time, DDC manufacturers were doing the pioneering work that advanced the USB interface and jitter reduction.
The fact that these technologies and developments have now spilled over into DACs has largely rendered DDCs obsolete.

However, DDCs still do a good job when it comes to equipping older DACs with well-functioning USB interfaces or keeping DACs with defective USB interfaces alive.
But one shouldn't forget the contribution of DDCs to the further development of DACs.

The video is also a bit outdated, and it already was when it was released.
Crystals in DACs haven't been a problem for a long time either.
@NTTY recently measured a deviation of 0.44 ppm on a current €300 DAC. A few years ago, that was a figure that required external clock generators costing €500-2000.
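To put 0.44 ppm in perspective, a quick back-of-the-envelope calculation:

fs = 44_100
ppm = 0.44
print(fs * ppm * 1e-6, "Hz")      # absolute frequency offset: ~0.019 Hz
print(0.44e-6 * 86_400, "s/day")  # clock drift: ~38 ms per day
# one musical cent is ~578 ppm, so 0.44 ppm is ~0.0008 cent of pitch error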
 
A few years ago, that was a figure that required external clock generators costing €500-2000.
Just note that these things were never meant to provide a super clock. They are built for studios to sync up various equipment. Also note that the 10 MHz clock these things provide isn't what is needed by the DACs, so further processing is needed to create the actually needed frequency. This is done once again via a PLL, which will again act as a jitter-reduction barrier. So in general, these master clocks don't really need to be so good anyway, and for sure, the long cable runs in a studio will degrade clock jitter. So a PLL at the receiver is basically mandatory.
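To illustrate why that further processing is needed: the ratio between a 10 MHz reference and a typical 44.1 kHz-family DAC master clock is not a convenient number (a sketch, assuming a 512 x fs master clock):

from fractions import Fraction

ref = 10_000_000            # 10 MHz studio reference clock
mclk = 512 * 44_100         # 22.5792 MHz master clock a DAC actually needs
print(Fraction(mclk, ref))  # 7056/3125

A ratio like 7056/3125 forces fractional-N synthesis, and the jitter of that synthesizer's own PLL, not the expensive 10 MHz box, is what ends up at the converter.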

And finally: a lot of music was probably recorded at a time when clocks were nowhere near as good as today. So claiming that some miracle jitter reduction from -139 dB to -143 dB would be audible is ludicrous. Never mind if the source is tape...
 
However, DDCs still do a good job when it comes to equipping older DACs with well-functioning USB interfaces or keeping DACs with defective USB interfaces alive.
Surely - rather than buying a DDC for this purpose, it would be better to replace the DAC?
 
Surely - rather than buying a DDC for this purpose, it would be better to replace the DAC?
That's a matter of perspective and attitude.
Not everyone wants to join the typical throwaway society; for some, devices have personal or sentimental value.
Many people also own expensive DACs with high-quality housings, premium construction, and high-quality components.
Many such DACs also have a through-hole design, so they never suffer failure due to short circuits in SMD resistors/capacitors.
Why would you replace a robust, high-quality, and durable device with one where that very quality is questionable?

My Cambridge DacMagic 1, one of John Westlake's legendary devices, is over 30 years old and works as it always has. I need a DDC converter for it when using USB sources, but I wouldn't replace it.

Here in the forum, several people have already upgraded their €300-1000 DACs with a DDC due to defective USB interfaces, and this doesn't have to be expensive; with the PO100 2024, PO100AK, PO100 Pro and Douk Audio U2 PRO, it costs less than any repair.

Addendum:
Another reason to use DDCs is to achieve galvanic isolation via Toslink. This works well and affordably with the PO100 2024, PO100AK, PO100 Pro, and Douk Audio U2 PRO. This isn't always reliably achievable with inexpensive USB isolators.
 
There is something odd with that video from GoldenSound, and it makes me wonder. It seems to be an old one, and I know we learn from experience, so what we thought correct a couple of years ago can turn out to be false or incomplete. I have made many such mistakes myself. But, hey, saying there are no studies of jitter audibility, and that it is still subject to further analysis, well... Also, stating that there are no studies with "experienced ears", well, again... There is at least one: Theoretical and Audible Effects of Jitter on Digital Audio Quality. It is described well enough to be repeated, which is what I've done.

I reused the same musical material: track 14 from "Don Dorsey - Bachbusters" (CD-80123), between 0:06 and 0:10 (5 notes only), 4 seconds of music with extremely narrow frequency content (which facilitates jitter detection).

They added jitter at 1530 Hz (again to facilitate detection; the further away the sidebands are, the easier they are to detect) and asked 8 trained listeners to tell at which point they were able to hear the jitter. The best result was 112 ns, and the rest were above 200 ns (as reported in the paper).

On my side, I prepared the file the same way, adding the same type and amount of jitter. I asked 2 friends (only) to test, and gave them the anonymized WAV files with jitter plus the original file (the file name was given as the reference to compare against). I asked them to tell me which files differed from the original. There was one trick (two additional files with no added jitter) to ensure they did not give me random answers. This is not very well controlled, but I initially intended to do this only for myself ;)

Similar to the paper, only one of them detected 100 ns. I was somewhat disappointed not to do better than 200 ns myself (I used the Foobar ABX plugin, 16 trials, p-value below 0.05 to be relevant; in fact, the p-value was 0 at 200 ns for me).
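For reference, the binomial arithmetic behind such an ABX p-value (a small sketch; the plugin's exact rounding may differ):

from math import comb

def abx_p_value(correct, trials):
    # probability of at least `correct` right answers under pure guessing
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))   # 0.0384 -> 12/16 is the threshold for p < 0.05
print(abx_p_value(16, 16))   # 1.5e-05 -> a perfect score displays as "p = 0"

So a reported p-value of 0 at 200 ns simply means all 16 trials were correct, i.e. about a 1-in-65,536 chance of guessing.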

With more standard music, and with jitter much closer in frequency to the signal, neither my friends nor I detected 800 ns of jitter. That does not mean others wouldn't have done better.

But the above correlates with the findings of Archimago as described in his article.

Also, GoldenSound shows an initial test from his computer, where he says it is the digital SPDIF output of his computer, states that the jitter is awful, and claims there is no noise because it is digital, while actually there is noise. So I think there is something wrong here (maybe only editing issues). This is what he shared:

[Attachment: GoldenSound's J-test spectrum]


That is a J-test at a 48 kHz sampling rate. But that pattern, I recognized it very well: it was a bug in REW when creating J-test WAV files with 16-bit depth. I kept one of those test files from the time, and I can do the same measurement (except that the file I kept is at 44.1 kHz, not 48 kHz): run it with Foobar from my computer and capture the output from a DAC:

[Attachment: the same pattern, measured from the buggy REW 16-bit J-test file played through a DAC]


Very similar, right? So much so that I think what we see is the REW bug, not jitter from the SPDIF output of a computer. Also, the low-level noise comes from a conversion; in the true digital domain, we'd see the below:

[Attachment: the J-test analyzed purely in the digital domain]


Indeed, no noise can be seen, down to the -170 dBFS limit. So I don't think what he shared was in the digital domain, unless the computer was changing the bit depth via a sample-rate converter, in which case it's not the same test.

He then compares it with a jitter test run at 24 bits, so it is not the same test.

By the way, 200 ns of added jitter gives us the nasty view below:

[Attachment: the J-test with 200 ns of added jitter]


Not the same... The above I can hear in the very specific example from the AES study, but I won't hear it in more "standard" music.
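For anyone who wants to see where those sidebands come from, here is a minimal simulation of sinusoidal sampling jitter (parameters loosely modeled on the test above; 200 ns is taken as the peak deviation):

import numpy as np

fs, n = 48_000, 1 << 16
f0, fj, dt = 12_000, 1_530, 200e-9     # tone, jitter frequency, peak jitter
t = np.arange(n) / fs
# sampling instants perturbed by sinusoidal jitter
x = np.sin(2 * np.pi * f0 * (t + dt * np.sin(2 * np.pi * fj * t)))
spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(n))) + 1e-12)
spec -= spec.max()
freqs = np.fft.rfftfreq(n, 1 / fs)
for f in (f0 - fj, f0 + fj):
    i = np.argmin(np.abs(freqs - f))
    print(f"{f} Hz: {spec[i]:.1f} dB")  # sidebands around -43 dB re the tone

The sideband level follows the phase-modulation rule of thumb, 20*log10(pi*f0*dt), about -42 dB here; halve the jitter and the sidebands drop 6 dB, which is why nanosecond-level thresholds correspond to such visible spurs.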

I don't understand what PLL OFF means, in the context of GoldenSound's tests.

Cheers
 