
Digital Audio Jitter Fundamentals Part 2

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
Author: our resident expert, DonH50

We left Digital Audio Jitter Fundamentals talking about digital signals. However, error correction and design margins mean jitter on the digital bit stream is rarely an issue for the bit rates used for A/V systems. (At 10 Gb/s and above, it is a bigger issue.) When jitter is brought up as an issue in the audio world, we are talking about jitter on the sampling clock. This can happen for all the reasons mentioned before, but once that clock is used to drive your DAC, the jitter goes right to your ears (OK, there are a few steps along the way, but you get the idea).

Clock recovery is a complicated subject beyond the scope of this thread. Let’s just say getting a very clean, low-jitter clock takes some effort. As a result, jitter can run pretty high (several ns or more) in many audio systems. Make it an A/V system with videophiles defining the clocks and not worrying too much about audio, throw in a bunch of various digital signals around the audio bit stream and clock lines, and Bad Things can happen, like signal coupling and excessive jitter.

First, I am going to repeat some information from an earlier post. To see the impact of jitter, let's look at a 16-bit converter sampling at 44.1 kS/s (CD resolution and rate). The DAC is ideal except for the added random timing jitter; a perfect 16-bit converter has an SNR of about 98 dB. I have plotted the SNR vs. jitter for 100 Hz, 1 kHz, 2 kHz, 10 kHz, and 20 kHz signals. You can clearly see how the higher frequencies are much more sensitive to jitter. At 100 Hz, 10 ns of jitter is hardly noticeable, but at just 1 kHz the SNR has decreased by nearly 20 dB (down about 3 bits)! At 20 kHz, we have SNR less than that of an ideal 10-bit DAC (< 60 dB).

[Plot: SNR vs. jitter for a 16-bit, 44.1 kS/s DAC at 100 Hz, 1 kHz, 2 kHz, 10 kHz, and 20 kHz]


Another way to look at the impact is to plot the SNR lost as jitter increases, as shown below. A perfect 16-bit DAC would lose 0 dB in SNR; as jitter increases, more and more SNR is lost. With 1 ns of random jitter, things don't look too bad through 2 kHz, but at 20 kHz we see that 20 dB SNR loss. To keep the loss to just a few dB at 20 kHz, we need jitter < 100 ps; with just 1 ns of random jitter the upper midrange and high end are getting pretty noisy, with the effective dynamic range reduced by 10 to 20 dB.

[Plot: SNR loss vs. jitter for the same tone frequencies]
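The two curves above follow directly from the standard jitter-limited SNR formula, SNR = -20·log10(2π·f·tj), combined with the ~98 dB quantization floor of an ideal 16-bit converter. A quick Python sketch (my illustration, not part of Don's original analysis):

```python
import math

def jitter_snr_db(f_hz, t_jitter_s):
    """SNR limit imposed by random sampling jitter alone (full-scale sine):
    SNR = -20*log10(2*pi*f*tj)."""
    return -20 * math.log10(2 * math.pi * f_hz * t_jitter_s)

def combined_snr_db(f_hz, t_jitter_s, quant_snr_db=98.0):
    """Combine the jitter noise with the ~98 dB quantization floor of an
    ideal 16-bit converter (noise powers add)."""
    p = 10 ** (-jitter_snr_db(f_hz, t_jitter_s) / 10) + 10 ** (-quant_snr_db / 10)
    return -10 * math.log10(p)

# 1 ns rms jitter: harmless at low frequencies, ~20 dB of SNR gone by 20 kHz
for f in (100, 1e3, 2e3, 10e3, 20e3):
    print(f"{f/1e3:5.1f} kHz: {combined_snr_db(f, 1e-9):5.1f} dB")
```

Note how the numbers line up with the plots: at 20 kHz, 1 ns of jitter leaves only ~78 dB (a 20 dB loss), while 100 ps keeps the loss to ~3 dB.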


I did run a few test cases so we can look at frequency spectra. (For the geeks: the number of input-signal cycles is chosen relatively prime to the sample record length to minimize spectral leakage, per IEEE Standard 1241.) First, the ideal 16-bit DAC with a full-scale 1 kHz input signal:

[Plot: output spectrum, ideal 16-bit DAC, full-scale 1 kHz tone]


Now, with 1 lsb (4.9 ns standard deviation, normal distribution) of added random jitter, we've lost about 1.5 bits of resolution (from 16 effective number of bits (ENOB) to 14.6 ENOB) and about 8 dB of SNR; spurious-free dynamic range is lower but still very high. Note that only the noise floor is raised: random jitter generally does not add discrete tones, just more noise.

[Plot: spectrum with 1 lsb (4.9 ns rms) random jitter added]
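A run like the one above is easy to reproduce in simulation. Below is a minimal NumPy sketch; the record length, exact tone frequency, and seed are my own illustrative choices (the tone is placed on a prime-numbered bin for coherent sampling), not Don's actual test setup:

```python
import numpy as np

rng = np.random.default_rng(1)
N, fs = 8192, 44100.0
cycles = 191                       # prime number of cycles -> coherent sampling
f0 = cycles * fs / N               # ~1.03 kHz
t_jitter = 4.9e-9                  # ~1 lsb of rms jitter at ~1 kHz

t = np.arange(N) / fs
x = np.sin(2 * np.pi * f0 * (t + rng.normal(0.0, t_jitter, N)))
x = np.round(x * 32767) / 32767    # ideal 16-bit quantizer

p = np.abs(np.fft.rfft(x)) ** 2
signal_p = p[cycles]
noise_p = p[1:].sum() - signal_p   # everything except DC and the signal bin
snr_db = 10 * np.log10(signal_p / noise_p)
enob = (snr_db - 1.76) / 6.02
print(f"SNR = {snr_db:.1f} dB, ENOB = {enob:.1f} bits")
```

With these (assumed) settings the result lands close to the ~90 dB / 14.6 ENOB quoted above.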


Here's what happens when we double the random jitter:
[Plot: spectrum with the random jitter doubled to 9.8 ns rms]


Lost another bit, but still a very low noise floor (84 dB SNR).

Now look what happens when the signal level is cut in half (-6 dBFS):

[Plot: spectrum with the signal level cut in half (-6 dBFS)]


Ah, the noise floor drops, too, and ENOB stays about the same. (Without jitter, ENOB would go down with less signal, since we are using less of the DAC's range and thus fewer bits.) The jitter-induced noise drops because, as stated in Jitter 101, it is proportional to the slew rate of the signal: smaller signal, lower slew rate, at least for a sinusoid. This is not true for something like a square wave, which has faster edges (and thus higher-frequency components; see the "Building a Square Wave" thread). So jitter will degrade fast pulses more than a single tone like this test case.
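That slew-rate argument can be stated in one line: the voltage error is roughly the signal's slew rate times the timing error, and a sine's slew rate is proportional to both amplitude and frequency. A tiny Python check (my own sketch):

```python
import math

def jitter_noise_rms(amp, f_hz, t_j):
    """Approximate rms voltage error from clock jitter on a sine:
    e = dV/dt * t_j, with rms slew rate 2*pi*f*amp/sqrt(2)."""
    return 2 * math.pi * f_hz * amp / math.sqrt(2) * t_j

full = jitter_noise_rms(1.0, 1e3, 5e-9)
half = jitter_noise_rms(0.5, 1e3, 5e-9)
print(half / full)   # half the signal -> half the jitter noise,
                     # so the jitter-limited SNR is unchanged
```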



Looking at a 10 kHz tone, 1 lsb of jitter is 1/10th that of the 1 kHz case (0.5 ns vs. 5 ns). As we’d expect, the results are about the same as the 1 kHz test case.

[Plot: spectrum, 10 kHz tone with 1 lsb (0.5 ns rms) jitter]


Now, increase the jitter to ~5 ns, equal to 1 lsb at 1 kHz, and performance is markedly lower:

[Plot: spectrum, 10 kHz tone with ~5 ns rms jitter]


Wow, 70 dB SINAD (signal to noise and distortion), and only a little over 11 bits! Ouch! SFDR is still high, over 100 dB, but the noise level is rising to audibility.

Now let's return to a 1 kHz input with 5 ns (1 lsb at 1 kHz) of jitter, but add 1 lsb of the input signal to the jitter. That is, nothing fancy: just a simple 1-lsb (in time) modulation of the (formerly) random jitter on the clock. This is the dreaded deterministic jitter term.

[Plot: spectrum, 1 kHz tone with 5 ns random jitter plus 1 lsb of signal-correlated jitter]


Well, ENOB is about as before, but now there's a 2 kHz tone sticking well above the noise floor. That is second-order distortion, i.e. a second harmonic that wasn't there before. Probably not really audible at almost 100 dB down, but a very annoying thing to see. Dropping the signal amplitude reduces the distortion spur as well, just as before.

[Plot: spectrum at reduced signal amplitude, deterministic jitter case]
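The 2 kHz spur follows from a little algebra: the error is approximately the slew rate times the jitter, e(t) ≈ x′(t)·τ(t); with x′ ∝ cos(ωt) and τ ∝ sin(ωt), the product is ∝ sin(2ωt), a pure second harmonic with amplitude ω·τ/2 (about -96 dBc for these numbers). A quick NumPy check (my own sketch, using assumed record parameters):

```python
import numpy as np

N, fs = 8192, 44100.0
cycles = 191                       # prime -> coherent sampling
f0 = cycles * fs / N               # ~1 kHz
tau = 4.9e-9                       # ~1 lsb of signal-correlated jitter

t = np.arange(N) / fs
jit = tau * np.sin(2 * np.pi * f0 * t)    # jitter modulated by the signal
x = np.sin(2 * np.pi * f0 * (t + jit))

X = np.abs(np.fft.rfft(x))
h2_dbc = 20 * np.log10(X[2 * cycles] / X[cycles])
print(f"2nd harmonic at {h2_dbc:.0f} dBc")
```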


An interesting experiment is to throw in a deterministic term not related to the signal. Below is a plot with the jitter modulated by a 120 Hz signal, such as would come from a typical full-wave power supply, again at the 1-lsb level. Note the ENOB did not change significantly, but now there is a pair of spurs around the 1 kHz signal. Other frequencies, and more complex combinations of deterministic jitter, can generate a series of tones that are not related to the signal. This is important because we can hear non-harmonic distortion much more readily than harmonic distortion.

[Plot: spectrum, 1 kHz tone with jitter modulated by a 120 Hz signal]
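For small deterministic jitter this is just narrowband phase modulation, so the sideband level can be predicted directly: the peak phase deviation is β = 2π·f·τ, and each sideband sits at 20·log10(β/2) relative to the carrier. A one-liner (my own sketch):

```python
import math

def jitter_sideband_dbc(f_sig_hz, tau_pk_s):
    """Each PM sideband relative to the carrier, for small deviation:
    beta = 2*pi*f*tau, sideband = 20*log10(beta/2)."""
    beta = 2 * math.pi * f_sig_hz * tau_pk_s
    return 20 * math.log10(beta / 2)

# 1 kHz tone with ~1 lsb (4.9 ns) of 120 Hz jitter:
# spurs appear at 880 Hz and 1120 Hz, each near -96 dBc
print(jitter_sideband_dbc(1e3, 4.9e-9))
```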


Repeating the plot at 10 kHz with 1 lsb of 10 kHz signal injected puts a spur at 20 kHz at the same level as in the 1 kHz plot, just like the first trials. Using the 1 kHz jitter level as modulated by the 10 kHz signal yields a fairly high (about -76 dBFS) distortion term.

[Plot: spectrum, 10 kHz tone with deterministic jitter, spur at 20 kHz]


One last series of trials… Here's a plot using five input tones (1, 2, 3, 5, and 10 kHz) without jitter. Notice the tones are all about 10 dB down; this is because when they all add up (in phase), they drive the DAC to full scale, even though the average is -10 dB per tone. Music is much more complex, so average levels are often -20 dBFS or less to ensure the output doesn't clip (and sound very, very bad). This eats into your dynamic range pretty quickly, and helps explain why what appears to be low jitter can start to impact the "real world" noise floor. It also helps explain why recording systems and studios really like using 24 bits; that extra headroom is a boon when working with everything in the mix before the amplitudes are matched and the signal is made to fit back into 16 bits for your CDs.

[Plot: spectrum of five tones (1, 2, 3, 5, and 10 kHz), no jitter]
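The headroom arithmetic is easy to check numerically. The sketch below uses equal amplitudes and zero starting phases, where the worst case forces each tone down to -14 dBFS; the roughly -10 dB per tone in the plot above presumably reflects a different normalization or phase choice:

```python
import numpy as np

fs, N = 44100.0, 44100
t = np.arange(N) / fs
tones = [1e3, 2e3, 3e3, 5e3, 10e3]

# Back each tone off so the sum cannot exceed full scale even if every
# tone peaked at the same instant (sum of amplitudes = 1.0).
a = 1.0 / len(tones)
x = sum(a * np.sin(2 * np.pi * f * t) for f in tones)

peak_db = 20 * np.log10(np.abs(x).max())
tone_db = 20 * np.log10(a)
print(f"peak {peak_db:.1f} dBFS, each tone {tone_db:.1f} dBFS")
```

Even though these particular phases never quite line up, the composite peak still sits several dB above any single tone, which is exactly the headroom that eats into dynamic range.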


Here are those same tones but with 5 ns (1 lsb at 1 kHz again) of jitter added, along with 1 lsb of the 1 kHz tone added to the jitter. Note the multiple tones added as the deterministic jitter mixes with the other tones through the sampling process. The effective SFDR is now only about 75 dB from peak signal to peak spur…

[Plot: five tones with 5 ns random jitter plus 1 lsb of deterministic 1 kHz jitter]


My last example is those same five tones, but with the 5 kHz tone reduced 20 dB (about ¼ volume) to emulate what might happen in (very simple) music. I moved the deterministic jitter to 120 Hz, though still at 5 ns (1 lsb at 1 kHz). The distortion spurs are much more numerous and only about 60 dB down (worst-case) from the reduced 5 kHz tone. Would you hear this? I don’t know, but probably not. However, it is clear that as we move toward more complex signals like music, and correspondingly more complex and realistic jitter, we are heading toward something that could be readily audible.

[Plot: five tones with the 5 kHz tone reduced 20 dB, 120 Hz deterministic jitter]


Hopefully this has given you a picture of what jitter can do, and a flavor for the real-world impact it might have. My goal was not a realistic, musical example (readily done but the plots would be very messy), but rather something that helps clearly show what jitter does to DAC performance. Thus, I have not used complicated signals so (hopefully) it is easy to see what happens when jitter is added.

Enjoy! - Don
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Author: our resident expert, DonH50
When jitter is brought up as an issue in the audio world, we are talking about jitter on the sampling clock. This can happen for all the reasons mentioned before, but once that clock is used to drive your DAC, the jitter goes right to your ears (OK, there are a few steps along the way, but you get the idea).

Clock recovery is a complicated subject beyond the scope of this thread. Let’s just say getting a very clean, low-jitter clock takes some effort. As a result, jitter can run pretty high (several ns or more) in many audio systems. Make it an A/V system with videophiles defining the clocks and not worrying too much about audio, throw in a bunch of various digital signals around the audio bit stream and clock lines, and Bad Things can happen, like signal coupling and excessive jitter.
OK, but my objection is that "clock recovery" doesn't apply at all to what may be classed broadly as asynchronous systems. In these (sensible :)) systems, there is one clock next to the DAC, and it doesn't need to be recovered. Again, anyone but an expert would walk away thinking that digital audio is (a) really difficult and (b) just another form of analogue, where every stage in the 'signal' path contributes to a build-up of jitter and noise - which isn't true. The beauty of the concept becomes lost in the technical details.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
So can you explain to us how digital streaming across the internet works? At what stage does the jitter from the entire internet and the miles of cable in to your house become so bad that it shows up in the measurements and/or becomes unlistenable?

(maybe I can find a cartoon to illustrate the concept for you)
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
So can you explain to us how digital streaming across the internet works? At what stage does the jitter from the entire internet and the miles of cable in to your house become so bad that it shows up in the measurements and/or becomes unlistenable?

(maybe I can find a cartoon to illustrate the concept for you)

I think we are running in circles at this point.
As said before, the beauty of a concept might not show up in reality; the numerous variants of "Murphy's law", although funny, exist for a reason.
It is much easier to draw a flawless system on a sheet of paper than to realize it as a finished product; therefore the proof is the measurement at the output of a device.
Despite the fact that the master clock in a CD player establishes the master to which all others are synchronized, it is/was still possible to measure different spectra at the analog outputs of the player.

I think I've written it before: there is a difference between "it can't be" and "it shouldn't be". :)
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
I think we are running in circles at this point.
As said before, the beauty of a concept might not show up in reality; the numerous variants of "Murphy's law", although funny, exist for a reason.
It is much easier to draw a flawless system on a sheet of paper than to realize it as a finished product; therefore the proof is the measurement at the output of a device.
Despite the fact that the master clock in a CD player establishes the master to which all others are synchronized, it is/was still possible to measure different spectra at the analog outputs of the player.

I think I've written it before: there is a difference between "it can't be" and "it shouldn't be". :)
But there's a major thought experiment here: the world's internet reaches your house via several miles of ordinary cable on top of the thousands before that. If there is any truth to the idea that "jitter" prior to the final stage is important, streaming should be thousands of times worse than playing off a memory stick, say. But it isn't. This means that somewhere along the chain, that terrible, terrible jitter is removed. Completely. Was it your modem? Was it your router? Was it your WiFi? Was it your USB cable? Whatever it was, all we have to do is, make sure that we use it whenever playing digital audio. In fact, it was the way digital audio works at a system level that performed this miracle, and all the talk of shot noise, etc. is only relevant at that very final stage.

If there are electrically defective systems around, they should not be used to discredit all of digital audio - which is what is happening.
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
Although jitter accumulation (build-up) is sometimes possible (think, for example, of ASRCs), is it really disputed that jitter (leaving the requirements for data integrity aside at this point) is only relevant at the conversion stage? Maybe I have missed it.

But I think the main point is that in a thought experiment it is easy to state that "jitter was removed", while in reality it is often more like "I think we reduced it sufficiently". ;)
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,835
Likes
16,497
Location
Monument, CO
For heaven's sake, just ignore the lines that say "clock recovery". The focus is on clock jitter at the DAC, whatever the source, and not an attempt to "discredit all of digital audio" -- that seems a leap. And please note in my world the DAC is the actual data converter, not usually the interface and other components that audiophiles consider a DAC in a box. This was written years ago and I do not have time for a rewrite at the moment, maybe in a few weeks. These were also focused on an audience at an educated or willing to learn lay level; please do not expect complete technical rigor in any of them. It may be better to rewrite the article in your own image to correct the mistakes, or write a better one (probably not hard, I do not claim to be a writer). Perfection is beyond my pay grade, I'm afraid.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
For heaven's sake, just ignore the lines that say "clock recovery". The focus is on clock jitter at the DAC, whatever the source, and not an attempt to "discredit all of digital audio" -- that seems a leap. And please note in my world the DAC is the actual data converter, not usually the interface and other components that audiophiles consider a DAC in a box. This was written years ago and I do not have time for a rewrite at the moment, maybe in a few weeks. These were also focused on an audience at an educated or willing to learn lay level; please do not expect complete technical rigor in any of them. It may be better to rewrite the article in your own image to correct the mistakes, or write a better one (probably not hard, I do not claim to be a writer). Perfection is beyond my pay grade, I'm afraid.
Once again, I'm not saying that you are wrong in any way, or that you are trying to discredit digital audio, just that the fundamental ideas are buried in amongst the low level technical stuff and need to be emphasised over and over. I reckon you could go to any number of "Digital audio explained" audiophile articles, and they will launch straight into definitions of jitter, etc. without any background on how jitter is *eliminated* at the DAC no matter how much has accumulated - as long as it is a sensible implementation. If, like me, you want to kill off the audiophile USB cable myth, this is the only way to do it.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
OK, but my objection is that "clock recovery" doesn't apply at all to what may be classed broadly as asynchronous systems.
As Don noted, these articles were written back in 2010 -- some 7 years ago -- when async USB was just beginning to show up and the vast majority of people still relied on synchronous S/PDIF inputs for their DACs.
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,835
Likes
16,497
Location
Monument, CO
Once again, I'm not saying that you are wrong in any way, or that you are trying to discredit digital audio, just that the fundamental ideas are buried in amongst the low level technical stuff and need to be emphasised over and over. I reckon you could go to any number of "Digital audio explained" audiophile articles, and they will launch straight into definitions of jitter, etc. without any background on how jitter is *eliminated* at the DAC no matter how much has accumulated - as long as it is a sensible implementation. If, like me, you want to kill off the audiophile USB cable myth, this is the only way to do it.

Write an update! Seriously -- as Amir said, the technology has advanced, asynch abounds, clock buffering/retiming chips are common, and a lot of the interface issues that were big back then are a non-issue for a good DAC (not a piece of -- well, you know! ;) ) today.

Another thing I keep thinking about writing up, but have not had time nor enough technical data, is some insight into how power and ground signals that cross digital and analog domains can corrupt the output even if the clock is "perfect".

As an aside, I am very envious (and appreciative!) of Amir, Ray, and many others here who have the equipment and knowledge and take the time to post real-world data. I'd love to have the gear and time but life's been (too) busy the past few years. And the last time I thought of taking a DSO and SA home from work, I had permission, but got cold feet about bringing home a couple of pieces of test gear worth more than my home...
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,200
Likes
16,981
Location
Riverview FL
got cold feet about bringing home a couple of pieces of test gear worth more than my home...

Get a more expensive home.
 

Arnold Krueger

Active Member
Joined
Oct 10, 2017
Messages
160
Likes
83
So can you explain to us how digital streaming across the internet works? At what stage does the jitter from the entire internet and the miles of cable in to your house become so bad that it shows up in the measurements and/or becomes unlistenable?

(maybe I can find a cartoon to illustrate the concept for you)

A well designed digital receiver can remove an amazing amount of jitter, and turn crappy-looking signals back into pristine audio. In the past, a common example of this was just about any CD player. CD players aren't so common anymore!

The analog signal that comes off the CD player's phototransistor array usually looks like $##@! Jitter can be so great as to be visible on the waveform. In traditional players, there was a test point, usually labeled "TP1", where you could monitor it with a 'scope. The classic display of this signal on a scope is called "an eye pattern". Google the phrases to find out more.

The classic circuit for removing jitter in a CD player is built around a buffer and a phase-locked loop, or PLL. The PLL simulates a highly selective resonant circuit tuned to the bit rate being processed. The dirty data is first cleaned up using filters and voltage-sensitive triggers. It is then put into the buffer at whatever data rate it shows up with, and clocked out based on a precise oscillator tuned to the desired bit rate. When the buffer is nearly full, the rate of incoming data is slowed by slowing the rotation of the CD; when the buffer is nearly empty, it is increased by spinning the CD faster.
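The buffer-plus-servo idea can be caricatured in a few lines. This is a toy model, not any real player's firmware; the rates, loop gain, and jitter magnitude are made up for illustration:

```python
import random

random.seed(0)
out_rate = 100              # words per tick, fixed by the clean output clock
buf, target = 5000, 5000    # FIFO fill level; aim for half of a 10000-word buffer
for tick in range(2000):
    spin = 1.0 + 0.0002 * (target - buf)    # proportional spindle-speed servo
    # Data arrives at a jittery rate set by the disc rotation (+/-5% here)
    buf += spin * out_rate * random.uniform(0.95, 1.05) - out_rate
print(f"buffer fill {buf:.0f}, spindle speed {spin:.3f}")
```

The key point survives the simplification: the output clock never moves; only the input rate is steered, so incoming jitter never reaches the DAC clock.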

When the source is the web, the data rate of inbound data is controlled by a function called pacing. Depending on the details of the system, there are several kinds of pacing between various parts of the data transmission system. The logic is similar to the CD player, but instead of changing the spin rate of the CD, the rate at which the host sends the data to the receiver is usually controlled by the receiver. In some cases, the receiver just follows the lead of the source, in which case the clock rate of the final received data has to be adjusted to keep the buffer just partially full and avoid losing data.

There is obviously a lot of circuitry and logic here. Or so it seems. The first such circuit I ever worked on was in an IBM tape drive for an old mainframe computer called the 1401. Implemented in discrete parts, it was broken up into circuit cards that filled a few rows in a swing-out rack that was about 5' x 3'. A few years later I worked with a circuit with a similar function that recovered the 19 kHz pilot tone in an FM multiplex receiver. It was a little chunk of real estate on a 14-pin IC. Of course, the silicon die was a tiny piece buried inside the already-tiny piece of plastic that encased the chip. Today it is far, far smaller: all the circuitry for a CD player has been reduced to a cheap chip, and that chip might work about as well as anything.

In a digital audio player, much of this circuitry may not exist at all, but is instead implemented in program code. An example of this is the software-defined radio (SDR) receiver chip used in most modern TV sets. The same chip can be the core of just about any kind of receiver that can be conceived, given the right program code and supporting circuits. It can be an FM receiver, an AM receiver, a ham receiver, a walkie-talkie, an ECM receiver, a radar, a GPS, a cell phone, an HDTV receiver for cable or OTA, you name it. If you really wanted to overthink and overdo it, it could probably be the core of an audiophile DAC. I hope that JK isn't reading this!
 

Arnold Krueger

Active Member
Joined
Oct 10, 2017
Messages
160
Likes
83
I'd love to have the gear and time but life's been (too) busy the past few years. And the last time I thought of taking a DSO and SA home from work, I had permission, but got cold feet about bringing home a couple of pieces of test gear worth more than my home...

For chump change you can obtain a USB 'scope or a handheld scope that works pretty well. Models handling signals up to 20 or even 200 MHz can run under $100. They put much of the logic of a DSO where it probably belonged all along: code running on a general-purpose computer such as a PC or a cell phone.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
A well designed digital receiver can remove an amazing amount of jitter
It seems to me that in an on-demand packet-based system (which I count a CD player as the equivalent of) it is not just an amazing amount of jitter removal, but *total*. As always, in the real world it will not be absolutely 100.0000%, because extraneous mechanisms to do with power supplies and RF emissions, etc. will show up as something that looks like very low-level jitter. But in principle, jitter present in the data as it is received is not remembered (unless it's like homeopathic medicines..? :)) once the data is in the buffer, in the same way that the jitter present when downloading a file is *completely* forgotten once it is in memory.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
For chump change you can obtain a USB 'scope or a handheld scope that can work pretty well. Handling signals up to 20 or even 200 MHz can run under $100. They put much of the logic of a DSO where it probably belonged all along - code running on a general purpose computer such as a PC or a cell phone.
Those scopes are way, way too slow to handle jitter on the USB bus. Even the old USB 2.0 high speed runs at nearly half a gigahertz (480 Mb/s). As such, you need a scope at least 10x faster to make sure you are looking at the source and not the scope's own response (i.e. rise time). So we are talking about a 5 GHz scope, which is a very high-end product, and the sampling rate needs to be even higher. Picoscope makes such a USB scope, but it retails for $10,000+. In addition, you need active probes so as not to load down the USB bus; those alone can cost as much as the scope.
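The 10x rule of thumb can be sanity-checked against the usual first-order bandwidth/rise-time relation, t_r ≈ 0.35/BW (a rough approximation; USB 2.0 high-speed edges are specified at around 500 ps):

```python
def rise_time_ns(bw_hz):
    """Approximate 10-90% rise time of a first-order front end: 0.35/BW."""
    return 0.35 / bw_hz * 1e9

for bw in (200e6, 1e9, 5e9):
    print(f"{bw/1e9:4.1f} GHz scope -> ~{rise_time_ns(bw) * 1000:.0f} ps rise time")
```

A 200 MHz front end has a ~1.8 ns rise time of its own, several times slower than the edges it would be asked to measure; at 5 GHz the scope's ~70 ps contribution becomes negligible.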

USB scopes are also slow to use due to the lack of physical buttons. For static, one-off measurements that won't be an issue, but for experimentation knobs are faster than a mouse and menus. You can get a good digital scope for a few hundred dollars, so there is no need for the USB equivalent.
 

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
632
It seems to me that in an on-demand packet-based system (which I count a CD player as the equivalent of) it is not just an amazing amount of jitter removal, but *total*. As always, in the real world it will not be absolutely 100.0000% because extraneous mechanisms to do with power supplies and RF emissions, etc. will show up as something that looks like very low level jitter, but in principle jitter present in the data as it is received is not remembered (unless it's like homoepathic medicines..? :)) once the data is in the buffer, in the same way that the jitter present when downloading a file is *completely* forgotten once it is in memory.
Right. Jitter is a "bits in motion" concept, as in data transmission from point A to point B. Bits in a quiescent state, such as being stored in memory buffers, on hard drives, on optical drives, etc. have no jitter and no memory of jitter that may have occurred upstream.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Right. Jitter is a "bits in motion" concept, as in data transmission from point A to point B. Bits in a quiescent state, such as being stored in memory buffers, on hard drives, on optical drives, etc. have no jitter and no memory of jitter that may have occurred upstream.
Yes. But it has to be emphasised that in a unidirectional link e.g. S/PDIF, even though the data is in a buffer, the DAC will still impart its own form of jitter on the signal when it has to dynamically change its sample rate (or other resampling equivalent) to match the source. And in this case, its decision on how much and how often to change will be affected by cable jitter! So jitter *is* being passed through at some level in this case.

Only when the link is bidirectional, and the flow of data can be regulated, can it be said that the link is equivalent to a static stored file - it's just that replay has started before the whole file was downloaded.
 

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
632
I totally agree. As I see it, jitter is a measure of the timing irregularity of the flow of data transmission, in audio usually in terms of signal words, not necessarily of the bits within those words. In S/PDIF, the DAC must adapt its timing of the processing of those words to their rate of receipt into its buffers, which may be affected by upstream issues in the sending player mechanism, wires, and send/receive components.

Hence, the timing accuracy of the D-to-A conversion may be adversely affected by that constant timing adaptation, in spite of the buffering, in one-way S/PDIF-type transmission. And even big buffers capable of storing many sample words are not a simple solution, since they can eventually fill to capacity and start to lose samples unless the D-to-A process adapts its timing. There may be other clever ways for S/PDIF DACs to minimize the impact of this through excellent engineering.

Like you, I believe in the elegance of asynchronous transmission, since it manages those buffers in "look-ahead" fashion, signaling the player to speed up or slow down the incoming rate to always keep the buffers sufficiently full. This leaves the DAC's D-to-A process able to run at an exact, unvarying speed -- "free running" and unaffected by the transmission speed into the DAC.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,522
Likes
37,053
Yes. But it has to be emphasised that in a unidirectional link e.g. S/PDIF, even though the data is in a buffer, the DAC will still impart its own form of jitter on the signal when it has to dynamically change its sample rate (or other resampling equivalent) to match the source. And in this case, its decision on how much and how often to change will be affected by cable jitter! So jitter *is* being passed through at some level in this case.

Only when the link is bidirectional, and the flow of data can be regulated, can it be said that the link is equivalent to a static stored file - it's just that replay has started before the whole file was downloaded.

Would this be true of ASRC methods like Benchmark and a few others use? They don't have any PLLs involved.
 

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
632
Would this be true of ASRC methods like Benchmark and a few others use? They don't have any PLLs involved.
They claim their latest Ultralock3 jitter-reduction scheme reduces jitter-induced noise and distortion to -140 dB or less via S/PDIF. I do not know the details, but that might be possible. They also claim a lock time "with precise phase accuracy" of 6 ms; it was formerly 400 ms in their prior DAC2. This would seem to indicate some improved "agility" in their scheme. They are not a company whose statements I have any reason to distrust.

So, this might be another proprietary way to overcome the inherent potential problems in the one-way streaming S/PDIF-type architecture, involving much hard work, I am sure. However, I still like the idea of the simpler, more economical, open two-way communications architecture of asynchronous USB with the constant, free-running DAC timing, myself. But in some applications USB cannot be used.
 