watchnerd
is *eliminated* at the DAC no matter how much has accumulated
Well, kind of.
You can't eliminate any accumulated jitter that was introduced by the ADC as part of the recording process.
is *eliminated* at the DAC no matter how much has accumulated
have no jitter and no memory of jitter that may have occurred upstream.
They have memory (permanent) of the upstream jitter that was embedded by the ADC at recording time (which may be inaudible).
Not true unless the ADC jitter indirectly causes data integrity issues - generating the wrong bits - which these days is doubtful. Remember, jitter is only about the timing of bits during transmission. There is absolutely no mechanism anywhere in all of digital technology to store jitter as such. Only discrete bits are stored, never the jitter that may have occurred during prior transmission.
Not to get too picky, but I think you are unintentionally agreeing with what I said. Yes, generating wrong bits at the wrong time can hypothetically be the result of extreme jitter, and those wrong bits can be stored, but not the underlying jitter itself. There is no physical mechanism to store the jitter itself.
Watchnerd is correct.
Because jitter is so often associated exclusively with data transmission and DACs, it is easy to forget what the sampling theorem requires.
Although other schemes are possible, the most common realization of sampling is based on the concept of equidistant sampling points in time, and equidistant means _really_ _equidistant_.
In fact, jitter during the AD conversion means that "wrong" bits are stored because of the misaligned points in time.
Btw, as said before, ASRC too leads to an accumulation of jitter-related effects, as new "wrong" data (at least slightly misaligned timing is converted to at least slightly wrong data) is stored.
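As a minimal sketch of that point (all parameter values here are mine, chosen only for illustration): sample a sine once with an ideal clock and once with a jittered clock, and the timing error comes out as a voltage error baked into the stored sample values, with nothing left in the data to undo it later.

```python
import numpy as np

# Sketch with assumed values: a 1 kHz full-scale sine sampled at 96 kHz,
# once with an ideal clock and once with 500 ps RMS clock jitter.
fs, f, t_rms = 96_000, 1_000, 500e-12
rng = np.random.default_rng(1)
n = np.arange(fs)                                 # one second of samples

ideal = np.sin(2 * np.pi * f * n / fs)
jittered = np.sin(2 * np.pi * f * (n / fs + rng.normal(0, t_rms, n.size)))

# The timing error is stored as a voltage error in the sample values;
# the data itself keeps no record of *when* each sample was really taken.
err = jittered - ideal
snr_db = 10 * np.log10(np.mean(ideal**2) / np.mean(err**2))
print(f"jitter-limited SNR ~ {snr_db:.0f} dB")    # ~ -20*log10(2*pi*f*t_rms)
```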
Not to get too picky, but I think you are unintentionally agreeing with what I said. Yes, generating wrong bits at the wrong time can hypothetically be the result of extreme jitter, and those wrong bits can be stored, but not the underlying jitter itself. There is no physical mechanism to store the jitter itself. ..<snip>
It may again be a bit pedantic, but imo we should try to keep the various error mechanisms apart wherever possible. As we are discussing jitter effects, we should stick to jitter-related errors and keep in mind that there exists a "plethora" of other error mechanisms.
Wrt ADC a typical example of "wrong bits" would be the case of missing codes (another would be DNL).
But related to jitter, the ADC produces no "wrong bits" itself, just bits that are wrong relative to the input signal due to the nonideal timing.
DonH56 already mentioned the so-called aperture jitter, but that too is a property of the ADC, or more precisely of the sample-and-hold (or track-and-hold) circuit, and it might even be a constant delay.
Imo watchnerd referred more to the sampling-clock jitter supplied to the ADC, and of course any jitter effect at the ADC is converted to a voltage error wrt the input signal. If this voltage error (considered in isolation) exceeds 1/2 LSB, it will irrevocably impair the signal quality. (*)
(*) I remember having read an IEEE paper on a stochastic approach where the authors concluded that, at least in theory, it would be possible to remove some of the jitter-related voltage errors in the signal.
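To put rough numbers on that 1/2 LSB condition (a back-of-envelope sketch with assumed values, not taken from the paper): the worst-case voltage error a timing error can cause is bounded by the signal's maximum slope.

```python
import math

# Worst-case voltage error from a timing error dt on a sine A*sin(2*pi*f*t)
# is bounded by the maximum slope: e = 2*pi*f*A*dt. Keep e below 1/2 LSB
# of a 16-bit quantizer for a full-scale 20 kHz tone (assumed values):
bits, f, A = 16, 20_000, 1.0
half_lsb = (2 * A / 2**bits) / 2

dt_max = half_lsb / (2 * math.pi * f * A)
print(f"jitter must stay below {dt_max * 1e12:.0f} ps")   # ~121 ps
```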
So many potential problems! But before my head spins off completely with the complexity of it all, we can still note that the first digital music recordings were made over 40 years ago, and even some of the earliest are still regarded as audiophile classics. Certainly, many recordings from 20 years ago are highly regarded. Hopefully all these problems are fine-tuning at the margins - digital is still better than vinyl!
Hmmm... A few comments from my perspective as a designer of data converters (at the transistor level)
<snip>, are minimized by design, and may have no audible effect. Finally, whilst most of these apply to any architecture, the actual behavior and sensitivity to some error sources is quite different among say a more conventional flash, folded-flash, multistep etc. ADC, dual-slope ADC, and one incorporating a delta or delta-sigma modulator.
<snip>
I do not follow this. First, the mechanisms you cited above cause "wrong bits" in the sense that the output does not map to the input. With regards to timing, jitter on the clock causes the signal to be sampled at the "wrong" time relative to the ideal sampling instant, and thus the output bits are "wrong".
What would you call it? Just curious, as I have heard it called various things over the years...
But the jitter-induced error does not have to exceed 1/2 LSB to corrupt the signal, though it will be sample-by-sample, and so an average or RMS metric (like SNR) might not show an error. For example, if the signal is right below a code threshold and noise pushes it just over, then the "wrong" bit is output. If the threshold for some bit is 1.000 and the signal is at 0.999, but then a little noise (added by the ADC via signal or clock buffers etc.) pushes it to 1.001 at the instant it is sampled, then it is wrong with respect to the actual input.
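A toy numeric version of that example (the quantizer here is an idealized sketch of mine; the 0.999/1.000/1.001 numbers are from the post above):

```python
lsb = 1.0                          # one code step, matching the example
code = lambda v: int(v // lsb)     # idealized quantizer, sketch only

print(code(0.999))                 # 0 -> signal just below the 1.000 threshold
print(code(0.999 + 0.002))         # 1 -> noise at the sample instant flips the code
```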
I was referring to this one:
There are a bunch of papers on removing jitter and other errors, and I seem to recall that same paper, or one of them.
But anyway, I meant it just to describe that the ADC produces an output code that does not map to the input, while in the case of jitter the ADC's output code maps to the input, but the sampling process takes the samples at the wrong time events. Therefore I wrote:
"But related to jitter, the ADC produces no "wrong bits" itself, just bits that are wrong relative to the input signal due to the nonideal timing."
So to speak, an idealized ADC with a jittered clock.
Those scopes are way, way too slow to handle jitter on a USB bus. There, even the old USB 2.0 high-speed runs at nearly half a gigahertz.
I think we are saying the same thing; I was just more anal/nit-picky about it.
I don't have a copy of 1241 with me but can check later to see if it actually uses "aperture jitter" -- despite helping write the thing, it was a while ago and I do not want to state it wrongly.
Hmmm... The phrase that ran through my mind comes from the music, not engineering, side of my brain: "The right note at the wrong time is still a wrong note." I think we are down to semantics (word definitions), but if you sample at the wrong time in a waveform, the level (voltage in this case) is also wrong unless it is a DC voltage. So to my mind jitter that causes a time error must also cause a voltage error. If it falls within the same LSB step it will not be seen at the output, but if it happens to cross the LSB boundary then it will be recorded as an error.
If the jitter (noise) is on the incoming signal, technically there is no ADC error since it sampled what it received; if the error arises from internal clock jitter, then the ADC is to blame.
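A quick way to see that boundary effect (a sketch with assumed sample rate, tone, resolution, and jitter): quantize the same sine with and without clock jitter and count how many output codes actually change.

```python
import numpy as np

# Assumed values, for illustration only.
fs, f, bits = 48_000, 10_000, 16
lsb = 2.0 / 2**bits                      # step size for a +/-1.0 range
rng = np.random.default_rng(0)

n = np.arange(fs)                        # one second of samples
t = n / fs
t_jit = t + rng.normal(0, 1e-9, n.size)  # 1 ns RMS clock jitter

codes = np.floor(np.sin(2 * np.pi * f * t) / lsb)
codes_jit = np.floor(np.sin(2 * np.pi * f * t_jit) / lsb)

# Samples whose voltage error crossed an LSB boundary land in a different
# code; the rest stay within the same step and the jitter leaves no trace.
print(f"{np.mean(codes != codes_jit):.1%} of output codes differ")
```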
False. Just because a line protocol is capable of high speed doesn't mean that it always runs at the highest speed possible. This is especially true with USB 2.x. Please consult https://en.wikipedia.org/wiki/USB#Signaling_rate_.28transmission_rate.29. USB 2.x has three (3) different line speeds:
- Low-speed (LS) rate of 1.5 Mbit/s is defined by USB 1.0. It is very similar to full-bandwidth operation except each bit takes 8 times as long to transmit. It is intended primarily to save cost in low-bandwidth human interface devices (HID) such as keyboards, mice, and joysticks.
- Full-speed (FS) rate of 12 Mbit/s is the basic USB data rate defined by USB 1.0. All USB hubs can operate at this speed.
- High-speed (HS) rate of 480 Mbit/s was introduced in 2001. All hi-speed devices are capable of falling back to full-bandwidth operation if necessary; i.e., they are backward compatible with the USB 1.1 standard. Connectors are identical for USB 2.0 and USB 1.x.
I observe that the speeds actually in use are matched to the maximum speed that can be productively used. HS mode, as a fraction of application types, is relatively rare, largely reserved for USB hard drives and the like (which are of course very common, but not typical of audio gear). Thus a 20 MHz scope might be productively used to do useful analysis of a USB 2.0 audio component (see the bit-period arithmetic sketched below). I wouldn't recommend a scope for studying anything but very extreme cases of jitter, such as those that inhibit steady communication.
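For rough orientation (my arithmetic, not from either post), compare the bit periods above against the rise time a scope can resolve, using the common t_r ≈ 0.35/BW rule of thumb:

```python
# Assumed rule of thumb: a scope of bandwidth BW resolves rise times down
# to roughly t_r = 0.35 / BW, i.e. ~17.5 ns for a 20 MHz scope.
rates = {"LS": 1.5e6, "FS": 12e6, "HS": 480e6}    # bit/s, from the list above
scope_bw = 20e6                                    # 20 MHz

rise_time = 0.35 / scope_bw
for name, rate in rates.items():
    period_ns = 1e9 / rate
    ok = period_ns > 2 * rise_time * 1e9
    print(f"{name}: bit period {period_ns:7.1f} ns -> "
          f"{'viewable' if ok else 'too fast'} on a 20 MHz scope")
```

On these numbers, LS and FS bit periods (667 ns and 83 ns) are within reach of a 20 MHz scope, while HS (about 2 ns per bit) is not, which is consistent with both positions in the exchange above.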
Thus a 20 MHz scope might be productively used to do useful analysis of a USB 2.0 audio component. I wouldn't recommend a scope for studying anything but very extreme cases of jitter such as those that inhibit steady communication.
Which is not of interest to anyone, since data errors are not what is at stake here. A number of companies make tweak devices that claim to reduce USB jitter and noise. Today, we hook up a DAC and see what they do on its output (not much). What would be interesting is to test their original hypothesis of cleaning up the USB signal.
At what stage does the jitter from the entire internet and the miles of cable in to your house become so bad that it shows up in the measurements and/or becomes unlistenable?