
Digital Audio Jitter Fundamentals Part 2

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
633
They have memory (permanent) of the upstream jitter that was embedded by the ADC at recording time.

(which may be inaudible)
Not true unless the ADC jitter indirectly causes data integrity issues - generating the wrong bits - which these days is doubtful. Remember, jitter is only about the timing of bits during transmission. There is absolutely no mechanism anywhere in all of digital technology to store jitter as such. Only discrete bits are stored, never the jitter which may have occurred during prior transmission.
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
Not true unless the ADC jitter indirectly causes data integrity issues - generating the wrong bits - which these days is doubtful. Remember, jitter is only about the timing of bits during transmission. There is absolutely no mechanism anywhere in all of digital technology to store jitter as such. Only discrete bits are stored, never the jitter which may have occurred during prior transmission.

Watchnerd is correct.
Because jitter is so often associated exclusively with data transmission and DACs, it gets forgotten what the sampling theorem requires.
Although other schemes are possible, the most common realization of sampling is based on the concept of equidistant sampling points in time, and equidistant means _really_ _equidistant_. :)

In fact, jitter during the AD conversion means that "wrong" bits are stored because of the misaligned points in time.
Btw, as said before, ASRC too leads to an accumulation of jitter-related effects, as new "wrong" data is stored (meaning at least slightly misaligned timing is converted to at least slightly wrong data).
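To make the misaligned-sampling point concrete, here is a minimal Python sketch (the test tone, sample rate, and jitter magnitude are all assumed values) that samples the same sine once on an ideal grid and once with a jittered clock, then compares the stored values:

```python
# Minimal sketch: compare samples taken on an ideal grid with samples taken
# at jittered instants. All parameters below are illustrative assumptions.
import numpy as np

fs = 48_000            # sample rate, Hz
f = 1_000              # test tone, Hz
sigma_t = 1e-9         # 1 ns RMS clock jitter
n = np.arange(4800)    # 100 ms worth of samples

t_ideal = n / fs
t_jittered = t_ideal + np.random.normal(0.0, sigma_t, size=n.size)

ideal = np.sin(2 * np.pi * f * t_ideal)        # what "should" be stored
actual = np.sin(2 * np.pi * f * t_jittered)    # what actually gets stored

rms_err = np.sqrt(np.mean((actual - ideal) ** 2))
print(f"RMS error relative to full scale: {rms_err:.2e}")
# Only the numbers in `actual` are ever written to disk -- the timing error is
# baked into them; the jitter itself is not stored anywhere.
```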
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,588
Likes
239,445
Location
Seattle Area
The ADC is a master device, so the only jitter is intrinsic to its clock. Since the talent and producer have approved those bits -- jitter noise and all -- what happens there is immaterial.
 

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
633
Watchnerd is correct.
Because jitter is so often associated exclusively with data transmission and DACs, it gets forgotten what the sampling theorem requires.
Although other schemes are possible, the most common realization of sampling is based on the concept of equidistant sampling points in time, and equidistant means _really_ _equidistant_. :)

In fact, jitter during the AD conversion means that "wrong" bits are stored because of the misaligned points in time.
Btw, as said before, ASRC too leads to an accumulation of jitter-related effects, as new "wrong" data is stored (meaning at least slightly misaligned timing is converted to at least slightly wrong data).
Not to get too picky, but I think you are unintentionally agreeing with what I said. Yes, generating wrong bits at the wrong time can hypothetically be the result of extreme jitter, and those wrong bits can be stored, but not the underlying jitter itself. There is no physical mechanism to store the jitter itself.

I think a better term for an ADC "misfiring" and generating the wrong bits would be "sampling error", whether that is caused by jitter and/or other mechanisms. And, as a practical matter, I doubt that is a serious issue in recording studios today.
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,878
Likes
16,655
Location
Monument, CO
I haven't really been following (busy, plus our new members are way more knowledgeable about audio engineering than I) but there are a multitude of jitter sources for ADCs just as there are for DACs. This article started out to address fundamental sampling errors induced by jitter in data converters, ADCs or DACs, and fell out of earlier (and much more in-depth) articles and notes. It did not address the transmission of bits but actual jitter in the sampling process (acquisition, for an ADC).

Transmission was introduced because, back then, many DACs did not buffer and resample the data but instead used whatever (usually noisy) clock was provided or recovered from the data stream. If all you need do is recover the data, i.e. ensure a "1" sent is a "1" received (and same for 0's), you have the entire unit interval (bit period) to get it right. Jitter bad enough to drop data is rarely a problem at audio'ish rates. But use that noisy incoming clock as the DAC's sampling clock, and significant jitter can be added to the analog output signal. That was the genesis of this article. Significant does not always mean audible, natch.

Aperture errors (jitter etc.) in the ADC cannot in general be compensated by the DAC; the DAC has no way of knowing the error. Similarly jitter in the DAC's output will only add to whatever was encoded into the ADC's samples. Jitter can be noise, crosstalk, clock bleed, etc. from within the device or externally coupled e.g. noise on either the clock or the analog input (output) signal, power supplies, etc.

Really two things going on here: aperture errors from jitter or whatever at the ADC or DAC that corrupt the sampled data; and, transmission errors that lead to bit errors and invalid signal at the end of the chain (digital or analog). The two overlap when the recovered clock is used to drive the DAC and adds a noise (jitter, etc.) source from the bit stream. For serial data streams (think S/PDIF, AES, or the PCIe and SAS/SATA links in your PC) following the clock is a good idea since the goal is to maximize the eye opening and optimize data recovery. Once you have the data, the idea is to send it out as originally intended, ideally perfectly sampled in time and amplitude, without any variance in sampling instants. Thus the whole asynchronous clocking scheme was developed for audio (it has been around for other applications for many decades; this is not a new problem ;) ).
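As a rough yardstick for the sampling-side effect, the usual jitter-limited SNR estimate for a full-scale sine can be sketched as follows (the tone frequency and jitter figure are assumed, purely for illustration):

```python
# Rule-of-thumb sketch: the jitter-limited SNR of a full-scale sine at frequency
# f, sampled with t_j seconds of RMS jitter, is about -20*log10(2*pi*f*t_j).
import math

def jitter_limited_snr_db(f_hz: float, t_j_s: float) -> float:
    return -20.0 * math.log10(2.0 * math.pi * f_hz * t_j_s)

# Assumed example: a 10 kHz tone with 1 ns RMS sampling jitter
print(f"{jitter_limited_snr_db(10_000, 1e-9):.1f} dB")   # ~84 dB
```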

Got longer than I had planned, and less clear, but rushing through on my lunch break. Apologies if all known already and/or not in line with the discussion. Times and products have changed, and of course this article does nothing to address things like power/ground noise crossing the digital-to-analog boundary via the USB or HDMI wires and corrupting the analog output.

HTH,
Don
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
Not to get too picky, but I think you are unintentionally agreeing with what I said. Yes, generating wrong bits at the wrong time can hypothetically be the result of extreme jitter, and those wrong bits can be stored, but not the underlying jitter itself. There is no physical mechanism to store the jitter itself. ..<snip>

It may again be a bit pedantic, but imo we should try to keep the various error mechanisms apart wherever possible. As we are discussing jitter effects, we should stick to jitter-related errors and keep in mind that there exists a "plethora" of other error mechanisms.

Wrt ADCs, a typical example of "wrong bits" would be the case of missing codes (another would be DNL).
But related to jitter, the ADC produces no "wrong bits" itself but just related to the input signal due to the nonideal timing.
DonH56 already mentioned the so-called aperture jitter, but that too is a virtue of the ADC or, more precisely, of the sample and hold circuit (or sample and track circuit), and that might even be a constant delay.
Imo watchnerd referred more to the sampling clock jitter supplied to the ADC, and of course any jitter effect at the ADC is converted to a voltage error wrt the input signal. If this voltage error (considered in isolation) exceeds 1/2 LSB it will irrevocably impair the signal quality. (*)

(*) I remember having read an IEEE paper on a stochastic approach where the authors concluded that, at least in theory, it would be possible to remove some of the jitter-related voltage errors in the signal.
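To put a number on the 1/2-LSB condition, here is a small worked example (the signal frequency, amplitude, and converter resolution are assumed): the worst-case voltage error from a timing error dt on a full-scale sine is roughly the peak slew rate times dt.

```python
# Assumed worked example: how much timing error produces a worst-case voltage
# error of 1/2 LSB on a full-scale sine? dV ≈ 2*pi*f*A*dt at the zero crossing.
import math

f = 20_000     # worst-case audio frequency, Hz
N = 16         # converter resolution, bits
A = 1.0        # peak amplitude, so full scale spans 2*A

half_lsb = (2 * A / 2**N) / 2
dt_max = half_lsb / (2 * math.pi * f * A)
print(f"dt for a worst-case 1/2-LSB error: {dt_max * 1e12:.0f} ps")   # ~121 ps
```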
 
Last edited:

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,878
Likes
16,655
Location
Monument, CO
Hmmm... A few comments from my perspective as a designer of data converters (at the transistor level), so I may be misunderstanding your context, and of course definitions are not uniform despite the existence of standards like IEEE 1241. And note up-front that some of these are about mechanisms deep in the chip, are minimized by design, and may have no audible effect. Finally, whilst most of these apply to any architecture, the actual behavior and sensitivity to some error sources is quite different among say a more conventional flash, folded-flash, multistep etc. ADC, dual-slope ADC, and one incorporating a delta or delta-sigma modulator.

It may again be a bit pedantic, but imo we should try to keep the various error mechanisms apart wherever possible. As we are discussing jitter effects, we should stick to jitter-related errors and keep in mind that there exists a "plethora" of other error mechanisms.

Indeed.

Wrt ADCs, a typical example of "wrong bits" would be the case of missing codes (another would be DNL).

Yes, INL, DNL, noise, clock coupling, etc. etc. etc. can cause missing codes. Note INL and DNL are usually considered static errors and are characterized at "DC" or using a low-frequency input. The IEEE Standard (1241) discusses them.

But related to jitter, the ADC produces no "wrong bits" itself but just related to the input signal due to the nonideal timing.

I do not follow this. First, the mechanisms you cited above cause "wrong bits" in the sense that the output does not map to the input. With regard to timing, jitter on the clock causes the signal to be sampled at the "wrong" time relative to the ideal sampling instant and thus the output bits are "wrong". Clock jitter can come from outside the chip but some is added by the on-chip clock circuitry. That is unrelated to the input signal but causes "wrong bits" at the output. And of course other mechanisms in the clock and signal path affect the timing and level of the sampled signal.

The input signal can influence the sampling instant (and aperture time) through things like voltage modulation of the sampling switches -- the actual switching device may (usually does) sample at a slightly different time depending upon the voltage level of the input. And there are other error sources to which the input signal is a direct contributor, including nonlinearity in the buffers up to the switch (HD/IMD), thermal errors (one example is a simple flash design, where the comparators in the middle switch "constantly", but the ones at the ends are mostly switched one way or the other, leading to self-heating effects that change their thresholds), etc.

DonH56 already mentioned the so-called aperture jitter, but that too is a virtue of the ADC or, more precisely, of the sample and hold circuit (or sample and track circuit), and that might even be a constant delay.

What would you call it? Just curious, as I have heard it called various things over the years, but I think we all understand aperture time and how jitter affects the sampling instant. If you dig deeply, it gets gnarly... There is a delay (latency) component that is somewhat constant, in the signal and clock paths, but at high resolution and/or high speed it is a pain because the delay changes with signal slew, clock slew, signal and clock amplitude, etc. -- all of which can be in turn affected by the circuit and environmental conditions (PVT -- process, voltage, temperature). Random jitter comes about from noise on the clock and signal that changes the actual time of the sample, leading to errors in the sampled values (bits). Perhaps more insidious is deterministic jitter, which comes in many forms, but for example can be caused by clock coupling into the signal that adds spurs to the output due to the clock mixing with the input signal.

Imo watchnerd referred more to the sampling clock jitter supplied to the ADC, and of course any jitter effect at the ADC is converted to a voltage error wrt the input signal. If this voltage error (considered in isolation) exceeds 1/2 LSB it will irrevocably impair the signal quality. (*)

There are a couple of things that can happen. If the jitter-induced error exceeds 1/2 lsb then SNR is degraded. Those plots in the original article show that effect. Effectively it turns the lsb into "noise" and you'll lose a bit of resolution. Processing can regain that in some cases since the noise will average out and the signal will remain, but that only works if you have a (relatively) steady-state (constant) signal.

But, the jitter does not have to exceed 1/2 lsb to corrupt the signal, though it will be sample-by-sample and so an average or RMS metric (like SNR) might not show an error. For example, if the signal is right below the threshold, and noise pushes it just over, then the "wrong" bit is output. If the threshold for some bit is 1.000 and the signal is at 0.999, but then a little noise (added by the ADC via signal or clock buffers etc.) pushes it to 1.001 at the instant it is sampled, then it is wrong with respect to the actual input.
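A toy version of that threshold example, with the code boundary and step size assumed purely for illustration:

```python
# Toy quantizer with code boundaries at multiples of `step`, so one boundary
# falls exactly at 1.000 -- the situation described above. Values are assumed.
import math

step = 0.002   # assumed LSB size

def code(v: float) -> int:
    return math.floor(v / step)

print(code(0.999), code(1.001))   # 499 500 -> a tiny noise excursion at the
                                  # sampling instant flips the stored code
```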

(*) I remember having read an IEEE paper on a stochastic approach where the authors concluded that, at least in theory, it would be possible to remove some of the jitter-related voltage errors in the signal.

There are a bunch of papers on removing jitter and other errors and I seem to recall that same paper, or one of them. I actually researched a few ADC designs that incorporated schemes for removing both random and deterministic jitter. Some of those would work well at lower frequencies (I should have said above that most of my efforts were for converters in the 0.5 to 10+ GS/s range) but were area and power prohibitive at high speed. Some used digital processing and some various more "analog" approaches.

Thanks for the discussion, and I am probably misunderstanding and/or wrong as well as long-winded, but hope this provides a little more about my take on all this.

Onwards,
Don
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Hmmm... A few comments from my perspective as a designer of data converters (at the transistor level)
So many potential problems! But before my head spins off completely with the complexity of it all, we can still note that the first digital music recordings were made over 40 years ago, and even some of the earliest are still regarded as audiophile classics. Certainly, many recordings from 20 years ago are highly regarded. Hopefully all these problems are fine tuning at the margins - digital is still better than vinyl!
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,878
Likes
16,655
Location
Monument, CO
So many potential problems! But before my head spins off completely with the complexity of it all, we can still note that the first digital music recordings were made over 40 years ago, and even some of the earliest are still regarded as audiophile classics. Certainly, many recordings from 20 years ago are highly regarded. Hopefully all these problems are fine tuning at the margins - digital is still better than vinyl!

Hey, it's why we get the big bucks! Oh wait...

The impact of what might seem gross spec violations on the actual music played is always an issue... Some of my favorite CDs are earlier but well-recorded releases, whilst some of my least favorite are newer but poorly recorded and/or remastered. The best technology doesn't fix the monkey on the (sound/mixer) board, as well I know, since that monkey has been me at times!

Both digital and vinyl have their pros and cons; I just got tired of all the steps to get my LPs to sound good and all the renewables I kept having to buy. I still like the sound, arguably just got lazy. Technically digital mostly wins but not always (e.g. LPs had FR well above 20 kHz from day one). And like some tube circuits the distortion added may sound good even if not technically accurate.
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
<snip>, are minimized by design, and may have no audible effect. Finally, whilst most of these apply to any architecture, the actual behavior and sensitivity to some error sources is quite different among say a more conventional flash, folded-flash, multistep etc. ADC, dual-slope ADC, and one incorporating a delta or delta-sigma modulator.

Indeed. :)
A quick remark; imo we are just discussing the impact of jitter-related effects on signal quality as a technically interesting process. I don't think that we have to emphasize in every post what we think about the audibility. (watchnerd already provided a caveat about audibility)
I only comment on the technical aspects of the process, and it is in no way meant as a statement about audibility.

<snip>
I do not follow this. First, the mechanisms you cited above cause "wrong bits" in the sense that the output does not map to the input. With regard to timing, jitter on the clock causes the signal to be sampled at the "wrong" time relative to the ideal sampling instant and thus the output bits are "wrong".

Fitzcaraldo215 introduced the term "wrong bits"; obviously it would have been better for me to ask him what it means instead of assuming that my interpretation matches his. :)
But anyway, I meant it just to describe that the ADC produces an output code that does not map to the input, while in the case of jitter, the ADC's output code maps to the input but, with respect to the sampling process, the samples are taken at the wrong instants. Therefore I wrote:
"But related to jitter, the ADC produces no "wrong bits" itself but just related to the input signal due to the nonideal timing."
So to speak, an idealized ADC with a jittered clock.


What would you call it? Just curious, as I have heard it called various things over the years,.....

I call it aperture jitter too (doesn't "so-called" mean that it is called so?), but aperture jitter is related to the sample/track and hold process itself (meaning its inherent time uncertainty), while the clock jitter contributed by the oscillator and/or associated circuitry inside or outside the ADC is a different error mechanism. I thought that complies with the 1241 terminology?!

But, the jitter does not have to exceed 1/2 lsb to corrupt the signal, though it will be sample-by-sample and so an average or RMS metric (like SNR) might not show an error. For example, if the signal is right below the threshold, and noise pushes it just over, then the "wrong" bit is output. If the threshold for some bit is 1.000 and the signal is at 0.999, but then a little noise (added by the ADC via signal or clock buffers etc.) pushes it to 1.001 at the instant it is sampled, then it is wrong with respect to the actual input.

Of course; I should have been clearer in stating that, again, I was assuming an otherwise ideal ADC with a jittered clock.

There are a bunch of papers on removing jitter and other errors and I seem to recall that same paper, or one of them.
I was referring to this one:
Weller and Goyal, "Bayesian Post-Processing Methods for Jitter Mitigation in Sampling," IEEE Transactions on Signal Processing, vol. 59, no. 5, May 2011.

Thank you for the very interesting insights.
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,878
Likes
16,655
Location
Monument, CO
I think we are saying the same, I was just more anal/nit-picky about it. I don't have a copy of 1241 with me but can check later to see if it actually uses "aperture jitter" -- despite helping write the thing, it was a while ago and I do not want to state it wrongly.

But anyway, I meant it just to describe that the ADC produces an output code that does not map to the input, while in the case of jitter, the ADC's output code maps to the input but, with respect to the sampling process, the samples are taken at the wrong instants. Therefore I wrote:
"But related to jitter, the ADC produces no "wrong bits" itself but just related to the input signal due to the nonideal timing."
So to speak, an idealized ADC with a jittered clock.

Hmmm... The phrase that ran through my mind comes from the music, not engineering, side of my brain: "The right note at the wrong time is still a wrong note." I think we are down to semantics (word definitions), but if you sample at the wrong time in a waveform, the level (voltage in this case) is also wrong unless it is a DC voltage. So to my mind, jitter that causes a time error must also cause a voltage error. If it falls within the same lsb step it will not be seen at the output, but if it happens to cross the lsb boundary then it will be recorded as an error.

If the jitter (noise) is on the incoming signal, technically there is no ADC error since it sampled what it received; if the error arises from internal clock jitter, then the ADC is to blame.

Many ADCs (and DACs) implement some form of noise decorrelation (dither), so it can be argued whether the lsb really even matters, especially for very high dynamic range systems. Years ago, jitter from some sources measured pretty high (~50 ns or more), but I am not sure any of today's components are that high. And re-clocking schemes are prevalent, further mitigating the problem.

The paper looks interesting. It was written a year after I changed jobs so is not something I have tried, though it looks similar. But, at my age, everything "looks similar". :)

Thanks! - Don
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,246
Likes
17,159
Location
Riverview FL
Let's see:

1 lsb error at 16 bits is a single sample's worth of dither...

1 lsb at 24 or higher bits is infinitesimal...

How many equivalent bits of noise does my merely rather quiet room have, if full scale = 105 dB SPL?

Say, a 33 dB room... Gives 72 dB SNR, 72 dB / ~6 dB per bit = 12... So I hear (using 16 bits) 12 bits above the noise floor... Or something like that... when it's real quiet here... and playing rather loudly...
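The same back-of-envelope, written out as a few lines of Python (the SPL figures are the assumed ones above):

```python
# Back-of-envelope from the post above, scripted. Levels are assumed figures.
full_scale_spl = 105.0    # dB SPL when playback hits digital full scale
room_noise_spl = 33.0     # dB SPL of a quiet room

headroom_db = full_scale_spl - room_noise_spl       # 72 dB
bits_above_noise = headroom_db / 6.02               # ~6 dB per bit
print(f"{headroom_db:.0f} dB -> about {bits_above_noise:.0f} bits above the room noise")
```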
 
Last edited:

Arnold Krueger

Active Member
Joined
Oct 10, 2017
Messages
160
Likes
83
Those scopes are way, way too slow to handle jitter on the USB bus. There, even the old USB 2.0 high-speed runs at nearly half a gigahertz.

False. Just because a line protocol is capable of high speed doesn't mean that it always runs at the highest speed possible. This is especially true with USB 2.x. Please consult https://en.wikipedia.org/wiki/USB#Signaling_rate_.28transmission_rate.29. USB 2.x has three (3) different line speeds:

  • Low-speed (LS) rate of 1.5 Mbit/s is defined by USB 1.0. It is very similar to full-bandwidth operation except each bit takes 8 times as long to transmit. It is intended primarily to save cost in low-bandwidth human interface devices (HID) such as keyboards, mice, and joysticks.
  • Full-speed (FS) rate of 12 Mbit/s is the basic USB data rate defined by USB 1.0. All USB hubs can operate at this speed.
  • High-speed (HS) rate of 480 Mbit/s was introduced in 2001. All hi-speed devices are capable of falling back to full-bandwidth operation if necessary; i.e., they are backward compatible with the USB 1.1 standard. Connectors are identical for USB 2.0 and USB 1.x.
I observe that the speeds actually used are adjusted to the maximum speed that could be productively used. HS mode, as a fraction of application types, is relatively rare, largely reserved for USB hard drives and the like (which are of course very common, but not typical of audio gear). Thus a 20 MHz scope might be productively used to do useful analysis of a USB 2.0 audio component. I wouldn't recommend a scope for studying anything but very extreme cases of jitter, such as those that inhibit steady communication.
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
I think we are saying the same, I was just more anal/nit-picky about it.

Nothing wrong with that, we have to ensure that we are talking about the same things at least sometimes. :)

I don't have a copy of 1241 with me but can check later to see if it actually uses "aperture jitter" -- despite helping write the thing, it was a while ago and I do not want to state it wrongly.

I only checked the table of contents where the term "aperture uncertainty" is used.

Hmmm... The phrase that ran through my mind comes from the music, not engineering, side of my brain: "The right note at the wrong time is still a wrong note." I think we are down to semantics (word definitions), but if you sample at the wrong time in a waveform, the level (voltage in this case) is also wrong unless it is a DC voltage. So to my mind, jitter that causes a time error must also cause a voltage error. If it falls within the same lsb step it will not be seen at the output, but if it happens to cross the lsb boundary then it will be recorded as an error.

Obviously we are saying the same thing; as said before, I tried to make a distinction between a malfunction/error introduced by the conversion process itself (even if provided with a perfect - meaning jitter-free - clock) and the error introduced by the clock jitter (even if the converter otherwise works as a perfect analog-to-digital converter).

If the jitter (noise) is on the incoming signal, technically there is no ADC error since it sampled what it received; if the error arises from internal clock jitter, then the ADC is to blame.

Of course; maybe there is a reason for misunderstandings, as in these discussions we talk about "ADCs" but sometimes just mean the integrated circuit called an ADC, while at other times we are referencing a complete apparatus called an ADC.
I'll try to be more precise in the future...
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
False. Just because a line protocol is capable of high speed doesn't mean that it always runs at the highest speed possible. This is especially true with USB 2.x. Please consult https://en.wikipedia.org/wiki/USB#Signaling_rate_.28transmission_rate.29. USB 2.x has three (3) different line speeds:

  • Low-speed (LS) rate of 1.5 Mbit/s is defined by USB 1.0. It is very similar to full-bandwidth operation except each bit takes 8 times as long to transmit. It is intended primarily to save cost in low-bandwidth human interface devices (HID) such as keyboards, mice, and joysticks.
  • Full-speed (FS) rate of 12 Mbit/s is the basic USB data rate defined by USB 1.0. All USB hubs can operate at this speed.
  • High-speed (HS) rate of 480 Mbit/s was introduced in 2001. All hi-speed devices are capable of falling back to full-bandwidth operation if necessary; i.e., they are backward compatible with the USB 1.1 standard. Connectors are identical for USB 2.0 and USB 1.x.
I observe that the speeds actually used are adjusted to the maximum speed that could be productively used. HS mode, as a fraction of application types, is relatively rare, largely reserved for USB hard drives and the like (which are of course very common, but not typical of audio gear). Thus a 20 MHz scope might be productively used to do useful analysis of a USB 2.0 audio component. I wouldn't recommend a scope for studying anything but very extreme cases of jitter, such as those that inhibit steady communication.

There was a reason that older audio interfaces using FS-USB usually only supported 24-bit/96 kHz stereo audio data: the then so-called "hi-rez" 24-bit/192 kHz would not work reliably under all possible USB constraints, although the raw data rate would fit within the specification. Therefore HS-USB is used for "hi-rez" audio interfaces.
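For reference, the raw audio payload rates behind that point look like this (a back-of-envelope sketch; isochronous framing, bit stuffing, and other bus overhead are ignored):

```python
# Raw (payload-only) data rates for PCM audio over USB; bus overheads ignored.
def raw_rate_mbps(bits_per_sample: int, channels: int, fs_hz: int) -> float:
    return bits_per_sample * channels * fs_hz / 1e6

print(raw_rate_mbps(24, 2, 96_000))    # ~4.6 Mbit/s: comfortable on a 12 Mbit/s FS bus
print(raw_rate_mbps(24, 2, 192_000))   # ~9.2 Mbit/s: fits on paper, little real-world margin
```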
 
Last edited:

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,319
Location
Albany Western Australia
False. Just because a line protocol is capable of high speed doesn't mean that it always runs at the highest speed possible. This is especially true with USB 2.x. Please consult https://en.wikipedia.org/wiki/USB#Signaling_rate_.28transmission_rate.29. USB 2.x has three (3) different line speeds:

  • Low-speed (LS) rate of 1.5 Mbit/s is defined by USB 1.0. It is very similar to full-bandwidth operation except each bit takes 8 times as long to transmit. It is intended primarily to save cost in low-bandwidth human interface devices (HID) such as keyboards, mice, and joysticks.
  • Full-speed (FS) rate of 12 Mbit/s is the basic USB data rate defined by USB 1.0. All USB hubs can operate at this speed.
  • High-speed (HS) rate of 480 Mbit/s was introduced in 2001. All hi-speed devices are capable of falling back to full-bandwidth operation if necessary; i.e., they are backward compatible with the USB 1.1 standard. Connectors are identical for USB 2.0 and USB 1.x.
I observe that the speeds actually used are adjusted to the maximum speed that could be productively used. HS mode, as a fraction of application types, is relatively rare, largely reserved for USB hard drives and the like (which are of course very common, but not typical of audio gear). Thus a 20 MHz scope might be productively used to do useful analysis of a USB 2.0 audio component. I wouldn't recommend a scope for studying anything but very extreme cases of jitter, such as those that inhibit steady communication.

....mmmmm.....

Take a look at a couple of USB 2 eye patterns. Are you really sure of your assertion that a 20 MHz scope is going to cut it? Take a close look at the time base and sample rate. You also require a differential probe.

[Attached images: near end.png, rt eye active probe.jpg -- USB 2.0 eye-pattern captures]
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,588
Likes
239,445
Location
Seattle Area
Thus a 20 MHz scope might be productively used to do useful analysis of a USB 2.0 audio component. I wouldn't recommend a scope for studying anything but very extreme cases of jitter such as those that inhibit steady communication.
Which is not of interest to anyone, since data errors are not what is at stake here. A number of companies make tweak devices that claim to reduce USB jitter and noise. Today, we hook up a DAC and see what they do on its output (not much). What would be interesting is to test their original hypothesis of cleaning up the USB signal.

For that, even if we are testing 12 Mbit/s USB, we would need a scope that is at least 10X faster, to make sure we are not seeing the effect of the scope's slow response rather than that of the USB device in question. That means a 120 MHz scope, not 20. A 20 MHz scope would severely distort the rise time of a 12 MHz signal.
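One way to see the bandwidth point is the common 0.35/bandwidth rise-time rule of thumb (a rough sketch; the exact factor and the USB edge times are assumed, and vary in practice):

```python
# Rule-of-thumb sketch: a scope's 10-90% rise time is roughly 0.35 / bandwidth.
def scope_rise_time_ns(bw_mhz: float) -> float:
    return 0.35 / (bw_mhz * 1e6) * 1e9

print(f"{scope_rise_time_ns(20):.1f} ns")    # ~17.5 ns -- on the order of an FS-USB edge,
                                             # so the scope itself dominates what you see
print(f"{scope_rise_time_ns(120):.1f} ns")   # ~2.9 ns -- fast enough to leave the edge mostly intact
```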

You also need a fast trigger as any delays or jitter there also intertwines with the signal under test.

And as I and BE mentioned, you need a special probe that does not have high capacitance, so it doesn't load down the source signal. This usually means an active differential probe.

But we can put all of this aside and ask if you have done such work or are just assuming it would work. Do you have such a USB scope and can post results of looking at USB bus?
 

Krunok

Major Contributor
Joined
Mar 25, 2018
Messages
4,600
Likes
3,067
Location
Zg, Cro
At what stage does the jitter from the entire internet and the miles of cable into your house become so bad that it shows up in the measurements and/or becomes unlistenable?

It actually doesn't work that way. Packet delay variation is compensated for with jitter buffers, and missing or late packets are handled by a process called Packet Loss Concealment. As long as packet loss is under 1% it is considered inaudible, while if it is greater than 3% you will certainly be able to hear it. It will not actually modify the sound itself, but you will hear it as clicks or dropouts.

To put it briefly - in packet audio transmission you don't really have jitter; instead you have packet loss and delay. And yes, if packet loss rises above a certain limit you will hear it, in a very different way than you hear jitter.

More info here: https://kb.smartvox.co.uk/voip-sip/rtp-jitter-audio-quality-voip/
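A minimal sketch of the jitter-buffer idea described above (packet duration, buffer depth, and arrival times are all made-up illustrative numbers): packets that arrive within the buffering delay play out on a fixed schedule, while packets that miss their deadline are the ones that get concealed.

```python
# Illustrative jitter-buffer sketch: fixed playout schedule after a buffering
# delay; a packet that arrives after its playout deadline is treated as lost.
packet_interval = 0.020   # 20 ms of audio per packet (assumed)
buffer_delay = 0.040      # playout delayed by two packet intervals (assumed)

# (sequence number, arrival time in seconds), with made-up delay variation
arrivals = [(0, 0.005), (1, 0.026), (2, 0.041), (3, 0.110), (4, 0.086)]

concealed = []
for seq, arrival in arrivals:
    deadline = buffer_delay + seq * packet_interval
    if arrival > deadline:
        concealed.append(seq)

print("packets needing concealment:", concealed)   # packet 3 misses its 0.100 s deadline
```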
 
Last edited: