
Uptone ISO Regen Review and Measurements

OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,656
Likes
240,861
Location
Seattle Area
While I enjoy reading your posts - I do have to ask just one simple question. How does the human mind perceive a stereo image from just two separate point sources? From all the research I have read - brain scientists can't really explain it - many theories - no proof.
Oh, we know a considerable amount about that. One great proof is headphones that simulate multiple channels: they can position sound 360 degrees around you precisely because we understand what is going on.

Briefly, we have two ears, plus a face and torso that shape how sound arrives at each ear. The differences between those two signals in level, timing, and frequency response allow us to locate the position of sounds.

To be sure, it is a complex matter. For example, level and timing differences dominate in different frequency ranges.

There are specific metrics, for example IACC (interaural cross-correlation), IID (interaural intensity difference), and ITD (interaural time difference), that quantify such things.
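To make those metrics a little more concrete, here is a rough Python sketch of how ITD and ILD can be estimated from a pair of ear signals via cross-correlation (the same correlation that underlies IACC). The 48 kHz sample rate, the helper name, and the toy noise-burst source are all made up for illustration:

```python
import numpy as np

FS = 48_000  # sample rate in Hz (assumed for this toy example)

def itd_ild(left, right, fs=FS):
    """Rough estimate of interaural time difference (ITD, ms) and
    interaural level difference (ILD/IID, dB) from two ear signals.
    ITD is taken from the lag of the peak of the interaural cross-correlation."""
    xcorr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(xcorr)) - (len(left) - 1)
    itd_ms = 1000.0 * lag / fs          # positive: sound reached the left ear first
    ild_db = 10.0 * np.log10(np.sum(left**2) / np.sum(right**2))
    return itd_ms, ild_db

# Toy source: a noise burst that reaches the right ear ~0.4 ms later and 3 dB quieter,
# as if the source were off to the left.
rng = np.random.default_rng(0)
src = rng.standard_normal(FS // 10)
delay = int(round(0.0004 * FS))
left = src
right = np.roll(src, delay) * 10 ** (-3 / 20)   # circular shift is fine for a toy signal

print(itd_ild(left, right))   # roughly (0.40 ms, 3.0 dB)
```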

You ever wonder why dogs twist their head sideways like this:

Why-do-dogs-tilt-their-heads1.png


They do that to accentuate the vertical differences in the sound arriving at their ears, to better triangulate its position. And of course to also act cute. :D

Please see Chapter 7 of Dr. Toole's book, "Impressions of Space", https://www.amazon.com/Sound-Reproduction-Psychoacoustics-Loudspeakers-Engineering/dp/0240520092
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,656
Likes
240,861
Location
Seattle Area
So if one is to take that basic final result (and purpose) of an audio chain (is there music, if a stereo is playing and no one is there to hear it? Air waveforms yes - music no), how can any single audio measurement capture this complex process? We can maybe make a few guesses at what a measured electrical aspect might produce in terms of subjective sound quality - but until heard, it's just a guess.

Isn't almost every experience a human has a 'subjective biased illusion'? In some cultures fat women are the most attractive - in others near starvation bone thin is? Which is objectively more 'beautiful'? Etc...
Starting from the end: the research community - for medical, educational, and audio-enjoyment purposes - has performed an incredible amount of research into how we hear, through controlled listening tests. Over many decades a specific domain of science has been established, called psychoacoustics. Consensus has formed around its findings and been proven time and time again. For example, lossy codecs are able to reduce the data rate of music by some 90% with the result still sounding awfully close to the original. This is done by using the psychoacoustic model of our hearing and analyzing what needs to be kept, and what can be reduced in resolution or discarded.

One of the most basic parts of this science is the threshold of hearing. Ultimately our ear has limits on what it can detect. That threshold (versus frequency) is encapsulated by work done by Fletcher and Munson back in 1933 while working at Bell Labs:

Fletcher-Munson01.png


Looking at the bottom curve, we see the faintest sound we can hear. Once we achieve such quietness, there is no reason to keep improving; it simply won't make a difference.
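For anyone who wants to put numbers on that curve, the threshold in quiet is often approximated with Terhardt's formula in perceptual-coding texts. A minimal sketch (the printed values are rough, and the formula is only an approximation of the measured contours):

```python
import numpy as np

def threshold_in_quiet_db_spl(f_hz):
    """Terhardt's approximation of the absolute threshold of hearing
    (dB SPL as a function of frequency), widely used in perceptual coding."""
    f = np.asarray(f_hz, dtype=float) / 1000.0   # frequency in kHz
    return (3.64 * f**-0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3)**2)
            + 1e-3 * f**4)

for f in (100, 1_000, 4_000, 16_000):
    print(f"{f:>6} Hz -> {float(threshold_in_quiet_db_spl(f)):6.1f} dB SPL")
# ~23 dB at 100 Hz, ~3 dB at 1 kHz, about -3 dB near 4 kHz (most sensitive),
# rising steeply again toward the top of the audio band.
```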

Another fundamental one is simultaneous masking:

[image: simultaneous masking curve]


This says that in the presence of a loud tone, we can't hear quieter tones near it in frequency. In the situation above, the blue tone is not heard but the red one is.

This gets us to the question you asked about clocks. The "better clocks" people talk about have a lower amount of noise/jitter at low frequencies. Based on the graph above, you can see that the products of such low-frequency jitter land very close to the main tones and are hence masked and inaudible.
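To put a rough number on that, here is a back-of-the-envelope sketch using the standard narrowband phase-modulation approximation; the 10 kHz tone, 100 Hz jitter rate, and 1 ns peak jitter are made-up example values, not measurements of any product:

```python
import math

def jitter_sidebands(f_tone_hz, f_jitter_hz, jitter_peak_s):
    """Sinusoidal jitter of peak amplitude jitter_peak_s on a tone produces a
    pair of sidebands at f_tone +/- f_jitter, each roughly
    20*log10(pi * f_tone * jitter_peak) dB below the tone
    (narrowband PM approximation)."""
    level_dbc = 20.0 * math.log10(math.pi * f_tone_hz * jitter_peak_s)
    return f_tone_hz - f_jitter_hz, f_tone_hz + f_jitter_hz, level_dbc

# 1 ns of 100 Hz (low-frequency) jitter on a 10 kHz tone:
print(jitter_sidebands(10_000, 100, 1e-9))
# -> sidebands at 9,900 Hz and 10,100 Hz, each around -90 dB relative to the tone,
#    i.e. tiny and sitting right under the masking skirt of the tone itself.
```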

You see how powerful that science is?

As to measurements, when it comes to audio tweaks, all we have to show is that either no electrical change has occurred in the output of the system, or that what has changed is inaudible according to psychoacoustics. If so, we have a high-confidence conclusion that the tweak is not making an audible difference.

In the case of digital systems this is greatly simplified by the fact that the aspects of audio we notice easily are not changed. For example, frequency response remains the same. No clipping happens as in an analog amplifier. So we don't need a comprehensive set of measurements to arrive at a high-confidence answer.

If we were speaking of speakers for example, that situation would change a lot and we would need to know a lot more than one or two measurements.

Ultimately though, we can do a controlled blind listening test. Those tests have invariably shown what psychoacoustics readily explains: that we simply are not sensitive to these small differences we argue about in forums. That audiophiles in general are no better at hearing non-linear, complex distortions than the general public. What we think we "hear" is routinely what our brain likes to manufacture to satisfy our sense of curiosity, optimism, and imagination.
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,656
Likes
240,861
Location
Seattle Area
So from a purely philosophical perspective, your adamant statements (bordering on absolutist dogma) - that if something can't be measured, then anyone perceiving it to sound better is just showing expectation bias - sound like completely unsupported conjecture and rhetoric.

I see scientists make huge logical inference errors all the time. Most of biology, chemistry, physics, and even mathematical theories (Godel's incompleteness theorems) are rife with it.

Now if 10 people hear it and 9 agree it sounds better - that is further subjective proof. No?
Most definitely. We are in absolute agreement on the last sentence.

The problem is, folks don't understand how the brain works when it comes to evaluating sound, and proceed to conduct tests that lie to you 99% of the time about what you think you are hearing. With the data being wrong that often, the conclusions of audiophiles will be wrong just as much.

If you conduct correct listening tests and arrive at such data, I would gladly and happily throw away any measurements and get on board with you! :)

Here is a simple test with mind boggling outcome:

Take a digital file in your library and play it for a few seconds. Jump to its beginning and listen again. Do this three times, paying attention to the fidelity you hear: the air between instruments, the darkness of the background, how analog-like the sound is, etc.

Here is the outcome of the test: you will hear differences in every run!!! This is the case even though obviously nothing has changed.

And here is the most amazing part: even with full knowledge of what I just said, you can still hear the difference! I know I do.

To not hear this is to not be human!

Why do we hear such changes? Because we have a two-stage memory: an "echoic" short-term memory that captures a large amount of data. Since your ears are constantly working, no brain could ever keep storing such a massive amount of data.

So what happens is that the brain analyzes what is in echoic memory and commits only a very small amount of information to long-term memory. This way you can remember sounds for many years. But remember, this is a highly lossy process. A tiny, tiny amount of information is kept; the rest is discarded. Left to its own devices, the brain is not going to focus on that black background on every note, for example.

Now, when you do a listening test, you insert your conscious mind into this process. You ask your brain to keep searching for differences. Now you hear details, and capture and commit to long-term memory things you had not before. What you keep now is changed.

For this reason, you remember a variable amount of information from note to note in audio evaluations. It is called elasticity of hearing.

Have you ever gone back to an old piece of music and all of a sudden heard some sound you had not heard before? That is the above situation in action.

In evaluating audio equipment it is critical that we guard against this aspect of our perception. If we do not, we get tons of variable information that doesn't add up to anything. Ever wonder why subjective, sighted evaluations are so all over the place among audiophiles? Now you know :).

In controlled testing, we run many trials, for example, to rule out the variability of our hearing. We don't just take one A/B test and declare an outcome. We do as you say: play 10 times, and if 9 times you can identify a difference, then we believe the outcome.
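For anyone who wants to check the arithmetic on "play 10 times and identify it 9 times", here is a minimal sketch of the usual one-sided binomial test (the trial counts are just example numbers):

```python
from math import comb

def abx_p_value(n_trials, n_correct):
    """Chance of scoring at least n_correct out of n_trials by pure guessing
    (one-sided binomial test with p = 0.5 per trial)."""
    return sum(comb(n_trials, k) for k in range(n_correct, n_trials + 1)) / 2**n_trials

print(abx_p_value(10, 9))   # ~0.011 -> 9/10 is very unlikely to be guessing
print(abx_p_value(10, 6))   # ~0.38  -> 6/10 is entirely consistent with guessing
```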

Audiophiles don't do that. Change a cable and oh wow, so much changes. Well, no. All that happened was that when you changed cables, you started to look for details that would normally be discarded by the brain, and now you heard them. That had nothing to do with having that new cable in the system. The sound could very well be identical, yet you hear profound differences.

In controlled testing, we take away all other input from you, such as whether the cable has or has not been replaced. Now, if you can track those changes with your ears -- and nothing but your ears -- then yes, you absolutely have better results than any measurements.

So going full circle, it is the understanding of the hearing system and how our brain works that allows us to dismiss much of what audiophiles do in evaluating equipment. It is precisely that which guides us in knowing what is good and bad data.

Put another way, if audiophiles really believe in science, they should follow it. They should read the research. They should conduct controlled testing of their own and realize how easily the brain lies to them time and time again.

Just have a loved one make a change to your system every night. One day put a new cable in. One day put the old cable back. All without you knowing. You arrive at your system and, without seeing the change, take note of how it sounds. Do this for a week and you will come over to our side of the argument, and fast! :)

We ride on the shoulders of decades of proper audio research. That is why our case is so strong and so defensible. We don't rely on lay intuition, assumptions born of a lack of understanding, or making up our own domain of science. That would not be logical in any other field around us. Why in audio???
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
While I enjoy reading your posts - I do have to ask just one simple question. How does the human mind perceive a stereo image from just two separate point sources? From all the research I have read - brain scientists can't really explain it - many theories - no proof.

So if one is to take that basic final result (and purpose) of an audio chain (is there music, if a stereo is playing and no one is there to hear it? Air waveforms yes - music no), how can any single audio measurement capture this complex process? We can maybe make a few guesses at what a measured electrical aspect might produce in terms of subjective sound quality - but until heard, it's just a guess.

Isn't almost every experience a human has a 'subjective biased illusion'? In some cultures fat women are the most attractive - in others near starvation bone thin is? Which is objectively more 'beautiful'? Etc...
The interesting phenomenon that can occur with stereo is that, given high enough quality from the point sources, the "subjective biased illusion" becomes so robust that it becomes impossible to perceive the "stage machinery". That is, one can't aurally detect that the two speakers in front of one are the source of the sound image being projected - no matter how one goes about trying to disrupt the "mirage". In one sense this is just an extension of the infamous audiophile "sweet spot" - but it's usually very easy to undermine the sweet-spot version, just by moving closer to the speaker on one side: the image collapses, and that speaker is obviously the source of the sound.

This is extremely rare, because systems usually generate too many audible anomalies, and the job for the ear of pinpointing a particular speaker is quite easy - hence the need to sit in a "special spot" to get the big-picture illusion. But it also means that it's easy to "measure" the SQ: how close does a particular setup come to pulling off this feat? It's an interesting goal to have, and IME it requires extremely high integrity in the electrical performance of the whole chain.
 

RB2013

Member
Joined
Aug 24, 2017
Messages
56
Likes
11
@Amir meet @Atomicbob.

SBAFanboy with an EE and a test bench full of analyzers. He just loves the new $179 Schiit Eitr - USB decrapifier.

I'd like to see you review that one:
http://superbestaudiofriends.org/index.php?threads/schiit-eitr-preview-thread.4729/page-17

Refer to the axiom about theory and practice.

Consider the two cases of
1. analyzer loop back from digital output to digital input
2. analyzer sending data to traverse the incredibly tortuous audio Win(x) maze through USB and then to a DDC and finally back to analyzer digital input

Case 1 - simple analyzer loop back:
[image: analyzer loopback setup diagram]

It may be hard to see the loop back from DO to DI in the diagram. This will be the purest path possible for the analyzer.

Case 1 - results:
[image: analyzer loopback measurement]

Note how it appears there is only one channel showing. Both Left (blue) and Right (red) are depicted but they overlay so closely as to only display one color.

Now consider case 2 - look at the signal path that must be traversed in Win(x) to transport digital audio from application to digital endpoint.
[image: Windows audio signal-path diagram]

This path is both complex and susceptible to all manner of data corruption. Most obvious would be buffer under- or overruns. Less obvious is a potential trip through any of Win(x)'s sample-rate converters, mixers, enhancers, etc. None of this is easily understood from the various places one may interact with audio device and driver properties.

Now consider a DDC that attempts valiantly to traverse the gauntlet, only to produce the following:
[image: DDC measurement]

Not bad, but not nearly as clean as the analyzer loopback path. Also note how the Right channel (red) now appears, as there are differences between the left and right channels.

Here again, for convenience is the Schiit Eitr measurement:
[image: Schiit Eitr measurement]

Again note how there are differences between left and right channels showing as both blue and red are now displayed. These are quite minor but also demonstrate the difference between theory and practice.

atomicbob, Jul 26, 2017
 

RB2013

Member
Joined
Aug 24, 2017
Messages
56
Likes
11
Thanks for the kind words. No, I have not had a DAC with replaceable clocks. The topic you mention is a hot one as of late, with a lot of arguments. I would love to measure such a device. Do you have any in mind that I could purchase to measure and evaluate?
Yes the cheap Gustard U12:
http://www.ebay.com/itm/GUSTARD-U12...391071?hash=item3adea3f85f:g:q-kAAOSw4CFY4tlt

On my U12 HF thread people did this - swapped out the Chinese TCXOs for CCHD-975s. They claimed it improved the SQ:
s-l1600 (1).jpg
 

RB2013

Member
Joined
Aug 24, 2017
Messages
56
Likes
11
Most definitely. We are in absolute agreement on the last sentence.

The problem is, folks don't understand how the brain works when it comes to evaluating sound, and proceed to conduct tests that lie to you 99% of the time about what you think you are hearing. With the data being wrong that often, the conclusions of audiophiles will be wrong just as much.

If you conduct correct listening tests and arrive at such data, I would gladly and happily throw away any measurements and get on board with you! :)

OK, did this. Double-blind test comparing my hybrid USB/Ethernet chain / generic Core i7 / Win10 box - by SPDIF into my heavily modded R2R PCM1704U-K DAC ($600 Lite DAC60) - versus a Mac Mini (powered by a $900 Uptone JS-2) / Empirical Offramp 5 by I2S into a PS Audio Perfect Wave DAC ($6000).

Running the same files simultaneously - we randomly switched back and forth.

The listener was Jason Serinus, a reviewer for Stereophile. He had no idea which system was playing. We kept score A/B. He favored 'B' (mine) every time. I had no idea which was playing. The person doing the switching (by remote) was standing out of sight at the back of the room.

Jason's comments were that my system had greater tonal density and was more natural.
 

RB2013

Member
Joined
Aug 24, 2017
Messages
56
Likes
11
The interesting phenomenon that can occur with stereo is that, given high enough quality from the point sources, the "subjective biased illusion" becomes so robust that it becomes impossible to perceive the "stage machinery". That is, one can't aurally detect that the two speakers in front of one are the source of the sound image being projected - no matter how one goes about trying to disrupt the "mirage". In one sense this is just an extension of the infamous audiophile "sweet spot" - but it's usually very easy to undermine the sweet-spot version, just by moving closer to the speaker on one side: the image collapses, and that speaker is obviously the source of the sound.

This is extremely rare, because systems usually generate too many audible anomalies, and the job for the ear of pinpointing a particular speaker is quite easy - hence the need to sit in a "special spot" to get the big-picture illusion. But it also means that it's easy to "measure" the SQ: how close does a particular setup come to pulling off this feat? It's an interesting goal to have, and IME it requires extremely high integrity in the electrical performance of the whole chain.
Agreed - my Magnepans can pull this off easily. It even freaks my dog out, as the two-channel system can create the illusion of voices and instruments behind the listener.

But my point is - no one can truly explain how the Ear/Brain/Mind can produce such an apparently detailed illusion from the minuscule timing differences between two point sources just feet away. The auditory nerve chain is just too slow to process the fractional timing differences. And the wave interference patterns are just too complex for the Ear/Brain/Mind to process in real time for dynamic, complex music. Yet we hear it anyway.

This uncanny human ability is just too complex to be reduced to a few test tones measured for noise and harmonics.

Only live human listening can determine what 'sounds' good, versus what 'measures' good.
 

RB2013

Member
Joined
Aug 24, 2017
Messages
56
Likes
11
Starting from the end: the research community - for medical, educational, and audio-enjoyment purposes - has performed an incredible amount of research into how we hear, through controlled listening tests. Over many decades a specific domain of science has been established, called psychoacoustics. Consensus has formed around its findings and been proven time and time again. [...]

One of the most basic parts of this science is the threshold of hearing. Ultimately our ear has limits on what it can detect. That threshold (versus frequency) is encapsulated by work done by Fletcher and Munson back in 1933 while working at Bell Labs. [...]
Thanks Amir. But the noise threshold is just one very small factor in our auditory chain. So much more is going on. I'm not saying that science knows nothing about our hearing - just that it can't explain how this illusion of stereo can occur so precisely.

Now, like most modern science, the fallback is Reductive Materialism's favorite dogma: "Oh, we just haven't discovered it yet - but we will". No better than "How did life start on Earth?" - "Oh, easy, little green men put us here - we don't have any proof yet, but once we start exploring other stars we'll meet them".

I do respect science - and now when I listen to a Redbook WAV file I hear details never heard before, on recordings made 30 or 40 years ago. Albums I've heard thousands of times - some on virgin 200-gram vinyl, SACD, DVD-A (24/96). And these 16/44 files sound so superb! So the brilliant engineers at Philips and Sony were absolutely correct in mathematically calculating the bit depth and sampling frequency needed to capture and reproduce this level of detail and tonal realism. Old Nyquist-Shannon theory was spot on!

After 30 years of intensely pursuing realistic sound from my systems, there are many things that do sound better! And many times they are cheaper (so it's not just the money = SQ delusion). I was a big analog guy for several decades - spending god knows how much time and money trying countless turntables, cartridges, and phono preamps. Never fully satisfied. Now, with a source chain costing 1/20th as much, I'm completely satisfied. The SQ is so far beyond anything I've heard analog produce, no matter what the cost. But I do prefer the 'sound' of NOS tubes vs. solid-state opamps (even discrete designs). And I have assembled a rather complex chain of USB and Ethernet devices to achieve this SQ. In the end, not very expensive for the quality of the sound.

I have to ask you - do you like that iFi iDAC? I had one and it sounded awful. Same for that Behringer - I just gave it to my kid. It might test well but it sounds pretty thin and flat - same for the iDAC.
 

RB2013

Member
Joined
Aug 24, 2017
Messages
56
Likes
11
Oh, we know a considerable amount about that. One great proof is headphones that simulate multiple channels: they can position sound 360 degrees around you precisely because we understand what is going on.

Briefly, we have two ears, plus a face and torso that shape how sound arrives at each ear. The differences between those two signals in level, timing, and frequency response allow us to locate the position of sounds. [...]
Oh, we know how to manipulate the perceived sound field, and of course we ourselves know to change our listening position - but this is a far more complex situation.

Follow the path: the sound wave enters the pinna (our outer ear), then travels down the ear canal to the eardrum; this membrane starts to vibrate, which in turn transfers mechanical energy to the three bones of the middle ear: the malleus (hammer), incus (anvil), and stapes (stirrup). Now the chain continues: the bone furthest from the outer ear, the stirrup, sets another membrane in motion, transferring the mechanical energy to the cochlea in the inner ear. Not done yet by a long shot!
Outer_Ear.jpg

The cochlea is a small organ that looks like a snail - and, indeed, "cochlea" means "snail" in Greek. The cochlea is filled with fluid. When a vibration reaches it, the liquid moves and microscopic hair-like cells within it move as well. It is the movement of these small hairs, known as stereocilia, that triggers the electrical signals which are then sent to the brain via the auditory nerve. The auditory nerve is rather long, and those signals still have to be processed by the brain.

All this takes time - a lot of time - many mechanical processes are involved, and then the nerve conduction itself. Not to mention the brain's synapses firing on registering the input.

Science cannot explain how such a Rube Goldberg scheme can detect sound differences of a fraction of a millisecond. The ears are only a few inches apart - sound travels at about 13,512 inches per second. Say your ears are 7 inches apart: sound covers those 7 inches in about 0.0005 seconds, or half a millisecond. So to do all this echo-location, the entire chain has to distinguish arrival-time differences of half a millisecond and less. And/or map a combined interference pattern internally (at the brain, as the two auditory signals arrive separately), accounting for outer- and inner-ear physical differences and possible nerve-path differences. How this long chain of bones, membranes, hairs, fluid, and nerves can consistently resolve such sub-millisecond differences has not been explained.
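For reference, the arrival-time arithmetic above works out as follows (a quick sketch; 343 m/s and a 0.18 m ear spacing are round illustrative numbers, not measurements of anyone's head):

```python
speed_of_sound_m_per_s = 343.0      # roughly 13,500 inches per second
ear_spacing_m = 0.18                # about 7 inches
max_itd_ms = ear_spacing_m / speed_of_sound_m_per_s * 1000.0
print(round(max_itd_ms, 2))         # ~0.52 ms: the largest possible interaural delay
```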

To make things even more complicated, there are different types of stereocilia in the cochlea responsible for four essential areas of auditory perception:

  • Pitch
  • Volume
  • Location
  • Timbre
Adding to this complexity, you have additional channels of sound perception - the bone of the skull itself absorbs sound waves and interacts with the inner-ear fluid (arriving with different timing than the sound waves traveling by air pressure down the ear canal).

Then you have air-pressure differences between the inner and outer ear - try equalizing the pressure while listening to music (like 'clearing your ears' on an airplane), and note the change in high-frequency perception.

How all of this can produce such a precise and complex array of auditory qualia cannot be explained by biological science or physics.
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,656
Likes
240,861
Location
Seattle Area
The Qualia concept or explanation goes right to the heart of this forum: trying to judge audio gear by nothing other than test-bench measurements.
Our instrumentation is not attempting to judge how something sounds. Instead, it shows detected variations from ideal. We then apply psychoacoustics to see if that difference is material to what is heard. Since psychoacoustics is based on listening tests, we are incorporating both (hearing and instrumentation) in our analysis.
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,656
Likes
240,861
Location
Seattle Area
Also note the third leg in the stool: we combine measurements with engineering of what the device does. It is that totality which enables us to make strong conclusions that can stand the test of criticism.
 

RB2013

Member
Joined
Aug 24, 2017
Messages
56
Likes
11
Also note the third leg in the stool: we combine measurements with engineering of what the device does. It is that totality which enables us to make strong conclusions that can stand the test of criticism.
I appreciate that - the breaking down of a device's engineering.

Thanks!
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,656
Likes
240,861
Location
Seattle Area

RB2013

Member
Joined
Aug 24, 2017
Messages
56
Likes
11
6601565_thumb.jpg
Thanks. I ordered the Gustard. But what is the frequency of the oscillator? Link me to the exact part so that I can order it now.

And is this soldered in, with folks desoldering it and putting in the new oscillator?
Great! 45.1584 MHz and 49.152 MHz. The CCHD-975 should be a 'drop-in' replacement. Digikey has them, both at $28.
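For context, those two frequencies are simply 1024 times the two audio base sample rates - one master clock per sample-rate family - which a quick check confirms:

```python
# 45.1584 MHz and 49.152 MHz are 1024x the two base sample rates,
# one oscillator per sample-rate family:
for base_rate_hz in (44_100, 48_000):
    print(f"{base_rate_hz} Hz x 1024 = {base_rate_hz * 1024 / 1e6} MHz")
# 44100 Hz x 1024 = 45.1584 MHz  (44.1 / 88.2 / 176.4 kHz family)
# 48000 Hz x 1024 = 49.152 MHz   (48 / 96 / 192 kHz family)
```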

EDIT: The Crystek CCHD-975 only comes in SMD, so you will need to mount it on an adapter with the four pin-outs. I have the adapters and can send them to you for free.

Shouldn't be too hard to do. Here is an example - it's been a few years since I did this mod!

https://www.digikey.com/products/en/crystals-oscillators-resonators/oscillators/172?k=&pkeyword=&pv139=2611&pv139=270&FV=fffc02e8,ffe000ac,ffec7420,1f140000&mnonly=0&ColumnSort=0&page=1&quantity=0&ptm=0&fid=0&pageSize=25

IMG_4333.JPG
Some people add a small bypass cap - I'll get the pins for that.
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,894
Likes
16,705
Location
Monument, CO
Knowing nothing about the circuit for this oscillator, I'll just say that every one I have ever designed or used had a pi filter (C-L-C, usually with damping Rs) on the supply to the oscillator's power pin to keep the noise down. We use LVPECL a lot, and it doesn't have the greatest PSRR, especially broadband.

That said, it probably takes a lot of random jitter to violate the spec for an audio device, and probably much more to be audible.
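To put a rough bound on "a lot of random jitter", here is the classic aperture-jitter back-of-the-envelope; the 20 kHz test frequency and the two jitter values are purely illustrative:

```python
import math

def jitter_limited_snr_db(f_signal_hz, jitter_rms_s):
    """Classic aperture-jitter limit for a full-scale sine:
    SNR = -20*log10(2*pi*f*sigma_t). Worst case at the highest signal frequency."""
    return -20.0 * math.log10(2.0 * math.pi * f_signal_hz * jitter_rms_s)

print(jitter_limited_snr_db(20_000, 1e-9))    # 1 ns rms   -> ~78 dB at 20 kHz
print(jitter_limited_snr_db(20_000, 20e-12))  # 20 ps rms  -> ~112 dB at 20 kHz
```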
 

RB2013

Member
Joined
Aug 24, 2017
Messages
56
Likes
11
Here are the adapters with SMD pads for the 975 to mount to. The bottom shows the four pin-outs. The two small pads are for soldering the bypass cap.

I have some 10 nF Vishay-Roederstein MKP-1837s, if you want to do the bypass (optional - I didn't).

IMG_0581.JPG

Those four pins match up perfectly with where the 'Gustard' TCXOs are soldered. The tricky part is reflow-soldering the 975 to the pads. Best to use a hot-air gun and low-temperature solder paste painted onto the pads.

I may even have some of these trick quick-swap adapters - they allow quick clock swapping (kinda like opamps).
IMG_0582.JPG
 

Mivera

Major Contributor
Joined
Mar 4, 2016
Messages
2,322
Likes
97
Location
West Kelowna
Here are the adapters with SMD pads for the 975 to mount to. [...]


I didn't know record players used clocks??
 