
UpTone LPS-1 Linear Power Supply Review and Measurements

Jinjuku

Major Contributor
Forum Donor
Joined
Feb 28, 2016
Messages
1,279
Likes
1,180
My understanding is that Foobar ABX is specifically designed to minimise the chances for differences in the playback chain to affect the results. It tries to ensure that the only differences are those encoded in the actual files being compared.

This is how all playback software should operate and why I use JRiver to cache up to 1GB in RAM.
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
Which still begs the question I asked.

I´m at a loss.
You´ve associated a possible outcome with a listener´s expectation. Whether that association really exists is unknown (imo).

I can give you a clear example based on personal experience.

Let's say a poster says 320 kbps MP3 is much worse than CD, and we know that trained listeners can tell the difference between 320 kbps MP3 and CD. In that situation, if I tested the person without training, he could fail to hear that difference even though the fidelity drop is there. The difference is inaudible to the poster without training and hence is much smaller than what he expected.

Let's now address the first part above. If I make the MP3 data rate 32 kbps, then the difference between that and CD will be huge, and larger than the person's impression that MP3 is worse than CD. There, no training is necessary, as the audible fidelity drop is even higher than the impression of the tester.

That´s only one aspect of training (i.e. training the detection ability for a specific effect); I am talking about training to get used to the specific conditions of a controlled listening test, so as to retain (at least) the same detection ability a listener might have under "normal/casual" listening conditions.

As said before, we already know that introducing an additional variable (i.e. a money reward) isn´t a good idea as it alters the behavior of listeners in a test.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,705
Likes
241,436
Location
Seattle Area
That´s only one aspect of training (i.e. training the detection ability for a specific effect); I am talking about training to get used to the specific conditions of a controlled listening test, so as to retain (at least) the same detection ability a listener might have under "normal/casual" listening conditions.
Oh, why do you say that would not exist in the test process I suggested? Surely it does. The person will go from testing the same thing sighted to unsighted.
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
Interesting! You might have posted the link to this food study already, but would you mind posting it again?

Harris /1/ asserted as early as 1952, in a letter to the JASA, that in their experiments an A/B test was more sensitive than ABX, with further corroboration from other people doing similar experiments. In addition, they found that the ABX test task was subjectively more difficult.
"In this laboratory we have made some comparisons among DLs for pitch as measured by the ABX technique and by a two category forced-choice judgment variation of the constants method (tones A B, subject forced to guess B "higher" or "lower"). Judgments were subjectively somewhat easier to make with the AB than with the ABX method, but a greater difference appeared in that the DLs were uniformly smaller by AB than by ABX. On a recent visit to this laboratory, Professor W. A. Rosenblith and Dr.Stevens collected some DLs by the AB method with similar results.
The case seems to be that ABX is too complicated to yield the finest measures of sensitivity."

Huang/Lawless /2/ ran experiments in 1997 comparing ABX with other protocols (paired comparison, 3AFC, Duo-Trio and Triangle); their data showed that, although all tests delivered significant results, the proportion of correct responses was higher for paired comparison and 3AFC.

Macmillan/Creelman /3/ predicted, from the models they used, that 2AFC and (especially) 3AFC tests would show a greater proportion of correct responses than ABX except when the differences were really large (though ABX would be more sensitive than same/different tests).
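For readers unfamiliar with these models, the predicted gap can be sketched numerically. This is a minimal illustration using the standard 2AFC formula and one commonly quoted form of the independent-observation ABX model from Macmillan/Creelman; the d' values are arbitrary examples, not data from any of the cited studies.

```python
from math import erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def pc_2afc(d):
    """Proportion correct for 2AFC under the standard SDT model."""
    return phi(d / sqrt(2.0))

def pc_abx(d):
    """Proportion correct for ABX, independent-observation model
    (as given in Macmillan & Creelman, Detection Theory: A User's Guide)."""
    return phi(d / 2.0) * phi(d / sqrt(2.0)) + phi(-d / 2.0) * phi(-d / sqrt(2.0))

# Example d' values only; compare predicted proportions correct:
for d in (0.5, 1.0, 2.0, 3.0):
    print(f"d'={d}:  2AFC={pc_2afc(d):.3f}  ABX={pc_abx(d):.3f}")
```

The gap between the two columns narrows as d' grows, which matches the caveat above that the protocols converge when the differences are really large.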

But all of these were done with tests where the DUTs differed in only one dimension, so it might be different with multidimensional stimuli. Still, I´ve experienced the same feeling of difficulty when trying ABX, and observed it too when asking two other people to try it, and therefore dropped it.

Otoh, people like Paul Frindle /4/ reported quite impressive sensitivity, judging by his list of differences detected in ABX tests:

"-) Absolute and stereo differential gain anomalies of less than 0.1 dB
-) Differential stereo delays of 1 uS
-) Frequnecy response variations of 0.1 dB from "flat" 20 Hz - 20 kHz
-) Harmonic distortion components at 80 dB below signal level even when they are more than 10 dB below noicse floor
-) Limit cycle behaviour of Delta/Sigma DAC converters at 100 dB below max signal level
-) ......"

Accepting that, it seems to me to be a matter of training, or of finding a controlled test protocol that suits your personal habits/abilities, and of being patient until you get used to it.

/1/ J. Donald Harris, "Remarks on the Determination of a Differential Threshold by the So-Called ABX Technique," J. Acoust. Soc. Am. 24, 417 (1952).
/2/ Yu-Ting Huang, Harry Lawless, "Sensitivity of the ABX Discrimination Test," Journal of Sensory Studies 13 (1998), 229-239.
/3/ Neil A. Macmillan, C. Douglas Creelman, Detection Theory: A User's Guide, 2nd edition, Lawrence Erlbaum Associates, Inc., 2005, p. 253.
/4/ Paul Frindle, "Are We Measuring the Right Things? Artefact Audibility Versus Measurement," AES UK 12th Conference: The Measure of Audio (MOA), April 1997, paper no. MOA-05.


I posted this in another thread today, but this study is actually also relevant to the discussion here: http://epubs.surrey.ac.uk/805557/9/2013 135th AES Convention Pike.pdf

Their study was about adaptation to rooms and timbre, but they also investigated different comparison methods. In their second experiment, formal and focused comparison was compared to a type of listening that the authors regard as closer to real world listening. The result was that focused, direct comparison was more revealing of differences.

This makes intuitive sense to me: human perception is inherently adaptive. If it suddenly starts raining outside the house, we will hear it and notice it immediately. If it rains for hours and hours, at some point we will stop noticing the sound so much (interestingly, Buddhist monks who meditate a lot seem to be more continuously aware of their sensory experiences than others). I would simply assume this to be the case with audio as well. Things that sound strange or "wrong" at first may disappear from our consciousness after a while. So I've concluded that I'm not convinced by the argument that long-term listening is more revealing of differences than short-term listening.
(but sensory adaptation may also be a problem with short-term listening tests, of course)

I do find the argument against ABX convincing though, in favor of AB tests. This is partly because I've done some ABX tests myself, and I realized how cognitively demanding it was. I've also done some AB tests, and found them to be much easier on the brain. It also makes sense to me that some training may be needed to be able to spot differences in blind tests, given that blind listening is so different from our normal mode of operating.

It surely depends on what "long term listening" in this context means or should mean. From an experimental point of view it might be favourable, as it tends to average out personal day-to-day differences; and given the adaptation effect you´ve mentioned, it might be favourable because the listener is used to this normal listening experience, so even a minor difference can be obvious the moment it arises.
 
Last edited:

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
Oh, why do you say that would not exist in the test process I suggested? Surely it does. The person will go from testing the same thing sighted to unsighted.

Not sure I said that :) , but it seems that we basically agree that it will most likely take some time before a listener is able to get correct results in an ABX test. "Correct" meant in the sense that the results do not mainly reflect the listener's difficulties in struggling with the specific test conditions.
 

Jinjuku

Major Contributor
Forum Donor
Joined
Feb 28, 2016
Messages
1,279
Likes
1,180
Oh, why do you say that would not exist in the test process I suggested? Surely it does. The person will go from testing the same thing sighted to unsighted.

Why subjectivists don't understand this simple pillar of fact has always amazed me. Whatever you can do sighted you can do blind.
 

Jinjuku

Major Contributor
Forum Donor
Joined
Feb 28, 2016
Messages
1,279
Likes
1,180
Not sure I said that :) , but it seems that we basically agree that it will most likely take some time before a listener is able to get correct results in an ABX test. "Correct" meant in the sense that the results do not mainly reflect the listener's difficulties in struggling with the specific test conditions.

No, what you will do is sample enough times that some tests will come out lining up with the inputs. ABX isn't about 'correctness' or 'incorrectness'.
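Jinjuku's point about repeated sampling can be made concrete: even a listener who is purely guessing will eventually produce a "passing" ABX run if allowed to repeat sessions and keep only the best one. A quick sketch using standard binomial arithmetic, with no audio-specific assumptions:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more correct by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A single 16-trial ABX needs 12+ correct for p < 0.05 under pure guessing:
p_single = binom_tail(16, 12)   # ~0.038

# But if a guessing listener repeats the session until one "passes",
# the chance of at least one false positive grows quickly:
for sessions in (1, 5, 10, 20):
    p_any = 1 - (1 - p_single) ** sessions
    print(f"{sessions} sessions: P(at least one 'pass') = {p_any:.2f}")
```

This is why a single lucky run proves nothing on its own; only the full record of attempts does.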
 

Sal1950

Grand Contributor
The Chicago Crusher
Forum Donor
Joined
Mar 1, 2016
Messages
14,212
Likes
16,960
Location
Central Fl
Why subjectivists don't understand this simple pillar of fact has always amazed me. Whatever you can do sighted you can do blind.
They understand full well, always just excuses made to avoid participation for fear of failure. (highly likely failure ;))
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
No, what you will do is sample enough times that some tests will come out lining up with the inputs. ABX isn't about 'correctness' or 'incorrectness'.

Unfortunately i don´t know what your first sentence means, but wrt the second: ABX is just a test protocol and therefore only part of an experimental design. That experiment is, like any sound experiment, about the "correctness" or "incorrectness" of any result wrt the research objective the experimenter specified.
For example, if an experiment using the ABX protocol gives negative results while the same experiment using the A/B paired-comparison protocol would give positive results, the experimenter should be very interested in the question of which of the results is the correct one.
(Assuming the research hypothesis was whether a difference between two DUTs can be heard)
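One reason an ABX run can come out negative while an A/B paired comparison of the same difference comes out positive is simple statistical power: if the true proportion correct is lower under ABX (as the Huang/Lawless data suggest), the same pass criterion is reached less often. The proportions below are illustrative assumptions, not measured values:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Criterion: at least 12/16 correct (p < 0.05 under pure guessing).
# Hypothetical true proportions correct for the SAME underlying difference,
# reflecting ABX's lower proportion correct reported by Huang/Lawless:
p_ab, p_abx = 0.76, 0.60   # illustrative assumptions, not measured data

print(f"Chance the A/B run passes criterion: {binom_tail(16, 12, p_ab):.2f}")
print(f"Chance the ABX run passes criterion: {binom_tail(16, 12, p_abx):.2f}")
```

So a null ABX result alongside a positive A/B result need not be contradictory; it may just reflect the lower sensitivity of the protocol at a fixed trial count.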
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,705
Likes
241,436
Location
Seattle Area
Unfortunately i don´t know what your first sentence means, but wrt the second: ABX is just a test protocol and therefore only part of an experimental design. That experiment is, like any sound experiment, about the "correctness" or "incorrectness" of any result wrt the research objective the experimenter specified.
We are not performing research; heaven knows no sighted test is conducted as one. So let's not get all "lawyered up" with such formal lingo.

All we are trying to do is encourage people to perform some tests where they don't know what is being tested. If they try, it will give them new insights into their understanding of audio.

Subjective evaluations have gone so crazy, so wild, so out there, that we need people to slow down and take a breath. We are not trying to convert them into researchers. But if we were, this would still be a good first step.
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
Whatever you like to call it, basically the goal should be to get correct results; otherwise nothing will change. Incorrect results will either corroborate the original (possibly wrong) subjective impression or invoke another (possibly wrong) subjective impression.
If you only guard against the first possibility but don´t care about the second, it´s just another form of subjectivism.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,705
Likes
241,436
Location
Seattle Area
Whatever you like to call it, basically the goal should be to get correct results; otherwise nothing will change. Incorrect results will either corroborate the original (possibly wrong) subjective impression or invoke another (possibly wrong) subjective impression.
Again, you are stipulating goals for a research project. Not what an audiophile needs to do at least once or twice to get a feel for what it means to evaluate an audio product without knowing which is which. Just putting their toe in that water is very useful. They will speak from experience then, not just reading arguments.

If you only guard against the first possibility but don´t care about the second, it´s just another form of subjectivism.
I don't know what that word play means. There is learning in trying a new method of evaluation. That doesn't make it subjectivism.
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
Again, you are stipulating goals for a research project. Not what an audiophile needs to do at least once or twice to get a feel for what it means to evaluate an audio product without knowing which is which. Just putting their toe in that water is very useful. They will speak from experience then, not just reading arguments.

As said before, you may call it what you like, but in reality it is an experiment, and experimental results are only useful if one can trace a specific result back to its cause. (Leaving aside for the moment the usual caveats about results being only probabilities.)
Your assertion "only if the difference isn´t as big as they think" isn´t backed up by data; that´s where the problem starts.

That it is just about "trying something," without any demand to gain more insight, meets neither the forum reality nor, imo, the goals you posted earlier.
There you spoke about money rewards and "significant data points" that were gathered; in another post you were talking about learning from the results (i.e. explicitly mentioning taking sighted listening impressions "casually" afterwards).


I don't know what that word play means. There is learning in trying a new method of evaluation. That doesn't make it subjectivism.
Although it might be "word play" to you, in reality it´s just a reminder of a simple truth about controlled listening tests.
You won´t be satisfied (as you have stated in other discussions) if a listener does only two or three trials while evaluating "a new method." You would demand more trials, and are therefore demanding to guard against one error mechanism while neglecting the other.
 
Last edited:

jtwrace

Major Contributor
Forum Donor
Joined
May 31, 2017
Messages
1,227
Likes
1,410
Location
Orlando, FL

March Audio

Master Contributor
Audio Company
Joined
Mar 1, 2016
Messages
6,378
Likes
9,321
Location
Albany Western Australia
OT, but an interesting recent comment from JS:

I want to stop working on power stuff for a while and get working on all the clock analysis stuff so we can get a good handle on all this clocking stuff and build some equipment where we really KNOW what is going on, I have a feeling this will be far more game changing than another power supply.

John S.

It's what we knew all along: the product's (Regen) efficacy had never been tested or validated. But it's an interesting admission nonetheless.
 

AndrovichIV

Active Member
Forum Donor
Joined
May 25, 2019
Messages
158
Likes
218
Hello again gang:

It has been a busy week, and since I don't generally hang out with the friendly, pitchfork wielding crowd here, I was not aware of this thread until Thursday when someone sent me a link.

Needless to say, we were shocked by the measurements you posted and the conclusion you drew to condemn our product as not blocking the path of leakage currents and transmitting noise and harmonics from whatever PS is used to "energize"/charge the UltraCap LPS-1.

Happily, I can report and prove that your measurements and conclusions are entirely incorrect. The LPS-1 does not pass ANY AC leakage or noise from its charging supply.

What you are seeing is radiated--into the air--harmonics from the Mean Well (and iFi iPower as you showed even worse), from the brick itself, and very much from the DC cable from the SMPS to the LPS-1. The cables from your DAC to your analyzer are picking this up! (And we suspect that the output impedance of your bus-powered DAC may be a bit high, causing greater sensitivity to this--since our tests show one cheap DAC, the Micca OriGen+ was sensitive while others on hand were not.)


Before posting our graphs and photos and giving you suggestions on proving this for yourself, allow me to address a couple of other points mentioned in your opening assessment of our product:

a) You mentioned the interval at which the banks switch as 3 seconds when powering the ISO REGEN. The bank change-over rate is entirely dependent on the load. Since the ISO REGEN is the sole source of power for your bus-powered DAC, the overall load is 100mA (the ISO REGEN itself) plus whatever the DAC is drawing (perhaps 400mA? easy to check with a USB ammeter).
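The load dependence described above follows from basic capacitor arithmetic: a bank discharging at constant current I droops at I/C volts per second, so the change-over interval is t = C·ΔV/I. The capacitance and allowed droop below are hypothetical placeholders (UpTone has not published these values), chosen only to show the scaling:

```python
# Why the bank change-over rate depends on the load: a capacitor bank
# discharging at constant current I drops voltage at dV/dt = I / C,
# so the time before the supply must switch banks is t = C * dV / I.
# C_BANK and DROOP are ASSUMED for illustration, not UpTone's actual values.

def switch_interval_s(c_farads, droop_v, load_a):
    """Seconds until the active bank droops enough to force a change-over."""
    return c_farads * droop_v / load_a

C_BANK = 5.0    # farads (hypothetical ultracap bank)
DROOP  = 0.3    # volts of allowed droop before switch-over (hypothetical)

for load_ma in (100, 500, 1000):
    t = switch_interval_s(C_BANK, DROOP, load_ma / 1000.0)
    print(f"{load_ma} mA load -> bank switch every ~{t:.1f} s")
```

With these made-up values, a total load around 500 mA gives roughly the 3-second interval mentioned above, while a lighter load would stretch the interval out proportionally.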


b) You wrote: "FPGAs are normally used when CPUs are not fast enough to do the job in software. I don't know of any design characteristics of this type of device as to require an FPGA. I am at a loss as to why this part is used other than maybe more familiarity of the designer with hardware than software. Or a way of making the design difficult to copy."

Without debating the quizzical nature of the first sentence (why would we use a full CPU for our s/w when a small FPGA is the perfect place to load and run all the code?), there are a great many reasons--having to do with charging, diagnostics, resets, over-current modes and recovery--supporting our choice to use an FPGA.
In fact, the entire design was prototyped originally with all discrete circuitry--using about 100 more parts. It became apparent that for reliability, flexibility (we can adapt the core tech to larger applications), and cost, the FPGA architecture was the only reasonable way to go.
As for "more familiarity of the designer with hardware than software," this again makes no sense, unless you are just tossing out another glib insult. There is a ton of code loaded into the board, and we used the right part for the job.
As for making the design difficult to copy, I'd like to see someone try. It is a 4-layer board with 4 power domains, a lot of fairly costly parts, and a lot of unique thinking went into it. It is also our lowest margin product as I was determined to price it for volume sales.

c) Your legend on the photo of the bottom of the board (the side with all the ultracaps) has the input and output jacks reversed.

d) Your measurements and any judgement of the merits of the LPS-1--aside from being incorrect about leakage blocking--are of course not assessing either of its other 2 key attributes.
  1. The LPS-1 offers extraordinarily low output impedance (across a wide bandwidth, and especially at low frequencies which we think is most important for its application). Plotted measurements of this are tricky to perform, but we plan to publish impedance plots in a few weeks.
  2. The cascaded TI TPS7A700 output regulators we use are the lowest noise (wideband) and highest PSRR integrated devices available in the 1A class. This is a VERY quiet supply. Again, environmental factors make it hard to produce clean plots, but we are getting there.
--------------------

Okay, on with the show!

LEAKAGE: Defined in the current discussion as AC harmonic current traveling over DC connections.

Your supposition, which you tried to support with the measurements you made, is that any leakage current from the PS "energizing"/charging the LPS-1 is going through the LPS-1, coming out on its DC cable, going into whatever it is connected to (in this case the downstream side of the ISO REGEN), then through the USB cable to the DAC, and out of the DAC into your analyzer. Such would be quite damning if true, and would render our entire bank-switching design superfluous. Might as well offer a traditional low-noise LPS.

Except that what you measured was not leakage coming through the output of the LPS-1!

What I am going to show with the following photos and plots is that simply having the Mean Well (or other SMPS) plugged into the wall allows it to radiate (from the brick itself and from the cable) into your single-ended RCA cables from your DAC to the analyzer. As you have pointed out, your analyzer is very sensitive, and the harmonics you are picking up are more than 115 dBV down.

By the way, our tests lead us to believe that almost none of what you are seeing is making its way from the SMPS into the wall and to your analyzer via mains ground pins, but without knowing your whole setup we cannot be sure. In our own tests the wall connections did not make much difference.

It will be easy for you to demonstrate any of this for yourself, and maybe later we can discuss a proper method for you to measure actual leakage currents from various power supplies directly (it is not hard, but we need to know about the input grounding of your analyzer to instruct you correctly).


Here we have the Mean Well SMPS energizing the LPS-1, which is powering an ISO REGEN, which is powering the Micca OriGen+ DAC you are familiar with. The 3.5mm>RCA cable in the foreground goes to our HP spectrum analyzer (12KHz signal on all graphs--just out of the displayed band):
View attachment 8233

Here is the plot for it.
Please note that for all our plots the vertical axis is in dBm (versus dBV for Amir's graphs), and that's a 13 dB difference, so the marker at -98.6 dBm is equivalent to -111.6 dBV.
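The 13 dB offset corresponds to a 50-ohm dBm reference (0 dBm = 1 mW into 50 Ω ≈ 224 mV ≈ -13 dBV). A small sketch of the conversion, assuming that 50-ohm convention:

```python
from math import log10, sqrt

def dbm_to_dbv(dbm, r_ohms=50.0):
    """Convert dBm (power into r_ohms) to dBV.
    0 dBm = 1 mW; V = sqrt(P * R); dBV = 20*log10(V / 1 V)."""
    v = sqrt(1e-3 * 10 ** (dbm / 10.0) * r_ohms)
    return 20.0 * log10(v)

# In a 50-ohm system 0 dBm is about -13 dBV, so the -98.6 dBm
# marker reads as roughly -111.6 dBV:
print(round(dbm_to_dbv(0.0), 1))      # -> -13.0
print(round(dbm_to_dbv(-98.6), 1))    # -> -111.6
```

(In a 600-ohm system the offset would be only about 2.2 dB, which is why the analyzer's reference impedance matters when comparing plots.)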


View attachment 8232

Now here is the same set up, only this time the LPS-1 is being powered by a quiet linear supply (offscreen, our own choke-filtered, dual-rail, 5-7 amp JS-2). BUT NOTE THAT THE MEAN WELL IS STILL PLUGGED INTO THE WALL:
View attachment 8234


And the near identical plot for it:
View attachment 8235


Now we remove the Mean Well's power cord:
View attachment 8236


And ta-da, a much cleaner plot, proving that what was being measured was NOT leakage coming through the LPS-1's DC output. The active bank (supplying the load) has a pile of opto-isolators (those big white parts across power domain "moats") keeping it isolated from the side that is charging.

View attachment 8237

Now like the DACs you used, the Micca OriGen+ was pretty sensitive to the radiated harmonics of the Mean Well and other SMPS charging units (you already saw the even worse iFi iPower). But every DAC is different--likely owing to its output impedance and the analog cables used. So here is the same set up, with the Mean Well powering the LPS-1, but with an HRT MusicStreamer DAC:
View attachment 8238


And its much quieter results, with 120Hz noise down at -131dBV:
View attachment 8240
---------------------------------------------

John got called away today, but we will follow-up soon with some direct measurements of power supply leakage--from the DC jacks that is. We will show plots of the Mean Well itself, and of our UltraCap LPS-1 being powered by it (or any other charger). The latter should pretty much show up as a flat line. We might also post the leakage of some other supplies we have measured (like the iFi iPower, though that is not a pretty sight and it would be impolite of us).

Lastly, I want to close by discussing and disclosing something interesting we found during all the kerfuffle here--mostly when we were looking into your ISO REGEN power supply measurements, which we now think were a combination of actual leakage plus the radiated-harmonics issue. It concerns the Mean Well power supplies.

Aside from the obvious fact that the 22-watt unit we originally chose for inclusion with the original USB REGEN is oversized (we could easily have gotten away with a 5-8W model) and puts out more leakage than a smaller wall-wart would, there is another factor at play that is less obvious.

The world-wide governing bodies that regulate the types, efficiencies, and emissions of all AC-mains-connected devices have been steadily changing the rules over the years. I won't bore you with the details--feel free to Google it--but in 2016 the law changed to require power adapters to be certified to Level VI. (Since about 50% of our business is overseas, conforming to the current laws and standards is important--customs offices will and do reject importation of power products that are not fully certified. Germany is the worst; must have your papers!)

The original Mean Well model we chose in 2015 (GS25A07-P1J) met the Level V certification for efficiency and emissions. You can see this model in our web photos, as it has a 16AWG coaxial DC cable. In the middle of 2016, after the laws requiring Level VI certification took effect, my Mean Well importer (actually I bring in hundreds straight from Taiwan, as the USA distributors suck) informed me that the GS25A07-P1J was being discontinued and replaced with the Level VI-compliant GST25A07-P1J. Here is a photo of the two versions (don't mind the M&Ms; I love dark chocolate M&Ms as they keep me from eating too much chocolate):

View attachment 8241

As you can see, the new version--aside from having a ferrite near the box (as well as near the DC plug end, as does the older version)--has 16AWG zip cord for its cable. I was not terribly happy about that at the time (mainly because the zip cord gives it a cheaper feel and is not as supple as the coax). But aside from a slight efficiency improvement, and likely lower emissions at ultra-high frequencies--which is what the EU nannies are most paranoid about--I could find no differences on their detailed data sheets.

Well... it turns out that while the units have the same specs, with the newer Level VI version being quieter in the ultrasonic range, their leakage profiles are a little different, and the zip-cord DC cable radiates a bit more 50/60Hz AC harmonics. So while it does not matter much for the LPS-1 (usually the brick and the cord go away to the floor and are not near RCA cables), I will be looking into this with regard to the default supply for the ISO REGEN.
Right now, I am not finding any small, Class I, Level VI-compliant, low-leakage "medical" supplies that have IEC320-C14 3-prong AC inlets--just Class II units with two-wire C8 jacks. Those are problematic for export and force my hand with regard to providing international plugs. Might have to accept that headache at some point.

--------------------

Well, that about wraps it up for my Saturday. Amir, I suggest that you play about a bit with various SMPS units plugged in and with their cables local to your setup. We think you will find that the single-ended cables to your analyzer are what is picking up the mains harmonics. Different analog cables, different DACs--even a change of AP input impedance (does it have a 50 ohm input?)--will greatly alter what you see.

I know you love to measure noise and large jitter anomalies with your analyzer--at the output of DACs--and that's fine. But characterizing the performance of a low-noise power supply requires other methods.

As well, regarding the widespread reports of people using improved clocking, supplies, signal integrity, USB cables, and many server-side improvements: perhaps the scientific curiosity in you can lead you toward deeper study of the mechanisms at play. I bet if you were not so quick to dismiss so much out of hand--because your current measurements do not immediately reveal variations--you could join others in researching how the phase-noise fingerprints of upstream variables can somehow be audible even when you don't see them on your screen. (Very close-in PN, right around the shaft of the signal skirt, may be one place to look. More dynamic tests, as others are doing, may also yield insights.)

John is developing an advanced test system (I just spent a couple grand this week on development boards for the project) with the hope and expectation that we will at some point be able to blow the doors off all this and come forth with some very clear results, providing vindication for some of the firms whose legitimate products are regularly derided by a few as "snake oil." It just may take a while. Designing and producing products that people enjoy is our first priority, but for more than one project we are working on (an alternative to the USB interface) we really need these more advanced measurement techniques in place to finalize and prove the efficacy (in part because the proprietary nature of the tech we will use, unless patented, will require us to do more "show" (results) than "tell" (about the how)). And sanding chips may even be in order.

I am 100% certain that most of the ASR regulars are going to quickly take pot shots at nearly everything in this long post. I'll tell you upfront that I am not going to dignify snarky, derisive comments with any reply. Nor will I indulge in taunts and BS challenges regarding blind tests. Our happy customers prove our products every day. I'm a small operation working 60+ hours a week to keep everything on track. There are LOTS of other firms--a number of them with similar products to ours--who you could as well challenge to blind-test their products that you don't believe in. I'm the easy target, but I will choose to walk away from people who don't show a modicum of civility and respect.

Thank you and good night,

Alex Crespi
UpTone Audio LLC


You mention that your product runs a ton of code. My guess is that you didn't write it for the most part and are running some open-source kernel like Linux. If that's the case, you're required to publish your code or else you're violating the GPL 2 license.
 

Superdad

Active Member
Audio Company
Joined
Aug 6, 2017
Messages
149
Likes
91
You mention that your product runs a ton of code. My guess is that you didn't write it for the most part and are running some open-source kernel like Linux. If that's the case, you're required to publish your code or else you're violating the GPL 2 license.

Incorrect. Our UltraCap supplies are controlled by an Altera/Intel FPGA (model 10M04SCU169C8G). My partner John Swenson wrote ALL the code for it. No Linux kernel, no open-source core code, no GPL2 licensing requirements.
 

vst14

New Member
Joined
Jan 16, 2020
Messages
1
Likes
0
Location
Kyiv, Ukraine
Dear all,
I'm using an Uptone LPS-1 with a Sonore ultraRendu.
I have a problem with my power supply: last week it was damaged and stopped working (with a slight burning smell).
I've sent my LPS-1 to a local expert, since I have no way to send it to the USA (I bought my devices in the USA, but I'm currently living in Ukraine).
The local expert identified the damaged part as the input switching regulator (marked REG8 on the back side of the board, just above L2).
As it was damaged, it can't be identified correctly.
Can somebody please provide the exact part number of the highlighted input regulator, so I can replace it with a new one?
I can't find any photo on the internet with high enough resolution to identify it; perhaps somebody has their own photos or can take one. See the attached photo (marked in red).
I would much appreciate the requested information.
 

Attachments

  • Uptone lps-1.jpeg
    Uptone lps-1.jpeg
    282.6 KB · Views: 146