
Topping Pre90 Review (preamplifier)

No sir, I don't agree with this concept of the brain getting accustomed to it. In my opinion, and from 40 years of experience, I can say with certainty that all audio equipment, including cables, needs burn-in time.
40 years of errors, Bravissimo.
The inclusion of cables is a "tell."
 
The inclusion of cables is a "tell."

The impedance of cables is so low that we used to send telegrams, and eventually telephone calls, over undersea cables. How exactly he thinks he can burn in a 5-10 ft copper cable is beyond me.

I recently got a pair of "high end" cables, and immediately I could tell the difference. It felt clearer, and everything felt crisp.

When I A/B'd them, I could feel the cloudiness, and the oldness, and the not so clear sound of the old cables.

But after sitting around for a bit, I started hearing the clarity of the new cables come out of my old cables!

VOILA! The clarity showed up in my Amazon Basics cables! As if by some miracle of imagination, suddenly I couldn't tell which cable was which.

I ended up plugging in my expensive ones because I didn't want the humiliation of being wrong (I had spent CAD $150 on these fancy new cables).

A better example is this:
I could always tell my car ran smoother after an oil change. It was night and day, such a smooth experience in comparison; it always gave me a little delight.
Until one day my wife took my car and had the oil changed without my knowledge. That same day I kept complaining that my car needed an oil change, that it was so overdue, until she told me she had already done it. I checked the oil, and yes, it was brand new (not even black), and she had a receipt.

That's when it blew my mind how little I understand human nature and bias.
 
No sir, I don't agree with this concept of the brain getting accustomed to it. In my opinion, and from 40 years of experience, I can say with certainty that all audio equipment, including cables, needs burn-in time.
Those claiming the brain gets accustomed to the sound are ignorant, or not good enough listeners to differentiate between pre- and post-burn-in sound.

Are you seriously saying that you can tell the difference between brand-new, never-used cables and cables with, say, 100 hours of burn-in time, in a blind test? What about 200 hours? 500 hours?
 
I take claims of break-in with quite a few grains of salt. But I have also learned that entering into arguments about it with some of my fellow audio enthusiasts is not a beneficial use of my time.

But I do think there may be something to claims of break-in with the items you mentioned above, and I would respectfully add that there may be something to claims of changes during the first few hours of use with vacuum tubes.

In the case of vacuum tubes there are actual changes in some of the physical properties of these devices that could account for such changes. Vacuum tubes contain devices called getters that are designed to scavenge any last remnants of gas left inside the vacuum envelope and ensure the best possible vacuum. But any changes caused by these getters must surely be completed very early in the process, within minutes or at most hours. In fact, they are probably complete well before the tubes are packaged for sale. At any rate, they could never take the hours or weeks that those who insist burn-in is real claim are required.

Additionally, the actual physical dimensions of the internal components of vacuum tubes will change a little as they warm up, but this is not the same phenomenon as so-called break-in or burn-in.

I don't believe that the physical changes are large, and I don't believe that the changes that occur during so-called burn-in of tubes are drastic. But I do concede they may be possible.

When I read someone claim that a device sounded like shit when they first powered it on but is now amazingly great after a few hours or weeks, I just keep moving. Perhaps they will eventually come to a different conclusion after learning more, perhaps not. But trying to convince them of the error of their ways is a fool's errand.

Resistance changes with temperature, and because tubes operate at pretty ridiculous temperatures, they're at the point where it could plausibly affect resistance. So I totally understand the concept of warming up tubes (we deal with temperature-based resistance issues all the time; it's the reason your iPhone doesn't work at -20C: the battery voltage drops so low that the CPU can no longer work).
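To put rough numbers on that, here's a minimal sketch assuming the standard linear temperature model for copper, R(T) = R0 · (1 + α·(T − T0)) with α ≈ 0.0039 per °C; the 0.05 Ω figure for a short interconnect is just an illustrative guess, not a measurement:

```python
# Minimal sketch (illustrative numbers only): linear temperature model for a
# copper conductor, R(T) = R0 * (1 + alpha * (T - T0)).
# alpha ~= 0.0039 per deg C is the commonly quoted coefficient for copper.

def resistance_at(temp_c: float, r0_ohms: float,
                  alpha: float = 0.0039, t0_c: float = 20.0) -> float:
    """Estimate resistance at temp_c, given resistance r0_ohms at t0_c."""
    return r0_ohms * (1 + alpha * (temp_c - t0_c))

# Assume a short interconnect measuring ~0.05 ohm of copper at room temperature.
for t in (-20, 20, 80, 120):
    print(f"{t:>4} C: {resistance_at(t, 0.05):.4f} ohm")
```

Even across that 140 °C swing the absolute change is a few hundredths of an ohm for a cable, which is why temperature matters for hot tubes and airborne radar but not for interconnects sitting at room temperature.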

Burn-in is just something I've never studied, and it was never brought up in any of my classes, because what is it exactly? Take a fresh copper cable, for instance: is there a chemical change from burning it in / running electrical current through it that affects impedance (that's really it, right? What else colors sound if not impedance)? If so, it should be very easily measurable with any basic tool, and if that's the case, we should have to account for it in our designs, and there should have been a course on it (considering we do have one for temperature issues, especially when building radars for aircraft / fighter jets).

In my opinion, the change is so minor that even with our $50,000 oscilloscopes we can't tell a burned-in circuit board from a non-burned-in circuit board. At least I could never tell, and on top of that, would there be a burn-in process for our oscilloscopes? Newer oscilloscopes should then be producing results that are wildly different from oscilloscopes that have been in use for months to years...

Alas, that's not the case, so maybe it's just a bunch of nonsense after all.

Also, a tip:
Radar design for fighter jets is interesting because circuits love the cold (the colder the circuit, the lower the impedance), but fighter jets constantly heat up to 80-120C when they speed up (to, say, Mach 1.5), so you have massive temperature swings. Those phased-array radars are a magical EE nightmare.
 
Resistance changes with temperature, and because tubes operate at pretty ridiculous temperatures, they're at the point where it could plausibly affect resistance. So I totally understand the concept of warming up tubes (we deal with temperature-based resistance issues all the time; it's the reason your iPhone doesn't work at -20C: the battery voltage drops so low that the CPU can no longer work).

Burn-in is just something I've never studied, and it was never brought up in any of my classes, because what is it exactly? Take a fresh copper cable, for instance: is there a chemical change from burning it in / running electrical current through it that affects impedance (that's really it, right? What else colors sound if not impedance)? If so, it should be very easily measurable with any basic tool, and if that's the case, we should have to account for it in our designs, and there should have been a course on it (considering we do have one for temperature issues, especially when building radars for aircraft / fighter jets).

In my opinion, the change is so minor that even with our $50,000 oscilloscopes we can't tell a burned-in circuit board from a non-burned-in circuit board. At least I could never tell, and on top of that, would there be a burn-in process for our oscilloscopes? Newer oscilloscopes should then be producing results that are wildly different from oscilloscopes that have been in use for months to years...

Alas, that's not the case, so maybe it's just a bunch of nonsense after all.

I agree, there may be some changes, but they must be very small, or else most of our circuitry would have problems working reliably.

But I do think that there may be something to some claims.

I used to build computers and then attempt to overclock them as a hobby. It happened to me several times that at some point I increased the clock on the CPU too high and it would crash. If I then reduced the clock frequency and ran the system on a CPU-intensive task, such as Folding@home or SETI@home, for several hours, I could then increase the clock back to the unstable frequency and the system would operate reliably. Something changed, something happened.

In the hobbyist community I used to frequent there was talk of electromigration as a possible explanation, but those discussions were way over my head. I can say for certain that changes happened. A system that would crash almost immediately before burn-in would work reliably for days and weeks after burning in.

Do similar things happen in audio equipment? Do changes happen that would be audible?
 
I used to build computers and then attempt to overclock them as a hobby. It happened to me several times that at some point I increased the clock on the CPU too high and it would crash. If I then reduced the clock frequency and ran the system on a CPU-intensive task, such as Folding@home or SETI@home, for several hours, I could then increase the clock back to the unstable frequency and the system would operate reliably. Something changed, something happened.

Could it be the heat sink paste softening and becoming better distributed the first time you really push the processor to heat up?
 
I agree, there may be some changes, but they must be very small, or else most of our circuitry would have problems working reliably.

But I do think that there may be something to some claims.

I used to build computers and then attempt to overclock them as a hobby. It happened to me several times that at some point I increased the clock on the CPU too high and it would crash. If I then reduced the clock frequency and ran the system on a CPU-intensive task, such as Folding@home or SETI@home, for several hours, I could then increase the clock back to the unstable frequency and the system would operate reliably. Something changed, something happened.

In the hobbyist community I used to frequent there was talk of electromigration as a possible explanation, but those discussions were way over my head. I can say for certain that changes happened. A system that would crash almost immediately before burn-in would work reliably for days and weeks after burning in.

Do similar things happen in audio equipment? Do changes happen that would be audible?
This chart is from my thread regarding the voltage instability of a DAC. Indeed, changes can happen from power-on through multiple hours. Audible or not, I don't know for sure. The chart provided to me only shows changes up to 3 hours; I suppose it would continue to change, up to a certain level, after 3 hours.

[Attached chart: DAC output voltage over the first 3 hours after power-on]
 
I agree, there may be some changes, but they must be very small, or else most of our circuitry would have problems working reliably.

But I do think that there may be something to some claims.

I used to build computers and then attempt to overclock them as a hobby. It happened to me several times that at some point I increased the clock on the CPU too high and it would crash. If I then reduced the clock frequency and ran the system on a CPU-intensive task, such as Folding@home or SETI@home, for several hours, I could then increase the clock back to the unstable frequency and the system would operate reliably. Something changed, something happened.

In the hobbyist community I used to frequent there was talk of electromigration as a possible explanation, but those discussions were way over my head. I can say for certain that changes happened. A system that would crash almost immediately before burn-in would work reliably for days and weeks after burning in.

Do similar things happen in audio equipment? Do changes happen that would be audible?

The overclocking story is purely anecdotal; that's not at all what happens in a chip. Gates, or transistors, charge up, and once they charge to a certain voltage, the transistor either becomes conductive or insulating (hence the "transistor" part of it).

Filling up and emptying takes time, and as you push the clock past a certain point, these transistors start behaving erratically. Not all transistors are made equal, so some fill up faster and some fill up slower. This happens on the nano/picosecond scale, but at overclocked speeds it starts to matter.

So imagine this: you have 16 billion transistors, and you have the best of the best, meaning the center of the silicon wafer, which is usually reserved for the top-end chips. You can clock those higher because, on average, you have clean transistors that fill up at equal times and dump at equal times, so when you raise the clock speed there are no errors or runtime issues that affect the CPU's performance.

However, take a chip that's closer to the edge of the wafer, which is full of impurities: the transistors in it aren't equal, some are slower and some are faster, and when you clock those up, some transistors don't flip 0->1 fast enough and cause logic errors (so your program crashes).

What you're saying is that if you run the CPU for a while at low speed and then clock it faster, suddenly it'll be fine. This is not true; it is almost entirely anecdotal, and you didn't do enough testing to figure out why (flip a coin 500 times and you'll get a run of heads at some point).
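A toy sketch of that timing argument (made-up numbers, modelling a single critical path whose propagation delay is normally distributed from die to die): raising the clock shrinks the period until the slowest samples miss the clock edge and the logic latches garbage.

```python
import random

# Toy illustration with made-up numbers: a logic path needs ~280 ps (plus some
# die-to-die spread) to settle. If that delay exceeds the clock period, the
# flip-flop latches a stale value and the logic misbehaves.

def path_fails(clock_ghz: float, mean_delay_ps: float = 280.0,
               spread_ps: float = 15.0) -> bool:
    period_ps = 1000.0 / clock_ghz          # clock period in picoseconds
    return random.gauss(mean_delay_ps, spread_ps) > period_ps

def error_rate(clock_ghz: float, trials: int = 100_000) -> float:
    return sum(path_fails(clock_ghz) for _ in range(trials)) / trials

for ghz in (3.0, 3.4, 3.6, 3.8):
    print(f"{ghz:.1f} GHz: ~{error_rate(ghz):.2%} of cycles violate timing")
```

A "good" die from the center of the wafer is simply one with a lower mean delay and a tighter spread, which is the binning story in two parameters.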
 
Could it be the heat sink paste softening and becoming better distributed the first time you really push the processor to heat up?

It's possible, but not likely. In my case I would be attempting to overclock a system that had already been running for quite some time, so it's not likely that there would be any change in the coupling of the heat sink paste at that point.

But the overclocking hobby is rife with similar stories. Everyone knows: if your system crashes at your new OC, fall back to your previous stable OC, run the system for a while, and try again.
 
What you're saying is that if you run the CPU for a while at low speed and then clock it faster, suddenly it'll be fine. This is not true; it is almost entirely anecdotal, and you didn't do enough testing to figure out why (flip a coin 500 times and you'll get a run of heads at some point).

That's not exactly what I'm saying. And telling me that something didn't happen that I witnessed many times, something that has happened to me personally many times, is not really a very convincing argument.

What I am saying is that I have taken a system and run it at an overclocked frequency, and it was operating reliably for a long time. I then attempted to increase the clock frequency. And we are not talking about huge jumps, but an increase nonetheless.

Sometimes it would work without issue.

But frequently, the system would crash. I would then revert to the previously stable frequency and run the system for a few hours on a CPU-intensive task; in my case I would use Folding@home. After several hours, sometimes overnight, I would attempt the overclock again. And lo and behold, the system worked fine. No crashing, perfectly stable operation. Granted, this wasn't always the result; sometimes I couldn't get a stable overclock and had to permanently revert to the previous frequency. But frequently it worked.

You don't have to take my word for it, a lot of overclockers have had similar experience.

You're correct, I didn't conduct any experiments to attempt to discover why. At that point I wasn't really concerned with why. I just knew it worked.

You are also correct that CPUs that are sourced from the center of the wafer would generally overclock more reliably than those from the edge. But I'm talking about efforts to get higher OCs from one system at a time, and what I recounted was a reliable method that has worked for me many times.

Of course, eventually I would simply stop attempting to increase the frequency and settle for the increase I had achieved. At that time my interest was simply to rack up credits at Folding@home. One could buy a relatively cheap AMD CPU, overclock it, and have it perform much faster. I understand that not everyone would find that fun, but I enjoyed it. This was back in the 2000s, so it was quite a while ago.
 
That's not exactly what I'm saying. And telling me that something didn't happen that I witnessed many times, something that has happened to me personally many times, is not really a very convincing argument.

What I am saying is that I have taken a system and run it at an overclocked frequency, and it was operating reliably for a long time. I then attempted to increase the clock frequency. And we are not talking about huge jumps, but an increase nonetheless.

Sometimes it would work without issue.

But frequently, the system would crash. I would then revert to the previously stable frequency and run the system for a few hours on a CPU-intensive task; in my case I would use Folding@home. After several hours, sometimes overnight, I would attempt the overclock again. And lo and behold, the system worked fine. No crashing, perfectly stable operation. Granted, this wasn't always the result; sometimes I couldn't get a stable overclock and had to permanently revert to the previous frequency. But frequently it worked.

You don't have to take my word for it, a lot of overclockers have had similar experience.

You're correct, I didn't conduct any experiments to attempt to discover why. At that point I wasn't really concerned with why. I just knew it worked.

You are also correct that CPUs that are sourced from the center of the wafer would generally overclock more reliably than those from the edge. But I'm talking about efforts to get higher OCs from one system at a time, and what I recounted was a reliable method that has worked for me many times.

Of course, eventually I would simply stop attempting to increase the frequency and settle for the increase I had achieved. At that time my interest was simply to rack up credits at Folding@home. One could buy a relatively cheap AMD CPU, overclock it, and have it perform much faster. I understand that not everyone would find that fun, but I enjoyed it. This was back in the 2000s, so it was quite a while ago.

There are people who think copper is flavored by electricity
 
This chart is from my thread regarding the voltage instability of a DAC. Indeed, changes can happen from power-on through multiple hours. Audible or not, I don't know for sure. The chart provided to me only shows changes up to 3 hours; I suppose it would continue to change, up to a certain level, after 3 hours.

[Attached chart: DAC output voltage over the first 3 hours after power-on]

Your graph is misleading: the voltage instability here is between 2.914 V and 2.9185 V, a range of about 0.0045 V, which is hardly audible in my opinion.

If the y-axis ran from 0 to 3 V, it would look like a straight line.
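For reference, converting that drift to a level change (using the 2.914 V and 2.9185 V endpoints read off the chart):

```python
import math

# Convert the ~2.914 V -> 2.9185 V drift into a level change in decibels.
v_start, v_end = 2.914, 2.9185
delta_db = 20 * math.log10(v_end / v_start)
print(f"{delta_db:.3f} dB")   # about 0.013 dB
```

That lines up with the ~0.01 dB figure mentioned later in the thread, far below any audibility threshold.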
 
There are people who think copper is flavored by electricity

I find your response curious. There is precious little copper in CPUs. Was that supposed to be pithy?

Bear in mind, I am not a champion for burn in. I don't believe that hours or weeks of burn in will result in significant changes in sound quality. Or any change at all for that matter. I don't believe you have read many of my posts.

I am merely recounting some experiences that make me curious. I don't feel that the changes I witnessed in CPUs would have any analogs in audio gear. But I am curious.
 
I find your response curious. There is precious little copper in CPUs. Was that supposed to be pithy?

Bear in mind, I am not a champion for burn in. I don't believe that hours or weeks of burn in will result in significant changes in sound quality. Or any change at all for that matter. I don't believe you have read many of my posts.

I am merely recounting some experiences that make me curious. I don't feel that the changes I witnessed in CPUs would have any analogs in audio gear. But I am curious.

My point is that your experiences, and those of your peers, run counter to what I know from my CPU design courses. I'm sure you've read a ton of articles written by non-EE majors (gaming hardware experts), and you're building your knowledge base from that.

My point is, your experiences smack of the same thinking as people who believe that burning in cables, or the choice of cables for that matter, makes a difference in sound. Hence my coin analogy: flip a coin 500 times and there are going to be stretches that are all heads or all tails.

What you said about stability after running the CPU at 100% could just be a thermal thing that has absolutely nothing to do with actual stability, and which you could replicate by buying a better cooler. Because you've heated up the CPU beforehand, the CPU and the heatsink expand and press against each other better, thus improving the CPU's thermals...

As I've said before, temperature plays a role in EE: if you reduce temperature, you reduce resistance, and thus probably create a higher-current situation. Higher current naturally stabilizes the charging of the transistors, because current means flow if you take the water analogy. The transistors can now fill faster and thus stay stable as you increase the clock speed. That's why those extreme sub-zero-cooled CPUs can hit over 8 GHz.
 
Your graph is misleading: the voltage instability here is between 2.914 V and 2.9185 V, a range of about 0.0045 V, which is hardly audible in my opinion.

If the y-axis ran from 0 to 3 V, it would look like a straight line.
Not my graph. ; )

Anyway, in the thread, someone commented that it is about +0.01 dB... so I don't know if anyone could sense it.
 
My point is that your experiences, and those of your peers, run counter to what I know from my CPU design courses. I'm sure you've read a ton of articles written by non-EE majors (gaming hardware experts), and you're building your knowledge base from that.

My point is, your experiences smack of the same thinking as people who believe that burning in cables, or the choice of cables for that matter, makes a difference in sound. Hence my coin analogy: flip a coin 500 times and there are going to be stretches that are all heads or all tails.

What you said about stability after running the CPU at 100% could just be a thermal thing that has absolutely nothing to do with actual stability, and which you could replicate by buying a better cooler. Because you've heated up the CPU beforehand, the CPU and the heatsink expand and press against each other better, thus improving the CPU's thermals...

As I've said before, temperature plays a role in EE: if you reduce temperature, you reduce resistance, and thus probably create a higher-current situation. Higher current naturally stabilizes the charging of the transistors, because current means flow if you take the water analogy. The transistors can now fill faster and thus stay stable as you increase the clock speed. That's why those extreme sub-zero-cooled CPUs can hit over 8 GHz.

Well, I am not an electrical engineer, and I make no claims to be one. But in an earlier post I addressed the possibility that improved contact with the thermal paste could account for the real-world results that I and other overclockers witnessed, a post which it appears you either didn't see or didn't read.

I don't believe that burn in with respect to audio gear is real, and I am sorry if you interpreted my post as arguing in favor of that belief.
 
Well, my Pre90 is now in my system, and it sounds wonderful straight out of the box, no burn in required.

Once again Apos Audio did an amazing job: I ordered it on 9/5/2021, and I'm listening to it on 9/13/2021.
 