
Class A vs AB vs D amplifiers

I had some time to think about this topic. Based on what I read recently and in this thread, all articles that published the results of blind testing are irrelevant to me.

I have no idea what sort of "blind testing" articles you're talking about, or why they were published, but let me say this, just in case we're misunderstanding each other:

Blind tests are only relevant to the person undergoing the test! The test is to educate you, not us and not someone else. The tests show whether the subject (you) can discern differences that you believe you can discern, whatever those may be. IOW, the test teaches you your true (not imagined) capabilities.

As for other people ... you're not expected to care what someone else can or cannot hear, and no one else is expected to care about what you can or cannot hear. That's irrelevant.


Jim
 
This response really covers two topics.
I had some time to think about this topic. Based on what I read recently and in this thread, all articles that published the results of blind testing are irrelevant to me. If this testing is supposed to be scientific, the rule is that for the result to be valid, anyone should be able to repeat it. The first problem I see: you need the same amplifiers, the same speakers, the same speaker cables, the same preamp, and the same source hardware. Not to mention the same music.
Frankly, I just don't get why I need to know whether people can or cannot hear the difference between one make and model of amplifier and another. That is like me telling you I have a rosebush with wilting roses: is that relevant to you? Almost never do I see the hearing test chart of the people doing the listening. What if they are half deaf and can't hear half of the arbitrarily defined spectrum?
The other problem: why exactly do I need to do that at my house with my equipment? If I don't have different equipment to test that may or may not sound different, why exactly would I want to do this? Now I have to buy even more equipment.

This is my workbench, so let's just put this out there: I am a retired RF engineer who developed and designed broadband RF amplifiers for almost 30 years, the CATV system amplifiers that hang on the pole. Never did I or any other engineer do any blind (no pun intended) testing to discern differences between equipment. All the faults that you see on your TV have measurable causes. I also designed audio amplifiers in the late 80s. Every one is still working today because I did not use crappy components. For me it is easy to determine whether an audio amplifier is a crappy design.
Also, I don't only use the frequency-domain measurements that Amir mostly uses; I prefer time-domain measurements as well, and a digital scope plus a decent signal generator cost a fraction of what Amir paid for his distortion analyzer. One thing I always test an amp for is square-wave response, to look for high-frequency oscillation into a slightly capacitive load. A 100 MHz scope is a good thing to have. In the picture I have a nice classic 100 MHz HP scope, and the little lunchbox beside it is the latest Siglent 14-bit 100 MHz oscilloscope.

Frankly, the ergonomics of the new digital stuff are complete crap. To do simple things you need to push so many buttons, and some functions are nested too many button pushes down. It does have an auto-measurement button, but that is only good the first time, as it resets everything. Sorry for the rant. What the digital stuff is good for is measuring preprogrammed parameters. For an amplifier it is good to see the phase response over the 20 Hz to 20 kHz bandwidth; this gives you an idea of what kind of speaker load will give it trouble. With the signal generator and scope, it can be done in a few minutes.

I also like to test an amp at full power at 20 kHz to see how hot it gets after a few minutes, and whether it goes into current limiting or shutdown. Back when the first high-power solid-state amplifiers came out, the output stages would blow because the transistors were too slow: as one turned on, the other did not shut off, so you had mutual conduction, and the transistors went into avalanche destruction as they exceeded the safe operating area. Of course the manufacturers claimed that was not how the amplifier was going to be used, but I just call it a crappy design and cost cutting. Don't claim it can produce full power from 20 Hz to 20 kHz if it can't.

Class D amps switch at high frequencies, so the output transistors are stressed; they have finite bandwidth and much less margin than a transistor being used at audio frequencies.
If you look at the specs of power MOSFETs, from the early devices made by Toshiba to the more recent ones developed for switching, the improvement is not in the same ratio as 1 MHz to 20 kHz. All class D amps have to insert a dead time where both transistors are cut off so the space charge can dissipate; that way, when one turns on, the other is guaranteed to be off. I would worry that over a decade or longer the timing circuit may drift, and when both conduct, it's game over.
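The square-wave-into-capacitive-load test mentioned above can be sketched numerically. This is a minimal simulation with illustrative component values I have assumed (2 uH output inductor, 0.5 ohm series resistance, 0.1 uF load), not measurements of any particular amplifier:

```python
# Square-wave-into-capacitive-load test, simulated: the amp's output
# inductor L (with series resistance R) driving a capacitive load C.
# Component values are illustrative assumptions, not a specific amp.
import math

L = 2e-6   # 2 uH output inductor (typical order of magnitude)
R = 0.5    # ohms of series resistance assumed in the loop
C = 1e-7   # 0.1 uF "slightly capacitive" test load

f0 = 1 / (2 * math.pi * math.sqrt(L * C))  # LC resonant frequency
Q = math.sqrt(L / C) / R                   # quality factor of the RLC

# Semi-implicit Euler integration of the response to a 1 V step
# (one edge of the square wave). High Q shows up as ringing/overshoot.
dt = 1e-9                 # 1 ns time step
i = vc = peak = 0.0       # inductor current, capacitor voltage, max seen
for _ in range(20000):    # simulate 20 us
    i += (1.0 - R * i - vc) / L * dt   # L di/dt = Vstep - R*i - vC
    vc += i / C * dt                   # C dvC/dt = i
    peak = max(peak, vc)

print(f"f0 = {f0 / 1e3:.0f} kHz, Q = {Q:.1f}, step overshoot peak = {peak:.2f} V")
```

With these made-up values the resonance sits around 356 kHz with Q near 9, so the step edge rings hard and overshoots well past the 1 V target, which is exactly what the scope shows on a marginal amp.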
When I buy audio equipment I try to get a schematic, either before or after the purchase. That way I can fix it if there is a problem. Some of the newer audio equipment is disposable, and gear that is not repairable does not really earn my respect.
I think this mostly covers what I wanted to say in this post.
Please learn to paragraph. Walls of text tend not to get read.
 
One of the first ABX tests was the one by the Boston Audio Society many, many years ago, maybe around 1998 or so. The point of the testing was to "prove" that all amplifiers sound alike. This is one of the ones that stuck with me, and it apparently influenced a lot of people who ended up believing the same thing. It all started with the article in Stereo Review, where Julian Hirsch was the first to make such a claim.

I have also read on this forum that people really believe there is no difference in sound between interconnect cables. To me, the flaw in that conclusion is that no single item is used by itself; each piece of gear needs an input and needs to be connected to something. Just analyzing a wire as a wire proves nothing.
Take speaker cables, for example. I had a chance to look at the cables I have to use with my electrostatic speakers. The sheer fact that the speakers look like a capacitance at high frequencies means that the cable's inductance, plus the inductor at the output of many amplifiers, will cause a resonance somewhere. The result is two things: first, a rise in output at the speaker; second, an interaction with the feedback network in the amplifier. The first may or may not be audible, depending on how low in the band it falls; the second will manifest itself as a change in the distortion of the amp itself. The high-frequency peaking can easily be measured with a signal generator and an oscilloscope at the speaker terminals.

The reason I don't do blind AB testing is that once I look at the physics and eliminate the weak points, I just use what I have and am done with it. If I hear a difference, I go back to the bench and see what I can measure. After spending decades measuring RF amplifier characteristics, I know a thing or two about making measurements.
 
I take issue with this statement. I know this is my first post, but I have been reading this forum for several years now.
Back in the 80s I designed and built my own amplifiers. My history with audio goes back to the 60s, when I built the Popular Electronics Brute 70 amp.
:) Built one. It had a low level buzz, and I didn’t have the resources to track it down.
 
I compared YouTube music streaming to FLAC files and made sure the levels were matched. I could not tell the difference with my ears. I tried and tried, and after hours it was obvious that the streaming codec is good enough for my ears at this time.
To do this test properly, you need to be in control of the source. The other thing to know is the weakness of the algorithm: when you throw away 90% of the information (128 kbps), something is going to be audible. I did this test some years ago.

To start off, choose an original rip from a CD, so at least you have a known source to work with. When you get a FLAC file and an MP3 file from the internet, you don't know whether they came from the same source. There are many free tools that will convert to MP3 and FLAC.
For codec weakness, there are a few things to consider. Lossy compression relies on redundancy. Get a piece of music that has significant channel separation, with distinct instruments in the right and left channels. Get a piece of music with lots of high-frequency instruments: cymbals, tambourine, xylophone, triangles, wooden blocks. The human ear is relatively poor at resolving content close in frequency to a louder signal, so it is important to have music where the high notes, in my preferred case percussion instruments, are clearly separated from the midrange.

I then ripped those pieces from my CD collection and encoded them with FLAC, plus three different MP3 bitrates: 128, 256, and 320 kbps. For the most demanding material, 128 sounded just bad, 256 was generally OK, and 320 seemed fine for noncritical listening, like on computer speakers. I pay the extra money for Amazon Music just to get the higher-bitrate stream for my main system. It is also nice to see the 96 kHz sample rate shown on my DAC. Anyway, your mileage may vary.
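For a rough sense of what those bitrates cost in storage, file size scales linearly with bitrate for constant-bitrate MP3 (the 4-minute track length is an assumed example; real files add a little container overhead):

```python
# Back-of-envelope MP3 file sizes at the bitrates discussed above,
# for an assumed 4-minute track, constant bitrate, no container overhead.
duration_s = 240  # 4 minutes
sizes_mb = {kbps: kbps * 1000 / 8 * duration_s / 1e6 for kbps in (128, 256, 320)}
for kbps, mb in sizes_mb.items():
    print(f"{kbps} kbps -> ~{mb:.1f} MB")
# 128 kbps -> ~3.8 MB, 256 kbps -> ~7.7 MB, 320 kbps -> ~9.6 MB
```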

As a side note, at my friend's house (he did live recordings), I could clearly hear the difference between CD quality and his 96 kHz-sampled originals. The biggest difference was the recovery of the room ambience; the highs were clearer, but not significantly so. But this was on a high-resolution $50k audio system.
 
Take speaker cables, for example. I had a chance to look at the cables I have to use with my electrostatic speakers. The sheer fact that the speakers look like a capacitance at high frequencies means that the cable's inductance, plus the inductor at the output of many amplifiers, will cause a resonance somewhere.
Any inductance in the speaker cables or (decent solid state) amp output will be so small that the resulting resonance will be way above the audible range.

You may get resonance in your speakers, but it will have nothing to do with cable (or amp - assuming it's a decent solid state amp with low output impedance) inductance.

If you are using a tube amp with output transformers, then yes, all bets are off.

At my friend's house (he did live recordings), I could clearly hear the difference between CD quality and his 96 kHz-sampled originals. The biggest difference was the recovery of the room ambience; the highs were clearer, but not significantly so.

Could you do that accurately level-matched (to 0.1 dB with a voltmeter), blind, and 9/10 times? And with the CD-quality files created from the hi-res files with proper dithering?
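To put that 0.1 dB matching window in voltmeter terms, here is a small sketch; the 2.000 V / 1.980 V readings are made-up example values:

```python
# What "level matched to 0.1 dB" means for voltmeter readings.
import math

def db_diff(v1, v2):
    """dB difference between two RMS voltage readings."""
    return 20 * math.log10(v1 / v2)

# A 0.1 dB window allows only about a 1.16% voltage difference:
ratio = 10 ** (0.1 / 20)
print(f"0.1 dB = voltage ratio {ratio:.4f}")

# Example (made-up) readings at the speaker terminals:
mismatch = db_diff(2.000, 1.980)
print(f"2.000 V vs 1.980 V = {mismatch:.3f} dB")  # inside the 0.1 dB window
```

In other words, matching to 0.1 dB means trimming the two sources until the test-tone voltages agree to about 1%, which is why a meter, not an ear, is used for the job.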
 
:) Built one. It had a low level buzz, and I didn’t have the resources to track it down.
It had very poor power supply rejection; the filter caps were too small. I was warned about this and bought much bigger filter caps than what was called for. You also ended up with less power supply voltage droop, so when I tested it for clipping power in my high school electricity lab, it put out almost 90 W at clipping.
 
Any inductance in the speaker cables or (decent solid state) amp output will be so small that the resulting resonance will be way above the audible range.

You may get resonance in your speakers, but it will have nothing to do with cable (or amp - assuming it's a decent solid state amp with low output impedance) inductance.

If you are using a tube amp with output transformers, then yes, all bets are off.



Could you do that accurately level matched (to 0.1dB with a volt meter), blind, and 9/10 times?
Absolutely, because he switched back and forth between the two many times. He was a recording engineer, and remember, he used software to generate the CD bitrate; he would have noticed if the levels dropped. I can't think of any reason why conversion software would change the levels; after all, it is all numeric. He ended up buying two plugins (he complained about the cost) because one sounded worse to him, and we were using the better one. As for the dithering, he was an electrical engineer and very particular about this. He changed microphones several times over the years he made recordings at the professional level. There was no physical switch involved; he streamed from his laptop to a DAC.

You have to play loud, at live levels, meaning just below 90 dB SPL average.

This was also the first time I heard about the problem some DACs had, where the actual reconstructed level for a signal hitting 0 dB digital could exceed full scale, and cheaper ones clipped. I forget the details of the problem. I know the ones I have bought since have 6 dB of headroom.
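The effect being half-remembered here sounds like intersample overshoot: the reconstructed waveform can peak above the highest sample value. A textbook worst case (not a claim about any specific DAC) is a sine at a quarter of the sample rate, phased so no sample lands on the waveform peak:

```python
# Intersample overshoot sketch: a sine at fs/4, phased so every sample
# falls at 0.707 of the true analog peak. A file normalized to 0 dBFS
# on its sample values therefore reconstructs ~3 dB over full scale.
import math

fs = 48000
f = fs / 4
samples = [math.sin(2 * math.pi * f * n / fs + math.pi / 4) for n in range(16)]

sample_peak = max(abs(s) for s in samples)  # 0.7071...
true_peak = 1.0                             # analog peak of the sine itself
over_db = 20 * math.log10(true_peak / sample_peak)

print(f"sample peak = {sample_peak:.4f}, intersample over = {over_db:.2f} dB")
```

This particular construction gives about +3 dB; heavily limited or lossy-decoded material can overshoot more, which is presumably why some DACs reserve extra digital headroom.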
 
I meant kbps. MP3 files are usually generated at various bitrates, depending on how small a file you want. Back in the beginning of file sharing, 128 kbps was the most common.
The minimum now is 320, except for free streaming. That's at the edge of being detectable in double-blind tests. I doubt any male over 40 can hear it.

One of the unfairnesses of life is that by the time you can afford state of the art stuff, it doesn’t matter.
 
Any inductance in the speaker cables or (decent solid state) amp output will be so small that the resulting resonance will be way above the audible range.
I measured my 12 ft speaker wire: out and back, the loop gives 10 uH. Add the 1-2 uH output inductor of the amplifier and you get, say, 12 uH rounded off. The Hafler and the Perreaux 2150, which I own, both have those output inductors. If you are using a ML speaker, which is equivalent to 2 uF with a 2 ohm series resistance, the resonance calculates to 32 kHz. Depending on the Q, which will affect the amplitude, you may get some lift starting around 16 kHz. That is hardly above the audible range.
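The arithmetic in that post checks out; a quick sketch using only the values it states (12 uH total inductance, electrostatic panel modeled as 2 uF in series with 2 ohms):

```python
# Sanity check of the post's numbers: series RLC resonance of cable +
# output inductor into an electrostatic load modeled as 2 uF + 2 ohms.
import math

L = 12e-6  # 10 uH cable loop + ~2 uH amp output inductor (from the post)
C = 2e-6   # panel modeled as 2 uF
R = 2.0    # series resistance in the model

f0 = 1 / (2 * math.pi * math.sqrt(L * C))  # resonant frequency
Q = math.sqrt(L / C) / R                   # quality factor

print(f"f0 = {f0 / 1e3:.1f} kHz, Q = {Q:.2f}")
```

That lands at roughly 32 kHz with a modest Q of about 1.2, i.e. mild peaking whose skirt can indeed start inside the audible band.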

Back in the days when MC cartridges first appeared, many had resonances just above 20 kHz. Some sharp-eared people apparently determined that this was what caused the extra clarity of the playback. The Denon 103D was one of them; I thought it sounded excellent without knowing that fact.
 
One of the unfairnesses of life is that by the time you can afford state of the art stuff, it doesn’t matter.
I so agree. When I first auditioned my DW speakers at a dealer in 1978, the amp driving them cost $1800 and the matching preamp $800, and I was making $13k a year before taxes. Scaled to someone earning $100k as an engineer today, that would be roughly $14,000 for the amp and $6,000 for the preamp. No wonder the industry has been in pursuit of making everything cheaper. But now we just have disposable gear, because at today's prices repair becomes a large portion of the purchase price, and some things are made not to be repairable. So our landfills get bigger and bigger.
So buying them was out of the question. Now, at my age, I just bought both in excellent working condition for $1000. The only additional cost will be replacing the power supply capacitors.
 
I measured my 12 ft speaker wire: out and back, the loop gives 10 uH.
That seems pretty high. I'd expect around 0.7 to 0.8 uH/m for two-core cable, so < 3 uH for 12 ft, which would push the resonance to 50 kHz if your estimate of 2 uH for the amp output is correct. (EDIT - which also sounds high for a low-impedance solid-state amp.)
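How sensitive the resonance is to the inductance estimate can be seen by running both figures from this exchange through the same formula, with the 2 uF load model unchanged:

```python
# Same series-resonance formula, two inductance estimates from the thread:
# the poster's 10 uH cable measurement vs. the ~3 uH expected figure,
# each plus the assumed 2 uH amplifier output inductor.
import math

C = 2e-6  # electrostatic load modeled as 2 uF
f0 = {}
for label, L in [("10 uH cable + 2 uH amp", 12e-6),
                 ("3 uH cable + 2 uH amp", 5e-6)]:
    f0[label] = 1 / (2 * math.pi * math.sqrt(L * C))
    print(f"{label}: f0 = {f0[label] / 1e3:.0f} kHz")
```

The two estimates put the resonance at about 32 kHz versus about 50 kHz, which is the whole disagreement here: whether the peaking skirt can reach down toward the top of the audible band.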
 
I have no idea what sort of "blind testing" articles you're talking about, or why they were published, but let me say this, just in case we're misunderstanding each other:

Blind tests are only relevant to the person undergoing the test! The test is to educate you, not us and not someone else. The tests show whether the subject (you) can discern differences that you believe you can discern, whatever those may be. IOW, the test teaches you your true (not imagined) capabilities.

As for other people ... you're not expected to care what someone else can or cannot hear, and no one else is expected to care about what you can or cannot hear. That's irrelevant.

Jim
One exception: it’s interesting to me when someone claims things that are humanly impossible, or borderline so.

I.e., well-performing DACs and amps are, in my understanding, not really possible for any human being to tell apart, unless our genes change and we develop capacities that we currently do not possess.

I’m not young or a trained listener, and I have hearing damage, so I can accept that some people do have good listening skills, up to a limit.

But we are past that limit with cables, and with electronics above 100 dB SINAD.
 
The minimum now is 320, except for free streaming. That's at the edge of being detectable in double-blind tests. I doubt any male over 40 can hear it.

One of the unfairnesses of life is that by the time you can afford state of the art stuff, it doesn’t matter.
It is still possible, I think. I can't detect them, but apparently there are still some artifacts you can hear. 99.999% of people won't, though.
 
It is still possible, I think. I can't detect them, but apparently there are still some artifacts you can hear. 99.999% of people won't, though.
This forum seems to cater to at least three concerns.
1. Absolute, measurable sonic transparency.
2. Useful measurements of non-transparent components (speakers)
3. Bang for the buck.

I’m too old to hear the difference between a $59 Fosi amp and a $30,000 monoblock when neither is driven to clipping. But I have younger family members who can still hear. My observation is that they don’t listen for defects. They just don’t care.
 
It doesn't matter what hypothesis you offer if you can prove the effect exists. I don't have the explanation for gravity, but I can prove it exists.
 
This forum seems to cater to at least three concerns.
1. Absolute, measurable sonic transparency.
2. Useful measurements of non-transparent components (speakers)
3. Bang for the buck.

I’m too old to hear the difference between a $59 Fosi amp and a $30,000 monoblock when neither is driven to clipping. But I have younger family members who can still hear. My observation is that they don’t listen for defects. They just don’t care.
You won't hear the defects, even if you listen for them, if you don't know what type of artifacts to listen for. I did a lossy MP3 vs. lossless ABX with a colleague, and he got all of them right. He told me he just listens for artifacts, not the actual music.
 
You won't hear the defects, even if you listen for them, if you don't know what type of artifacts to listen for. I did a lossy MP3 vs. lossless ABX with a colleague, and he got all of them right. He told me he just listens for artifacts, not the actual music.
I look for artifacts in images and TV movies. Can’t stop myself. Too much time using Photoshop.

But there’s a cure for old eyes. I see better than I did at 14.
 