
Anthem AVM70 Review (AV Processor)

Yes, the class A portion may be a factor, though I really doubt it. I will say that to my ears, I do not like the performance of the newer Parasound amps. They sound different than the HCA line.
Mine is not the newer A21+; it is the A21 that I bought 10 years ago.

Anyway, you are clearly someone who can hear different sound signatures from amps that, in theory and based on measurements, should not have audibly different signatures lol..
 
I think you may be right in specific cases, but not likely as a general rule. How about picking an example: name two such amps, one class AB and one class D, both by reputable manufacturers, with comparable specs confirmed by measurements.

Let me start first. I would name the ATI amps:


So, based on objective measurements, the AT6002 did meet (and slightly exceed) its specified 300/450 W into 8/4 ohms.
The AT522NC (Hypex class D) also met and slightly exceeded its specs of 200/300 W into 8/4 ohms.

Amir has not measured any comparable 200/300 W ATI class AB amps, so it is hard to compare them directly, though if you read through the details in the linked reviews above, you will not find evidence of the class AB AT6002 doing better on peak output into 4 ohms or 8 ohms.

Here's a subjective review/comparison by @RichB:


Personally, I don't put much value in such subjective reviews, and I doubt whether RichB could pick the DUTs out every time in a truly controlled DBT, even though I know he is confident he could, or believes he has actually done it that way. In this case, if we go by RichB's impressions, the class AB amp (AT4002) did not do better in bass; in fact, the opposite seemed true to him. To others, the impression could well be the reverse, and that's the nature of subjective reviews: you won't find consistency among reviewers unless you limit the sample size to just a few, and even then it could still be all over the place.
Agree it's not a general rule. The amp manufacturers would need to provide both the max power and the peak power at the same frequency range and THD level to understand whether there is truly a significant power difference. How about the onboard amps of the MRX1140 vs. the recently reviewed Buckeye NC252MP? What do you think comparing the two?



I believe this is the relevant results table to compare the Anthem 2ch class A/B specs. Please correct me if I'm mistaken.
# of CH | Test Type   | Power      | Load   | THD+N
2       | CFP-BW      | 153 watts  | 8 ohms | 0.1%
2       | CFP-BW      | 203 watts* | 4 ohms | 0.1%
2       | 1kHz Psweep | 163 watts  | 8 ohms | 1%
2       | 1kHz Psweep | 155 watts  | 8 ohms | 0.1%
2       | 1kHz Psweep | 176 watts* | 4 ohms | 1%
2       | 1kHz Psweep | 174 watts* | 4 ohms | 0.1%
2       | Dynamic PWR | 187 watts  | 8 ohms | 1%
2       | Dynamic PWR | 312 watts  | 4 ohms | 1%
 
@MacCali

You can try recording music and then using DeltaWave to try to tease out bass differences.

Two easy possibilities

1) Sony has a mode on their Class D headphone amp to replicate “analog bass” by shifting the bass out of phase, up to 90 degrees, as you go below 40 Hz. Phase is not supposed to be that audible, but I just present it for your consideration. Maybe the Parasound phase response is different.

2) Audiotec Fischer has a mode that generates additional subharmonic tones at half the frequency and adds them to the music signal. Only the frequency range from 30-100 Hz is taken into account. If there is a 40 Hz tone in the music signal, a 20 Hz tone will be added; with a 60 Hz tone, this would accordingly be a 30 Hz tone.
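For what it's worth, that subharmonic trick is easy to sketch in code. This is just an illustrative Python toy of my own (a simplification, not Audiotec Fischer's actual algorithm): for a test tone in the 30-100 Hz band, mix in a tone at half the frequency.

```python
import math

def add_subharmonic(freq_hz, duration_s=0.1, sample_rate=48000, sub_level=0.5):
    """Toy version of the subharmonic trick described above: for a
    tone between 30 and 100 Hz, mix in a tone at half the frequency.
    (A real implementation would track tones inside a music signal;
    here we just synthesize a single test tone.)"""
    n = int(duration_s * sample_rate)
    out = []
    for i in range(n):
        t = i / sample_rate
        s = math.sin(2 * math.pi * freq_hz * t)
        if 30 <= freq_hz <= 100:  # only the 30-100 Hz band is processed
            s += sub_level * math.sin(2 * math.pi * (freq_hz / 2) * t)
        out.append(s)
    return out

tone_40 = add_subharmonic(40)    # gains a 20 Hz component
tone_120 = add_subharmonic(120)  # outside the band, left alone
```

The `sub_level` amount is made up; a real product would presumably make it adjustable.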

It’s very possible that the non linearities of the classic Parasound amps (which I agree sound great!) contribute to something that you prefer.

Your best bet at convincing yourself that it’s not sighted bias is to try to make recordings yourself.
 

That's very interesting, but we are dealing with THD specs that seem too low for such subharmonics (if they existed) to work.
Note: unless Parasound's THD spec is not for 20-20,000 Hz, though that is highly unlikely.
  • Power Bandwidth:
    5 Hz - 100 kHz, +0/-3 dB at 1 watt
  • Total Harmonic Distortion:
    < 0.03 % at full power; < 0.01 % typical levels
Even if Parasound added distortions the way the Audiotec Fischer does, I doubt humans can perceive such subharmonics in the deep bass range. And I don't think Parasound would do that. If they did use that kind of trick, and did enough of it to produce an audible difference, then it should be reflected in the THD spec.

Whether or not the classic Parasound amps have audibly different sound signatures than the modern ones, it is subjective to say one "sounds better" than the other. On the other hand, if enough bench test measurements could show that one should indeed "sound better" to most people, then Parasound would have a winning formula lol.
 

I think if you only compare the measured power outputs between those two, they are similar enough that there isn't much point in using the class D amp instead of the Anthem's internal class AB amps. It doesn't matter which comparison point you choose; the difference in dB will be less than 1! That's 1 click on the volume button.

If I were to use external amps with AVRs, I would not go with anything that can output less than 200/300 W into 8/4 ohms CFP-BW. That means the Buckeye NC502MP would be as low as I would go. As always, YMMV.
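If anyone wants to check the "less than 1 dB" point themselves, the conversion is just a power ratio. A quick Python helper (the example wattages are hypothetical, picked only to show the math):

```python
import math

def power_diff_db(p1_watts, p2_watts):
    """Loudness difference between two amplifier power levels, in dB."""
    return 10 * math.log10(p2_watts / p1_watts)

# Hypothetical comparison of two amps rated 153 W and 176 W:
print(round(power_diff_db(153, 176), 2))  # 0.61 dB, well under one volume click
```

Even a 2x power difference is only 3 dB, which puts these sub-1-dB gaps in perspective.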
 
Well, it's just my take on it, and I've said previously that even though there's the entire HCA line (600, 750, 1000, 1200 Mk2, 1500, 2200 Mk2, and 3500), at least those made between '98 and '04 are all internally slightly different. You can check it out on the Parasound website, but the point being, maybe that's the source of the differences I hear among them.

The 1000, 1200, and 1500 are the ones I own, and the only things I've interchanged in my chain when listening.

But I have heard the rest, minus the 600 and 750, and the higher-up models to me sound closer to the newer Parasound amps. The 3500 was basically the prototype for the JC 5. I would clearly say, though, it's unfair to state this without a doubt, because I have not used the 2200 or 3500 in my setup in the fashion I mention.

I know it’s odd, and I do not mean there’s a night and day difference between them but I am able to hear a subtle tonality shift. That’s all I mean when I say they sound different.

Sorry for the late reply, but I've been super busy. We can definitely move on from this discussion; I wish I had the answers.
 
FW HD.80 is good; I updated it, and it has fixed at least one minor bug that I know of.

 
The Information tab in the AVM 70’s web gui will tell you …

 
The great marketing lines have convinced many that there is an audible difference, so people hear them.
That is, until their beliefs are put to the test under tightly bias-controlled DBT listening procedures.
Then the differences magically disappear.
 
That is very true. However, very small differences in level or frequency response can be audible, even in a DBT. I've tested this myself and found that a 0.5 dB overall level difference between two devices is easy to hear.

Since I have an AVM 70, I was eager to find out if a MiniDSP Flex, which has a much better SINAD, actually sounded better. I set up both devices as 2-channel preamps, with output levels matched within 0.1 dB. I have a balanced 2-input/1-output comparator box with relays that can switch the analog output between the two using a remote switch. Both devices were fed the same digital input signal from a Logitech Transporter streamer. I operated the A/B switch myself, but quickly lost track of which was which.... Anyway, I could not hear any difference whatsoever.
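For anyone repeating this kind of test, the level-match check is just a voltage ratio in dB. A quick Python sketch (the voltmeter readings below are made up, only to show the math):

```python
import math

def level_diff_db(v_ref, v_dut):
    """Level difference between two measured RMS voltages, in dB."""
    return 20 * math.log10(v_dut / v_ref)

# Hypothetical readings at the two preamp outputs: 1.000 V vs 1.010 V
# works out to under 0.1 dB of mismatch.
diff = level_diff_db(1.000, 1.010)
print(round(diff, 3))
```

In other words, matching "within 0.1 dB" means the two output voltages agree to about 1 percent.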
 
He's asking about the release date, though it probably isn't too far from the build date.

Agreed, though I wouldn't say it is "easy" unless we define what easy means lol.. Anyway, that's why when I do those silly DBTs, more often SBTs ("easier" to do), I level match to within 0.25 dB or less, visually speaking, using REW. Those who swear by subjective evaluations don't seem to consider the 0.5 dB level difference, or the effects of just not sitting in the exact same spot.


Yet on AVSF, most (seemingly >95%) of the AVM 90 owners claimed the difference between the 90 and the 70 was not subtle, go figure... My guess is none of them did their comparisons in an SBT, let alone a DBT. To me, an audible difference is precious; I would have picked up the 90 the day I did the demo in the dealer's hi-end room if I felt there was a difference.
 

It’s most likely sighted bias, but I wonder if there are bigger differences at lower output voltages.

Anthem has 29 dB gain for their amps.

50 milliwatts of power is what you might be using for a quieter scene. With a 92 dB/W speaker at a 15-foot listening distance, 50 mW gives about 65 dB, which is like running -20 on a setup calibrated to 85 dB at +0, and that seems pretty reasonable to me.

A 4-ohm speaker would mean around 0.45 V at the amp output.

So then you subtract 29 dB and now you are at 16 millivolts of output at the preamp stage, where SINAD is going to be way lower.

If I use the dynamic range measurement, then the residual noise is 0.025 mV (4 V - 104.5 dB).

0.025 mV is 56 dB below 16 mV.

Since I said we are listening at 65 dB, that implies about 9 dB of noise. If you had 11 channels, 9 dB summed across 11 channels adds up to about 20 dB, which is rustling leaves?

The math is probably off a bit, since some 29 dB nominal gain amps only do 23 dB for XLR, but people have complained of hiss with the AVM60 (which had 10 dB worse dynamic range). My math also assumes high-efficiency speakers and a 15 ft listening distance.
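A quick Python sanity check of the chain above (all inputs are the assumed figures from this post: 92 dB/W sensitivity, 15 ft, 50 mW, 29 dB gain, 4 V at 104.5 dB dynamic range):

```python
import math

# All inputs assumed from the discussion above, not measured here.
sensitivity_db = 92.0      # speaker sensitivity, dB SPL at 1 W / 1 m
distance_m = 15 * 0.3048   # 15 ft listening distance
power_w = 0.050            # 50 mW "quiet scene" level
gain_db = 29.0             # nominal amp voltage gain
full_scale_v = 4.0         # preout level used for the DR measurement
dynamic_range_db = 104.5   # measured dynamic range

# SPL at the seat (free-field inverse-square approximation)
spl = sensitivity_db + 10 * math.log10(power_w) - 20 * math.log10(distance_m)

# Amp output voltage for 50 mW into 4 ohms, then back out 29 dB of gain
v_amp = math.sqrt(power_w * 4)
v_pre = v_amp / 10 ** (gain_db / 20)

# Residual noise implied by the dynamic range spec, and the resulting SNR
v_noise = full_scale_v * 10 ** (-dynamic_range_db / 20)
snr_db = 20 * math.log10(v_pre / v_noise)

print(f"SPL {spl:.1f} dB, preout {v_pre * 1000:.1f} mV, SNR {snr_db:.1f} dB")
```

This lands at roughly 66 dB SPL, a ~16 mV preout level, and ~56 dB of SNR, consistent with the figures above.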

@amirm, have I done the math correctly? The German Stereo magazine has always done 50 mW and 5 W measurements for their amps.

I know there are an infinite number of tests to add, which eats up time, but is there value in adding 50 mW testing to your amp reviews, and maybe doing a SINAD sweep from 50-200 mV for AV processors, to capture realistic loads at 29 dB gain?

It might allow a better understanding of real-world scenarios (unless the math from your 4 V testing and the dynamic range calculation is good enough, and running the test would just duplicate what we can approximate well with math).

Given that you have 5 W for amps, maybe it makes sense to test AV products at:

4 V = compare a standalone DAC against a processor

150 mV = what you need to reach ~5 watts with a 29 dB gain amp, or ~50 mW with a low-gain amp (rounding for convenience)

That would provide some value to both AHB2/LA90 users and Marantz AMP10 "in bridged mode" users, and would be like running the dashboard and maybe a multitone at the 150 mV output level?
 
I know you always focus a lot on lower output levels and low SPL, but I think you also need to keep in mind that at such low SPL, distortion will also be low in an absolute sense, even for SINAD as low as 80 dB. A THD of -80 dB in a room with a 20 dB noise floor means the harmonics are well below the noise floor; I'm not sure even golden ears could hear those as a night-and-day, another-league kind of better SQ.
 

+1. It is because I listen a lot quieter than most people, for sure.

Agree about distortion; it's always about noise. That's the question in my mind: as you reduce the output, noise becomes dominant, and when does that become a real issue in-room?

But on AVSForum, there is a person complaining of noise with the AVM60, so that may be the only difference between the AVM70 and 90, if it is not all sighted bias.

There is also another person on AVSForum who finds that his HTP-1 does not sound as good as his Yamaha CX-A5100. Most sighted bias should favor new and expensive. The only objective strength I can find for the Yamaha over the HTP-1 is higher SINAD in the 0.1 to 0.5 V output range. The measured differences are small, but the lower you go, the bigger the difference in favor of the Yamaha.

And then you have Stereo.de, which presumably has a good reason to use 50 mW and 5 W as its two reported levels.
 

Amir is already doing the 5 W; how do we convince him to do 50 mW every time, and the corresponding preout voltage for a 23 dB gain power amp?
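For reference, the preout voltage in question is easy to work out. A short Python sketch (assuming 50 mW into 8 ohms, and the 29 vs 23 dB gain figures discussed above):

```python
import math

def preout_mv(power_w, load_ohms, amp_gain_db):
    """Preout voltage (mV) needed for a given output power and amp gain."""
    v_amp = math.sqrt(power_w * load_ohms)        # amp output voltage
    return v_amp / 10 ** (amp_gain_db / 20) * 1000

# 50 mW into 8 ohms, for the two (assumed) gain figures:
for gain in (29, 23):
    print(gain, "dB gain:", round(preout_mv(0.050, 8, gain), 1), "mV")
```

A lower-gain amp roughly doubles the preout voltage needed for the same 50 mW, which is why the test level matters.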
 

Between this post and this post, I think the AV testing approach has room to evolve. Yet, in terms of priority, it's a lot of extra work, and I am not sure the relative rankings would change much except for a handful of devices.

The other questions are:
1) Does it really matter? It's still speculation that perceived differences in sound come from low-level performance when, more likely than not, sighted bias is the easiest answer.
2) Should we insist on focusing on 4 V out and beyond, in order to drive the industry toward low-gain amplifiers and higher-output preamps, harmonizing professional and consumer audio, given that this seems to be the best approach with modern electronics?
 
1) I think it depends. For my AVM 70, no, it doesn't matter. My average output from a 29 dB gain amp is about 0.2 W, with peaks up to maybe 20-25 W, so the preamp output voltage will be very low, but even 65 dB SINAD should be fine for me, given my room noise floor is typically around 20 to 30 dB. The only time I heard hiss was with the Marantz AV8801; no noise with any of my other preamps, power amps, or even a Denon AVR-X3400H. I thought the low level was important for you.

2) I would say yes, because it seems to me many members do listen loud, and/or have power amps with much lower gain, and/or have low-sensitivity speakers. So, in my opinion, it would be best if Amir measured the preout from, say, 0.1 V to 4 V balanced; that would be 0.05 V to 2 V RCA.

He has been doing the sweep already; he just has to start at a lower level, that's all. Still more work for him, but not a lot more, I assume.
 
It's worth noting that when Amir tested the AVM 70, he set the master volume to -4.5 dB to reach 4 V RMS output with a 0 dBFS input signal. That means the maximum output level of the AVM 70 is approx. 7 V RMS.
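That follows directly from the -4.5 dB figure; a one-line check in Python:

```python
# If -4.5 dB of master volume yields 4 V RMS at 0 dBFS, then at
# full volume the output would be 4 V raised by 4.5 dB:
max_out_v = 4.0 * 10 ** (4.5 / 20)
print(round(max_out_v, 2))  # 6.72 V RMS, i.e. roughly 7 V
```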
 
Yet on AVSF, most (seemingly >95%) of the AVM 90 owners claimed the difference between the 90 and the 70 was not subtle, go figure...
Of course they sound different, they're using different power cords. :eek:
 