Review and Measurements of NAD T758 V3 AVR

Blumlein 88

Grand Contributor
Forum Donor
With a few clicks you can see there haven't been very many AV receivers/pre-pros tested. If my count is correct, only 8.
Testing here has primarily been separates like amps, DACs, etc., not receivers. While the ones measured so far haven't tested well, it seems quite a leap to say none will. I am quite interested in the upcoming Anthem MRX520 review, as well as one for an upcoming Yamaha model. It will be interesting to see which companies are putting in the effort to at least hit the 16-bit/CD-quality threshold.
We all should want the best-engineered devices at the given price points, and if manufacturers are aware that their products are being measured this way, it may result in better gear in the long run. That is something everyone should be in favor of.

I don't think anyone is saying none will. But so far they have been disappointing. In the case of many of these, due to the complexity of the designs, they'll all be pretty much the same up and down the line except for power amp sizing. So if you've tested one Marantz or one Denon, you probably have a good idea of how they all perform. I don't like painting with too broad a brush, preferring specific testing of specific models, but in this case we know the larger AVR makers keep the same underlying designs for years, changing only formats as Dolby and DTS permute them endlessly.

Here is one review you missed.
https://www.audiosciencereview.com/forum/index.php?threads/marantz-avr-7701-dac-measurements.3485/

So that brings the count up to 9, and some of those weren't done with Amir's Audio Precision analyzer, of course.
 

audimus

Senior Member
A discussion of audibility vs. measurability is a good one, the distractions here notwithstanding, but I do think both sides are missing the whole picture, as I have posted here before in criticism of the methodology and the assertions of the objectivists.

1. Without relating measurement numbers to audible manifestations, they are just numbers. It is not sufficient to say that we do not know who can hear what and are therefore unable to make a definitive statement. If that low-end BMW measures 10 feet more braking distance than the Toyota, does that have an impact on the evaluation of either for daily use?

2. It would not be science to assert that we have measured everything there is to measure that should affect audibility. It is OK to hold that if there is an audible difference, we can measure it (but only if a test has been devised to measure it, despite the difficulty of doing so). The measurements done here are likely to be correct, but that is different from being necessarily complete. Saying that the differences people experience between units are due to faulty A/B comparisons or to a subjective preference for a sonic signature that is far from perfect reproduction (while that may be technically correct) is simply punting on the question.

What these measurements do show (which would be far less controversial) is the level of engineering, especially in relation to the marketing material put out. The word "distortion" conjures up a negative audible manifestation that is hard to establish, and so that message gets lost.

The conclusion I draw about this NAD unit is that it is a case of over-promise and under-deliver. Exposing objective reasons for that is a good thing (knee-jerk reactions from the fan club notwithstanding) because it raises the bar for manufacturers. Otherwise, they will all become marketing entities rather than engineering entities, because that is easier to do.
 

Blumlein 88

Grand Contributor
Forum Donor
So, objectively, which AVR would you recommend? Oh wait. I forgot. No AVR is good enough here, unless you spend 40 thousand dollars...
Well maybe this one:
https://www.audiosciencereview.com/...-sound-blaster-omni-5-1-dac.8931/#post-225402

Fed by a computer running Dirac and connected to good power amps (or powered speakers), you can beat the NAD for less money. The NAD was so bad it caused me to do something I never dreamed would happen: suggest a Sound Blaster as an improvement. ;)
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Saying that the differences people experience between units are due to faulty A/B comparisons or to a subjective preference for a sonic signature that is far from perfect reproduction (while that may be technically correct) is simply punting on the question.
The time it takes to switch one AVR for another is huge. As such, there is no scientifically valid audio comparison between two AVRs done that way. Our auditory short-term memory is not remotely that long.
 

Blumlein 88

Grand Contributor
Forum Donor
...
The conclusion I draw about this NAD unit is that it is a case of over-promise and under-deliver. Exposing objective reasons for that is a good thing (knee-jerk reactions from the fan club notwithstanding) because it raises the bar for manufacturers. Otherwise, they will all become marketing entities rather than engineering entities, because that is easier to do.

Your last sentence: they have become marketing entities rather than engineering entities. That much is apparent. They push all this hi-res, multi-format, sound-quality aura in their marketing. And as you say, over-promise, under-deliver. Hi-res performance isn't actually attained by the AV gear tested so far.

Now, faulty A/B comparisons are simply not going to be related to performance.

There is gear which is so good you aren't able to hear its effect. There is gear so bad you can (though it isn't common). And there is gear which might be audible under some conditions, which is where most gear lies. A definitely-audible performance envelope is going to be drawn with a wide crayon, not a sharp pencil, because of all the variables. I think the idea, which is part of the ethos of this forum, is that we can have gear which is so good it is a total non-issue. That takes out any question of the gear affecting reproduction quality. Since some of that gear is inexpensive, seeing expensive gear that isn't close to that level throws up a red flag. Maybe some of us react like a bull seeing red, making it more of an issue than it may be in most uses.

The unit that is the topic of this thread is simply sub-standard. If someone has it, enjoys it, and finds it not a problem, then good for them. They shouldn't worry. They also shouldn't try to tell us how good it is.
 

digicidal

Major Contributor
The time it takes to switch one AVR for another is huge. As such, there is no scientifically valid audio comparison between two AVRs done that way. Our auditory short-term memory is not remotely that long.

And I think the market for 5-10 pair A/B switch boxes with subwoofer outputs is very, very small... so it'd be a DIY project if anything.

Your last sentence: they have become marketing entities rather than engineering entities. That much is apparent. They push all this hi-res, multi-format, sound-quality aura in their marketing. And as you say, over-promise, under-deliver. Hi-res performance isn't actually attained by the AV gear tested so far.
...
The unit that is the topic of this thread is simply sub-standard. If someone has it, enjoys it, and finds it not a problem, then good for them. They shouldn't worry. They also shouldn't try to tell us how good it is.

Exactly... I still might wind up giving the C658 a chance (even knowing that once Amir tests it, I'll likely feel regret). However, I'll keep hoping for a product that combines the functionality of a miniDSP SHD with the DAC performance and design aesthetic of the latest offerings from Okto or Matrix Audio - but with HDMI switching for 2 inputs and ARC on the output. All for less than $3K.

It's a very long shot... and I'm getting real close to just murdering my budget and getting a Lyngdorf TDAI-3400 with full upgrades - completely wasting the integrated amp - or just getting a pile of little boxes that each do only one thing and hoping I'm still married after explaining the "TV-to-BR-to-Music" requirements to my wife.
 

Blumlein 88

Grand Contributor
Forum Donor
The time it takes to switch one AVR for another is huge. As such, there is no scientifically valid audio comparison between two AVRs done that way. Our auditory short-term memory is not remotely that long.
Yes, this.

Like someone else mentioned, once I get an AVR set up, I loathe the idea of changing things up. That is all the more reason to use measurements instead of listening comparisons: you normally can't do valid comparisons with AVRs anyway. And if one performs well on the measurements, you don't need listening comparisons - or at least don't need them as much.
 

GrimSurfer

Major Contributor
but the performance it offers is sufficient to sound better than other, presumably better-measuring products.

and loudspeaker distortion is much worse than that of any DAC or amp (although this DAC is getting pretty close...)

The first statement is speculative. There is no evidence presented anywhere in this thread to support such a statement.

The reality is that total distortion is the sum of contributions from every device in the chain, and one device can have a significant effect on the overall distortion of that chain. That's why a conscious decision to "accept" a device with poor distortion specs isn't a good one.

You can read more about this at:

https://www.audiosciencereview.com/.../thd-noise-thd-n-isnt-it-all-just-noise.7864/
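As a rough back-of-the-envelope sketch of that summing (my own illustration, not taken from the linked thread): if you assume each stage's distortion-plus-noise residual is uncorrelated with the others, the contributions combine as a root-sum-of-squares of the linear ratios, and the worst stage dominates:

import math

def db_to_ratio(db: float) -> float:
    # Convert a THD+N figure in dB (e.g. -94) to a linear ratio.
    return 10 ** (db / 20)

def combined_thdn_db(*stage_dbs: float) -> float:
    # Root-sum-of-squares combination of per-stage THD+N figures,
    # assuming each stage's residual is uncorrelated with the others.
    total = math.sqrt(sum(db_to_ratio(db) ** 2 for db in stage_dbs))
    return 20 * math.log10(total)

# A -100 dB DAC feeding a -80 dB amp stage:
print(combined_thdn_db(-100.0, -80.0))   # ~ -79.96 dB

So a chain is only about as clean as its worst link, which is why "accepting" one poor device can set the floor for the whole system.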
 

audimus

Senior Member
The time it takes to switch one AVR for another is huge. As such, there is no scientifically valid audio comparison between two AVRs done that way. Our auditory short-term memory is not remotely that long.

I am not disputing that at all. But it is also missing the boat.

It is like saying that it is very difficult to A/B test-drive two cars to pick one, and that preference for one or the other is therefore subjective in a negative way. Some people think speed dating is a good way to A/B test two prospects, but we all know how indicative of a good relationship that is. :)

There is a third possibility: that on-the-spot A/B testing for sonic differences is a flawed methodology for picking the unit that would please us in the long term. The deficiencies of that methodology are themselves irrelevant in that case.

My personal method of picking audio equipment is actually far from that. Every year, when I get the itch to upgrade or find problems with my current equipment, I try out a few units - but always over at least a couple of weeks, and up to a month, depending on the return period. The listening is not done in some sterile, well-treated room with controlled noise; it ranges from periods of high ambient noise to quiet ones, from low volume in the background to focused, critical listening.

Most of us live in an imperfect world of room noise and dimensions, far-from-perfect speakers, imperfect source material, etc. To me, a 5- or 15-minute audition or an A/B test is like taking a single-position measurement for room correction: not the best way to do it. My approach is to average over all the different conditions and see where I feel pleasure or pain.

I might be a bit more sensitive as an amateur practicing musician with decades of setting up audio equipment for live performances, but my impressions form over those weeks of listening time. Some units induce fatigue to the point that I have to turn them off after 15 minutes or lower the volume. Some beg for the volume to be turned up because it is so enjoyable. Some seem to become harsh as I turn up the volume; others just increase the pleasure as I do. It is not A/B testing of two units; it is getting to know one.

I am fairly certain that all of these reactions could be captured in some measurement, but I know for sure that none of them correlate with the published specs, brand, or price. I have no reason to believe that a perfectly linear audio chain would give the maximum pleasure (other than convincing myself of it because of the numbers), nor do I believe that what appeals to me would similarly appeal to someone else. So, taking measurement perfection as the goal for lack of anything else might be convenient, but is it really the definitive measure?

And this is why I relate these measurements not to audibility but to engineering levels. I like having well-engineered products - but not if they do not appeal to me viscerally, as above.

And that divide is exactly what Audio Science has failed to bridge so far.
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
And that divide is exactly what Audio Science has failed to bridge so far.
There is no divide. There is no failure. For there to be a divide, both assessments would need to follow accepted guidelines. Measurements do. Your long-term subjective testing simply doesn't. As such, there is no reason to expect reconciliation between the two. You can't reconcile an atheist with a Christian. See this article of mine on the failures of long-term listening tests: https://www.audiosciencereview.com/...ity-and-reliability-of-abx-blind-testing.186/

Measurements have no responsibility to account for unreliable audiophile testing. Wanting the two to match is a wish that will never come true.
 

peng

Master Contributor
Forum Donor
The idea that an AVR with Dirac and poor-quality DACs would sound better than one with less sophisticated room correction and better DACs is totally reasonable. This is not a hi-fi product. It probably sounds great in ordinary use.

I agree in general, but I guess it still depends on what is considered a "poor quality" DAC, or whether such a thing even exists by definition (if there is any such definition). Given that the T758 V3's bigger brother, the T777, uses the PCM1690 for its DACs, I believe the T758 V3 more than likely has the same chips. I don't consider the PCM1690 a poor-quality DAC (and I know you are not saying that), and I would say it is more than reasonable to assume that with Dirac Live on board, set up properly, the T758 V3 could sound better than an AVR with better-quality DACs - at least to people who prefer accuracy/neutrality over exaggerated bass and treble. As to why the T758 V3 didn't measure better, we will have to wait and see whether NAD is willing to shed some light on it.

NAD T777's PCM1690 simplified specs:
THD+N: -94 dB, SNR/DNR: 113 dB

Yamaha's mid-range AVRs use the ES9007; its spec sheet isn't Googleable, but it likely has the same specs as the ES9006.
ES9006 specs:
THD+N: -102 dB, DNR: 120 dB

-94 dB = 0.002%
-102 dB = 0.0008%
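For anyone who wants to check those dB-to-percent conversions, here is a quick sketch (Python, purely illustrative):

def db_to_percent(db: float) -> float:
    # Convert a THD+N figure in dB below the signal to a percentage.
    return 10 ** (db / 20) * 100

print(f"{db_to_percent(-94):.4f}%")   # 0.0020%
print(f"{db_to_percent(-102):.4f}%")  # 0.0008%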
 

peng

Master Contributor
Forum Donor
I wonder if that's a special SKU just for Yamaha or ?? The premier line has had the 9006, 9006S, and SABRE9006AS - but there's no ES9007S anywhere on the web other than Yamaha references that I can find. Weird.

I noticed that too way back, but ESS Tech did mention it here: http://www.esstech.com/files/4314/4095/4318/sabrewp.pdf - but as you said, no data sheet can be found on the web. It is most likely a slight variation of the 9006S, with the same key specs.
 

MrGoodbits

Member
Forum Donor
I have no reason to believe that a perfectly linear audio chain would give the maximum pleasure (other than convincing myself of it because of the numbers).
I'm as anxious about audio quality as anyone here. Having a provably "linear audio chain" for headphones reduces my audio neuroses. I can be reasonably sure that what I'm hearing is what's on the recording. If I don't like the sound, I can blame the engineer/artist, or live with it, or move on.

It's more complicated for speakers, of course, but not for the electronics that drive them. When I played with blind A/B testing of amps/preamps/integrateds and realized that the differences I had been hearing went away when I didn't know what was being used - but everything still sounded fantastic - it was a relief. The listening pleasure came back!
 

speedy

Member
Forum Donor
Unless you can make instantaneous A/B switching with levels matched, it is impossible to hear the small differences we are talking about here. And that is not easy to do with multi-channel AVRs and no external equipment.
I'm not saying my methodology for subjectively testing AVRs is perfect, but this is what I've done in the past, and it's been effective for making my own personal decisions...

When comparing AVRs, I use only the front two channels (swapping all the surround channels and subs is too complicated and time-consuming).
The front 2 channels are on banana plugs.
I leave both receivers plugged in and powered on.
I connect my Oppo over either S/PDIF or HDMI into a 1x2 splitter (I have both an S/PDIF one and an HDMI one).
This means that my Oppo is connected to 2 AVRs at once.
I then just move the 4 banana plugs back and forth (carefully making sure that I put the AVR in standby while swapping)... it only takes about 10 seconds, and it helps if you have a helper.
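One thing this procedure doesn't nail down is the level matching from the quote above. Here is a minimal sketch of that step (Python; the voltage readings are hypothetical multimeter numbers taken at the speaker terminals while each AVR plays the same test tone):

import math

def level_offset_db(v_rms_a: float, v_rms_b: float) -> float:
    # Level difference in dB between units A and B, from RMS voltages
    # measured at the speaker terminals with the same test signal.
    return 20 * math.log10(v_rms_a / v_rms_b)

offset = level_offset_db(2.83, 2.62)    # hypothetical readings
print(f"A is {offset:+.2f} dB louder")  # ~ +0.67 dB

Even a half-dB mismatch tends to make the louder unit sound "better", so the offset should be trimmed to within roughly 0.1 dB before trusting any impression.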

...I know that my method is far from perfect, but I've been shocked at how much of a difference there is between the AVRs I've subjectively tested like this in "Direct" mode without room correction. I'm confident that the drastic audible differences I hear wouldn't show up as noise in an objective test.

To my possibly broken ears/brain... there's more to how these pre/pros and receivers sound than a test can show, and I think it's because the way these DACs are implemented can result in very different sound signatures. I have no idea how you can test or prove this aside from just experiencing it, and most people don't typically have multiple receivers/processors lying around.

Regardless, these test results are very interesting, and it only benefits consumers if manufacturers up their game in terms of how their devices test objectively.
 

audimus

Senior Member
There is no divide. There is no failure. For there to be a divide, both assessments would need to follow accepted guidelines. Measurements do.

...

Measurements have no responsibility to account for unreliable audiophile testing. Wanting the two to match is a wish that will never come true.

I am still not getting the point across. It is not about rationalization between a limited formal system and experience; it is about the inadequacy of the formal system to capture (and quantify) experience.

By forcing the test back into that limited formal system (of detecting distortion), over a short period or a long one, you cut off the possibility of discovering what might be influencing the experience (purely acoustically, not through other factors).

Let me take an extreme example to make the point.

Let us say I am very sensitive above a certain frequency - say around 14 kHz, to be concrete - and so react to sound output accordingly. An audio chain that affects (enhances or diminishes) these frequencies in a certain way is then more (or less) pleasurable than another. There is no test in our existing audio science to arrive at or establish that conclusion. But that does not imply that such a correlation does not exist. It has nothing to do with whether I can detect distortion numbers in a blind test. The question is whether there is a correlation between what I experience as good and some particular characteristic (when neither you nor I necessarily know what that is to start with) that can be shown in a controlled test.

We already have anecdotal evidence, from the use of different target curves, that different people react differently to different FR signatures, so it is not just an academic possibility that our experience is tied to different characteristics.

To dismiss any such correlation as an individual aberration, and therefore not of interest, is like scientists modeling the physical world while ignoring every twist in it that does not lead to a simple, generalizable, manageable model. Pretty soon they have a model that does not reflect physical reality - of academic interest, but not of much practical use.

That is the divide I am referring to.
 

audimus

Senior Member
I'm as anxious about audio quality as anyone here. Having a provably "linear audio chain" for headphones reduces my audio neuroses. I can be reasonably sure that what I'm hearing is what's on the recording. If I don't like the sound, I can blame the engineer/artist, or live with it, or move on.
There is a problem with that. Our ears are not perfect microphones, and their response to different frequencies even varies over time. So, in a way, NOBODY hears (or, more precisely, perceives) exactly what is on the recording, or even what the musician (or the recording engineer) hears, even if all of them were using the same equipment. So I would claim that goal is illusory.

If knowing the "numbers" influences your perception, then you are exhibiting the very same subjective bias as sighted perception. Knowledge of the numbers influencing perception is qualitatively no better or worse than knowledge of the brand influencing it, because that knowledge predisposes you.
 

BDWoody

Chief Cat Herder
Moderator
Forum Donor
I am still not getting the point across. It is not about rationalization between a limited formal system and experience; it is about the inadequacy of the formal system to capture (and quantify) experience.

You are getting your point across fine. It's the agreeing part you aren't getting.

Before you assert the inadequacy of the current system, maybe more than anecdotal, completely uncontrolled impressions are needed for anyone to see the need for what you are trying to describe.
 

MrGoodbits

Member
Forum Donor
If knowing the "numbers" influences your perception, then you are exhibiting the very same subjective bias as sighted perception. Knowledge of the numbers influencing perception is qualitatively no better or worse than knowledge of the brand influencing it, because that knowledge predisposes you.

"Knowing the numbers" tells us that a system is measurably accurate within the bounds of audibility. "Failing to tell the difference in a blind A/B test" tells us that nothing else is being missed by the measurements. It all comes down to the blind A/B test. You probably have to take one before you can become a true disbeliever. And disbelieving will set you free.

Someone in this forum likes to say: “if you can’t hear it without peeking, it’s not real”. I’m going to add that to my sig if I can properly attribute it.
 

digicidal

Major Contributor
There's also a point at which abandoning a simple, repeatable testing methodology and comparison protocol becomes more of a hindrance to applying the data than the expanded specificity is an improvement.

Is there a potential for over-simplification in selecting distortion levels as the primary metric? Possibly... but then again, distortion is also the primary degradation factor in the audio signal chain, at least outside of significant deviations in frequency response. All but the most horrible products in the category (AVRs, or more broadly, integrated amplifiers) are able to maintain complete linearity as far as FR is concerned - so looking primarily at distortion makes a great deal of sense to me.

Now, if you show me something where FR accuracy is a huge problem, I'll cross "that divide" and say THD is unimportant (relatively, at least) - but then you're talking about something so horribly designed that it could reasonably be said to have failed completely as an audio reproducer! :eek: Outside of speakers, that is - but that's what DSP is for.
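To put a concrete face on that single metric, here is a minimal sketch (Python, with a made-up signal) of what a THD+N number boils down to: project out the fundamental, and whatever remains - harmonics plus noise - is ratioed against it:

import numpy as np

fs, f0, n = 48_000, 1_000, 48_000   # sample rate, fundamental, 1 s capture
t = np.arange(n) / fs
# Test signal: 1 kHz fundamental, a 3rd harmonic at -80 dB, a little noise
x = np.sin(2*np.pi*f0*t) + 1e-4*np.sin(2*np.pi*3*f0*t) + 1e-5*np.random.randn(n)

# Project out the fundamental; everything left is "distortion + noise"
s, c = np.sin(2*np.pi*f0*t), np.cos(2*np.pi*f0*t)
fund = (x @ s)/(s @ s)*s + (x @ c)/(c @ c)*c
residual = x - fund

thdn_db = 20*np.log10(np.std(residual)/np.std(fund))
print(f"THD+N ~ {thdn_db:.1f} dB")   # close to the -80 dB we injected

An FR error, by contrast, wouldn't even show up in this number, which is why the two metrics answer different questions.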
 

audimus

Senior Member
You are getting your point across fine. It's the agreeing part you aren't getting.

Before you assert the inadequacy of the current system, maybe more than anecdotal, completely uncontrolled impressions are needed for anyone to see the need for what you are trying to describe.

There is already a significant number of observations for which tests do not exist.

Right now, it is as if someone claims to see green men, and we force them to participate in a test of whether they can see blue men, and then conclude that they cannot.

If you look at the history of science, models emerge to explain observations - sometimes from scientists in controlled situations, sometimes from regular people in uncontrolled ones. In the latter case, you devise a test to see whether the observations can be replicated in a controlled setting; you do not demand that the regular folk create their own controlled tests before they may report observations. Of course, you need some reasonably credible observations to start with.

But we already have some, from the spread of DRC systems. For example, many AVRs give you the choice of correcting to a flat target or to some "reference" target curve. Both smooth the resonances, etc., but they affect tonal balance differently. There is clear evidence in all the forums for this equipment that people have preferences for one or the other; heck, even family members differ in their preferences on the same equipment. Other than calling these subjective preferences, there is no science-based explanation for them. You can either claim that such a subjective preference would not really exist if the test were controlled, or have a theory that explains the correlation between what someone prefers and the FR curve that makes it pleasing for that person. Otherwise, you are not accounting for real-world experience at all.

So a valid inquiry is whether people can reproduce that preference in a controlled test. For example, play the sound for each of the curves on the same equipment, in a fully controlled test, in a random repeating sequence between the two, and see whether they can pick the one they prefer in a statistically significant way.

If they don’t, then that preference was illusory. If they do, then it needs a theory for why that is the case and it is easy to extend that into why people might prefer one unit over another. Not because of distortion numbers.

As far as I know, none of the existing studies or tests do exactly that. Denying these anecdotal observations because the typical consumer cannot run a controlled test rigorous enough to establish their reality is not a defensible position.
 