Actually, yes they do. They refer to it as the Marantz sound, and they have internal circuitry to make it work the way it does. Some people like the Marantz sound and some like the Denon sound. Denon makes conventional, flat amps, and Marantz voices theirs differently. It is very nice of Marantz to advertise it and offer it to people who like it. From the charts supplied in this thread, I would "think" it might be a slightly warmer sound.
Cool. So, the Marantz "sound" is a rolled-off top octave. I can get behind that, to be honest. I still love the sound of my Advent loudspeakers, but they provide the "Marantz sound" without the Marantz.
To the topic of whether any audible effect can be measured, that gets litigated here repeatedly. The assertion is that any audible effect can be measured, and furthermore that the measurements we make are sufficient to identify them. The "Marantz sound" is audible, and it is also measurable and repeatable, and the experience shown here (with the Butterworth filter) demonstrates that.
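For the curious, a "rolled-off top octave" is easy to put numbers on. Here's a minimal sketch of the ideal analog Butterworth magnitude response; the cutoff frequency and filter order are hypothetical choices for illustration, not Marantz's actual (unpublished) voicing:

```python
import math

def butterworth_mag_db(f_hz, fc_hz, order):
    """Magnitude (dB) of an ideal analog Butterworth low-pass at f_hz."""
    ratio = f_hz / fc_hz
    return 20.0 * math.log10(1.0 / math.sqrt(1.0 + ratio ** (2 * order)))

# Hypothetical cutoff and order, purely for illustration.
fc = 20_000.0
for f in (5_000, 10_000, 15_000, 20_000):
    print(f"{f:>6} Hz: {butterworth_mag_db(f, fc, order=2):6.2f} dB")
```

By definition the response is down 3 dB at the cutoff, and a 2nd-order filter falls off at 12 dB per octave above it, so a gentle shelf like this is trivially measurable and repeatable, exactly as the Butterworth example in this thread shows.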
But here are two problems:
1. Manufacturers often claim ultimate linearity, and beyond that, ultimate fidelity and accuracy (though, in any reasonable sense, linearity = fidelity). But then they claim that their secret sauce distinguishes them from all the other companies making the same claims. Reviewers reinforce this with language like "wow!", "obvious", "life-changing", "my wife heard it from the shopping mall", "my friend who cares nothing about audio was blown away", and so on. How does one resolve this apparent contradiction between fidelity and a distinguishing effect? Any formal logic would insist that a distinguishing effect either precludes fidelity in the device in question, or implies that all the others lack fidelity. I think it takes something more than Harold Hill espousing the "think method" to sort this out. And I don't think rejecting the notion that all others lack fidelity requires any defense.
2. If one claims to hear a difference, but measurements (that actually cover the use case being evaluated) don't reveal it, then it stands to reason that the perception of that difference is the first thing to confirm with rigor. The reason for that is easy: Over and over and over again, people claim an obvious difference but are unable to demonstrate that claim when subjected to properly controlled subjective testing. Or, even when they can reliably detect a difference, they can't identify a preference (which is the second step). Usually, they can't detect the difference, or their detection has low reliability (and 65 out of 100 is low reliability, even if we can confidently reject the likelihood that it is mere guessing). If I can't notice a difference more often than 65 out of 100 times, then I think the adjectives I'd have to use would be "subtle", "notice it sometimes", "won't change your life even if you do detect the difference", etc.
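That 65-out-of-100 figure can be put in statistical terms with a one-sided binomial test. A sketch using only the Python standard library, assuming a two-alternative forced-choice test where pure guessing succeeds half the time:

```python
from math import comb

def binomial_p_value(successes, trials, p_chance=0.5):
    """One-sided P(X >= successes) under the null hypothesis of guessing."""
    return sum(comb(trials, k) for k in range(successes, trials + 1)) * p_chance ** trials

# 65 correct out of 100: small p-value, so mere guessing is rejected,
# yet 65% correct is still only modest discrimination.
print(f"p = {binomial_p_value(65, 100):.4f}")
```

So both halves of the parenthetical hold at once: the result is statistically distinguishable from chance, while the listener still fails to hear the difference about a third of the time.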
I've argued in this thread that many tests don't actually cover the use case being evaluated: the comparison is made while the amps are clipping by some audible amount, even though the claim under test presumes both amps are operating within their linear range.
As to what pleases us, the placebo effect doesn't claim that we didn't get better, it only demonstrates that it wasn't the drug that made us get better. If the ownership experience is enhanced by factors such as brand, aesthetics, price, story, history, exclusivity, reputation, peer acceptance and affirmation, expert review, power draw, increased electric bills, etc., then that may affect our perception of the sound. That absolutely counts! But it is orthogonal to the measured and perceived audio output.
That said, those other factors demand loyalty. I have a good friend who runs a high-end amplifier manufacturing company. I trust his ears and experience, and I don't challenge his advertising claims, because we are friends. The people who buy his stuff are grown-ups. His advice to me, given that I can't afford the stuff that company makes anyway, has always been balanced, reasonable, affirming, and deeply knowledgeable of the industry. But his business is built on quality writ differently than plain measured and audible performance. He is not a charlatan, and the people who buy his stuff will have an ownership experience they will enjoy, and they will tell others how much they enjoy it. It's when they conflate that high-end ownership experience with measurable audible effects that we run into trouble.
We cast this as a Grand Conflict between the data-driven and the feeling-driven (even if the feeling-driven believe what they feel is based on audible but not measurable effects, or whatever). But both effects work on each of us. It seems to me that in this forum we try to parse out these two influences, and then focus on the data-driven effects. And those who are driven by how they feel about their audio equipment feel disrespected, because they believe their feelings are rooted in data when they are not.
So, back to topic, it's not that Atkinson at Stereophile turned objectivist when he sold his high-end amp after a blind test, and then turned back to subjectivism when he decided that had been a mistake. It's that he recognized that by any objective measure, the cheaper amp provided the same performance, but not the same satisfaction. His description (at an RMAF) was "I found myself not listening to music." That may be because of an audible effect, but it can also be because the ownership experience wasn't providing that overall satisfaction. We think we can separate the demonstrable audio effect from the ownership experience effect, but we simply can't.
Rick "who definitely chose products in part to get an approving nod from audio geeks of old" Denney