
Audiofools turned objectivists or vice-versa?


rdenney (Major Contributor, Forum Donor)
Actually yes they do. They refer to it as the Marantz sound. They have internal circuitry to make it work the way it does. Some people like the Marantz sound and some like the Denon sound. Denon makes conventional, flat-response amps, and Marantz's are modified. It is very nice of Marantz to advertise it and offer it to people who like it. From the charts supplied in this thread I would "think" it might be a slightly warmer sound.
Cool. So, the Marantz "sound" is a rolled-off top octave. I can get behind that, to be honest. I still love the sound of my Advent loudspeakers, but they provide the "Marantz sound" without the Marantz. :)

As to the topic of whether any audible effect can be measured, that gets litigated here repeatedly. The assertion is that any audible effect can be measured, and furthermore that the measurements we make are sufficient to identify them. The "Marantz sound" is audible, and it is also measurable and repeatable, and the experiment described here (with the Butterworth filter) demonstrates that.
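For a sense of scale, here is a minimal sketch (not the actual Marantz circuitry or the exact filter used earlier in this thread) of what a gentle top-octave roll-off looks like, modeled as a first-order Butterworth low-pass with a hypothetical 20 kHz corner:

```python
# Minimal sketch: a gentle top-octave roll-off modeled as a first-order
# Butterworth low-pass. The 20 kHz corner is a hypothetical stand-in,
# not a published Marantz specification.
import numpy as np
from scipy import signal

fs = 96_000                       # sample rate, Hz
b, a = signal.butter(N=1, Wn=20_000, btype="low", fs=fs)

# Magnitude response at a few audio-band frequencies.
freqs = np.array([1_000, 5_000, 10_000, 15_000, 20_000])
_, h = signal.freqz(b, a, worN=freqs, fs=fs)
for f, mag_db in zip(freqs, 20 * np.log10(np.abs(h))):
    print(f"{f:>6} Hz: {mag_db:6.2f} dB")
```

The attenuation stays well under a decibel through the midrange and only reaches 3 dB at the corner, which fits "slightly warmer" better than "obviously rolled off".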

But here are two problems:

1. Manufacturers often claim ultimate linearity, and furthermore, ultimate fidelity and accuracy (though, in any reasonable sense, linearity = fidelity). But then they claim that their secret sauce makes them different from all the other companies that make the same claims. Reviewers reinforce this claim by adding adjectives like "wow!", "obvious", "life-changing", "my wife heard it from the shopping mall", "my friend who cares nothing about audio was blown away", and so on. How does one sort out this apparent tension between fidelity and a distinguishing effect? Any formal logic would insist that a distinguishing effect either precludes the possibility of fidelity in the device in question, or implies that all others lack fidelity. I think it takes something more than Harold Hill espousing the "think method" to sort this out. And I don't think rejecting the notion that all others lack fidelity requires any defense.

2. If one claims to hear a difference, but measurements (that actually cover the use case being evaluated) don't reveal it, then it stands to reason that the perception of that difference is the first thing to confirm with rigor. The reason for that is easy: Over and over and over again, people claim an obvious difference but are unable to demonstrate that claim when subjected to properly controlled subjective testing. Or, even when they can reliably detect a difference, they can't identify a preference (which is the second step). Usually, they can't detect the difference, or their detection has low reliability (and 65 out of 100 is low reliability, even if we can confidently reject the likelihood that it is mere guessing). If I can't notice a difference more often than 65 out of 100 times, then I think the adjectives I'd have to use would be "subtle", "notice it sometimes", "won't change your life even if you do detect the difference", etc.
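To put numbers on that, here is a minimal sketch (using SciPy, and assuming 100 independent forced-choice trials against 50% chance) of why 65/100 rejects guessing yet still isn't "obvious":

```python
# Minimal sketch: 65 correct out of 100 forced-choice trials, tested
# against pure guessing (p = 0.5).
from scipy.stats import binomtest

result = binomtest(k=65, n=100, p=0.5, alternative="greater")
print(f"p-value vs. guessing:    {result.pvalue:.4f}")  # around 0.002
print(f"observed detection rate: {65 / 100:.0%}")       # 65%
```

The result is very unlikely to be luck, but the listener still misses the difference roughly a third of the time.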

I've argued in this thread that many tests don't actually cover the use case being evaluated, in that the comparison is made when the amps are clipping some amount that should be noticeable, while the claim insists that both amps be compared within their linear operating range.

As to what pleases us, the placebo effect doesn't claim that we didn't get better; it only demonstrates that it wasn't the drug that made us get better. If the ownership experience is enhanced by factors such as brand, aesthetics, price, story, history, exclusivity, reputation, peer acceptance and affirmation, expert review, power draw, increased electric bills, etc., then that may affect our perception of the sound. That absolutely counts! But it is orthogonal to the measured and perceived audio output.

That said, those other factors demand loyalty. I have a good friend who runs a high-end amplifier manufacturing company. I trust his ears and experience, and I don't challenge his advertising claims, because we are friends. The people who buy his stuff are grown-ups. His advice to me, given that I can't afford the stuff that company makes anyway, has always been balanced, reasonable, affirming, and deeply knowledgeable of the industry. But his business is built on quality writ differently than plain measured and audible performance. He is not a charlatan, and the people who buy his stuff will have an ownership experience they will enjoy, and they will tell others how much they enjoy it. It's when they conflate that high-end ownership experience with measurable audible effects that we run into trouble.

We cast this as a Grand Conflict between the data-driven and the feeling-driven (even if they believe what they feel is based on audible but not measurable effects, or whatever). But both effects work on each of us. It seems to me that in this forum we try to parse out these two influences, and then focus on the data-driven effects. And those who are driven by how they feel about their audio equipment feel disrespected, because they believe their feelings are rooted in data when they are not.

So, back to topic, it's not that Atkinson at Stereophile turned objectivist when he sold his high-end amp after a blind test, and then turned back to subjectivism when he decided that had been a mistake. It's that he recognized that by any objective measure, the cheaper amp provided the same performance, but not the same satisfaction. His description (at an RMAF) was "I found myself not listening to music." That may be because of an audible effect, but it can also be because the ownership experience wasn't providing that overall satisfaction. We think we can separate the demonstrable audio effect from the ownership experience effect, but we simply can't.

Rick "who definitely chose products in part to get an approving nod from audio geeks of old" Denney
 

gsp1971 (Senior Member, Forum Donor)
and 65 out of 100 is low reliability, even if we can confidently reject the likelihood that it is mere guessing
65% can be interpreted as mere guessing with a few lucky guesses. :)

We think we can separate the demonstrable audio effect from the ownership experience effect, but we simply can't.
Rick, isn't that what blind testing is supposed to do? Separate the audio effect from the ownership experience? If you can't reliably identify the audio effect, then your ownership experience / satisfaction stems from other things, non-audio related.

Rick "who definitely chose products in part to get an approving nod from audio geeks of old" Denney
Noticing the use of past tense: 'chose'. Implying it is no longer happening.

Well said, though!
 

rdenney (Major Contributor, Forum Donor)
1. 65% can be interpreted as mere guessing with a few lucky guesses. :)


2. Rick, isn't that what blind testing is supposed to do? Separate the audio effect from the ownership experience? If you can't reliably identify the audio effect, then your ownership experience / satisfaction stems from other things, non-audio related.


3. Noticing the use of past tense: 'chose'. Implying it is no longer happening.

Well said, though!

Numbered for ease of response.

1. Not really. The more trials one conducts, the smaller the proportional departure from chance needed to demonstrate that the choices are not mere guessing (or random). 65/100 will reject the null hypothesis with high confidence. But that doesn't mean the listener will detect a difference any more often than two thirds of the time, which militates against any claim that the difference is obvious. We use that word "reliably" to mean two different things here: the first is whether the trials reliably represent an actual effect and are not mere random guessing, and the second is whether the listener can reliably detect the difference in any one trial. The former does not assure the latter, but a failure of the former eliminates the possibility of the latter. It seems to me that any effect that would draw the kinds of hyperbole we usually see should be detectable 100% of the time, or very nearly so.

2. Yes, blind testing is one method by which we control for the effects of feelings versus the effects of hearing.

3. Don't draw too much from that--it simply means that I've already bought my stuff and am not really in a buying mode at present. But before I bought my latest serious item (the Revel F12 speakers), I definitely recognized that Revel is a respected brand, and that the F12s in particular were very positively reviewed by those whose experience I trust (including Kal Rubinson). My previous purchase, a 20-year-old Naim CD5, was driven mostly by its Philips transport mounted on a manual drawer, eliminating belts and the usual drawer machinery. But it wasn't lost on me that it was reviewed most favorably and is a name that garners respect. That said, I can't tell any difference in sound between it and any of my other CD players. Nor did I expect to.

Rick "whose latest purchase--a JDS Labs Atom head-amp--would get an approving nod from any audio geek" Denney
 

mhardy6647 (Grand Contributor)
For looks, that Kenwood towards the bottom looks pretty good!
It is indeed, isn't it?
That one, model KT-9900 (not to be confused with another, later, and even nicer KT-9900), was a gift from our son, who stumbled upon it in a thrift store or some similar locale. It is a KT-8300 in bronzeface. ;)
The Kenwood is a very good tuner from an FM radio perspective, and it even sounds quite good. The only better-sounding tuners in the house, in fact, are the Mac MR-67 and those funny-looking little Sherwoods. The Sherwoods lack the bass extension of the MR-67 and the Kenwood. The Scotts are roughly on par with the Sherwoods, but on a sound-quality-per-dollar basis the Sherwoods are untouchable, IMO.

The trick, of course, is to find FM stations that still broadcast with some attention to quality. We are fortunate to have a few here.

I guess this is sort of on-topic, in that there's both a subjective & objective component (and, again, IMO) to the performance of a radio tuner (AM or FM, in fact). The https://www.fmtunerinfo.com/ folks' preoccupation with the "radio" performance of their "shootout" tuners, to me, puts some tuners that are pretty unsatisfying to listen to at and near the top of their list (with concomitant impact on their prices, even in today's nearly moribund tuner market). I even have one of them, in fact. ;)

[Photo: "001a" by Mark Hardy, on Flickr -- the Sansui TU-717 at top left]
 

danadam (Major Contributor)
65% can be interpreted as mere guessing with a few lucky guesses.
This can give you some perspective on how lucky you have to be to get 65 out of 100:

http://www.shodor.org/interactivate/activities/Coin/

Set "Number of Tosses" to 100, "Show Cumulative Stats" to No and "Display Results" to Table. Then start clicking "Toss 'em!" button and tell us how long did it take you to get 65 or more :)
 

gsp1971 (Senior Member, Forum Donor)
This can give you some perspective on how lucky you have to be to get 65 out of 100:

http://www.shodor.org/interactivate/activities/Coin/

Set "Number of Tosses" to 100, "Show Cumulative Stats" to No and "Display Results" to Table. Then start clicking "Toss 'em!" button and tell us how long did it take you to get 65 or more :)

Notice I put a smiley face next to my comment, meaning I was joking. Don't worry, I am perfectly aware of the statistical significance of the difference between 50% and 65%. I should be, given my job.
 

rdenney (Major Contributor, Forum Donor)
I...

I guess this is sort of on-topic, in that there's both a subjective & objective component (and, again, IMO) to the performance of a radio tuner (AM or FM, in fact). The https://www.fmtunerinfo.com/ folks' preoccupation with the "radio" performance of their "shootout" tuners, to me, puts some tuners that are pretty unsatisfying to listen to at and near the top of their list (with concomitant impact on their prices, even in today's nearly moribund tuner market). I even have one of them, in fact. ;)
...

That preoccupation's relevance depends on one's location. Where I live, I have line of sight to precisely zero FM stations, and none of them come in with full quieting. I don't care how good the audio section is--if the RF reception isn't stellar, the audio will be poor.

If I lived in line of sight of all relevant station antennas, then the audio differences would become much more important. But then you run into selectivity issues picking a weaker station out of a crowd of stronger stations.

So, selectivity and sensitivity are often necessary, but perhaps not sufficient, for making a great tuner.

My best tuner is my Carver TX11a, and it's great because it controls marginal reception noise better than the others in the stable (including a Kenwood KT5300, an SAE T102, and a Proton 440). The SAE came in second. But I have to use noise-reduction tricks even with the strongest and cleanest stations I receive, and that's where the Carver excels (among vintage tuners, that is). (I use a very large outside antenna on a tower, but there is a ridge between me and the transmitting towers.)

I think the tuner most inflated in price as a result of FMTunerInfo (for whom I have written a shootout) is the Kenwood L-02T. And it was revered as much for its audio as for its RF reception. I'm betting that one wouldn't be too hard to listen to, but you are right that the price of it is out of my range.

But the audio sound reviews do tend to be poorly controlled subjective impressions, including the one I wrote.

Rick "in the presence of multipath, fundamental overload, and multiplexing static, a so-so audio section seems of secondary concern" Denney
 

BluesDaddy (Senior Member)
So, back to topic, it's not that Atkinson at Stereophile turned objectivist when he sold his high-end amp after a blind test, and then turned back to subjectivism when he decided that had been a mistake. It's that he recognized that by any objective measure, the cheaper amp provided the same performance, but not the same satisfaction. His description (at an RMAF) was "I found myself not listening to music." That may be because of an audible effect, but it can also be because the ownership experience wasn't providing that overall satisfaction. We think we can separate the demonstrable audio effect from the ownership experience effect, but we simply can't.

Rick "who definitely chose products in part to get an approving nod from audio geeks of old" Denney
I confess that no matter what amplification I've had in my system(s) - I have three operational now and two or three amps/receivers stored away in a closet - after setting things up the way I wanted, within five minutes I've always been "listening to music". It is the music that has always mattered to me, and the system has always simply been a means to an end. Oh, I can't say there aren't amps I'd love to own - as much for aesthetic and nostalgic reasons as anything - McIntosh has always held a special attraction for me, probably because their ads were so prominent in the late 60s and early 70s audio magazines, and when you filled out those cards in Stereo Review requesting literature, McIntosh always sent this huge full-color "catalog". I've just never been willing, even though able, to plunk down the required cash to outfit my system with Mac amps, no matter how much I love the look of those meters, et al.

But recognizing the difference between a perceptual experience - which changes not simply with the equipment but also with the time of day, day of week, month of year, the weather, and what brown liquid is in one's glass - and the actual sound quality of the equipment playing is something that many don't do; the two are too often conflated.

Edit: I will also note, and have mentioned this elsewhere, that for most of my "music only" listening sessions I'm playing LPs. Not because I think they have better audio quality than digital - I'm fully convinced they don't - but because to me they sound very close (and with quiet vinyl, of which I have a good bit, I can't tell the difference from the digital versions I have), and I prefer the LP experience for a variety of NON-sound-quality reasons that I sum up as "it's more engaging for me". But, again, you'll never find me arguing that my preference equates to superior SQ or trying to convince others that it is so.
 

gsp1971 (Senior Member, Forum Donor)
I prefer the LP experience for a variety of NON-sound quality reasons that I sum up as "it's more engaging for me".

I have always thought that if they took a standard CD and put it in a case the size of a vinyl sleeve so that the album art is larger and you can print liner notes on the back cover, it would be super.
 

BluesDaddy (Senior Member)
I have always thought that if they took a standard CD and put it in a case the size of a vinyl sleeve so that the album art is larger and you can print liner notes on the back cover, it would be super.
And maybe a player where you have to place the laser on the spinning disc and can't skip ahead remotely! ;)
 

gsp1971 (Senior Member, Forum Donor)
And maybe a player where you have to place the laser on the spinning disc and can't skip ahead remotely! ;)

Only the CD would be upside down, but after the 2nd glass of Chivas Regal, who cares? ;)

 

Wes (Major Contributor, Forum Donor)
65% can be interpreted as mere guessing with a few lucky guesses. :)
!

not if the sample size is large

the concept is sometimes called the power of the test
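A minimal sketch of that idea, assuming (purely for illustration) a listener whose true detection rate is 65% and a p < 0.05 criterion:

```python
# Minimal sketch of test power: how likely is a listener with a true
# 65% detection rate to clear a p < 0.05 threshold at various trial
# counts? The 65% rate and the trial counts are illustrative only.
from scipy.stats import binom

def power(n, true_rate=0.65, alpha=0.05, chance=0.5):
    # Smallest score that rejects "just guessing" at the alpha level...
    k_crit = int(binom.ppf(1 - alpha, n, chance)) + 1
    # ...and the chance our 65% listener reaches that score.
    return binom.sf(k_crit - 1, n, true_rate)

for n in (16, 50, 100):
    print(f"n = {n:>3} trials: power = {power(n):.2f}")
```

With 16 trials a genuine 65% detector usually fails to reach significance; with 100 trials it succeeds about nine times out of ten.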
 

MarkS (Major Contributor)
You didn't see the original P value. He updated his P value. It was something with a lot more zeros, something like 0.000001, as I recall.

So indeed he listened and updated his numbers. Good for him.

Now, that is not the only embarrassing thing in his response.

He responded to my ABX result with this:

Quote: "You have compared the Marantz to the Denon? I don't think so."

You be the judge if this is embarrassing or not.

I accidentally typed in an extra zero. I wrote "p<0.0002" when I should have written "p<0.002".

As for the Marantz vs Denon, I had written that I did not believe that anyone could tell these two receivers apart in a blind listening test. Although I did not say so explicitly, I meant by listening to speakers powered by these two receivers, playing music. This is, after all, what matters to purchasers of equipment: do they sound different when used as intended?

Listening over headphones to artificially equalized test signals, which is what you did, is not at all the same thing, IMO.
 

Pdxwayne (Major Contributor)
I accidentally typed in an extra zero. I wrote "p<0.0002" when I should have written "p<0.002".

As for the Marantz vs Denon, I had written that I did not believe that anyone could tell these two receivers apart in a blind listening test. Although I did not say so explicitly, I meant by listening to speakers powered by these two receivers, playing music. This is, after all, what matters to purchasers of equipment: do they sound different when used as intended?

Listening over headphones to artificially equalized test signals, which is what you did, is not at all the same thing, IMO.
Hmm... it seems no one is really paying attention to the details of my ABX results in
https://www.audiosciencereview.com/...tracking-my-listening-tests.24364/post-822052

: (

My two most difficult ABX tests were with speakers and a real music clip. Both are more difficult than the 0.55 dB difference in the amp comparison.

One is a 0.3 dB slope test with a real music clip and speakers.

The other is a 0.2 dB one-channel-drop test with a real music clip and speakers.
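For anyone who wants to try something like this at home, here is a minimal sketch of preparing the 0.2 dB one-channel-drop comparison files (illustrative only, not the exact procedure used above; the soundfile library and "music.wav" are assumptions):

```python
# Minimal sketch: make a copy of a stereo clip with one channel dropped
# by 0.2 dB, then ABX the original against the copy.
# "music.wav" is a placeholder for any stereo file.
import soundfile as sf

audio, fs = sf.read("music.wav")            # shape: (samples, 2)
modified = audio.copy()
modified[:, 1] *= 10 ** (-0.2 / 20)         # right channel down 0.2 dB

sf.write("music_ref.wav", audio, fs)
sf.write("music_drop.wav", modified, fs)    # load both into an ABX tool
```

Load both files into an ABX comparator (foobar2000's ABX component, for example) and see how you do.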
 

MarkS (Major Contributor)
Hmm... it seems no one is really paying attention to my ABX results
The only question that interests me is whether actual, real-world components (like the Denon and Marantz receivers) can be distinguished.
 

Pdxwayne (Major Contributor)
The only question that interests me is whether actual, real-world components (like the Denon and Marantz receivers) can be distinguished.
This whole thing started because one person said he heard a difference.

Real measurements were given. The measurements show real frequency-response differences between the amps. You were given ABX test results that prove that the measured differences are big enough to be audible.

Now you ask the question again?

I am done with you. You are not worth my time. Bye bye.
 

MarkS (Major Contributor)
This whole thing started because one person said he heard a difference.

Real measurements were given. The measurements show real frequency-response differences between the amps. You were given ABX test results that prove that the measured differences are big enough to be audible.

Now you ask the question again?

I am done with you. You are not worth my time. Bye bye.
I am glad you are done, because once again your alleged facts are wrong.

First, the applied filter produced larger differences than the difference between the amps at some frequencies.

Second, there is no guarantee that the filter does not produce other audible artifacts, such as phase distortion (one way to check for that is sketched below).

Third, the person who passed this ABX test did so with pink noise, not music.
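On the second point, here is a minimal sketch of how one could check a candidate EQ filter for phase effects before building an ABX on it; the first-order low-pass below is a hypothetical stand-in for whatever filter was actually applied:

```python
# Minimal sketch: inspect magnitude, phase-related group delay of a
# candidate EQ filter. The filter below is hypothetical, not the one
# used in the tests discussed above.
import numpy as np
from scipy import signal

fs = 48_000
b, a = signal.butter(N=1, Wn=20_000, btype="low", fs=fs)

freqs = np.array([1_000, 5_000, 10_000, 15_000, 20_000])
_, h = signal.freqz(b, a, worN=freqs, fs=fs)
_, gd = signal.group_delay((b, a), w=freqs, fs=fs)      # in samples

for f, resp, d in zip(freqs, h, gd):
    print(f"{f:>6} Hz: {20 * np.log10(abs(resp)):6.2f} dB, "
          f"group delay {d:5.2f} samples")
```

A linear-phase FIR with the same magnitude target would sidestep the phase question entirely, at the cost of some latency.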
 

pogo (Major Contributor)
What I always miss in these discussions is the attack and decay (transient) behavior across the entire frequency range of complex audio signals. Exactly this explains why, for example, the low-frequency range can sound more intense. A pure frequency-response curve cannot show this! A possible measurement can be found here:
Link
 