
oh dear... "Cable Pathways Between Audio Components Can Affect Perceived Sound Quality"

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,633
Likes
240,673
Location
Seattle Area
@amirm Is this not an indictment of the AES for publishing this sort of low quality research in their journal, or is that not how things work?
It is a stain on their reputation. Need to find out what has changed to allow such things to get published.
 

Shazb0t

Addicted to Fun and Learning
Joined
May 1, 2018
Messages
643
Likes
1,231
Location
NJ
Huh back,

In the abstract the author clearly states that his goal is to test "cables and topology." That may be not your goal but it's his.

In your next paragraph, you replace the researcher's test objective with your own, which is irrelevant in this context. You can define the parameters of your own research, but not the goals of others. The setup he used was certainly minimalist, and it's a very difficult argument to make that the Berkeley Audio and Spectral SE and XLR outputs/inputs are expected to sound audibly different at these cable lengths.

As I have explained, he tested an outlier or "design space corner" to determine that analogue pathways could make a difference.

And he did that, unless he falsified his data or someone can show that the Spectral amp became unstable with these cables (highly unlikely as he did properly terminate the speaker cables with an appropriate network).
You're absolutely wrong, bordering on being purposefully obtuse. It is very clear the goal of this "experiment" is to demonstrate audible differences between simple interconnects. The test was set up in a multitude of obviously deliberate ways in order to ensure this outcome. It is a joke. Defense of it is a joke.
 

amirm

One is that his switching method is faulty. I would trust the transparency of the Spectral amp's input switch (one contact, a couple of solder joints per channel) over an external box (one contact, a couple of solder joints and a pair of connectors, plus shielding). As to the "tell" aspect, that implies essentially fraud and I don't go there. His subjects were grad students, not a secret army of subjectivist forum enthusiasts.
Fraud??? How did you make that leap? We are talking about experimental errors that he did not test for or catch. We recently had long threads on the difficulties of setting these tests up to keep people from figuring out which is which. When Harman tested headphones blind, they quickly figured out that the listeners were able to tell which headphone was which through feel. So they both lived with this and created alternative tests (surrogate headphone EQ). Nothing about such a fault in the experiment led to an accusation of fraud.

As to your attestation of the fidelity of the Spectral amp, you have no evidence of such transparency. And remember, this is not just switching between one socket in the back versus another: it is switching between two buffered paths internally, since one is balanced and the other is not. So there is tons more complexity there than a simple external AB switch for RCA cables. The key here is that an assumption was made, just as you are making one, but not confirmed.

Remember, his students were forced to think there were differences given those two distinct choices of sound, piano versus violin. So if they listened and thought there was no difference, they would feel like they were failing altogether. Any "tells" that helped them identify which is which to satisfy the guess of their professor would be a relief and sought after.

A third choice that "there is no difference" would have provided a relief valve of sorts against the above although not sufficiently so.
 

DimitryZ

Addicted to Fun and Learning
Forum Donor
Joined
May 30, 2021
Messages
667
Likes
342
Location
Waltham, MA, USA
You're absolutely wrong, bordering on being purposefully obtuse. It is very clear the goal of this "experiment" is to demonstrate audible differences between simple interconnects. The test was set up in a multitude of obviously deliberate ways in order to ensure this outcome. It is a joke. Defense of it is a joke.
I think I explained this the best I can, coming from a lifetime of aerospace testing.
 

Shazb0t

I think I explained this the best I can, coming from a lifetime of aerospace testing.
And it's been laid out for you by several people why you're clearly incorrect. Let's hope that lifetime of aerospace wasn't in Quality.
 

Ron Texas

Master Contributor
Forum Donor
Joined
Jun 10, 2018
Messages
6,220
Likes
9,338
Cable pathways? They must have traversed the Twilight Zone.
 

amirm

Your second objection is giving listeners subjective descriptors to judge by. I don't see how this should be statistically relevant, given that a blind testing methodology is observed. I would infer that he had assessed the sound difference inherent in the cables (not hard with the crazy Interlink) and used it as a differentiator aid in his test. This would have an effect of "training" his listeners to become more discerning to the UUT differences. And if his descriptors were irrelevant to the experience, his results would be statistically null.
Again, listen to my analogy. I ask if two flavors of vanilla ice cream are a) sour and b) salty. The outcome comes out that one is 100% salty and the other is 100% sour. Is it your conclusion that this proved that vanilla ice cream therefore is either salty or sour? Or do you say, "wait a second... your test must be wrong as ice cream is neither. Let's review the test and see if there is a fault with the experiment." One follow-up would be to survey the tasters. Find out if they voted that way because they were forced to vote that way or they really thought one was sour and another salty.

And no, this is not any kind of training. Leading people to a conclusion you developed using faulty means doesn't make it training any more than above test is training for ice cream taste.
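To put a number on the "tells" concern, here is a toy simulation (my own sketch, not anything from the paper; `tell_rate` is an assumed parameter) of forced-choice trials where the two stimuli are audibly identical but a non-auditory cue sometimes gives the answer away:

```python
import random

def run_trials(n_trials, tell_rate, seed=0):
    """Simulate forced-choice trials where the two stimuli are audibly
    identical. With probability tell_rate the listener notices a
    non-auditory cue (a switch click, an operator pause, etc.) and
    answers "correctly"; otherwise they guess at random."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        if rng.random() < tell_rate:
            correct += 1  # the tell gives the answer away
        elif rng.random() < 0.5:
            correct += 1  # coin-flip guess happens to land right
    return correct / n_trials

# No audible difference at all in either run; only the tell differs.
print(run_trials(2000, tell_rate=0.0))  # hovers near 0.5: pure guessing
print(run_trials(2000, tell_rate=0.6))  # hovers near 0.8: tell plus guessing
```

Even a modest tell rate pushes the hit rate well above chance, which is exactly why a decisive score alone does not validate the test.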
 

amirm

I think I explained this the best I can, coming from a lifetime of aerospace testing.
Aerospace testing? That doesn't teach you anything about the subject at hand. Someone doing audio testing can't do your job and vice versa.

In your space though, I am sure you don't create random tests with no precedent whatsoever.
 

DimitryZ

Fraud??? How did you make that leap? We are talking about experimental errors that he did not test for or catch. We recently had long threads on the difficulties of setting these tests up to keep people from figuring out which is which. When Harman tested headphones blind, they quickly figured out that the listeners were able to tell which headphone was which through feel. So they both lived with this and created alternative tests (surrogate headphone EQ). Nothing about such a fault in the experiment led to an accusation of fraud.

As to your attestation of the fidelity of the Spectral amp, you have no evidence of such transparency. And remember, this is not just switching between one socket in the back versus another: it is switching between two buffered paths internally, since one is balanced and the other is not. So there is tons more complexity there than a simple external AB switch for RCA cables. The key here is that an assumption was made, just as you are making one, but not confirmed.

Remember, his students were forced to think there were differences given those two distinct choices of sound, piano versus violin. So if they listened and thought there was no difference, they would feel like they were failing altogether. Any "tells" that helped them identify which is which to satisfy the guess of their professor would be a relief and sought after.

A third choice that "there is no difference" would have provided a relief valve of sorts against the above although not sufficiently so.
I don't want to belabor my points nor overstay my welcome.

His subjects can't be both sophisticated listeners actually cueing in to "tells" between Spectral RCA and XLR inputs and unsophisticated listeners trying to please their professor.

As to subjective training, it's a potential enhancement of discernment (i.e., it can't be replicated in the general population), but given that, it in no way invalidates the results.

It should be obvious that irrelevant subjective descriptors would result in random results, by definition. That the results were quite precise indicates that the descriptors were quite relevant.
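For what it's worth, the "random results" null can be quantified with an exact binomial tail; this is my own illustration with made-up trial counts, not the paper's data:

```python
from math import comb

def binomial_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more correct
    answers if every response were a pure coin flip."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Illustrative numbers only: if the descriptors carried no information,
# 14 or more correct out of 16 would be very unlikely under pure guessing.
print(round(binomial_tail(16, 14), 4))  # 0.0021
```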

I will leave it at that.
 

DimitryZ

Aerospace testing? That doesn't teach you anything about the subject at hand. Someone doing audio testing can't do your job and vice versa.

In your space though, I am sure you don't create random tests with no precedent whatsoever.
I did take a general purpose graduate statistics class :)

Our tests are generally aimed at passing a threshold (qualification) or establishing a threshold (environmental stress screening).

And yet I recognize the test in question as a legitimate attempt to define a corner of audibility for an expanded UUT.
 

amirm

It should be obvious that irrelevant subjective descriptors would result in random results, by definition.
Only if the test is physically correct. The problem is, he picked a one-of-a-kind setup that none of us can replicate. So the responsibility was his to sit back and think, hmmm, is this really right? Let me get an antagonist to check this out before I go and publish this. He wanted an answer, so he ran with that answer.
 

DimitryZ

Again, listen to my analogy. I ask if two flavors of vanilla ice cream are a) sour and b) salty. The outcome comes out that one is 100% salty and the other is 100% sour. Is it your conclusion that this proved that vanilla ice cream therefore is either salty or sour? Or do you say, "wait a second... your test must be wrong as ice cream is neither. Let's review the test and see if there is a fault with the experiment." One follow-up would be to survey the tasters. Find out if they voted that way because they were forced to vote that way or they really thought one was sour and another salty.

And no, this is not any kind of training. Leading people to a conclusion you developed using faulty means doesn't make it training any more than above test is training for ice cream taste.
The responders correctly recognized each cable with high certainty based on the subjective descriptors.

In your analogy, if the ice cream was neither salty nor sour, the subjects would be forced to make a random choice and the result would be a ~50/50 split.

And the two pathways were the UUTs. If you a priori decide they were both the same (identical vanilla), this violates the premise of the test and is, therefore, impermissible. It is the result of the test (and subsequent challenges) that decides that question.

Finally, in wine tasting, training is perfectly fine. A teacher may offer glasses of wood- and steel-fermented wine and encourage the tasters to identify which wine has wood flavors.
 

DimitryZ

Only if the test is physically correct. The problem is, he picked a one-of-a-kind setup that none of us can replicate. So the responsibility was his to sit back and think, hmmm, is this really right? Let me get an antagonist to check this out before I go and publish this. He wanted an answer, so he ran with that answer.
Again, he chose a setup that would explore the highest variance corner of the design space. People do that all the time for all kind of reasons.

In aerospace we do that often too - worst-on-worst analysis and testing.

And you can absolutely replicate it. Perhaps disprove it. Maybe it fails in ABX testing. Maybe it fails without subjective descriptors. Maybe it fails with a low capacitance RCA cable, instead of the Interlink. All excellent questions.
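As a yardstick for the ABX suggestion, the usual one-sided binomial criterion (my own sketch; null hypothesis p = 0.5) gives the minimum number of correct answers needed to clear a significance level:

```python
from math import comb

def min_correct_for_significance(n, alpha=0.05):
    """Smallest number of correct ABX answers out of n trials whose
    one-sided binomial tail falls at or below alpha (null: p = 0.5)."""
    for k in range(n + 1):
        tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
        if tail <= alpha:
            return k
    return None

for n in (10, 16, 20):
    print(n, "trials ->", min_correct_for_significance(n), "correct needed")
```

With 16 trials, for instance, a listener must beat a threshold well above half right before a claimed audible difference clears the 5% bar.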
 

amirm

In aerospace we do that often too - worst-on-worst analysis and testing.
When your tests produce outcomes that are puzzling, sometimes it is your tests that are wrong, correct?
 

jsrtheta

Addicted to Fun and Learning
Joined
May 20, 2018
Messages
947
Likes
1,008
Location
Colorado
The responders correctly recognized each cable with high certainty based on the subjective descriptors.

In your analogy, if the ice cream was neither salty nor sour, the subjects would be forced to make a random choice and the result would be a ~50/50 split.

And the two pathways were the UUTs. If you a priori decide they were both the same (identical vanilla), this violates the premise of the test and is, therefore, impermissible. It is the result of the test (and subsequent challenges) that decides that question.

Finally, in wine tasting, training is perfectly fine. A teacher may offer glasses of wood- and steel-fermented wine and encourage the tasters to identify which wine has wood flavors.

Yeah, that dog won't hunt. You cannot compare apples/RCA to oranges/XLR and claim a significant result. You can compare apples/RCA to other apples/RCA and claim a different result and a provisional conclusion. (And, btw, we can all be thankful that the testing protocols used to develop the COVID vaccines were a hell of a lot more robust than what you seem willing to settle for in an exceedingly simple, but erroneous, testing protocol here.)

You cannot compare the performance of a balanced configuration to that of an unbalanced one and say, "Voila! Myth destroyed! They're different!" Well, duh.
 

DimitryZ

When your tests produce outcomes that are puzzling, sometimes it is your tests that are wrong, correct?
Sure...

But I found nothing puzzling in the test results being discussed.

I would suggest that over the decades DBTs were run with certain methodologies that essentially reduced subjects' acuity. Nothing wrong with that, but everyone got used to testing that way and this is now thought of as "the right way."

The author implemented a different methodology that increased subjects' acuity. Nothing wrong with that either, but to lay folk this seems like "the wrong way."

In the message below, I state that subjects' training is perfectly fine. Take your ability to score highly on AAC/FLAC DBT tests. You trained yourself to become a knowledgeable listener and now you score high. If you now used your knowledge to train a group of others, their scores would rise as well. There is zero problem or controversy here.
 

DimitryZ

Yeah, that dog won't hunt. You cannot compare apples/RCA to oranges/XLR and claim a significant result. You can compare apples/ RCA to other apples/RCA and claim a different result and a provisional conclusion. (And, btw, we can all be thankful that the testing protocols used to develop the COVID vaccines were a hell of a lot more robust than what you seem willing to settle for in an exceedingly simple, but erroneous testing protocol here.)

You cannot compare the performance of a balanced configuration to that of an unbalanced one and say, "Voila! Myth destroyed! They're different!" Well, duh.
Of course you can. There is no rule that cables (themselves complex systems) cannot be incorporated into larger systems for comparison and study. This is routinely done in all fields and doesn't violate anything.

Neither is training the subject group to improve sensory acuity an issue in perceptual testing. Done all the time and entirely unremarkable.
 

DimitryZ

Well I do.
I will offer several functionally identical "toy" experiments that will attempt to demonstrate there is no issue here - beyond the pick of a peculiar pathway pairing to maximize audibility.

Everyone should understand that unlike ASTM, medical and IEEE standards, perceptual test protocols are purposefully flexible. There are many test models and methodologies.
 