
NORMS AND STANDARDS FOR DISCOURSE ON ASR

Xulonn (Major Contributor, Forum Donor) | Joined Jun 27, 2018 | Messages 1,828 | Likes 6,312 | Location: Boquete, Chiriqui, Panama
The listeners who exhibited high variations were of special interest - why was it happening? Fortunately I had done audiometric tests on all listeners and these outliers were found to be those with hearing loss. This was in fact how I discovered and quantified the effects of degraded hearing.
Thanks! That comment of yours is for the history books to write down. Much appreciated!

I find it amusing - and ironic, Svart-Hvitt, that you were utterly and completely unable to even suspect, much less recognize that Floyd had stumbled upon and recognized "new scientific findings" early in his acoustics and hearing research. You absolutely failed to acknowledge the possibility that there were valid reasons for excluding individuals from a test program that had specific goals - a program that was not simply pure "basic" research. (Although Floyd's findings were typical of basic research programs.)

And this after 13 pages of railing against ASR members for "excluding possibilities" - perhaps all of that pseudo-intellectual philosophical bloviation and several gish-gallops of marginally relevant references was for naught!
 

Cosmik (Major Contributor) | Joined Apr 24, 2016 | Messages 3,075 | Likes 2,180 | Location: UK
One third of the participants were removed from the data set because they showed random and/or high variability in their stated preferences.
Your post reminded me of a conversation I was having this morning about comedy on BBC Radio 4. I was commenting on my perception that sometimes it is the people who have the least interest in, or affinity for, something who end up in charge of it because they seem 'reliable'.

In the case of Radio 4 comedy, I think we have a situation where people who have no sense of humour are given the job of commissioning comedy programmes. They don't really understand what is going on, but as they never had a sense of humour, they don't realise it. Instead, they reliably commission programmes that get a reliable reaction from people like themselves. For example, 'liberal' 'humour' (e.g. Trump-bashing) is guaranteed to get applause and praise and an impression of 'laughter' that is really just a signal of approval for the politics not the jokes.

Tightness of statistical grouping does not mean that the output is accurate. An algorithm that gives the tightest statistical grouping at its output may not be the optimal algorithm. Tightness of statistical grouping is a 'skyhook': an attempt to create an absolute reference when there is no absolute reference.
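A toy numeric sketch of that point, with invented numbers: a tight cluster of judgments can sit far from the true value, while a looser cluster can still be centred on it.

```python
import statistics

true_value = 7.0  # hypothetical "correct" score for a loudspeaker

# Tight grouping, but biased: every judgment lands near 5
tight_but_biased = [5.0, 5.1, 4.9, 5.0, 5.05]

# Loose grouping, but centred on the true value
loose_but_centred = [5.5, 8.5, 6.0, 8.0, 7.0]

for name, scores in [("tight but biased", tight_but_biased),
                     ("loose but centred", loose_but_centred)]:
    mean = statistics.mean(scores)
    spread = statistics.stdev(scores)
    error = abs(mean - true_value)
    print(f"{name}: mean={mean:.2f}  spread={spread:.2f}  error from truth={error:.2f}")
```

The first set is far more repeatable, yet its average sits two points away from the true value; repeatability alone says nothing about accuracy.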
 
svart-hvitt (OP, Major Contributor) | Joined Aug 31, 2017 | Messages 2,375 | Likes 1,253
Your post reminded me of a conversation I was having this morning about comedy on BBC Radio 4. I was commenting on my perception that sometimes it is the people who have the least interest in, or affinity for, something who end up in charge of it because they seem 'reliable'.

In the case of Radio 4 comedy, I think we have a situation where people who have no sense of humour are given the job of commissioning comedy programmes. They don't really understand what is going on, but as they never had a sense of humour, they don't realise it. Instead, they reliably commission programmes that get a reliable reaction from people like themselves. For example, 'liberal' 'humour' (e.g. Trump-bashing) is guaranteed to get applause and praise and an impression of 'laughter' that is really just a signal of approval for the politics not the jokes.

Tightness of statistical grouping does not mean that the output is accurate. An algorithm that gives the tightest statistical grouping at its output may not be the optimal algorithm. Tightness of statistical grouping is a 'skyhook': an attempt to create an absolute reference when there is no absolute reference.

I’m afraid this is going off-topic, but your point is a big issue among historians who study theories of societal organization. Economists started using the term spontaneous order to explain and defend a system that makes order out of chaos. The funny thing is, and this is where it may resemble your point, that liberal economists are very concerned with shaping markets according to their preferences! So what looks «spontaneous» is in fact iron-fisted engineering, and hardly a natural process.

«Let the data speak for themselves», said the Nobel laureate. Audio seems to be lacking in data quantity, so a good deal of judgment enters the statistical process to increase what’s considered signal and reduce what’s considered noise.
 
svart-hvitt (OP, Major Contributor) | Joined Aug 31, 2017 | Messages 2,375 | Likes 1,253
I find it amusing - and ironic, Svart-Hvitt, that you were utterly and completely unable to even suspect, much less recognize that Floyd had stumbled upon and recognized "new scientific findings" early in his acoustics and hearing research. You absolutely failed to acknowledge the possibility that there were valid reasons for excluding individuals from a test program that had specific goals - a program that was not simply pure "basic" research. (Although Floyd's findings were typical of basic research programs.)

And this after 13 pages of railing against ASR members for "excluding possibilities" - perhaps all of that pseudo-intellectual philosophical bloviation and several gish-gallops of marginally relevant references was for naught!

What came out of the discussion was that 1/3 of the population in Toole (1986) were, for practical purposes, deaf.

At the beginning of Toole (1986), all (i.e. 3/3) were described thusly:

«The listeners who participated in the subjective measurements from which the present data are taken ranged from professional sound-recording engineers to audiophiles. Many were musicians, but all of them had a background of serious critical listening».

In other words, no information on 1/3 being deaf.

Dr. Toole is a gentleman, wary of negative words when describing others, so he described the deaf participants as having high «judgment variability» and low «consistency within individuals and the closest agreement across the group of individuals». That’s an ornate way of describing «deaf»; understandable, though, given the social setting he was in, surrounded by highly esteemed «golden ears» who were in fact deaf for the purpose of critical listening.

The subsequent discussion by ASR members didn’t point to the fact that 1/3 of participants were deaf.

Dr. Toole’s participation cast light on the historic events, as if he opened a door behind which there is information that was not previously given explicitly in Toole (1986). In another thread there is a discussion on Bayesian statistics, where I used the Monty Hall doors as a fun example of Bayesian reasoning.

In this thread I posted a question, see below:

- - - - - - -

Just a thought here concerning the robustness of some competently managed listening tests.

Say you read about a survey where 60 percent preferred a to b, and 40 percent preferred b to a.

What you didn’t know before you checked the footnotes in the article is that there were 100 participants in the survey, of which 30 participants preferred a to b and 20 participants preferred b to a. 50 participants were kept outside of the survey statistics.

Before you decide to call the author of the article, you see that there are two possible explanations why 50 participants were removed from the survey:

1) 50 participants heard a difference between a and b but couldn’t make up their mind which was the best.
2) 50 participants didn’t hear a difference between a and b.

Is it ok to say that 60 percent preferred a to b, or is the real number 30 percent?
Source: https://www.audiosciencereview.com/...atistics-in-listening-tests.8248/#post-209067
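For what it's worth, the arithmetic behind the two possible answers can be written out directly with the figures from the example above:

```python
# Figures from the hypothetical survey above
prefer_a = 30   # retained participants who preferred a to b
prefer_b = 20   # retained participants who preferred b to a
excluded = 50   # participants kept outside the reported statistics
total = prefer_a + prefer_b + excluded   # 100 people took part

share_among_retained = prefer_a / (prefer_a + prefer_b)  # 30/50
share_among_all      = prefer_a / total                  # 30/100

print(f"Among retained listeners: {share_among_retained:.0%} preferred a")  # 60%
print(f"Among all participants:   {share_among_all:.0%} preferred a")       # 30%
# Which number to report depends on whether the excluded 50 are treated
# as missing data or as "no preference" answers.
```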
 

watchnerd (Grand Contributor) | Joined Dec 8, 2016 | Messages 12,449 | Likes 10,414 | Location: Seattle Area, USA
Is it ok to say that 60 percent preferred a to b, or is the real number 30 percent?
Source: https://www.audiosciencereview.com/...atistics-in-listening-tests.8248/#post-209067

This is basic statistics.

All of which are based on samples with boundary conditions.

When we talk about the popularity ratings of politicians, children are not surveyed or included in the sample size math because they can't vote, even though they are part of the total population.

Get it?
 

SIY (Grand Contributor, Technical Expert) | Joined Apr 6, 2018 | Messages 10,506 | Likes 25,336 | Location: Alfred, NY
When we talk about the popularity ratings of politicians, children are not surveyed or included in the sample size math because they can't vote, even though they are part of the total population.

My dog supports Tulsi Gabbard.
 

Cosmik (Major Contributor) | Joined Apr 24, 2016 | Messages 3,075 | Likes 2,180 | Location: UK
I’m afraid this is going off-topic, but your point is a big issue among historians who study theories of societal organization. Economists started using the term spontaneous order to explain and defend a system that makes order out of chaos. The funny thing is, and this is where it may resemble your point, that liberal economists are very concerned with shaping markets according to their preferences! So what looks «spontaneous» is in fact iron-fisted engineering, and hardly a natural process.

«Let the data speak for themselves», said the Nobel laureate. Audio seems to be lacking in data quantity, so a good deal of judgment enters the statistical process to increase what’s considered signal and reduce what’s considered noise.
Well, yes, the comedy thing was an aside. The main point was to be suspicious of a claimed point of reference that originates as follows:

"In my experiments, an experimental preference indicates that the preferred speakers sound better than their rivals, and this can be extrapolated to the general population."
"But you have rejected half your test subjects because they gave answers you don't like".
"It wasn't that I didn't like the answers; it was that the subjects couldn't repeat their answers as reliably as the other half. So I rejected them".
"But how do you know that their answers didn't represent half of all listeners i.e. that most listeners wouldn't have a reliable preference for those speakers?"
"Because their answers were repeatable, and this indicates skilled listening. The preferences of skilled listeners are preferences for truth."
"Really? Surely it could just indicate a listener who may be accustomed to one sound and knows when he is hearing it, regardless of whether it is good sound or not."
"No, because I was able to train people to achieve the same skill."
"But if the skill is arbitrary and not reflective of truth or quality, then what has that gained you?"
"It means I have a larger pool of subjects so my results are even more reliable".
"But what do the results mean?"
"The truth"
"How do you know they represent the truth?"
"Because the subjects give reliably repeatable results."

Etc.
 

Blumlein 88 (Grand Contributor, Forum Donor) | Joined Feb 23, 2016 | Messages 20,759 | Likes 37,600
Well, yes, the comedy thing was an aside. The main point was to be suspicious of a claimed point of reference that originates as follows:

"In my experiments, an experimental preference indicates that the preferred speakers sound better than their rivals, and this can be extrapolated to the general population."
"But you have rejected half your test subjects because they gave answers you don't like".
"It wasn't that I didn't like the answers; it was that the subjects couldn't repeat their answers as reliably as the other half. So I rejected them".
"But how do you know that their answers didn't represent half of all listeners i.e. that most listeners wouldn't have a reliable preference for those speakers?"
"Because their answers were repeatable, and this indicates skilled listening. The preferences of skilled listeners are preferences for truth."
"Really? Surely it could just indicate a listener who may be accustomed to one sound and knows when he is hearing it, regardless of whether it is good sound or not."
"No, because I was able to train people to achieve the same skill."
"But if the skill is arbitrary and not reflective of truth or quality, then what has that gained you?"
"It means I have a larger pool of subjects so my results are even more reliable".
"But what do the results mean?"
"The truth"
"How do you know they represent the truth?"
"Because the subjects give reliably repeatable results."

Etc.
Convenient revisionism. Damaged hearing matched up with those unreliable choices. It's not as if some high percentage of people were unreliable for no reason versus those who were reliable.

So then you're left with saying maybe 25-30% with damaged hearing aren't served by these results. Further research on the kind of speaker they would prefer might be in order. I'd think you'd find their choices so variable you could conclude that those with good hearing chose the better speakers and those with damaged hearing show no reliable preference for or against the better-performing speakers. So you could ignore them.

Perhaps you'd find there are repeatable preferences among sub-groups with particular kinds of hearing deficiencies. Which would further sub-divide that 25-30%. So as a business unless you plan to enter the niche market, you'd still ignore them.
 

Cosmik (Major Contributor) | Joined Apr 24, 2016 | Messages 3,075 | Likes 2,180 | Location: UK
Convenient revisionism. Damaged hearing matched up with those unreliable choices. It's not as if some high percentage of people were unreliable for no reason versus those who were reliable.

So then you're left with saying maybe 25-30% with damaged hearing aren't served by these results. Further research on the kind of speaker they would prefer might be in order. I'd think you'd find their choices so variable you could conclude that those with good hearing chose the better speakers and those with damaged hearing show no reliable preference for or against the better-performing speakers. So you could ignore them.

Perhaps you'd find there are repeatable preferences among sub-groups with particular kinds of hearing deficiencies. Which would further sub-divide that 25-30%. So as a business unless you plan to enter the niche market, you'd still ignore them.
I was addressing SV's point about some listeners being rejected, but the main point remains even if you don't reject anyone: repeatable results do not in themselves mean anything absolute in terms of whether the preferences mean 'truth'. I happen to believe that it's probable in this case that the results do mean something - but on the basis that they seem to confirm what I would have expected anyway :)
 

Blumlein 88 (Grand Contributor, Forum Donor) | Joined Feb 23, 2016 | Messages 20,759 | Likes 37,600
I was addressing SV's point about some listeners being rejected, but the main point remains even if you don't reject anyone: repeatable results do not in themselves mean anything absolute in terms of whether the preferences mean 'truth'. I happen to believe that it's probable in this case that the results do mean something - but on the basis that they seem to confirm what I would have expected anyway :)
So what kind of repeatable results aren't linked to something else? Such reliable pairings of stimulus and response have to be telling you something. It may not be easy to figure out, but it would be telling you something. Is that not a truth? What is it if not truth?
 

Floyd Toole (Senior Member, Audio Luminary, Technical Expert, Industry Insider, Forum Donor) | Joined Mar 12, 2018 | Messages 367 | Likes 3,907 | Location: Ottawa, Canada
Convenient revisionism. Damaged hearing matched up with those unreliable choices. It's not as if some high percentage of people were unreliable for no reason versus those who were reliable.

So then you're left with saying maybe 25-30% with damaged hearing aren't served by these results. Further research on the kind of speaker they would prefer might be in order. I'd think you'd find their choices so variable you could conclude that those with good hearing chose the better speakers and those with damaged hearing show no reliable preference for or against the better-performing speakers. So you could ignore them.

Perhaps you'd find there are repeatable preferences among sub-groups with particular kinds of hearing deficiencies. Which would further sub-divide that 25-30%. So as a business unless you plan to enter the niche market, you'd still ignore them.

As I said in post 250: "
I cannot imagine any path for loudspeaker manufacturers to address the needs of hearing impaired individuals, but the availability of EQ/tone controls and forms of amplitude compression and expansion may be useful for those able to understand how to use them. This after all is what is used in hearing aids.

I have close association with some people needing hearing aids, and understand much of what happens in that domain. I have sat with highly regarded audiologists explaining basic auditory perception from an audio perspective - their training and goals are almost exclusively guided by concerns of speech intelligibility. Of the several I interacted with, only a few truly understood compression and expansion. Fitting hearing aids can be as much an art as a science, especially when binaural and signal-to-noise effects are involved. Satisfaction is not guaranteed for individuals with profound loss. One must presume that the same applies to their judgments of loudspeakers.

All that said, I cannot think of a rational reason to begin with anything other than a neutral reproducer of sound as a baseline. We can measure neutrality - it is hard data."

To engage with this audience of hearing impaired individuals - there are many degrees and kinds of hearing degradation - is probably not practical. But, it could engage interested researchers for years and generate endless forum comments. The people I engaged in my pioneering 1985-86 research were not "deaf". In fact many would, by normal audiometric (speech intelligibility) criteria, be considered "normal". It is just that judging broadband sound quality is a very different and more demanding task. Likewise, hearing conservation criteria are irrelevant because they are designed to preserve enough hearing at the end of a working life in a noisy factory environment to be able to carry on a moderately intelligible conversation at a distance of 1 m in a quiet background. They are not intended to prevent hearing loss. It is in the literature - not my opinion.
 

Xulonn (Major Contributor, Forum Donor) | Joined Jun 27, 2018 | Messages 1,828 | Likes 6,312 | Location: Boquete, Chiriqui, Panama
The people I engaged in my pioneering 1985-86 research were not "deaf". In fact many would, by normal audiometric (speech intelligibility) criteria, be considered "normal"

I noticed your words being twisted in the replies by SV using the word "deaf". I wonder if that is due to a language difference, poor debating skills, or purposeful trolling. Such tactics are, in fact, the creation of strawmen, and make it difficult to have an honest conversation.
 

Cosmik (Major Contributor) | Joined Apr 24, 2016 | Messages 3,075 | Likes 2,180 | Location: UK
So what kind of repeatable results aren't linked to something else? Such reliable pairings of stimulus and response have to be telling you something. It may not be easy to figure out, but it would be telling you something. Is that not a truth? What is it if not truth?
I'm just saying that because some other audiophile has a preference for a sound, it doesn't mean that I will/should share it. Telling me that this listener can always spot his favourite sound doesn't change that. His ability to always spot his favourite sound doesn't mean that his favourite sound is better than my favourite sound even if I am less definite than him about it. His certainty/reliability/repeatability doesn't make his favourite sound more valid.

(I am anti-listening tests, anyway. I don't believe they have ever come up with anything surprising and valid, but they can certainly come up with garbage!)
 
svart-hvitt (OP, Major Contributor) | Joined Aug 31, 2017 | Messages 2,375 | Likes 1,253
I noticed your words being twisted in the replies by SV using the word "deaf". I wonder if that is due to to language difference, poor debating skills, or purposeful trolling. Such tactics are, in fact, the creation of strawmen, and make it difficult to have an honest conversation.

Come on. In audio, we have the usage of «deaf» already, as in «tone deaf».

@Floyd Toole has told us that some people are what one could call «speaker deaf».

When I wrote «deaf» in my comment, I wrote that the participants were, «for practical reasons, deaf».
 

Xulonn (Major Contributor, Forum Donor) | Joined Jun 27, 2018 | Messages 1,828 | Likes 6,312 | Location: Boquete, Chiriqui, Panama
When I wrote «deaf» in my comment, I wrote that the participants were, «for practical reasons, deaf».

The people I engaged in my pioneering 1985-86 research were not "deaf". In fact many would, by normal audiometric (speech intelligibility) criteria, be considered "normal".

So, "for practical reasons", people with hearing that is slightly impaired, but still within the range considered to be normal, are deaf by your definition? :facepalm::facepalm:

For someone who claims to be deeply interested in reason and truth, I find that to be a very odd statement - in fact, they seem like weasel words!

[Attached image: No Weasel Words.jpg]
 

tmtomh (Major Contributor, Forum Donor) | Joined Aug 14, 2018 | Messages 2,769 | Likes 8,139
I just think it's important to distinguish between listener preference on the one hand, and fidelity of reproduction on the other hand.

I fully accept that fidelity is necessarily multi-dimensional and that once certain performance thresholds are met, it can be difficult to linearly rank different equipment as more or less faithful in its reproduction of the source. But that does not change the fact that some equipment functions with lower distortion (of various kinds) of the original signal than others.

Now, I can see the argument that one still needs some way to rank and compare different types of distortion - for example, is lower harmonic distortion really more hi-fi than higher harmonic distortion if the unit with lower distortion produces predominantly odd-order harmonics while the higher-distortion unit produces predominantly even-order ones? I would say Yes, but I can see an argument for No if one has the notion that even-order harmonics do not audibly degrade the signal as much as odd-order ones do. Or perhaps a simpler example, is a little bit of intermodulation distortion better or worse than a moderate amount of harmonic distortion?
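As a rough illustration of why a single distortion number can hide that distinction (the harmonic amplitudes below are invented, purely for the sake of the example), a device dominated by even-order harmonics and one dominated by odd-order harmonics can report the same THD figure:

```python
import math

# Invented harmonic amplitudes relative to a fundamental of 1.0,
# keyed by harmonic order (2nd, 3rd, ...). Purely illustrative numbers.
device_a = {2: 0.010, 4: 0.004}   # distortion dominated by even-order harmonics
device_b = {3: 0.010, 5: 0.004}   # distortion dominated by odd-order harmonics

def thd(harmonics):
    """THD as the RMS sum of all harmonic amplitudes over the fundamental."""
    return math.sqrt(sum(a * a for a in harmonics.values()))

def even_odd_split(harmonics):
    even = math.sqrt(sum(a * a for n, a in harmonics.items() if n % 2 == 0))
    odd = math.sqrt(sum(a * a for n, a in harmonics.items() if n % 2 == 1))
    return even, odd

for name, dev in [("device_a (even-order)", device_a),
                  ("device_b (odd-order)", device_b)]:
    even, odd = even_odd_split(dev)
    print(f"{name}: THD={thd(dev):.3%}  even={even:.3%}  odd={odd:.3%}")
# Both devices report the same THD figure; only the even/odd breakdown
# reveals the difference that the ranking question above is asking about.
```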

I can even see an argument that these types of questions can be answered only by resorting to studies of listener preferences. Still, though, even with that, there is plenty of equipment out there where if you compare two units, one beats the other on most or in some cases all measurements. In that case I think we can indeed say that one is better than the other in terms of the fidelity of its reproduction.

If someone prefers the worse-measuring device, I have no problem with that - and I also have no problem with doing research to see if there is perhaps some other characteristic that might explain that person's preference. What I have a problem with, though, are reflexive arguments that claim that better sound is a result of the poorer fidelity or conversely that the better sound must be the result of some other, unspecified, as-yet-unmeasured aspect of the gear. IMHO this is what leads to people saying that analogue is continuous while digital is stair-step, or doing frequency analyses of vinyl rips to show that those rips contain higher frequencies than CD rips do, or people showing those varying eye diagrams for USB signals going into DACs without mentioning that what comes out of the DAC (assuming it's properly designed) is identical regardless.

And so I do think it's essential to maintain a distinction, however flawed at the margins, between fidelity and euphony aka listening preference.

Finally, if listeners in a study cannot or do not express the same preference between two (or more) different types or pieces of equipment with any statistically significant repeatability, then the only conclusion that can be drawn from those listeners is that the differences between the equipment don't matter - statistically speaking, there is no meaningful difference between them.

That might be the case - but the only conclusion it would support would be, "Cool, a lot of these details don't really matter to most people - let's not worry about them." Yes?
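As an aside, here is a minimal sketch of what "statistically significant repeatability" could mean for a single listener in a two-way comparison, assuming a simple binomial guessing model; this is only an illustration, not the protocol of any study discussed here:

```python
from math import comb

def binomial_p_value(successes: int, trials: int, p_chance: float = 0.5) -> float:
    """One-sided probability of at least `successes` consistent choices in
    `trials` independent trials if the listener is actually just guessing."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(successes, trials + 1))

# A listener who picks the same unit 9 times out of 10:
print(binomial_p_value(9, 10))   # ~0.011 -- hard to explain as guessing
# A listener who picks the same unit 6 times out of 10:
print(binomial_p_value(6, 10))   # ~0.377 -- entirely compatible with guessing
```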
 
svart-hvitt (OP, Major Contributor) | Joined Aug 31, 2017 | Messages 2,375 | Likes 1,253
I just think it's important to distinguish between listener preference on the one hand, and fidelity of reproduction on the other hand.

I fully accept that fidelity is necessarily multi-dimensional and that once certain performance thresholds are met, it can be difficult to linearly rank different equipment as more or less faithful in its reproduction of the source. But that does not change the fact that some equipment functions with lower distortion (of various kinds) of the original signal than others.

Now, I can see the argument that one still needs some way to rank and compare different types of distortion - for example, is lower harmonic distortion really more hi-fi than higher harmonic distortion if the unit with lower distortion produces predominantly odd-order harmonics while the higher-distortion unit produces predominantly even-order ones? I would say Yes, but I can see an argument for No if one has the notion that even-order harmonics do not audibly degrade the signal as much as odd-order ones do. Or perhaps a simpler example, is a little bit of intermodulation distortion better or worse than a moderate amount of harmonic distortion?

I can even see an argument that these types of questions can be answered only by resorting to studies of listener preferences. Still, though, even with that, there is plenty of equipment out there where if you compare two units, one beats the other on most or in some cases all measurements. In that case I think we can indeed say that one is better than the other in terms of the fidelity of its reproduction.

If someone prefers the worse-measuring device, I have no problem with that - and I also have no problem with doing research to see if there is perhaps some other characteristic that might explain that person's preference. What I have a problem with, though, are reflexive arguments that claim that better sound is a result of the poorer fidelity or conversely that the better sound must be the result of some other, unspecified, as-yet-unmeasured aspect of the gear. IMHO this is what leads to people saying that analogue is continuous while digital is stair-step, or doing frequency analyses of vinyl rips to show that those rips contain higher frequencies than CD rips do, or people showing those varying eye diagrams for USB signals going into DACs without mentioning that what comes out of the DAC (assuming it's properly designed) is identical regardless.

And so I do think it's essential to maintain a distinction, however flawed at the margins, between fidelity and euphony aka listening preference.

Finally, if listeners in a study cannot or do not express the same preference between two (or more) different types or pieces of equipment with any statistically significant repeatability, then the only conclusion that can be drawn from those listeners is that the differences between the equipment don't matter - statistically speaking, there is no meaningful difference between them.

That might be the case - but the only conclusion it would support would be, "Cool, a lot of these details don't really matter to most people - let's not worry about them." Yes?

You said:

«I just think it's important to distinguish between listener preference on the one hand, and fidelity of reproduction on the other hand».

If even one side effect of this thread is to make people aware of the distinction between SEEKING TRUTH and SEEKING (TRUE) PREFERENCES, I believe much has been accomplished.

The seeking of Truth is an old paradigm. Preference seeking via vox populi is a modern version; they look the same or similar at first sight, but there are important differences.
 

amirm (Founder/Admin, Staff Member, CFO (Chief Fun Officer)) | Joined Feb 13, 2016 | Messages 44,656 | Likes 240,863 | Location: Seattle Area
Come on. In audio, we have the usage of «deaf» already, as in «tone deaf».
Tone deaf is not the same as deaf any more than color blind is the same as blind.

Take caution in your tone. Sanctions will come otherwise.
 
svart-hvitt (OP, Major Contributor) | Joined Aug 31, 2017 | Messages 2,375 | Likes 1,253
Tone deaf is not the same as deaf any more than color blind is the same as blind.

Take caution in your tone. Sanctions will come otherwise.

I am colour blind.

The appropriate term is «colour weakness» in my tongue, or «colour vision deficiency» in English.

I am not offended when people call me colour blind (which is not the correct term) or wonder if I see in black and white only.

If I offended really deaf people or their friends and families with my wording «for practical purpose deaf» in a comment on listening test thresholds, I apologize to those I hurt.
 

Xulonn (Major Contributor, Forum Donor) | Joined Jun 27, 2018 | Messages 1,828 | Likes 6,312 | Location: Boquete, Chiriqui, Panama
If even one side effect of this thread is to make people aware of the distinction between SEEKING TRUTH and SEEKING (TRUE) PREFERENCES, I believe much has been accomplished.

My interactions with many regulars here over the past year indicate that most of us are already well aware of this distinction.

Indeed, my choices in audio are ultimately subjective, and I am aware of "truth" and "preference", and the basics of their interaction. I also enjoy knowing that my DAC and amplifier faithfully convert the bits of my digital music data into an undistorted, low-noise audio signal to send to my loudspeakers.

The electrical signal carrying music from my DAC through my amplifier to my speakers is relatively easy to monitor and measure. Then the signal is converted to vibrations in the air - sound - that fill the room with music and cause my tympanic membranes to vibrate. Now things are suddenly very complex and highly variable, and a room full of complex vibrations is much more difficult to measure than a signal in a conductor.

Those vibrations of the tympanic membrane stimulate the cochlea, which converts them back into an electrical signal - with different results for different people, and in ways that are difficult to measure. (Cochlear implants are available for those with severe hearing loss, but I have no idea whether current technology allows "high-fidelity" listening compared to functional, working ears.)

When the electrical signals from the ear reach the brain, that complex organic computer produces results that vary widely from person to person. Hello psychoacoustics! There is a lot of opportunity here for "truth-seeking"! But after the auditory signal reaches the brain - unless I am mistaken - in a manner similar to particle physics, we can only explore and measure the "effects" of that musical signal and can no longer measure the music signal itself.
 