
Dr. Toole's comments in anticipation of his new book on Acoustics and Psychoacoustics

SPFC

Member
Joined
Mar 9, 2017
Messages
62
Likes
39
Location
My kind of town
Hi all,

I have borrowed the text below from a JBL thread at AVSForum.com.

Really interesting read from Dr. Toole. I especially liked the part where he discusses imaging and what affects it. Fascinating!

------------------------------------------------------------------------------------------------------------------------

My entire lifetime's work began with a series of blind listening tests in 1966 at the National Research Council of Canada. It produced surprising results - people could reliably hear differences between loudspeakers, they generally agreed on which ones sounded good, and those that sounded good exhibited the best-looking measurements. For me, the aspiring scientist, this seemed eminently logical, but for me, the audiophile, this was a contradiction of popular beliefs. For the next fifty years my colleagues and I, and other researchers around the world, have pursued the underlying truths, encountering more eye-opening revelations. The scientific process begins with double-blind listening tests - if something sounds good, it is good. If not, we need to discover why and chase down the gremlins. All of our listening tests involved comparisons of three or four loudspeakers, presented in randomized sequences. In the beginning we compared single loudspeakers for simplicity, but later discovered that listeners heard more problems and were more critical when listening in mono than in stereo. Several stereo vs. mono tests since then have shown that winners in the mono tests also win the stereo tests. The latest tests show that we are even less critical in multichannel comparisons. Interesting. The reality is that much of what we hear is mono - the dominating center channel in movies, the hard left and right, and the double-mono phantom center in stereo.
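The randomized-sequence protocol described above (three or four hidden loudspeakers per trial, presentation order shuffled each time) can be sketched in a few lines of Python. This is a minimal illustration only - the function name and trial counts are hypothetical, not from any NRC or Harman tooling:

```python
import random

def presentation_orders(speakers, n_trials, seed=0):
    """Generate a shuffled presentation sequence for each blind trial,
    so the hidden loudspeakers appear in a different order every time."""
    rng = random.Random(seed)  # seeded so a session plan is reproducible
    orders = []
    for _ in range(n_trials):
        order = speakers[:]    # copy the list of hidden speaker IDs
        rng.shuffle(order)     # randomize the sequence for this trial
        orders.append(order)
    return orders

# e.g. three trials comparing four hidden speakers
trials = presentation_orders(["A", "B", "C", "D"], n_trials=3)
```

Each trial presents all four speakers, just in a different, unpredictable order - which is what keeps positional and sequence biases from contaminating the preference ratings.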

Having multiple loudspeakers in the comparisons helps listeners to separate the timbral contributions of the loudspeakers from the essential timbre of the recordings. A simple A vs. B test is a start, but if the speakers share the same problem it is not likely to be noticed. It is a remarkable feat that two ears and a brain perform in such tests. They are able to substantially separate the sound of the loudspeaker from the sound of the room it is in (except at low frequencies, where the room dominates). They separate the timbral contributions of the loudspeakers from the inherent timbre of the recordings - even with studio recordings for which there is no "live" reference. The fact that listeners with normal hearing agree on what is good indicates that they are not only recognizing excellence, but are rejecting unnatural sounds. Listener comments usually describe shortcomings of flawed loudspeakers - sometimes at length and in colorful language - but reward more neutral loudspeakers with simple words of praise. If given the opportunity in an unbiased situation, humans are remarkably good "measuring instruments". In non-blind tests anything is possible.

Even now, I know of almost nobody who follows that degree of scientific rigor within the consumer or professional audio domains. It happens in some university research investigations, but most of the information available to consumers and professionals is seriously inadequate. Most loudspeaker specifications are an insult to intelligence, forcing us to choose on the basis of imperfect listening evaluations and/or other persons' opinions. In product reviews we get few or no measurements, and subjective evaluations of the "take it home and listen to it" kind, where the test product is known, and adaptation (a.k.a. breaking in) and bias are omnipresent. It doesn't mean they are without merit, but it does mean that opportunities for error are present. When a review begins with a sentence like "I have always liked ##### loudspeakers", one can anticipate the conclusion, whatever the true merits of the product. Without trustworthy measurements the reader is at a severe disadvantage.

Fortunately, a few consumer audio publications show measurements, and some of them are reasonably accurate, although not everything one might wish for gets measured. Still, it indicates an admirable respect for the technical side of audio. If measurements indicate a poor performance and the subjective reviewer raves, there is something wrong. It happens. Professional publications almost totally ignore measurements, raising the question: who is more professional?

It takes time and money to do good product evaluations, and publishers and reviewers have lives to live and expenses to cover. Making meaningful loudspeaker measurements is not easy. Anechoic chambers are rare and very expensive, but the great outdoors or a large room can, with modern measurement methods, provide very useful data. However, a loudspeaker cannot be described in a single curve, and the multiple curves that result from comprehensive measurements need to be processed to reveal to the eyes what the ears hear.

Over the years we have learned that human listeners cannot hear everything we can measure, and the possibility exists that there are things we can hear that we cannot measure. The problem with measurements is that they need to be in a form that correlates with audibility - not all are. For example, we basically do not hear the ringing of resonances, but we are very sensitive to the spectral component - the bump in the frequency response. That can be confidently identified in high-resolution measurements of the right kind. Surprisingly, by this metric, we are less sensitive to high-Q resonances than to lower-Q ones. This is an example of the eyes anticipating what we hear, and being wrong.
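The Q-dependence mentioned above is easy to see numerically: an idealized second-order resonance peaks by roughly 20*log10(Q) dB, but the high-Q bump is far narrower, so it intersects much less program material. A quick sketch (the second-order model and the chosen frequencies/Q values are illustrative assumptions, not measurements from the book):

```python
import numpy as np

def resonance_magnitude_db(f, f0, q):
    """Magnitude (dB) of an idealized 2nd-order resonance
    H(s) = w0^2 / (s^2 + (w0/Q)s + w0^2)."""
    s = 1j * 2 * np.pi * f
    w0 = 2 * np.pi * f0
    h = w0**2 / (s**2 + (w0 / q) * s + w0**2)
    return 20 * np.log10(np.abs(h))

f = np.linspace(500, 2000, 2001)
narrow = resonance_magnitude_db(f, 1000, q=10)  # tall (~20 dB) but narrow bump
broad = resonance_magnitude_db(f, 1000, q=2)    # lower (~6 dB) but much wider
```

The low-Q bump colors a far wider band of the spectrum, which is one way to rationalize the counterintuitive sensitivity reported here: a broad bump is excited by almost any signal, a narrow one only occasionally.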

It turns out that humans are essentially phase deaf, and can tolerate up to 2 ms of group delay. Non-linear distortions are tolerated at frighteningly high measured levels (LPs are a perfect example of more coming off the medium than went in) - simultaneous masking is at work on our behalf. So, right away we see an opportunity for the "we can't measure what we can hear" argument. It is absolutely true for some of the traditional audio measurements.
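Group delay is the negative derivative of phase with respect to angular frequency, so a pure time delay has linear phase and a perfectly flat group delay. A minimal numerical sketch (the 1.5 ms value is an arbitrary example under the ~2 ms tolerance quoted above; published audibility thresholds actually vary with frequency and signal):

```python
import numpy as np

def group_delay_s(phase_rad, freq_hz):
    """Group delay tau = -d(phi)/d(omega), estimated by finite differences."""
    omega = 2 * np.pi * np.asarray(freq_hz)
    return -np.gradient(np.unwrap(phase_rad), omega)

# A pure 1.5 ms time delay has linear phase phi = -omega * tau,
# so its group delay is flat at 1.5 ms across the whole band.
f = np.linspace(100, 10_000, 1000)
phase = -2 * np.pi * f * 1.5e-3
tau = group_delay_s(phase, f)
```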

In the digital domain one cannot deny the existence of digital jitter. The question is "how audible is it?", "what are its effects on what we hear?" What are the technical metrics that correlate with audibility? The existence of a "distortion" does not ensure its audibility, but if it is audible, then let's nail down some meaningful metrics of it and establish thresholds of audibility. "Zero" is always an admirable objective for distortions of any kind, but in the real world it is not necessary.

Imaging is an undefined parameter, yet people routinely talk about it being affected by everything in the playback chain, and some things that aren't in it, up to and including the plug that goes into a wall outlet. It clearly means different things to different people, and different people exhibit different expectations. I did my PhD on sound localization, so I have followed this topic with great interest. See, for example, pages 126-138 in my existing book, where three pairs of loudspeakers were compared - double blind - using positional substitution in the same room. Listeners responded in detail on sound quality and imaging. The evidence to date indicates that the principal determinant of a soundstage is the recording itself - the microphone choice and placement, and the electronic control-room manipulations that went into it. For playback the principal requirement is that the direct sounds from the left and right loudspeakers be identical. Distortions in the signal paths are merged with the signal itself and become part of the soundstage.

Loudspeaker directivity is also a factor, but the effects of reflections turn out to be much less of a problem than might have been believed - instinct is not always right. This is especially true if the loudspeakers have relatively constant directivity with frequency, a parameter that is very rarely measured. When the timbral signature of the reflection is different from that of the direct sound, the reflection becomes more apparent. Significant lateral reflections have been shown to have insignificant effect on placement or precision of stereo images, again counterintuitive. The explanation: it is the direct sound that is responsible for sound localization. However reflected sounds can modify the sense of space which may or may not be desired. They also fill in the massive 2 kHz dip in the phantom center image spectrum (caused by acoustical crosstalk), so some reflected sound actually improves the sound quality of the featured artist in stereo reproduction. That is another reason to have a center channel.
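The ~2 kHz dip described above follows directly from interaural crosstalk: with a phantom center, each ear receives the opposite speaker's copy of the same signal delayed by roughly the interaural time difference, and two equal signals offset by a delay Δt first cancel at f = 1/(2·Δt). A simplified sketch - equal-amplitude crosstalk and a 250 µs ITD are idealizing assumptions; head shadowing makes the real dip shallower than this model's full null:

```python
import numpy as np

def phantom_center_response_db(f, itd_s=250e-6):
    """Magnitude at one ear for a phantom-center image: the direct sound plus
    an equal-amplitude copy from the opposite speaker, delayed by the ITD.
    |1 + exp(-j*2*pi*f*itd)| has its first null at f = 1/(2*itd)."""
    h = 1 + np.exp(-1j * 2 * np.pi * f * itd_s)
    return 20 * np.log10(np.maximum(np.abs(h), 1e-12))

f = np.linspace(100, 4000, 4000)
resp = phantom_center_response_db(f)
dip_hz = f[np.argmin(resp)]  # deepest notch lands near 1/(2 * 250e-6) = 2 kHz
```

With Δt = 250 µs the notch sits at 2 kHz, which is why reflections (or a real center channel) that fill in this region improve the perceived sound of the featured artist.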

I consider "envelopment" to be an important component of imaging. The impression of being in an acoustical space with the performers is the most important parameter of concert halls. It is deficient in two-channel stereo, as I explained in detail in my book, but multichannel audio can deliver as much of it as a recording engineer decides to incorporate. Upmixing of stereo to multiple channels is often a pleasurable addition, but some of the popular algorithms are - for my taste - overly aggressive in sending signals to the surround loudspeakers. Some experimentation with system setup may be required to find a good balance between leaving the soundstage intact, while adding the right amount of envelopment information.

All of this and much more is discussed in my new, nearly 500-page book, out in August if all goes well. The last chapter is a display of anechoic measurements on 50 years of loudspeakers. It is obvious that great improvements have been made. It is also obvious that over that entire period most loudspeaker designers started with the same objective: a flat, smooth on-axis frequency response. As the years passed they were able to achieve it more closely, and at the same time improve the off-axis response: the frequency-dependent directivity. Sound quality improved. Both professional monitor loudspeakers and consumer loudspeakers benefitted. Now, the best of both domains are essentially identical in sound quality, and neutral is the objective. That is what we need in the audio industry, but sadly folklore and fantasy are still influential in both domains.

One always needs to remember the existence of the "circle of confusion" (the recordings we play back are influenced by the loudspeakers used in the recording process). We know how to make very "neutral" loudspeakers; transparent "windows" through which to view the art. But even the most perfect loudspeakers cannot always sound perfect - recordings are now the weak link. The recording industry is gradually adopting neutral loudspeakers as the new norm, but there are holdouts, mostly at the mixing end of the process. I think I hear their personal biases in some recordings. However smart mastering engineers are aware of the problem and can fix it before it gets into the final product.

I had a recording engineer at my home a week or so ago. He is successful, has two studios, and has strong opinions. We ended up listening to TIDAL streaming and music Blu-rays until 2 AM. He was very impressed and wants to come back for more. My home system is better than what he has in his control rooms or his home. My room looks more like an art gallery than a high-performance listening room. Good sound is possible mainly because the origin of that sound is properly designed loudspeakers. The sound-field-managed multiple subwoofers were a special bonus - smooth, deep, non-resonant bass. Bass management is not only for home-theater-in-a-box systems.

It is not a mystery how to get superb sound quality. Price does not correlate at all well, except that small inexpensive loudspeakers cannot play as loud as larger, more substantial ones. The information explaining the process is in the public domain. There are no excuses for less than excellent sound. I love science!

And then we have the whole topic of auto room EQ. Lots to talk about there, but a topic for another time and post…

As William Gibson said: "the future is already here, it is just not evenly distributed yet"
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Toole said:
The fact that listeners with normal hearing agree on what is good indicates that they are not only recognizing excellence, but are rejecting unnatural sounds.
This isn't a logical conclusion. He could have carried the experiment out in 1907, using 1907 equipment, and got a similar consensus based simply on what people were used to at the time.

In effect, his experiment restricts the possibilities to a small subset of systems and establishes a relative ranking of those possibilities based on listeners' subjective judgement. He finds that the listeners all tend to agree on the relative ranking. He then seeks to suggest this is proof of absolute objective excellence and naturalness.

Surely, establishing whether the listeners really are good judges of what is excellent and natural would be a whole different ball game, and not something that falls out of this experiment.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
The logic seems to work like this:

In the general population, there is a statistical tendency for listeners to prefer speakers that ‘measure well’.

Therefore all listeners who prefer speakers that ‘measure well’ are ‘better’ listeners.

Because better listeners prefer speakers that measure well, this shows that our measurements are indicative of absolute quality.

Therefore listeners can be trained to be better judges of quality by learning to maximise their score with speakers that measure well.

Which leads to the question: why bother with the listeners at all, when everything is designed simply to confirm that the original 'measure' is a direct indicator of quality?
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
And it might lead to this kind of thing:

Trained listeners are better able to pick out the speakers with better measurements in mono. Therefore all listening tests should be mono.

Using mono (we've established it’s best) we find that listeners are insensitive to phase, so we don’t need to worry about phase.

We develop our speakers to give us the best measurements (not worrying about phase).

We unveil our speakers as a stereo pair and play some music. Imaging resembles just three mono sources (“left and right and double-mono phantom center”) showing that the reality of imaging doesn't match some of the flowery audiophile descriptions we hear.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,696
Likes
37,434
This isn't a logical conclusion. He could have carried the experiment out in 1907, using 1907 equipment, and got a similar consensus based simply on what people were used to at the time.

In effect, his experiment restricts the possibilities to a small subset of systems and establishes a relative ranking of those possibilities based on listeners' subjective judgement. He finds that the listeners all tend to agree on the relative ranking. He then seeks to suggest this is proof of absolute objective excellence and naturalness.

Surely, establishing whether the listeners really are good judges of what is excellent and natural would be a whole different ball game, and not something that falls out of this experiment.

He does begin with this:

My entire lifetime's work began with a series of blind listening tests in 1966 at the National Research Council of Canada. It produced surprising results - people could reliably hear differences between loudspeakers, they generally agreed on which ones sounded good, and those that sounded good exhibited the best looking measurements.

So this was surprising, he says. Looking at what hifi listeners picked for speakers (even in 1966), there is a wide range of variability in what is favored. In fact some are so diametrically opposed you would think everyone has their own idea of what good is. Then came these surprising results: when the speakers were hidden from view, groups largely agreed upon which was better or worse. Further, the preferred designs also measured better. If you go back to this time, what could one have made of it differently that makes for better logic?

Now it does sound suspiciously circular to me. Yet I don't know of a better approach. At different times they tried smoother vs. uneven responses. Smoother is preferred. I believe they tried unbalanced responses, as in too much or too little bass (or treble). A balanced bandwidth was preferred. They tried wide vs. narrow dispersion, I think. So on and so forth.

One of my big complaints about modern audiophiles is they assume they will automatically choose higher quality gear when they hear it. That they innately know what good sound is. On this they have failed spectacularly. Yet what is written, and what is behind the research Dr. Toole has done, indicates that when blinded and not biased by non-sound-related factors, people as a group prefer better-measuring speakers. The particulars of the preferred directivity and off-axis response are the main things that are not obvious from this approach alone. Or if one has confirmed such an approach seems to make for the happiest listeners.

Now, I do own two speaker products of their research. I have found them just too good. So good for so little money that everyone else should be ashamed. If the marketplace weren't affected by something beyond just sound, their products should nearly sweep the field of all other lesser designs. Or have I been biased by knowing of their research? I am not sure I can answer that.

Is this similar to food company testing of human taste preferences? The great majority fall into a narrow range of some saltiness, sweetness, and fat content for mouth feel. All are most preferred at levels a bit too high for our good health. Yet such foods, one way or another, tend to sell very well. This doesn't seem all that far removed from Toole's loudspeaker results. Especially troubling is that the same preferences seem to hold even with studio recordings, which can never actually be a natural sound. Yet I find it hard to see my way clear as to what a better approach would be.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,696
Likes
37,434
The logic seems to work like this:

In the general population, there is a statistical tendency for listeners to prefer speakers that ‘measure well’.

Therefore all listeners who prefer speakers that ‘measure well’ are ‘better’ listeners.

Because better listeners prefer speakers that measure well, this shows that our measurements are indicative of absolute quality.

Therefore listeners can be trained to be better judges of quality by learning to maximise their score with speakers that measure well.

Which leads to the question: why bother with the listeners at all, when everything is designed simply to confirm that the original 'measure' is a direct indicator of quality?

I think you are compressing a whole body of work out of context.

There are articles about how some listeners were more consistent, and their judgements of quality covered a wider range. It seems very reasonable that those are better listeners than ones who rate speakers with more variability yet a narrower range of quality differences in their ratings. That seems to fit the definition of a more discerning listener. That such listeners proved upon testing to match more closely with measured speaker performance certainly would have caught the eye of anyone doing such research.

In fact, after proving at least to their own satisfaction the veracity of their approach, they claim to be able to measure disparate speakers and predict beforehand which will rank better or worse with listening panels. Which would allow them to design from measurements against their algorithms to achieve better performance. They then test them to serve as quality control and honesty control - to guard against "grading their own papers".
 

Jakob1863

Addicted to Fun and Learning
Joined
Jul 21, 2016
Messages
573
Likes
155
Location
Germany
I don't know if it's a typo, but the "we did (routinely) double-blind tests since 1966" is really surprising, as they were ahead by roughly 10-12 years yet nobody else really seems to have realized it; imo it couldn't have been such a "revolution" when Dan Shanefield published his articles about "double blind tests" around 1978. (Otherwise, Olson did it already in ~1954, but it did not become a regular practice.)

@ Cosmik,

that's why I wrote in the other thread that sound quality development is often a matter of majority decision. But everyone should be careful about drawing categorical conclusions without deeply analyzing the experimental conditions. One example was the famous loudspeaker shuffler in the Harman listening room, as it was obviously a mono setup device. (If I recall correctly, a newer Harman proposal expressed an approach to fix that, i.e. enhance it for two-channel stereo reproduction, but I don't know if it has been realized yet.)

Another question would be whether the comparison takes place at the optimal position for each loudspeaker or at a somewhat compromised one.
And obviously the overall level of quality could matter too; room dimensions and features are additional variables, as is the programme material.

I think I've cited it before - in the 1950s there was a debate over whether people would prefer full-bandwidth reproduction or the usual (at that time) version restricted to ~5-6 kHz. Experiments with reproduction systems were done, and the majority of listeners preferred the limited version.
Olson decided to do his famous experiment without any reproduction system at all, comparing an original live event without filtering against an acoustically filtered version. Now ~70% of the listeners preferred the original/unfiltered event (the 30% of listeners who voted for the restricted version is still a remarkable number, which I found really amazing).

Which leads to the question: why bother with the listeners at all, when everything is designed simply to confirm that the original 'measure' is a direct indicator of quality?

But how should you know at the beginning which kind of "original measure" marks the right direction to go, if you don't do listening experiments?
All you know for sure is that the departure from reality is substantial....
 

Brad

Active Member
Joined
Nov 8, 2016
Messages
114
Likes
35
One way to determine if a listener is any good is to measure whether their ratings are reproducible (i.e., what is the standard deviation of their ratings of the same event?).

This can help to break the circular argument above.
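The screening criterion above can be sketched directly: rank listeners by the spread of their repeated ratings of the same hidden speakers. The function and the rating data below are hypothetical, purely for illustration:

```python
import statistics

def rating_repeatability(ratings_by_item):
    """Mean within-item standard deviation of one listener's repeated ratings
    of the same hidden items; lower means a more reproducible listener."""
    sds = [statistics.stdev(r) for r in ratings_by_item.values()]
    return sum(sds) / len(sds)

# Hypothetical data: the same two hidden speakers each rated on three occasions
consistent = {"spk1": [7.0, 7.5, 7.2], "spk2": [4.0, 4.3, 4.1]}
erratic = {"spk1": [7.0, 3.5, 8.8], "spk2": [4.0, 8.1, 2.2]}
```

The consistent listener's repeatability score is a fraction of a rating point; the erratic one's is several points. Crucially, this metric says nothing about *which* speaker a listener should prefer, only whether their judgements are stable - which is what makes it a non-circular screen.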
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,880
Likes
16,666
Location
Monument, CO
Nit-picking the experts is always a popular pastime... I don't always agree with them (especially if one of them happens to be me) but also have to allow for personal bias and all that jazz. I greatly respect Dr. Toole's work and opinions though don't always follow them. He just joined AVS but said he is swamped writing his new book and such; maybe he'll join ASR at some point. My experience with "pros" on the trumpet forum I help moderate is very mixed; too many get turned off by the arm-chair experts and school players challenging every assertion. That can be interesting and useful to a point, but eventually most get tired of it. The other big factor is when some anonymous poster sets out to destroy someone online; most pros (heck, most of us, period) have worked hard to establish our credibility and reputations and it is not worth the risk of having it torn apart by some internet troll. NOT saying that is happening here, just an observation based on many years of forum participation and moderation (been with the trumpet forum since 1998 if that matters).

As for DBT, it has been around a long, long time... Heck, even the "new" ABX test was around in the 1950's: "The history of ABX testing and naming dates back to 1950 in a paper published by two Bell Labs researchers, W. A. Munson and Mark B. Gardner, titled Standardizing Auditory Tests.[1]" -- Wikipedia. As an engineer, it is both astounding and confounding to realize how many new electronic inventions were initially discovered using tubes in the 1930's...
 
OP

SPFC

Member
Joined
Mar 9, 2017
Messages
62
Likes
39
Location
My kind of town
Please pardon my ignorance if I am missing something here, but one thing that puzzles me about the Harman shuffler room is that by doing double-blind tests in mono, the speaker is in the middle of the room and away from side walls. Wouldn't this fact alone negatively influence the performance of speakers with wide horizontal dispersion characteristics, such as Harman's very own? Dr. Toole cites many times that some sidewall reflections are preferable to listeners.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,626
Location
Seattle Area
Therefore all listeners who prefer speakers that ‘measure well’ are ‘better’ listeners.
That's not what it says. He says that unexpectedly, what the largest population of listeners prefers is a speaker that is uncolored. What is uncolored is considered "good."
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
I think you are compressing a whole body of work out of context.

There are articles about how some listeners were more consistent, and their judgements of quality covered a wider range. It seems very reasonable that those are better listeners than ones who rate speakers with more variability yet a narrower range of quality differences in their ratings. That seems to fit the definition of a more discerning listener. That such listeners proved upon testing to match more closely with measured speaker performance certainly would have caught the eye of anyone doing such research.

In fact, after proving at least to their own satisfaction the veracity of their approach, they claim to be able to measure disparate speakers and predict beforehand which will rank better or worse with listening panels. Which would allow them to design from measurements against their algorithms to achieve better performance. They then test them to serve as quality control and honesty control - to guard against "grading their own papers".
I think it's still circular. There is no proof that people don't just respond to what they think a speaker should sound like, based on all the other speakers they have listened to. They could still form a consensus, and they could still be consistent while marking across a wide range.

Of course, I am sure he is mostly right about what makes a good speaker. Where I demur is at the (il)logical leap from consistency in subjective judgements to the automatic conclusion that absolute excellence has been achieved. It is easy to see how such a leap could result in false confidence in simplistic measurements, resulting in, for example, the issue of phase being dismissed as unimportant.

(And we will never know whether people lose any of their listening abilities when they know they are taking part in an experiment, of course!)

I suppose what offends me most is how primitive audio still is. A layperson might approach the subject thinking that because robots can do brain surgery and a high definition camera can fit on a pinhead, the audio world might be able to make a speaker cone move in a predefined way. But even after Dr. Toole's lifetime of science, we are told that the apogee of speaker design is a cone under loose control of an amplifier (after passing through some passive filter) and that its phase doesn't matter.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,626
Location
Seattle Area
Please pardon my ignorance if I am missing something here, but one thing that puzzles me about the Harman shuffler room is that by doing double-blind tests in mono, the speaker is in the middle of the room and away from side walls. Wouldn't this fact alone negatively influence the performance of speakers with wide horizontal dispersion characteristics, such as Harman's very own? Dr. Toole cites many times that some sidewall reflections are preferable to listeners.
They have multiple rooms. The smaller one is as you say. The larger one is multichannel and as such, keeps the speakers in their respective location:
[Attached images: the two Harman shuffler testing rooms - "Harman Shuffler Testing Room #2.PNG" and "Harman Shuffler Testing Room.PNG"]
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,626
Location
Seattle Area
Adding on, when you listen in the small room, in mono, what strikes you immediately and massively is the inherent sound of the speaker itself. For me it was the vocals, which were so different in each speaker. In that sense it is the property of the speaker, not the room, that is being measured.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,626
Location
Seattle Area
I don't know if it's a typo, but the "we did (routinely) double-blind tests since 1966" is really surprising, as they were ahead by roughly 10-12 years yet nobody else really seems to have realized it; imo it couldn't have been such a "revolution" when Dan Shanefield published his articles about "double blind tests" around 1978. (Otherwise, Olson did it already in ~1954, but it did not become a regular practice.)
Just for reference, ABX testing is older than that: https://en.wikipedia.org/wiki/ABX_test

The history of ABX testing and naming dates back to 1950 in a paper published by two Bell Labs researchers, W. A. Munson and Mark B. Gardner, titled Standardizing Auditory Tests.[1]

The purpose of the present paper is to describe a test procedure which has shown promise in this direction and to give descriptions of equipment which have been found helpful in minimizing the variability of the test results. The procedure, which we have called the “ABX” test, is a modification of the method of paired comparisons. An observer is presented with a time sequence of three signals for each judgment he is asked to make. During the first time interval he hears signal A, during the second, signal B, and finally signal X. His task is to indicate whether the sound heard during the X interval was more like that during the A interval or more like that during the B interval. For a threshold test, the A interval is quiet, the B interval is signal, and the X interval is either quiet or signal.
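The procedure described in the quoted passage maps naturally onto a few lines of code: X is drawn at random each trial, and the listener's only job is to say which of A or B it matched. A toy sketch - the function, the string "signals", and the judges are hypothetical stand-ins for real stimuli and listeners:

```python
import random

def run_abx_trials(signal_a, signal_b, judge, n_trials, seed=0):
    """One ABX session: each trial presents A, then B, then X (randomly A or B);
    judge(a, b, x) returns 'A' or 'B' for which interval X resembled.
    Returns the number of correct identifications."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        truth = rng.choice(["A", "B"])            # hidden assignment of X
        x = signal_a if truth == "A" else signal_b
        if judge(signal_a, signal_b, x) == truth:
            correct += 1
    return correct

# A judge who genuinely hears the difference identifies X every time...
perfect = lambda a, b, x: "A" if x == a else "B"
score = run_abx_trials("clean", "distorted", perfect, n_trials=16)
# ...while one who cannot hear it (or always answers "A") hovers near 50%.
```

Scoring well above chance over many trials is what demonstrates an audible difference; scoring near 50% is consistent with guessing.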
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,626
Location
Seattle Area
As for DBT, it has been around a long, long time... Heck, even the "new" ABX test was around in the 1950's: "The history of ABX testing and naming dates back to 1950 in a paper published by two Bell Labs researchers, W. A. Munson and Mark B. Gardner, titled Standardizing Auditory Tests.[1]" -- Wikipedia. As an engineer, it is both astounding and confounding to realize how many new electronic inventions were initially discovered using tubes in the 1930's...
As an aside, that part of Wikipedia page was added by me :). Erroneous credit was given to others decades later and I fixed that.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
cosmik said:
In the general population, there is a statistical tendency for listeners to prefer speakers that ‘measure well’.
Therefore all listeners who prefer speakers that ‘measure well’ are ‘better’ listeners.
That's not what it says. He says that unexpectedly, what the largest population of listeners prefers is a speaker that is uncolored. What is uncolored is considered "good."
Not sure what you are saying there. A Harman AES paper says:
Special attention to the training and selection of listeners and program material can lead to more reliable and efficient listening test measurements. To this end, the authors developed a self-administered, computer-automated training program designed to improve a listener's ability to reliably identify and rate different types of spectral distortions added to different program material. The training has identified significant differences among listeners in their abilities to reliably identify and rate these distortions. ... This information can form an objective basis for selecting the most reliable and skilled listeners...
Pretty much what I said, I think. Listeners are considered 'better' if they are able to rate sounds (i.e. show a preference in the right order) on the basis of 'a measure'.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,626
Location
Seattle Area
Well, that's the credibility of Don's wiki quote obliterated :D
Not yet. When I go and change the page that talks about the history of the Internet and put in there that I invented it, then maybe, just maybe, you can say that. :D
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,595
Likes
239,626
Location
Seattle Area
Pretty much what I said, I think. Listeners are considered 'better' if they are able to rate sounds on the basis of 'a measure'.
No. The training involves identifying what part of the frequency response is wrong in a speaker (i.e. colored). It does not create a preference for a sound. Just take the test and you will see.

As they have studied and shown, overall preference has nothing to do with training:

[Attached chart: "Harman Trained vs Untrained.png" - preference ratings of trained vs. untrained listeners]


Indeed when I took the test with my bit of training with their software, my vote agreed with all the other high-end dealers in the room who had no training. There was one person who said otherwise and the poor guy worked for Harman! :)
 