
A Broad Discussion of Speakers with Major Audio Luminaries

To understand how the double-blind ratings were affected by the reflections in Harman's listening room, the size of the room and the characteristics of the walls, floor and ceiling are important variables.
At the listening position, which reflections were dominant? How much were they attenuated and delayed? Did they have a frequency response similar to the direct sound?
My guess, from the images I have seen, is that the lateral reflections were dominant, delayed by about 20 ms and attenuated by about 8 dB, with a frequency response similar to the direct sound.
Do you think it is possible to extrapolate, from an individual speaker's spinorama data and the relevant data for a given room, via a custom algorithm, how that specific speaker will sound in that specific room?
With a few more double-blind studies of how different variants of reflections are perceived with respect to angle, attenuation and delay, I predict that this should be possible.
Maybe I am overoptimistic.
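As a toy illustration of the kind of prediction being asked about (a sketch only: the 20 ms delay and 8 dB attenuation are the guess above, not measured Harman data), a single delayed, attenuated reflection combines with the direct sound as a comb filter whose magnitude can be computed directly:

```python
import cmath
import math

def combined_magnitude_db(freq_hz, delay_s=0.020, attenuation_db=8.0):
    """Magnitude (dB) of the direct sound plus one delayed, attenuated
    reflection: H(f) = 1 + a * exp(-j * 2*pi * f * tau)."""
    a = 10 ** (-attenuation_db / 20)  # reflection gain, linear (~0.40 for -8 dB)
    h = 1 + a * cmath.exp(-2j * math.pi * freq_hz * delay_s)
    return 20 * math.log10(abs(h))

# A 20 ms delay puts comb peaks at multiples of 50 Hz, with dips in between.
print(round(combined_magnitude_db(50.0), 1))  # constructive: 2.9
print(round(combined_magnitude_db(25.0), 1))  # destructive: -4.4
```

A real room model would sum many such reflections, each weighted by the speaker's off-axis response from the spinorama at the relevant angle, which is roughly what the question proposes.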
I think you are wildly overoptimistic. The basic research, and the model under discussion, apply only to sound quality. About 30% of that is bass quality, and that is dominated by individual listening rooms, and requires on-site evaluation and correction. In the tests the room was a constant factor, in life it is a variable. In terms of spatial qualities, loudspeaker directivity and room reflectivity matter significantly, and program/recording interactions are to be expected. In many respects, the recordings dominate. You need to read my book.
 
I can't find it now, but there was a video on YouTube at one time where some guys went through a lot of effort to set up a double-blind carousel with monitor speakers that would randomly move into position and play behind a blind. The thing I thought funny was that after a while the guys found the testing to be torture. All the speakers started to sound horrible. The guy was laughing about how hard it was to pick the least bad-sounding speaker. I found the same problem when trying to compare active crossover settings on my speakers, which I could do semi-blind by not looking at the computer screen and just rapidly clicking the button to toggle through different settings. I definitely didn't know for sure which I was listening to, and the more I listened and compared, the worse everything sounded to me.
 
Brain Overdrive?
Too much information at a time can cause fatigue (or even a migraine at worst).
 
Are you really pecking these novellas out on an iPhone? If so, I have equal amounts of admiration and antipathy towards you.

I certainly didn’t mean to inspire animosity.
Please consider using the block function to spare yourself further distress.
 
I don't think anyone should underestimate the listening skills of many great studio engineers. The abnormalities they can hear and, within seconds, know exactly how to address in the details of a mix if they are mixing engineers, or in the overall balance of the sound if they are mastering engineers, are nothing short of remarkable.

Yes, I pointed it out before too.

I’ve always been amazed by some of the excellent mixers I worked with in the studio.
Their ability to quickly zero in on an issue and know exactly what to tweak to fix it - eq or whatever - is extremely impressive. They are working under lots of time pressure, especially when we are doing evaluations with the clients in the room, and their skill often blows me away.

I would say that detecting speaker tonality errors is child's play in comparison to what is done in a music studio, but don't get me wrong, I have all the respect in the world for the "trained listener" group at Harman and what they do. :)

Yes, in one sense, what the mixers are able to do makes the narrower scope of detecting speaker differences "child's play."

On the other hand, I seem to remember that sound engineers of some type have been included in the blind speaker tests and didn't necessarily perform any better than, for instance, the listeners trained to detect speaker differences. That's what I seem to remember; I don't know if I'm wrong about that.

If that’s the case, perhaps we’re talking about the specific type of skills one develops - like the difference between a sprinter and a long-distance runner. Perhaps a specific set of skills in terms of what POST PRODUCTION mixers develop may or may not directly transfer to greater success in the speaker blind tests. (???)
 
I don't think anyone should underestimate the listening skills of many great studio engineers.
Listening skills related to their job, sure. Listening skills when it comes to fidelity matters, no. Let me give you a very specific and public example.

Some 15 years ago on AVS Forum, someone posted a set of files to see if people could blindly identify compression artifacts. From memory, there was a set of four files. I listened to them and, with all my skill in hearing compression artifacts, I could not identify any difference between two of the files. So I voted that way. Voting involved sending a message to the person conducting the blind test. His answer to me: "there were people who did worse," or something to that effect!

Testing finished after a ton of people had voted blindly that way. After a while, the person who set up the test revealed the results, showing the two files that I thought were identical to be different. Incredulous, I performed a binary comparison, which showed the files to be identical. I posted that, and at first the test conductor didn't believe what I was showing. Then he went and checked and found out he had uploaded the same file twice by accident!

Meanwhile, we had a very vocal member who was a major mixer of movie sound, including music. He had voted those two files as different! He was extremely upset and almost refused to accept the truth. But the truth was the truth; the test conductor put up a note that the test was invalid, and that was that.

When I was at Microsoft, we would routinely conduct large-scale tests, hoping to find listeners who could uncover compression artifacts better than our trained team and I could. At no time did we ever find anyone who was remotely as good as our trained personnel. They would fail miserably and be no better than the general public.

I myself had little ability to tell small impairments until I put myself through rigorous training. After six months of that, I could hear differences like it was child's play, and couldn't believe others could not. Decades of being an engineer didn't help. What helped was a) that training and b) a true understanding of what the impairments were and, as a result, how to listen for them.

Another story. :) Some audiophiles insist that musicians or audiophiles who play music have better hearing when it comes to determining fidelity. This too is false.

My piano teacher was sent two electric pianos to test. I told him it would be a good angle to have his students evaluate them instead of him, as a pro, since these were very cheap pianos. I first sat at the cheapest piano. Instantly I was bothered by the thumping sound the keys were making. He hadn't noticed it, or not as much. I turned off the piano and then played the same keys. All of a sudden he could hear it much better, and while not as bothered by it as me, that bit of training did help him.

Next was the more expensive piano. I hit one or two keys and was instantly bothered by the speakers that were below the piano, firing downward! It was so unnatural to me that it was hard for me to play on it, even though it was a good electric piano. Again, I had to point this out to him, with him saying he hadn't realized it until I mentioned it.

Bottom line: none of these people have, as their job, detecting impairments in fidelity. They need to be trained and taught what the fidelity impairments are. You do this by teaching them what could be wrong and exposing them to many instances of elevated error. Over time, you reduce the degree of error and continue. This is how Harman's "How to Listen" software worked. When I first listened, I too failed at level 2 or 3. But with a bit of training, I reached levels 6 and 7. Despite my training in other areas of fidelity, hearing and determining the nature of tonality errors took a bit of work.
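The training scheme described above (expose the listener to a large, obvious error, then shrink it as they succeed) is essentially an adaptive staircase. A minimal sketch with a simulated listener standing in for a real one; the 2 dB step and 3 dB detection threshold are illustrative values, not Harman's actual parameters:

```python
import random

def run_staircase(threshold_db=3.0, start_db=12.0, step_db=2.0,
                  trials=40, seed=1):
    """1-up/1-down staircase: shrink the tonal error after each correct
    detection, grow it after each miss, settling near the listener's
    detection threshold."""
    rng = random.Random(seed)
    level = start_db
    for _ in range(trials):
        # Simulated listener: reliably hears errors at or above the
        # threshold, and merely guesses (50/50) below it.
        correct = level >= threshold_db or rng.random() < 0.5
        level = max(0.0, level - step_db) if correct else level + step_db
    return level

print(run_staircase())  # ends at a low level near the simulated threshold
```

A staircase like this settles around the level the trainee can just barely detect, which is one way training software can ratchet difficulty per listener.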

So it is for good reason that I dismiss appeals to authority, as I stated. Come with receipts: show me these experts in controlled studies where we know the answer, and let's see how they do. Otherwise, their opinion is best ignored if you want something reliable.
 
I don't think anyone should underestimate the listening skills of many great studio engineers.
Agree with that 100%. Recording/mixing engineers don't listen to speakers for preference or to make a consumer purchase decision. Speakers are tools to do what they do, and that doesn't correlate, at all, with the perception of a given speaker in a given room.

They are trained listeners in the art of recording and the art of mixing. They will use whatever they need for their art: near-fields, the large (normally horn-loaded) studio monitors, or cans.

You are absolutely right: what they have to listen for is radically different from what a consumer is listening to. They can hear a track and tell you the size of the kick drum, or the microphone, and a lot of other things. They are listening to individual tracks to determine if they need to correct for phase, reverb, EQ, limiting, compression, de-essing and a hundred other things, and they can hear things that we mere mortals would never hear unless they were pointed out to us.

Their ears are "trained" in that they have a "reference" in what they are doing. The reference for mixing is the raw track vs. the new track, the before and after. Then, after the individual tracks are dealt with, they start a mix: adjust, listen, adjust, listen. For recording, it's setting up the microphones, selecting the microphones, recording a test, playing back, adjusting, playing back, adjusting.

They all, at least the ones I know, "tune" their monitors to their taste anyway - EQ, phase - so they can do their job.

The way they listen, and what they are listening to, is completely different from what someone does at home listening to playback of a final product through their speakers, or trying to compare one set with another, by memory, side by side, or any other way.
 
Some features such as tonality/coloration/resonances can be evaluated in mono, but those three words are not synonymous with "everything". Dr. Toole and his loyal parrots have not talked about much else, but the interpretations have slipped wider than perhaps intended. So let's quickly repeat what has been claimed. And keep in mind that "everything" is a bigger blob than the most easily heard and significant feature in speaker comparisons:
Only yours have slipped wider, and I don’t parrot anything.
Perhaps a specific set of skills in terms of what POST PRODUCTION mixers develop may or may not directly transfer to greater success in the speaker blind tests. (???)
Their job isn’t to be able to pick out what speaker would sound good in a home, stereo or mono; their skills have nothing to do with speaker tests, blind or sighted.

What they would be great at is selecting microphones in blind testing.

The better analogy is making the sausage and taste-testing it at various points in the process, adjusting until it's "right", vs. the consumer tasting the finished product.
 
I don't think anyone should underestimate the listening skills of many great studio engineers.
Not so fast. Compared to the general public, studio engineers have a much greater incidence of hearing loss. Toole calls it a byproduct of the profession. And we are talking about damage, not decline. And it has been shown that such damage pretty much turns one into a random opinion generator when it comes to sound quality. The exact opposite of your claim.

So when your "great studio engineer" gives an opinion, it's best not to cite him or her unless we also get to see a current audiogram. Which never happens, so.....

Which is why I reckon that recording engineers have a professional duty to implement an audience review process to assess their product (with pre-release options for the audience to choose from). Hey, how about a trained listening panel of the sort you gave a backhanded compliment to above? Perfect, as long as they are general public and not musicians or studio professionals, who suffer both professional hearing damage and have been shown to have an abnormal preference with regard to reverberation (which means they are not right for adjudicating sound of products that are for general consumption). Also not marketing staff, who apparently just send good productions back to the engineers with one command: "Louder! Make it Louder!"

Yep, the above would be a huge step forward for the profession (and would probably have aborted the Loudness War at birth). But they refuse. There are sensible objections like commercial imperatives and management imperatives to produce recordings quickly, efficiently and cheaply, but there is also hubris. Perhaps their opinion of themselves is much like your opinion of them. Not to mention that it would be professional suicide to admit or even hint that their ability to assess sound quality might be shot, and even if it isn't, they probably have abnormal listening preferences. That's a problem. A big problem.

So hey, maybe the 'trained listener group' is actually the best option.

cheers
 
Huh? What do the last 20 years have to do with it? Fletcher-Munson equal-loudness tests were performed some 80 years ago. Are you going to dismiss that and ask for something new? To what end? Has human hearing changed?

As Dr. Toole showed, and I have said many times, the foundation of much of his research dates from when he was part of Canada's independent National Research Council (NRC). Even when it comes to the work at Harman, it has superb credibility. Certainly far, far more than your appeals to this or that person with zero published data or research to invalidate the work we are talking about.

It is your job to come up with studies that counter what has been shown to be true in study after study by Dr. Toole's team. That you don't have this research ends any argument you have. It is not our job to keep coming up with more, just to have you refuse it with "my mixing friends say the opposite." Your mixing friends need to start reading and learning the research and putting it into practice.
My post above also contains some info about 'his mixing friends' as a cohort. ;)

cheers
 
[to Matt] You have outdone yourself with this post. This must be the longest ever... :eek:
Dante needs to be resuscitated so he can create a special circle for you in the Inferno. :D
Then and only then will Matt know how it feels....
 
Your mixing friends need to start reading and learning the research and put it to practice.
I don't even want to take the argument apart. My opinion is that the mixing engineers and other personnel should adhere to the standard. Only what sounds good on standardized playback systems will also sound good to the consumer. The standard is the reference, plain and simple.

Then the engineer doesn't have to worry about the justification of the standard in the literature, or even be trained on it. In the end, the artistic design of a recording remains free; the standard sets no limits here. Imagine if another standard stipulated that loudspeakers had to have a 10 dB boost with Q = 1 at 1 kHz - all loudspeakers, in the studio, at home, in the car, etc. And because all playback sounds the same, no problem! Typically, the mixer would then simply attenuate that range on the audio track by the same amount. Maybe it's too simple and straightforward to understand.
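The thought experiment can be checked numerically: with a standard peaking EQ (the widely used RBJ "Audio EQ Cookbook" formulas; the 48 kHz sample rate here is an arbitrary choice), a cut with the same centre frequency and Q and negated gain is the exact inverse of the boost, so the cascade measures flat:

```python
import cmath
import math

def peaking_biquad(f0_hz, q, gain_db, fs_hz=48000.0):
    """Peaking EQ coefficients (b, a) per the RBJ 'Audio EQ Cookbook'."""
    big_a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0_hz / fs_hz
    alpha = math.sin(w0) / (2 * q)
    b = (1 + alpha * big_a, -2 * math.cos(w0), 1 - alpha * big_a)
    a = (1 + alpha / big_a, -2 * math.cos(w0), 1 - alpha / big_a)
    return b, a

def magnitude_db(b, a, f_hz, fs_hz=48000.0):
    """Evaluate |H| in dB at one frequency on the unit circle."""
    z = cmath.exp(2j * math.pi * f_hz / fs_hz)  # dividing by z gives z^-1 terms
    num = b[0] + b[1] / z + b[2] / z ** 2
    den = a[0] + a[1] / z + a[2] / z ** 2
    return 20 * math.log10(abs(num / den))

boost = peaking_biquad(1000.0, 1.0, +10.0)  # the hypothetical speaker boost
cut = peaking_biquad(1000.0, 1.0, -10.0)    # the mixer's compensating cut
for f in (250.0, 1000.0, 4000.0):
    total = magnitude_db(*boost, f) + magnitude_db(*cut, f)
    assert abs(total) < 1e-9  # the cascade is flat to within rounding
print("boost + cut measures flat")
```

With these formulas the cut's numerator and denominator are exactly the boost's swapped, so the cancellation holds at every frequency, which is why the "all playback sounds the same" premise makes the hypothetical harmless.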

(The fact is that the current standard, which is checked for compliance with mono tests, was derived from listening to existing recordings. And those come from the hands of the very mixers in question.)
 
Not so fast. Compared to the general public, studio engineers have a much greater incidence of hearing loss.

As I said, I do have respect for the "trained listener group" at Harman who have gone through the learning process of identifying problems in loudspeakers' performance. What I mean is that, if some great mixing/mastering engineers were given the same task of identifying the same issues, and before that are given the same opportunity as the "trained listeners" to hear the specific programme testing material, and what that is supposed to sound like, I can assure you they would also be able to pinpoint the problems in a loudspeaker compared to another, and probably even faster than the Harman "trained listener group". They are used to identifying even smaller issues in their profession, and most of them will, within seconds, identify the problem without even looking at a test protocol.

The "loudness war" is not the mixing and mastering engineers' fault. They just do their job when asked by the record company "to make it louder"; otherwise, the job will just be taken to the next mastering engineer willing to do it. It's strange to use this as an argument that there is something wrong with their ability to hear. :)

I say it again, don't underestimate the professionals who make the programme material you so dearly optimize your reproduction sound system for. Most of it sounds tonally balanced to me. If you don't think that, what are you even balancing your system for if it's just "random opinion generators" who made the records you love to listen to?
 
What I mean is that, if some great mixing/mastering engineers were given the same task of identifying the same issues, and before that are given the same opportunity as the "trained listeners" to hear the specific programme testing material, and what that is supposed to sound like, I can assure you they would also be able to pinpoint the problems in a loudspeaker compared to another, and probably even faster than the Harman "trained listener group".
I agree - assuming their hearing was not compromised and they had gone through the months of training they should be as effective, if not more so, than other trained individuals.

The point is that they have not been so trained. They have been trained on studio work and mixing and have years of experience of recording. But their training and experience does not include being trained on identifying the limits of "engineering" and associated artefacts.
 
I agree - assuming their hearing was not compromised and they had gone through the months of training, they should be as effective, if not more so, than other trained individuals.
Sorry, maybe it was me who introduced the mixing engineers, sound engineers, etc. as a reference.

The line of thought goes like this:
  • We have developed a standard (goal: linear, etc.) that is based on the conditions with previous recordings.
  • We want to develop loudspeakers that come as close as possible to this ideal.
  • So we want to measure the deviations from the ideal, such as remaining resonances.
  • To do this, we can build technical instruments, such as the NFS, which quantify the parameters by numbers.
  • We recognize that quantification is difficult with regard to human perception.
  • That's why we are looking for an assessment by humans that at least indicates better or worse - not necessarily ‘by how much’.
  • We see that people can be trained for the task at hand.
  • Furthermore, this task seems to be easier to fulfill if the loudspeaker is presented as a single piece, mono.
  • So far I don't think there's any contradiction?
On the one hand, how valuable is it to first have to offer training to the targeted customers so that they can even perceive the possible improvements? On the other hand, should the difference only become particularly apparent when the product, the loudspeaker, is used differently than intended, as a single piece?

I contrast this with the idea that the producers of the recordings judge whether a particular loudspeaker model conveys their intentions correctly. This would mean that the very people who are supposed to know exactly how the recording should sound are the ones evaluating the loudspeakers. It may turn out that even small differences from their studio loudspeakers have a big impact. And because you have the people right there, you can measure this. Or those guys would say, hey, there's a flaw in my work, thanks for the better speakers that reveal it!

Maybe you'll find evaluation criteria that go beyond linearity and directivity? Who wants to predict that? Or it's not all that bad, and this or that deviation is insignificant. The latter would appear to me to be the case.

Again, sorry for the distraction. I personally would rather accept the standard, and would for my very personal reasons better not question its foundation ;-)

To be clear on that: we need the standard, and the recording industry had better stick to it.
 
The point is that they have not been so trained. They have been trained on studio work and mixing and have years of experience of recording.

I would say the audio engineers have already been "trained" for years doing the same task, most likely for far longer than those "trained listeners" at Harman. It just happens that the test subject has changed: from judging the tonal balance of the programme material to judging the tonal balance of a loudspeaker. In reality, it's still the same task, just the opposite way around.

If we assume that the research Mr. Toole and his team did at Harman is correct, that most people will have a preference for a tonally flat-measuring loudspeaker when listening to well-known, well-balanced sounding reference material, then we should also assume that's also the case for the professional individuals working as audio engineers.

So with that "out of the way", audio engineers who have already worked for many years don't need more training. If well-known, well-balanced-sounding reference material doesn't sound correct on one loudspeaker compared to another, they will definitely hear the problem, and possibly be able to identify it even faster, as that's what they have been "trained" for throughout their whole career. I mean, a working professional has done this for maybe 10-20 years, 5 days a week, 10 hours a day. How much training did the average "trained listener" at Harman get before they got the title?


I just hope no one is misunderstanding what I'm trying to say here. :)
I fully trust the listening skills of the "trained listeners" at Harman. I just don't think people should underestimate the listening skills of professional audio engineers, as many of them can judge things in audio that most of us could just dream of being able to do (including the average listener at Harman).
 
I have no idea of their listening ability but looking at the threads on Gearslutz the technical knowledge is low and the subjective anecdote high.
Keith
 
I'll check with Sean Olive, but I don't know of any such tests. The correlations resulting from his analysis reassured us that the measurements being done were useful indicators of perceived loudspeaker performance.

We have never used the model as a universal predictor of sound quality. Others have done so in spite of our objections, and results are often misleading. We relied only on informed interpretations of the raw spinorama data and the results of double-blind subjective evaluations - the reference standard. Models are "never" perfect. But that all is now history--the research group at Harman no longer exists. The 4th edition of my book will document that and the preceding NRCC research eras.
That's sad. One can only hope another picks up the ball.
 
Then and only then will Matt know how it feels....

Nice as always. Easier than engaging the argument.

Whatever circle of hell is reserved for those who produce sincere but long-winded posts, I suspect a still lower circle is reserved for trolls. ;)
 
I just hope no one is misunderstanding what I'm trying to say here. :)
No, it is clear. You value unproven intuition over real data and research. You have no need for any proof. Your argument could very well be used by audio reviewers, and we know how that intuition turned out. We are not the forum for you if mere unproven intuition trumps real knowledge.
 