
"Bias" of some members towards headphone measurements?

Rock music can sound harsh or shrill on speakers with an emphasized mid-to-high frequency response. On the other hand, the same speakers might make genres like classical or jazz seem more "detailed": the kind of "showroom sound" some speakers are designed to produce. In my experience, these types of speakers are terrible for rock music.

When I was younger, I had a pair of Cerwin Vega "monkey coffin" speakers. I don't think they were the AT100 model; it was something like the CD series. I’ll have to look up some images online to jog my memory, but I digress. They had the typical Cerwin Vega setup: a 15" woofer for the bass, a 5-6" midrange driver, and a horn-loaded tweeter for the highs.

To my younger ears, they sounded amazing with loud rock music. However, even then I had a diverse taste in music, listening to everything from solo artists and acoustic guitar to country. Unfortunately, those speakers didn't handle those genres well at all. I ended up moving them to a friend's house; he only listened to loud rock and some techno.

Someone reading this might mistakenly think that you need a specific speaker for each music genre, but that's not true at all. What you really need is a well-balanced, well-behaved, and well-built speaker that has near full-range or full-range capabilities and can handle high power or has high sensitivity. This is what I focused on during my adult years. While the speakers I chose years ago may not be the sleek, unobtrusive designs my wife envisioned, they are the ones we both agree sound fantastic across all musical genres.
 
Thank you, @NTK!

I notice that it says here, "While no single song is sufficient to fully evaluate a loudspeaker or headphone, certain tracks are particularly well-suited for the evaluation of specific attributes. For example, tracks with broadband, spectrally dense instrumentation are used for spectral judgements, while dynamic tracks with percussion and low bass are used for testing dynamic range and distortion."

I can see that the choice of recordings goes far beyond petty affection for style or genre.
Yeah, the assertion that rock was solely used is totally wrong. :mad: In fact the effect of program material on preference has been studied, contrary to what Preload keeps saying. Sean Olive has published multiple papers, for instance:
A Method For Training Listeners and Selecting Program Material For Listening Tests
97th AES Convention (November 1994)

Sean Olive's summary from his blog:
"programs with continuous broadband spectra (e.g. pink noise, Tracy Chapman, etc.) provide the best signals for characterizing spectral distortions whereas programs with narrow band spectra (e.g. speech, solo instruments) provide poor signals for performing this task."
And it's not rock, it's music with continuous broadband spectra that aids identification of preference in a statistical study (Fast Car by Tracy Chapman and Stars & Stripes performed by Cleveland Symphonic Winds topped the list). So does using trained listeners. Note that while program material is the single largest factor in listeners of all types being able to identify preference, it didn't prevent preference from being determined with less optimal program material. Same for trained vs. untrained listeners. Untrained can still identify preference, just with less efficacy.
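A toy sketch of that point in Python (my own illustration; the band-coverage metric, thresholds, and signals here are assumptions, not anything from Olive's papers): a broadband, noise-like signal puts energy into nearly every frequency band, so a spectral error anywhere in a speaker's response gets exercised, while a narrowband signal (a lone tone standing in for a solo instrument) only probes one band.

```python
# Hypothetical illustration: compare how much of the spectrum two signals
# actually exercise. Broadband "music" covers nearly all bands; a narrowband
# "solo" covers one. The metric and -30 dB floor are assumed example values.
import cmath
import math
import random

def dft_mag(x):
    """Naive DFT magnitude spectrum (fine for short illustrative signals)."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def band_coverage(x, n_bands=8, floor_db=-30.0):
    """Fraction of log-spaced bands whose energy is within floor_db of the
    loudest band. Higher coverage = more of the spectrum is exercised."""
    mag = dft_mag(x)
    n = len(mag)
    edges = [int(2 ** (i * math.log2(n) / n_bands)) for i in range(n_bands + 1)]
    energies = []
    for lo, hi in zip(edges, edges[1:]):
        hi = max(hi, lo + 1)  # guard against empty bands at the low end
        energies.append(sum(m * m for m in mag[lo:hi]))
    peak = max(energies)
    return sum(1 for e in energies
               if e > 0 and 10 * math.log10(e / peak) > floor_db) / n_bands

random.seed(0)
n = 256
broadband = [random.gauss(0, 1) for _ in range(n)]             # noise-like "music"
tone = [math.sin(2 * math.pi * 20 * t / n) for t in range(n)]  # narrowband "solo"

print(band_coverage(broadband), band_coverage(tone))
```

The broadband signal lights up essentially every band, while the tone leaves most bands silent, which is the intuition behind why continuous broadband spectra are better at revealing spectral distortions.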

I am having a difficult time with people coming here and just making stuff up. What Preload said is wrong, a fabrication, or just an error of ignorance.

Edit: typos
 
Sorry, but you clearly have not read or understood the Harman research. Let me explain briefly, and you're welcome to verify it for yourself (and then apologize).
The 1994 paper "A Method for Training Listeners" that you are attempting to cite is the Harman paper that tested the ability of listeners to reliably identify spectral distortions that had been introduced into different musical genres. It can be considered preliminary work that informed the use of rock in the subsequent research that ultimately correlated loudspeaker measurements with listener preferences.

The two seminal Harman papers (AES 116th 2004, AES 117th 2004) that definitively linked loudspeaker measurements and listener preferences were:
A Multiple Regression Model for Predicting Loudspeaker Preference Using Objective Measurements, Parts I and II.

The tracks used in this research were reported on page 5, table 2:
[image attachment: Table 2 from the paper]

In case you're not familiar with these tracks, here is the music genre for each:
Artist | Track | Music Genre
James Taylor | That's Why I'm Here | Folk rock
Little Feat | Hangin' On to the Good Times | Blues rock
Tracy Chapman | Fast Car | Folk/Blues rock
Jennifer Warnes | Bird on a Wire | Country rock

I have bolded the word "rock" so you can find it easily.
 
Cool. Yes, I understand that study too. Probably not worth discussing so narrowly anymore. Thanks.
 
You seem to be saying that even though the published Harman research correlating listener preferences with measurements was performed only on rock music, the results should be generalizable "without impairment" to other musical genres, such as rap/hip-hop. If that's the case, then it's difficult for me to continue the conversation, because we're not applying the same accepted principles of scientific research interpretation. I don't want to waste your time.
I think that depends on what you think drives the speaker preferences that vary with the music played on them.

In my view, the more frequencies are excited during a test the more you'll learn about the speaker, so to that end rock tells you everything any other genre could. Along the same lines I think you (theoretically) get the best idea of a speaker's FR by listening to pink noise or similar.

But it is hard to express a preference about how pink noise sounds. Which is an interesting thought, and which tells us that probably flat FR is the most natural-sounding tuning in the end.

Of course this only makes sense in a world where speakers with smooth / flat FR are actually better.

To that end, I would just observe that

1) I'm not aware of any distinct trends in speaker preference, other than subs vs. no subs, that are related to music genre.

2) Flat FR was highly valued in critical listening even before the Harman stuff came out.
 
How might Speakers A and B fare with rock vs rap/hip-hop? Same relative ranking? Or different?

You could find some rap song where your 15" bat curve speakers perform well, but with a broader selection of rap music you'll start noticing the problems with that speaker and you'll rank it lower. Not all rap music is just bass and a vocal.

I know different people who are into rap or house music and got rid of their Beats headphones once the effect of their impressive bass wore off. They're better off with headphones that are capable of reproducing strong bass without distortion, but not necessarily pushing it that much.
 
You could find some rap song where your 15" bat curve speakers perform well, but with a broader selection of rap music you'll start noticing the problems with that speaker and you'll rank it lower. Not all rap music is just bass and a vocal.
I believe the question is valid.

Certain frequency responses tend to favor specific genres. While that may not hold for a long-term relationship with the speakers, it certainly applies to short-term preferences.
The same technique is used at audio shows and in showrooms to highlight speakers with more "detail" and "clarity." I wouldn’t be surprised if many people have regrets about those later on as well.
 
Certain frequency responses tend to favor specific genres. While that may not hold for a long-term relationship with the speakers, it certainly applies to short-term preferences.

Isn't finding audio equipment that you can enjoy in the long term the main reason people are on audio forums, and what the research focuses on?
 
2) Flat FR was highly valued in critical listening even before the Harman stuff came out.
You're right that flat FR was highly valued pre-Harman by so-called audio "experts," except that they were wrong. They reasoned that it was the sound power measurement that should be flat. Of course, people just assumed that "flat" was desirable, and nobody bothered to validate this theory against actual listening tests... until Harman. It was Harman research that demonstrated that a downsloping sound pressure FR curve was preferred.
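For concreteness, a downsloping in-room target can be sketched like this (a hypothetical illustration; the -1 dB/octave slope is an assumed example value, not the published Harman figure):

```python
# Hypothetical illustration of a gently downsloping in-room target curve,
# expressed relative to 0 dB at a 1 kHz reference. The slope value is an
# assumed example, not the published Harman number.
import math

def target_db(freq_hz, slope_db_per_octave=-1.0, ref_hz=1000.0):
    """Target level (dB) at freq_hz, relative to 0 dB at ref_hz."""
    return slope_db_per_octave * math.log2(freq_hz / ref_hz)

# The target is above reference in the bass and below it in the treble,
# i.e. a tilt downward with increasing frequency.
for f in (20, 100, 1000, 10000, 20000):
    print(f, round(target_db(f), 2))
```

The point is simply that "preferred" here means a smooth tilt, not a horizontal line: the same smoothness requirement applies, but the level falls steadily with frequency.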
[P.S. I read your other comments, but am choosing not to respond.]
 
You could find some rap song where your 15" bat curve speakers perform well, but with a broader selection of rap music you'll start noticing the problems with that speaker and you'll rank it lower. Not all rap music is just bass and a vocal.

Maybe, maybe not. But surely you can see that listener-rated loudspeaker preferences may differ when the test music is rap/hip-hop rather than rock. Or maybe you can't. Fine with me.
 
In my view, the more frequencies are excited during a test the more you'll learn about the speaker, so to that end rock tells you everything any other genre could. Along the same lines I think you (theoretically) get the best idea of a speaker's FR by listening to pink noise or similar.

But it is hard to express a preference about how pink noise sounds. Which is an interesting thought, and which tells us that probably flat FR is the most natural-sounding tuning in the end.
I use, for example, Marduk's track "Life's Emblem". It's quite far from traditional rock and closer to pink noise, but the same idea applies. Very dense, and the distortion is the icing on the cake. Differences between speakers always show up within a five-second listen, very clear and instant. It's especially good for crossover-region problems, discontinuities, and of course spikes in the high mids. It's not good for lowest-octave energy, etc., but I can always use a sub if the rest is OK, so maximizing bass output is not on my list anyway.
It helps if you happen to like the track, but in a technical sense it's very good anyway. I don't listen to how realistic the woodwinds sound; I listen to how evenly the distortion is spread. So while this is firmly rooted in preference, it has certain technical advantages, because while the problems may sound less glaring with hip-hop, they are still there.

Edit: For example, the KEF LS50 Meta handles this very cleanly, so no, the track is not "broken".
 
You're right that flat FR was highly valued pre-Harman by so-called audio "experts," except that they were wrong. They reasoned that it was the sound power measurement that should be flat.

More than 50 years ago, studies were done in which the response of speakers in different studios was measured. That was before Toole got involved. Averaging those measurements shows a downward slope.
 
More than 50 years ago, studies were done in which the response of speakers in different studios was measured. That was before Toole got involved. Averaging those measurements shows a downward slope.
Perhaps, but if you read the introductory statements in Olive's papers, it's clear that the purpose of their research was to disprove the prevailing loudspeaker models of the time, in particular the target curve of a flat sound power response used to review/rank speakers by the Consumers Union. This is also apparently why Olive performed regression analysis on the sound power curve: specifically to demonstrate that it was not a flat horizontal curve that was preferred, but rather a downslope. I'm not sure how to reconcile Olive's published work with your statement.
 
Welp, it's been a great discussion. From my perspective, it does seem that the ASR community continues to have a bias towards measurable parameters, as proposed by the OP. I sense a particular discomfort in accepting (let alone exploring) the possibility that measured parameters are not as predictive of speaker/HP sound quality as many would like to believe. The desire to disprove/dismiss any utility of subjective listening impressions, even when quantified and analyzed scientifically by Harman, is strong. Science is about having an open mind, testing hypotheses, and learning more about the natural world. Science dies when we decide we already know the answer.
 
I believe the question is valid.

Certain frequency responses tend to favor specific genres. While that may not hold for a long-term relationship with the speakers, it certainly applies to short-term preferences.
The same technique is used at audio shows and in showrooms to highlight speakers with more "detail" and "clarity." I wouldn’t be surprised if many people have regrets about those later on as well.
I know what you mean but I like to look at it from another angle. Is there a genre which suffers from neutral FR?

Showroom detail and clarity are related to the problems one has at any public event. People move and chatter, and the space sounds different when stuffed with listeners. Yeah yeah, a bit more guitar, gotcha. But I don't see this as genre-related; it's just a general perception thing.
 
the purpose of their research was to disprove the prevailing loudspeaker models of the time, in particular the target curve of a flat sound power response used to review/rank speakers by the Consumers Union.

That's a better description than "so-called audio experts".
 
Welp, it's been a great discussion. From my perspective, it does seem that the ASR community continues to have a bias towards measurable parameters, as proposed by the OP.
The bias itself is not bad at all. We have enough purely subjective forums for everyone. The problem is that some people don't know much about these things and make pretty drastic shortcuts in interpretation, at least before they have more experience. Learning pretty much always comes with moments of 3 AM shame reading your own old posts and ideas. A lively forum is like this: it's annoying, but at least it's not dead.

I sense a particular discomfort in accepting (let alone exploring) the possibility that measured parameters are not as predictive of speaker/HP sound quality as many would like to believe. The desire to disprove/dismiss any utility of subjective listening impressions, even when quantified and analyzed scientifically by Harman, is strong. Science is about having an open mind, testing hypotheses, and learning more about the natural world. Science dies when we decide we already know the answer.
While I respect your points and your ability to provide sources, I think you come in too hot on this and dismiss many of the FR- and statistics-related discussion points.

However, I fully agree that headphone reviews are problematic. Also, I think ASR needs more discussion about subjective experience and how it relates to the technical side. (Not just subjective impressions without any anchor to physics.) This would be much healthier than the total dismissal that happens sometimes. And I mean healthier for science, the general atmosphere, and the credibility of ASR.
 
Welp, it's been a great discussion. From my perspective, it does seem that the ASR community continues to have a bias towards measurable parameters, as proposed by the OP. I sense a particular discomfort in accepting (let alone exploring) the possibility that measured parameters are not as predictive of speaker/HP sound quality as many would like to believe. The desire to disprove/dismiss any utility of subjective listening impressions, even when quantified and analyzed scientifically by Harman, is strong. Science is about having an open mind, testing hypotheses, and learning more about the natural world. Science dies when we decide we already know the answer.
I think some people just default to making category errors when responding to just about anything unfamiliar. There's a part of audio knowledge where objective metrics have less explanatory power, but other equally grounded concepts do, like psychoacoustics, frequency masking, program dependence, and equal-loudness contours. Those eager to be argumentative and take on the role of a myth buster are just showing the gaps in their knowledge.
 
The desire to disprove/dismiss any utility of subjective listening impressions, even when quantified and analyzed scientifically by Harman, is strong.

ASR not supporting Toole's work? That's crazy; there are lots of references to his work even in this topic. And when it comes to subjectivity, ASR's stance is: when you want to do a subjective assessment and present it as truth, make sure to implement controls.
 