
Harman preference curve for headphones - am I the only one that doesn't like this curve?

ZolaIII

Major Contributor
Joined
Jul 28, 2019
Messages
4,159
Likes
2,448
I said it is, and what about it if it was my subjective opinion?
That speaker went from the home of a single man who couldn't afford better, so he built his own. From there it went into Japanese consumers' homes and afterwards into studios, and through all its reincarnations and copies it is the most adopted and widely used studio monitor in history.

And when you answered the hypothetical question, wasn't the answer only your subjective opinion? You have to repeat it on a significant number of subjects in order to say, with a certain degree of certainty, whether the factually preferred answer was yes or no.
Yes, I am quite emotional.
 

Robin L

Master Contributor
Joined
Sep 2, 2019
Messages
5,263
Likes
7,691
Location
1 mile east of Sleater Kinney Rd
I said it is, and what about it if it was my subjective opinion?
That speaker went from the home of a single man who couldn't afford better, so he built his own. From there it went into Japanese consumers' homes and afterwards into studios, and through all its reincarnations and copies it is the most adopted and widely used studio monitor in history.

And when you answered the hypothetical question, wasn't the answer only your subjective opinion? You have to repeat it on a significant number of subjects in order to say, with a certain degree of certainty, whether the factually preferred answer was yes or no.
Yes, I am quite emotional.
You brought a pocket knife to a gunfight.
 

Sean Olive

Senior Member
Audio Luminary
Technical Expert
Joined
Jul 31, 2019
Messages
334
Likes
3,062
Do you have any ideas you'd like to try to see if you can push that 86% figure up even higher for over-ear headphones?

Reading this I wonder: that correlation is between virtual headphones and actual headphone testing, where a listener has a different headphone placed on their head for each headphone tested, correct? I'm assuming there are headphones with a certain clamping force or earcup shape such that one can tell which headphone they are putting on, making the test non-blinded. Am I getting this confused?
The 0.85 correlation is based on preference ratings given to a group of headphones tested virtually versus listening to the actual headphones. The virtual headphones do not include any nonlinear distortions or leakage effects (which vary between listeners depending on fit and seal), whereas the actual headphone tests include all those effects plus tactile/weight/clamping/fit biases (listeners could not see the headphones but felt them).

So to get better correlations you would have to somehow control for those effects in both tests... The fact that the correlations are so high is quite remarkable given those nuisance variables at play in one test and not the other.

In the in-ear tests we made binaural recordings of the different headphones and reproduced them over a high-quality linear IEM. The recordings included all of the nonlinear distortions. Those were compared to virtual versions without the nonlinear distortions. Things like fit/leakage/tactile/clamping force were controlled better, which explains the higher 0.95-0.99 correlations.

If a headphone is designed to a preferred target curve and controls for individual fit/leakage with low distortion I believe the predicted scores are quite valid. Of course, there are differences in taste (we identified three segments of listeners based on deviations in bass/treble from the Harman Target) that can be largely accounted for by broadband adjustments in bass and treble.
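For readers unfamiliar with the statistic being discussed: the agreement between virtual and actual tests is a Pearson correlation between two sets of mean preference ratings. A minimal sketch of that computation — the ratings below are invented for illustration, not Harman data:

```python
# Illustrative only: Pearson correlation between mean preference ratings
# from a virtual-headphone test and an actual-headphone test.
# These six rating pairs are made-up numbers, not measured data.
import math

virtual = [6.2, 4.8, 5.5, 3.9, 7.1, 5.0]   # mean ratings, virtualized test
actual  = [5.9, 4.5, 5.8, 4.2, 6.8, 4.6]   # mean ratings, same headphones, real test

def pearson(x, y):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(virtual, actual)
print(f"r = {r:.2f}")
```

A correlation near 1.0 means the two test methods rank-and-scale the headphones almost identically, which is what makes the 0.85 figure notable given the uncontrolled nuisance variables.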
 
Last edited:

ZolaIII

Major Contributor
Joined
Jul 28, 2019
Messages
4,159
Likes
2,448
You brought a pocket knife to a gunfight.
You fail to understand that if something was "not entirely" accurately simulated on headphones X, there is no guarantee that it can be accurately repeated (experimental method), and that is not a pocket knife, trust me.
 

GaryH

Major Contributor
Joined
May 12, 2021
Messages
1,350
Likes
1,850
The 0.86 correlation is based on preference ratings given to a group of headphones tested virtually versus listening to the actual headphones. The virtual headphones do not include any nonlinear distortions or leakage effects (which vary between listeners depending on fit and seal), whereas the actual headphone tests include all those effects plus tactile/weight/clamping/fit biases (listeners could not see the headphones but felt them).

So to get better correlations you would have to somehow control for those effects in both tests... The fact that the correlations are so high is quite remarkable given those nuisance variables at play in one test and not the other.

In the in-ear tests we made binaural recordings of the different headphones and reproduced them over a high-quality linear IEM. The recordings included all of the nonlinear distortions. Those were compared to virtual versions without the nonlinear distortions. Things like fit/leakage/tactile/clamping force were controlled better, which explains the higher 0.95-0.99 correlations.

If a headphone is designed to a preferred target curve and controls for individual fit/leakage with low distortion I believe the predicted scores are quite valid. Of course, there are differences in taste (we identified three segments of listeners based on deviations in bass/treble from the Harman Target) that can be largely accounted for by broadband adjustments in bass and treble.

As I understand it, the already high over-ear headphone virtualization correlation of 0.85 (I presume 0.86 is a typo?) was actually obtained using EQ based on measurements with the old artificial pinnae, before you started using Todd Welti's more anthropometric custom pinnae, which better simulate leakage on real human heads:

[attachment: screenshot from the paper]


Considering this substantial improvement, I would expect the virtualization correlation to have risen significantly above the 0.85 found with the original pinnae. Have you managed to quantify this?
 

Sean Olive

Senior Member
Audio Luminary
Technical Expert
Joined
Jul 31, 2019
Messages
334
Likes
3,062
You fail to understand that if something was "not entirely" accurately simulated on headphones X, there is no guarantee that it can be accurately repeated (experimental method), and that is not a pocket knife, trust me.
This is an entirely false statement. The virtual method itself is very repeatable within acceptable margins of error.

You can verify this either acoustically (measure virtual headphone several times on the same test fixture or microphones in listeners' ears) or experimentally by repeating the listening test many times.

At this point, you are making false statements on the basis of zero evidence.
 

ZolaIII

Major Contributor
Joined
Jul 28, 2019
Messages
4,159
Likes
2,448
Sure if you say so.
This discussion is pointless.
 

Sean Olive

Senior Member
Audio Luminary
Technical Expert
Joined
Jul 31, 2019
Messages
334
Likes
3,062
As I understand it, the already high over-ear headphone virtualization correlation of 0.85 (I presume 0.86 is a typo?) was actually obtained using EQ based on measurements with the old artificial pinnae, before you started using Todd Welti's more anthropometric custom pinnae, which better simulate leakage on real human heads:

View attachment 171161

Considering this substantial improvement, I would expect the virtualization correlation to have risen significantly above the 0.85 found with the original pinnae. Have you managed to quantify this?
It was from memory, but I looked it up in the paper below and it's 0.85. I corrected it above. The 0.86 is the correlation between predicted and measured ratings for the AE/OE headphone model.

The paper was written in 2013, before Todd Welti developed these pinnae that better simulate human leakage. So yes, the more accurate pinnae could improve the correlations. Another approach would be to individualize/personalize the virtualization for each listener: measure the leakage effects of each headphone for each listener and include them in the virtualization. That would create a lot of work, but I expect the agreement between actual and virtualized results would go up significantly if the data were analyzed on an individual basis.

 
Last edited:

Firefly00

Active Member
Joined
Aug 30, 2021
Messages
137
Likes
96
Location
New Zealand
The 0.85 correlation is based on preference ratings given to a group of headphones tested virtually versus listening to the actual headphones. The virtual headphones do not include any nonlinear distortions or leakage effects (which vary between listeners depending on fit and seal), whereas the actual headphone tests include all those effects plus tactile/weight/clamping/fit biases (listeners could not see the headphones but felt them).

So to get better correlations you would have to somehow control for those effects in both tests... The fact that the correlations are so high is quite remarkable given those nuisance variables at play in one test and not the other.

In the in-ear tests we made binaural recordings of the different headphones and reproduced them over a high-quality linear IEM. The recordings included all of the nonlinear distortions. Those were compared to virtual versions without the nonlinear distortions. Things like fit/leakage/tactile/clamping force were controlled better, which explains the higher 0.95-0.99 correlations.

If a headphone is designed to a preferred target curve and controls for individual fit/leakage with low distortion I believe the predicted scores are quite valid. Of course, there are differences in taste (we identified three segments of listeners based on deviations in bass/treble from the Harman Target) that can be largely accounted for by broadband adjustments in bass and treble.
Forgive me if this is a common question, but how do you account for what listeners were used to before they took part in the test? I.e. if they're already used to boosted treble and boosted bass, wouldn't they prefer the same thing in the listening test because that's what they're accustomed to?

Thank you!
 

Jimbob54

Grand Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
11,096
Likes
14,752
xD
What is HAPPENING right now LOL
Part of me hopes it's a very well crafted bit of trolling. But I fear it's just simple idiocy.
 

ZolaIII

Major Contributor
Joined
Jul 28, 2019
Messages
4,159
Likes
2,448
The problem with reducing something to the absurd is that it's neither motivating nor a nice thing to do.
I tried to point out problematic parts of his methodological approach, in the hope that he would see them as I do and improve it, not to discredit anyone, and before the real sizable work on a representative statistical sample.
As I failed in that, I don't see the point in being rude or vague, as it isn't going to do any good. So I won't bother with it.
Those who cannot understand that much, just carry on having fun.
 

Sean Olive

Senior Member
Audio Luminary
Technical Expert
Joined
Jul 31, 2019
Messages
334
Likes
3,062
Forgive me if this is a common question, but how do you account for what listeners were used to before they took part in the test? I.e. if they're already used to boosted treble and boosted bass, wouldn't they prefer the same thing in the listening test because that's what they're accustomed to?

Thank you!
Using trained listeners is one way to cope with this. Most of them have been exposed to a wide range of sound qualities and have an understanding of what is neutral or accurate. Including a neutral reference or anchor also helps.

When we use untrained external listeners, we don't normally screen them for their current playback system, so it becomes a random variable. Does it matter or have an impact on the test results?

In the broad context of rating different loudspeakers or headphones in blind tests all our data show there is good agreement between trained and untrained listeners in their preferences. If you give them a bass knob you may get slightly different results, as indicated in our recent headphone papers, but they all tend to prefer products that are well-balanced without resonances.
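The "bass knob" and broadband bass/treble adjustments discussed here are typically implemented as shelving filters. A minimal sketch using the standard RBJ Audio EQ Cookbook low-shelf biquad formulas — the 105 Hz corner and +4 dB gain are arbitrary illustration values, not a Harman recommendation:

```python
# Illustrative low-shelf biquad (RBJ Audio EQ Cookbook formulas).
# Corner frequency and gain below are arbitrary example values.
import math

def low_shelf_coeffs(fs, f0, gain_db, S=1.0):
    """Return normalized (b, a) biquad coefficients for a low shelf."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cw = math.cos(w0)
    b0 = A * ((A + 1) - (A - 1) * cw + 2 * math.sqrt(A) * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cw)
    b2 = A * ((A + 1) - (A - 1) * cw - 2 * math.sqrt(A) * alpha)
    a0 = (A + 1) + (A - 1) * cw + 2 * math.sqrt(A) * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cw)
    a2 = (A + 1) + (A - 1) * cw - 2 * math.sqrt(A) * alpha
    # Normalize so a[0] == 1, the usual biquad convention.
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

b, a = low_shelf_coeffs(48000, 105, 4.0)  # +4 dB bass shelf at ~105 Hz
dc_gain_db = 20 * math.log10(sum(b) / sum(a))  # filter gain at 0 Hz
print(f"DC gain: {dc_gain_db:.1f} dB")  # -> DC gain: 4.0 dB
```

A broadband shelf like this changes tonal balance smoothly without introducing the narrow resonances that listeners in these tests consistently penalize.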

The only evidence I've ever seen that what you listen to affects your preferences in controlled listening tests was shown in this 1956 paper. While some students didn't initially prefer a "neutral" wide-bandwidth speaker, after 6.5 weeks of exposure to it, they preferred it to a narrow-band version.

https://doi.org/10.1121/1.1918306


A similar argument was made by a Stanford researcher in 2010, who said kids prefer low-quality MP3 over lossless audio because continued exposure makes it their new normal. I did a study that showed young kids prefer lossless audio when comparing it against 128 kbps MP3.
https://seanolive.blogspot.com/search?q=MP3

My guess is that adaptation and learning happens much faster than 6.5 weeks, and may occur in the first few trials of a test when listeners are given several options that include a neutral reference.
 
Last edited:

BoredErica

Addicted to Fun and Learning
Joined
Jan 15, 2019
Messages
629
Likes
900
Location
USA
Using trained listeners is one way to cope with this. Most of them have been exposed to a wide range of sound qualities and have an understanding of what is neutral or accurate. Including a neutral reference or anchor also helps.

When we use untrained external listeners, we don't normally screen them for their current playback system, so it becomes a random variable. Does it matter or have an impact on the test results?

In the broad context of rating different loudspeakers or headphones in blind tests all our data show there is good agreement between trained and untrained listeners in their preferences. If you give them a bass knob you may get slightly different results, as indicated in our recent headphone papers, but they all tend to prefer products that are well-balanced without resonances.

The only evidence I've ever seen that what you listen to affects your preferences in controlled listening tests was shown in this 1956 paper. While some students didn't initially prefer a "neutral" wide-bandwidth speaker, after 6.5 weeks of exposure to it, they preferred it to a narrow-band version.

https://doi.org/10.1121/1.1918306


A similar argument was made by a Stanford researcher in 2010, who said kids prefer low-quality MP3 over lossless audio because continued exposure makes it their new normal. I did a study that showed young kids prefer lossless audio when comparing it against 128 kbps MP3.
https://seanolive.blogspot.com/search?q=MP3

My guess is that adaptation and learning happens much faster than 6.5 weeks, and may occur in the first few trials of a test when listeners are given several options that include a neutral reference.
What was the playback volume of the music used in the headphone testing behind the Harman target? I'm assuming that, given the way our equal-loudness contours work, if the testing was done at a higher SPL (A-weighted?) and I listen at a lower SPL, the target would be different. But if I then modified the Harman target EQ accordingly, I think I'd mostly be shooting in the dark, and it's open to too much error on my part.

And it's tricky for speakers too, because a slide I saw showed that some of the speaker testing was done at 76 dB (B-weighted), and I currently have no way to measure how loud I listen, B-weighted.
 

Sean Olive

Senior Member
Audio Luminary
Technical Expert
Joined
Jul 31, 2019
Messages
334
Likes
3,062
What was the playback volume of the music used in the headphone testing behind the Harman target? I'm assuming that, given the way our equal-loudness contours work, if the testing was done at a higher SPL (A-weighted?) and I listen at a lower SPL, the target would be different. But if I then modified the Harman target EQ accordingly, I think I'd mostly be shooting in the dark, and it's open to too much error on my part.

And it's tricky for speakers too, because a slide I saw showed that some of the speaker testing was done at 76 dB (B-weighted), and I currently have no way to measure how loud I listen, B-weighted.
In general we do listening tests at a moderate level, somewhere around 78-80 dB (B-weighted, slow). I believe you can find some free sound level meter apps for an iPhone that will give you an approximation of that. Alternatively, you can purchase a cheap sound level meter, or use a calibrated mic and REW (free). Of course, that is for measuring the SPL of a loudspeaker. For a headphone you can try subjectively matching the SPL to the loudspeaker; otherwise, you need an ear simulator like a GRAS 45CA or some calibrated in-ear microphones.
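As background on what such a measurement computes: SPL is the RMS sound pressure expressed in decibels relative to 20 µPa. A rough sketch, assuming the capture chain is already calibrated so sample values are in pascals; this gives an unweighted level, not the B-weighted figure used in the tests:

```python
# Rough sketch, not Harman's procedure: estimating sound pressure level from
# a calibrated microphone signal. Assumes samples are in pascals and applies
# no frequency weighting (so it is a Z-weighted, not B-weighted, level).
import math

P_REF = 20e-6  # reference pressure in pascals (0 dB SPL)

def spl_db(samples_pa):
    """Unweighted SPL from pressure samples expressed in pascals."""
    rms = math.sqrt(sum(s * s for s in samples_pa) / len(samples_pa))
    return 20 * math.log10(rms / P_REF)

# Sanity check: a 1 kHz tone at 1 Pa RMS should read about 94 dB SPL.
fs = 48000
tone = [math.sqrt(2) * math.sin(2 * math.pi * 1000 * i / fs) for i in range(fs)]
print(f"{spl_db(tone):.1f} dB SPL")  # -> 94.0 dB SPL
```

A real B-weighted reading additionally passes the signal through the B-weighting filter defined in the sound level meter standard before the RMS step.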
 
Last edited:

Spyart

Member
Joined
Dec 25, 2020
Messages
54
Likes
22
Harman-curve headphones have always been too harsh for me. One day I found that the not-very-famous AKG K77 sounded pretty nice, without any ear fatigue (nearly like calibrated near-field monitors). I was curious and tried to figure out why, and found that the AKG K77 and all of those two-digit AKG models have almost the same signature: an essentially cut pinna boost. I swear it's not muddy at all, the imaging is even better than the HD650's, and I can pinpoint the location of all the playing instruments, with good macrodynamics as well. Also, I know many Audeze headphones reduce the pinna boost; maybe that's why people can do mastering or even mixing on them with pretty good results.
On the bass boost I have to agree with the Harman curve; it's a pretty nice fit to the Harman in-room speaker curve, which I love.
 