
HBK Headphone Measurement Talks from Head-Fi and Sean Olive

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,874
Likes
6,672
Location
UK
As far as I'm aware there's no other evaluation that has the backing of public research demonstrating its efficacy in verifying and improving listening ability, consistency and discrimination. And that's the key point. The term 'trained listener' is thrown around quite a lot, often without detailed delineation. What Harman have done is give the term a verifiable, quantifiable definition, so we no longer have to take someone's word on what their personal training/experience has conferred on their listening abilities; they can now prove it.


Not according to Harman's definition. He's said he got up to level 3 on his first attempt at How to Listen, which from his comments here is no better than the general public. So whatever previous training he'd done in the specific task of identifying digital lossy codec compression artifacts did not confer an improved ability to identify frequency response differences, and so to better discern differences in the sound quality of transducers. After practice he got up to level 5, still 3 levels short of a trained listener according to Harman. And that was over 10 years ago. As @Sean Olive demonstrated in this post, a lot can change in 10 years in terms of hearing, and so in listening ability, as Floyd Toole has shown.


A hypothetical that doesn't exist unfortunately. No reviewer can be 100% impartial, because no-one is 100% free of subconscious biases.

The latter's already flatly refused. Knew I'd forgotten someone though - is @oluvsgadgets up for giving the below a try?
I'm wondering if I should give it a go at putting myself through that training, it's free beta software anyway, so nothing to lose. Do you know how much time I'm likely gonna have to set aside to go through the training?
 

flipflop

Addicted to Fun and Learning
Joined
Feb 22, 2018
Messages
927
Likes
1,240
Took me about a week to get to level 8 across all tasks, just doing it casually. You do have to take a lot of breaks, though, as listening fatigue sets in after 15-20 minutes, and you start to wonder if the software is bugging out when you switch between the original and the modified stimulus and hear no difference at all :rolleyes: Some of the tasks can be completed in a matter of minutes. The 'dips only' task is the real killer.
So 40 minutes of listening per day, with 1-2 short breaks between listening sessions, for 5-7 days should get you there.
 

Robbo99999

I think I've gotta give this a go! Do you think it's important or useful if you try to do it all within a week or so, as in would you "forget stuff" or "untrain yourself" if you have days or many days break between each part of training?

EDIT: is it also something you have to revisit every year to maintain your training, or do you "train" yourself once and then the majority of the perception/learning stays with you?
 

flipflop

The training teaches you to single out instruments and figure out which frequency ranges they belong to. Once you've learned it, I think it stays with you for a long time and little retention is needed, but I don't have any research to support this belief. Maybe that question can better be answered by @Sean Olive. I personally haven't revisited the software after completing the training.
Long-term auditory memory is demonstrably poor, so it's not something you should rely upon. The only way I see you benefiting from it is if you get the same filter two trials in a row. Then the "prior" training can be useful - because the same answer would be correct for both trials.
My advice would be to take your time, both long term and short term. You sometimes need to listen at elevated volumes to clearly hear the difference. Listening fatigue should not be underestimated.
 

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,554
Likes
1,701
Location
California
Not according to Harman's definition. He's said he got up to level 3 on his first attempt at How to Listen, which from his comments here is no better than the general public. So whatever previous training he'd done in the specific task of identifying digital lossy codec compression artifacts did not confer an improved ability to identify frequency response differences, and so to better discern differences in the sound quality of transducers. After practice he got up to level 5, still 3 levels short of a trained listener according to Harman. And that was over 10 years ago. As @Sean Olive demonstrated in this post, a lot can change in 10 years in terms of hearing, and so in listening ability, as Floyd Toole has shown.

That's an interesting take on it. Looking back at some of the older Harman papers that were based on "trained listeners," I didn't see any description of this computerized "How to Listen" program or its levels. This suggests that the type of training used for those studies was different. How do you know that level 8 is equivalent to the Harman training that preceded the computerized training? And how do you know that level 8 isn't more advanced than what was required of the listeners in the previously published body of work?
 

GaryH

Major Contributor
Joined
May 12, 2021
Messages
1,348
Likes
1,804
Just because they didn't mention it publicly doesn't necessarily mean they weren't using the program (or something similar) internally. You'd have to ask Sean, but I suspect they chose level 8 because they found those who could pass that tend to have around the same discrimination and reliability (together quantified by their F-statistic) as previous trained listeners.
 

preload

I've learned by now that one can never "assume." The How to Listen software appears to have been released around 2009. Sean Olive's blog from May 2009 describes How to Listen as "a new computer-based software application." However, the paper demonstrating the difference in performance (quantified by the F-statistic) between trained and untrained listeners was published in 2003. And the paper correlating measurements with preference score, where the listeners were trained, was published in 2004. And neither paper in 2003 or 2004 seems to reference any such "How to Listen" software. This strongly suggests that there could be differences between the training that was utilized in the published research prior to 2009 and the training that is employed in How to Listen. Now it's possible that How to Listen is essentially identical to the training received by the Harman trained listeners, just in a computer-based form, but I don't think it's wise to assume that.
 

MayaTlab

Addicted to Fun and Learning
Joined
Aug 15, 2020
Messages
924
Likes
1,512
How to Listen doesn't have to include such nebulous terminology in order to have an influence on its usage. In fact, it's because it doesn't include this vague terminology, but instead uses well-defined terms representing concrete physical properties, that it could reveal to a listener through training that their nebulous terms are either illusory or can be explained by standard terms, which invariably describe features of frequency response.

Just because How To Listen provides conventional definitions of the attributes it tests for doesn’t mean those attributes aren’t nebulous.

Some of the “attributes” being tested are less attributes than just EQ filters, in which case it would have been more straightforward to simply call them “low shelf” or “high shelf”.

I would also argue that How To Listen’s conventional definition of “coloured” and “uncoloured” is in fact extremely confusing and nonsensical as applied to the sort of studio recordings used: listeners are required to rank samples from “more coloured” to “uncoloured”, even though all studio recordings are by definition “coloured” (starting with the room selection and mic placement). Even if we accept that the un-EQed recording is “uncoloured”, it frequently occurred to me, when presented with two deviations from it that seemed similar in degree but with different filters applied, that I found them both equally “coloured” - yet apparently How To Listen had a specific opinion about which should have been ranked “more coloured” than the other.
For what it’s worth, I have no intention of using How To Listen’s definition of “coloured”. It doesn’t make any sense to me.

The other rather amusing thing is that some of the reviewers you called out are already quite convinced that FR is all there is, or ready to accept it, and are quite experienced with EQing headphones (which is another way of training your listening abilities, in my opinion) - in fact some of them spend less time bothering with measurable data other than FR than Amir does - and yet will routinely still use terminology such as “muddy”, “detailed”, whatever. Heck, Amir frequently uses the term “spatial quality”. Being convinced that FR is for the most part responsible for what you hear and continuing to use audiophile jargon are not mutually exclusive.

My understanding was that Harman's How to Listen provides an indication of whether or not one's ears are trained. The idea is reviewers should train their ears; Harman's How to Listen is just one indicator as to whether a listener is trained.
There is no such thing as a “trained” listener outside of the specific application of How To Listen to Harman’s research.
“Trained” for Harman has a specific definition (level 8), which I’m going to assume isn’t entirely arbitrary and was arrived at with a bit of stats involved.
“Trained” means something specific in the context of Harman’s research, and correlates with some observed trends when it comes to scoring equipment within that research’s format.
That. is. all. Everything else remains conjecture.

Are trained listeners more capable than untrained ones of EQing two different headphones to the same on-head target by ear, for example?
Are “more than trained” listeners (ie those who reached a higher level than 8) more capable of doing so than merely “trained” listeners?

Also, Harman’s How To Listen only partially trains you. I make extensive use of sweeps and noise to assist in EQing headphones, and that’s not something you’ll be tested on in How To Listen, for example.

It’s better to understand Harman’s How To Listen, outside the scope of Harman’s research, as an educational training program that Harman was gracious enough to offer to the public (and that’s worth experimenting with - I’m glad I did).
Outside of that research, the “trained” notion is meaningless.

As @GaryH put it, at best - but that also remains to be seen, see as an example my question above regarding “more than trained” listeners and some of my concerns below - it’s a continuum.

The problem with @GaryH’s uneducated insistence that reviewers take that training and then publish their results is that he sees it as far, far more than a training program: as a “standard” that would then be correlated with how much “trust” you’d be able to put in the overall subjective judgements of a reviewer:

If reviewers want their subjective judgements to be some kind of useful data point, they must in some way expect or at least hope their readers trust in their ability to adeptly discern good sound quality. In the same way that we require measurement rigs to conform to industry standards that demonstrate the accuracy and reliability of their readings, if subjective reports are to have any utility to readers, these 'measurement rigs' i.e. the reviewers ears should be subject to provable standards of accuracy too, and being a Harman level 8 trained listener is as good a standard for this as I can see.
Well it's no more of a competition than asking a reviewer what the capabilities of their measuring equipment are before looking at their measurements, except the equipment in this case is your ears. If you 'win' you get more trust in your subjective judgements, just as the owner of a GRAS 45CA, Klippel NFS, or Audio Precision APx555 gets more trust in their measurements (as long as they know how to use them properly). And of course just for taking part, being the (so far) only reviewer to post their results you'll get something expensive measuring equipment can't buy - respect :cool:

We’ll gloss over his comments on industry standard equipment, as if the question of knowing “how to use them properly” had been settled (to which I’d add: does it actually have to be settled? Humanity ceaselessly finds novel, creative ways to use a hammer, after all).

The first thing I have a problem with is the notion of “trust”. If you’re presented with data and know the methodology that was used, you don’t need “trust”. You just need to know the degree of confidence you can have in the presented data, which is not quite the same thing. “Trust” is unquantifiable; margins of error, deviations and intervals are not. In the case of headphones it probably involves far too many measurements of HRTF / HPTF / samples / test rigs to make it a practical reality, but it theoretically doesn’t have to be a question of “trust”.
I, for one, don’t “trust” DIYaudio’s measurements any less than ASR’s. I just guesstimate what degree of confidence I can have in them for various parts of the spectrum, various headphone types and designs, various types of measurements, etc., based on knowledge of the methodology and test rigs used, my own experience and the articles I’ve read. That’s all.

More importantly, I greatly object to the idea that “trust” in someone’s overall subjective judgements would automatically be granted to reviewers who reach a higher level in How To Listen.

So why shouldn't it?

First, because How To Listen can’t be used as a standard outside the scope of Harman’s research, particularly if self-administered in uncontrolled conditions.
The insistence that reviewers should publish their results - and even more so that one’s level of “trust” would be correlated with the level reached in How To Listen (if it’s indeed a continuum) - creates a strong incentive for, well, untrustworthy behaviour. You can totally cheat your way through the levels if you’ve set your mind to it. Heck, it’s even possible to see the differences with a real-time analyser on the output for some of the tests. A standard you can cheat your way through isn’t a standard.
Since some of the ways you can cheat involve the use of EQ, it stands to reason that the playback equipment used also has a role to play in your performance. There is no possible standard in this case, since reviewers would self-administer the test on varied equipment.

Second, because Harman’s How To Listen, IMO, is an incomplete training exercise. There are other, just as interesting ways to “train” your ears so that you get better at performing some tasks.
You may not perform that well at How To Listen and yet perhaps perform better than a “trained” listener at, for example, the exercise of EQing two headphones to the same on-head target by ear, because you found other (perhaps more efficient) ways to do so than identifying FR bands while playing music.
It’s really important to note that Harman’s How To Listen doesn’t test your listening abilities overall. It comprises very specific tests, and that’s all it tests. No more than that.

Third, because, like all tests of this kind, beyond some point you realise that what you’re doing is not just training yourself to improve the capabilities under test, but simply to “beat” the test format itself. More on that later.

Fourth, because beyond some point, for some tasks (particularly the ranking tasks), reaching a higher score might be less related to your listening capabilities than to your patience. Ranking 12 different more or less noisy samples becomes incredibly time-consuming and tedious, even when the differences (switching between two adjacent samples) remain fairly audible. Someone without patience may stop at 8, while someone else might go up to 13, without this meaning anything about their respective listening capabilities. More on that later as well.

Let’s put it in more concrete terms. Perhaps it might be interesting for me to talk about my experience with that program and what I learnt from it (I did learn something very useful from it that I now make regular use of, but not what you’d expect).

The context: the arrival of object-based formats and surround simulation systems has rejuvenated my interest in headphones, and I’ve been trying to find ways to improve my ability to EQ headphones to reach a more desirable FR at my own DRP. Harman’s How To Listen looked like an interesting thing to do, and I tried my hand at it not so long ago.

This is where I left the program a few months ago after getting bored with it:
[Attached screenshot: How To Listen results, 2021-11-01]

Apparently, that means I’m “trained” for the most part. It also means, if you’re naive like @GaryH, that you should find my subjective impressions overall more trustworthy than those of someone who didn’t reach these levels, and less trustworthy than those of someone who reached higher levels (word of advice: you shouldn’t).

I promise, for starters, hand on heart, that I didn’t cheat (but you don’t have any way to verify that).

I didn't bother with the practice sessions and went straight to the band ID "peaks" session. I went about it a little carelessly until I reached a plateau at around 9-10. I then switched to another pair of headphones (equalised HD650 + Dekoni pads), turned the volume a little louder than I'm used to, and reached another plateau around 12. So I then did what every competitive student does in this situation: I started to find ways to "beat the test".

I realised that I was doing it quite wrong, as I would listen to the “flat” profile for 10-15s, then switch to the “EQ” profile for 10-15s, switch back, etc. Something “clicked” when I found out that if I let my brain adjust to the “flat” profile for long enough for each sample, and then switched to the “EQ” profile only for a very brief moment, so that my brain didn’t get used to it - particularly during the portions of the samples containing the right spectral components - the location of the peak would jump out at me much more obviously. It then became a very, very boring exercise. I would take as much time as I needed to get confident in my choice, and if that meant spending 3min on one trial, I’d spend 3min.

The weird thing is that after I learnt that psychoacoustic trick, it took me far fewer tries to go from 12 or so to 15 than it had taken to get to 12 in the band ID test - but a lot, lot more time, as it required my brain to fully “normalise” the flat response for each sample.

For the “dips” exercise, I just did it in reverse, turning the “dip” into a “peak” by normalising my brain to the “EQ” profile. For the “dips + peaks” exercise, I simply first examined the change in SPL: if higher, I treated it as a “peak” exercise, normalising my brain to the “flat” profile; if lower, I treated it as a “dip” exercise, normalising my brain to the “EQ” profile.
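That reversal trick is, at bottom, just a sign flip: a dip relative to the profile your brain has normalised to becomes a peak when you normalise to the other profile instead. A minimal numeric sketch (band gains are invented for illustration):

```python
# A dip in the EQ'd profile relative to flat is a peak in the flat profile
# relative to the EQ'd one - same information, opposite sign.
# Band gains in dB are invented for illustration.
flat = [0.0, 0.0, 0.0, 0.0]
eqd  = [0.0, -6.0, 0.0, 0.0]    # 6 dB dip in the second band

dip_vs_flat = [e - f for e, f in zip(eqd, flat)]   # listen for a dip (harder)
peak_vs_eqd = [f - e for f, e in zip(flat, eqd)]   # listen for a peak (easier)

print(dip_vs_flat)   # [0.0, -6.0, 0.0, 0.0]
print(peak_vs_eqd)   # [0.0, 6.0, 0.0, 0.0]
```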

This approach is typically the sort of thing that can work within the format of How To Listen but wouldn't necessarily be applicable in another training program's format. That's why I consider it "beating the test format" itself, and not necessarily evidence that my listening capabilities improved.

Immense boredom took hold of me at this point and I couldn’t go further.

So, did it help me get better at EQing headphones, which is for me the main area of interest right now? Not really. I’ve done far more useful things in the last year to that end.

Did it teach me something very useful about psychoacoustics and how my brain works when comparing two different EQ profiles/filters? Hell yeah. I’m now quite careful to listen to them asymmetrically, so that my brain gets used to one of them but not the other and doesn’t get “mixed” signals.

I expect I’ll be talking to a wall if I call out @GaryH again, since it’s quite evident that he never bothered to walk the talk and actually open the app and try it (I’m trying my best at some form of reverse psychology here, but I’m not hopeful).
But if there is one person in this thread who should feel obliged to publish his results, it’s him. Not that anyone should care much about the level he would reach, IMO (my opinion of his subjective impressions won’t budge much regardless of where he ends up - if he’s actually interested in sharing them, BTW, because his posting history seems to consist mainly of looking for arguments rather than sharing experiences), but at least so that we’d know his misguided insistence that reviewers take the test and publish their results comes from a place of actual knowledge of what he’s preaching, and not ignorance.

For the others: treat How To Listen as a very interesting tool from which I hope you’ll learn something, as I did (and I wouldn’t expect everyone to get the same thing out of it). But please don’t consider it for what it isn’t. And don’t get too worked up about the skill levels you reach.
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
15,891
Likes
35,912
Location
The Netherlands

Yep.
And the program doesn't say anything about possible financial or other gains a reviewer may have from hyping certain devices.
It neither measures nor proves scruples.
 

GaryH

The phrase 'Don't let the perfect be the enemy of the good' couldn't be more apt here.
 

GaryH

I've learned by now that one can never "assume."
This strongly suggests that there could be differences between the training that was utilized in the published research prior to 2009 and the training that is employed in How to Listen.
That sounds like an assumption to me ;)
The How to Listen software appears to have been released around 2009. Sean Olive's blog from May 2009 describes How to Listen as "a new computer-based software application." However, the paper demonstrating the difference in performance (quantified by the F-statistic) between trained and untrained listeners was published in 2003. And the paper correlating measurements with preference score, where the listeners were trained, was published in 2004.
That's the point: the F-statistic predates everything, and is what is used to judge the performance of Harman's listeners. How to Listen helps improve the F-statistic by improving the listener's ability to reliably discriminate small frequency response differences. Even if the minutiae of the previous training were different from How to Listen, what matters is that they are functionally equivalent in producing the desired end result (improving the F-statistic to some minimum required level).
Now it's possible that How to Listen is essentially identical to the training received by the Harman trained listeners, just in a computer-based form, but I don't think it's wise to assume that.
I haven't assumed that. I said "Just because they didn't mention it publicly doesn't necessarily mean they weren't using the program (or something similar) internally." So, maybe they did, maybe they didn't. As I said before, you'd have to ask Sean if you want more information on all this.

By the way, I'm glad you mentioned his blog post on How to Listen, because in the comments he says this:
having a common descriptive language is not only a benefit to Harman listeners, but would be great tool for audio reviewers and consumers alike
So it looks like I'm in good company in believing it would be a great idea for reviewers to train themselves using How to Listen, one of the benefits being helping them use conventional terminology instead of amorphous descriptors with no real meaning.
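As an aside on the F-statistic mentioned earlier: a toy one-way ANOVA calculation shows how a single number can capture both discrimination (separation between products) and reliability (repeatability of ratings). All names and ratings below are invented for illustration; this is not Harman's actual protocol or data.

```python
# Toy illustration of how a one-way ANOVA F-statistic quantifies a
# listener's discrimination and reliability. All ratings are invented;
# this is NOT Harman's actual protocol or data.
from statistics import mean

# Four repeated preference ratings (0-10 scale) of three headphones.
consistent = {
    "HP_A": [7.5, 7.8, 7.6, 7.7],   # tight repeats, clear separation
    "HP_B": [4.0, 4.2, 3.9, 4.1],
    "HP_C": [6.0, 6.1, 5.9, 6.2],
}
erratic = {
    "HP_A": [7.5, 3.0, 6.0, 9.0],   # wide spread, products overlap
    "HP_B": [4.0, 8.0, 2.0, 6.5],
    "HP_C": [6.0, 1.0, 9.0, 4.0],
}

def listener_F(ratings):
    """Between-product mean square / within-product mean square.
    High F = ratings separate products well and repeat reliably."""
    groups = list(ratings.values())
    k = len(groups)                          # number of products
    n = sum(len(g) for g in groups)          # total number of ratings
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

print(listener_F(consistent))   # large F: a reliable discriminator
print(listener_F(erratic))      # F below 1: indistinguishable from noise
```

A listener who rates inconsistently, or rates everything alike, collapses F towards 1 or below; training that tightens repeatability raises it, which is why it works as a performance metric in the discussion above.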
 

DualTriode

Addicted to Fun and Learning
Joined
Oct 24, 2019
Messages
893
Likes
593
Hello All,

Fine tuning your listening skills is a fine thing to do for fun.

In terms of evaluating headphones I trust the GRAS 45 CA-9 on my bench much more (actually on a stool in my linen closet across the hall).

Listener subjective preference and objective headphone measurement are established technology.

The industry knows how to make headphones that match the target curves. That is a done deal.

Harman and other industry research have moved on to other things, as reported by Sean Olive in recent videos. Other (new) products are in the pipeline.

My speculation is development of the 3D sound stage. I much prefer near-field monitors to headphones, with their between-the-ears sound stage.

It is easy to produce a set of AKG K371 or similar Harman-target-matching headphones. What is missing is the 3D sound stage.

Computing size and cost have come a long way. What is coming next is head-tracking Bluetooth buds or headphones and real-time convolution of a 3D sound stage.

With 3D computation of head position and multi-channel recordings, the cone of confusion will disappear.

Yes “Cone of Confusion” is a real term that belongs to the Psychology of Hearing. You will be hearing it more often.

Thanks DT

For a little peek.
 

preload


Yep, you’re an AI bot.
 

preload

That sounds like an assumption to me ;)

Not by a mile. If How To Listen wasn't described in the methodology, you can't assume it was used in the study. Period. I was being polite because you seem to have the background to understand that, but you're choosing not to use it.
And if Sean Olive wants to take the time out of his day and say yes we used the exact equivalent of level 8 How to Listen then great. I’ll wait for that.

By the way, I'm not sure what point you're trying to make. Is it that undergoing the training and achieving level 8 could improve a reviewer's ability to render reliable subjective speaker ratings? If so, great - I don't think anyone would disagree with that. But as I and others have pointed out, the main issue with speaker review reliability isn't the reviewer's inherent capabilities, but rather the long list of other sources of bias. It's not like the average YouTube reviewer is listening blind under level-matched conditions using the exact same set of 4 test tracks.
 

preload

Does anyone know what correction is needed to convert Amir's measurements to Harman's measurements?

Or if not, I bet one could be created if there are headphones that were measured both by Amir and by Harman.
 

MayaTlab


Am I understanding you correctly that you're asking whether a transfer function has been derived between the following two measurement systems, based on the individual measurements shown in Sean Olive's presentation and the measurements available on ASR?
[Attached screenshot: the two measurement systems, from Sean Olive's presentation]
If so, not that I'm aware of, but I'll send you something you may be interested in.

I can see a few caveats to that endeavour, however. Harman's process reduces the influence of two potential variables on the results: the operator's methodology (which is the same) and, in all likelihood, sample variation (I'm assuming here that the exact same samples were used). You'll be exposed to both when comparing Harman's measurements on their modified 45CA with Amir's.

The other one, already visible between Harman's 5128 and modified 45CA measurements, is that at best you'll obtain a general idea of how the two systems differ, as a single transfer function valid for all headphones is unlikely to exist. Individual headphones may still differ from the obtained average, some more than others, in more or less unpredictable ways.
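As a toy illustration of what deriving such an average transfer function could look like, here is a sketch in Python. The data is entirely synthetic (the rig names in the comments are just labels for the two hypothetical fixtures); with real paired measurements you would substitute matched frequency-response curves for the same headphone samples on each rig.

```python
import numpy as np

# Shared log-spaced frequency grid, 20 Hz to 20 kHz.
freqs = np.logspace(np.log10(20), np.log10(20000), 200)

# Synthetic stand-ins for paired measurements: each row is one headphone's
# magnitude response in dB on a given rig. Rig B is modelled as rig A plus
# a 2 dB offset and some headphone-dependent noise.
rng = np.random.default_rng(0)
n_headphones = 5
rig_a = rng.normal(90.0, 3.0, size=(n_headphones, freqs.size))  # e.g. fixture 1
rig_b = rig_a + 2.0 + rng.normal(0.0, 0.5, size=rig_a.shape)    # e.g. fixture 2

# Per-headphone dB difference, then the average rig-to-rig "transfer function"
# and its spread across headphones (the spread is exactly the caveat above:
# if it is large, a single correction curve is unreliable).
diff = rig_b - rig_a
mean_tf = diff.mean(axis=0)   # average correction, rig A -> rig B
spread = diff.std(axis=0)     # headphone-to-headphone variability

# Applying the average correction to a new rig-A measurement:
corrected = rig_a[0] + mean_tf
```

Note that the subtraction in dB corresponds to a ratio of linear magnitudes, which is why a simple average of the differences is the natural estimate here.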
 

Jimbob54

Grand Contributor
Forum Donor
Joined
Oct 25, 2019
Messages
11,063
Likes
14,694
Does anyone know what correction is needed to convert Amir's measurements to Harman's measurements?

Or if not, I bet one could be created if there are headphones that were measured both by Amir and by Harman.
I naively thought they were compatible.
 

MayaTlab

Addicted to Fun and Learning
Joined
Aug 15, 2020
Messages
924
Likes
1,512
I naively thought they were compatible.

Possibly more so than with HMS or B&K fixtures, but not quite either. That was mentioned in Sean Olive's presentation (slide 75). Harman has the actual data, we don't :D.
Industry standards or not we're far from dealing with degrees of precision and accuracy below threshold of audible differences anyway, so personally I don't see much of an objection to use Harman's target as a reference point (and only as a reference point) on GRAS fixtures without the modified pinna (particularly the ones like the 45CA with the exact same geometry around the pinna and everything but the pinna ceteris paribus), as long as it's understood that like all headphones measurements on ear simulators, a degree of inaccuracy comes with the baggage and individual experiences may differ to a degree once you're actually wearing the headphones.
 
Last edited:

DualTriode

Addicted to Fun and Learning
Joined
Oct 24, 2019
Messages
893
Likes
593
Does anyone know what correction is needed to convert Amir's measurements to Harman's measurements?

Or if not, I bet one could be created if there are headphones that were measured both by Amir and by Harman.

Hello,

First, you need to understand that the GRAS 45CA Mod used by Harman to derive the Harman target curve is different from the GRAS 45CA-10 used by ASR (and by me). They are two completely different test fixtures.

The Harman 45CA Mod uses a different pinna and an RA0045 coupler (ear simulator). GRAS says that the RA0045 ear simulator has a useful frequency range up to about 10 kHz.

The newer high-frequency RA0401/2 coupler is potentially more accurate; GRAS says it extends the useful frequency range to 20 kHz.

The thing is that there is no direct conversion between the old and new versions of the GRAS ear simulators. Perhaps below 10 kHz, where the frequency response and impedance of the two couplers are similar, both are close enough to the Harman target response that there's no need to recalculate everything all over again.

Thanks DT
 

GaryH

Major Contributor
Joined
May 12, 2021
Messages
1,348
Likes
1,804
Not by a mile. If How To Listen wasn’t described in the methodology, you can’t assume it was used in the study.
Ah, looks like the meaning of the winking smiley in my previous post wasn't conveyed. If you believe that me saying "Just because they didn't mention it publicly doesn't necessarily mean they weren't using the program (or something similar) internally" (i.e. 'it's a possibility') is equivalent to me assuming the definite 'How to Listen was used in the study', then by that (il)logic, you saying "This strongly suggests that there could be differences between the training that was utilized in the published research prior to 2009 and the training that is employed in How to Listen" is equivalent to you assuming 'The training used prior to 2009 was not even similar to How to Listen'.

Of course, both accusations of assumption are faulty, hence the ironic winking smiley. In reality, neither of us is assuming anything; we're just guessing at possibilities without ruling anything out. Which is why I said 'maybe they did, maybe they didn't' at the end of my post, which you've chosen to overlook for some reason.

By the way, I'm not sure what point you're trying to make. Is it that undergoing and achieving level 8 could improve a reviewer's ability to render reliable subjective speaker ratings? If so, great; I don't think anyone would disagree with that. But as I and others have pointed out, the main issue with speaker review reliability isn't the reviewer's inherent capabilities, but rather the long list of other sources of bias that exist. It's not like the average YouTube reviewer is listening blindly under level-matched conditions using the exact same set of 4 test tracks.
Once again, don't let the perfect be the enemy of the good. Of course there is a lot else wrong with audio reviewers and their methods (not sure why you just said speaker reviewers). But attempting to address one area (listener discrimination and reliability) will be a stepwise improvement along the continuum from a Zeos/Darko listening session to a Harman listening test. Blind listening is relatively difficult to set up properly (especially for headphones); even proper level-matching is hard. How to Listen is all ready-made and waiting, however, and there can only be benefits to improving a reviewer's listening abilities. I would like all the things you mentioned done (using the same set of test tracks is another no-brainer and should of course be done), but listener training is a good first step on the road to better subjective audio testing.
 
Last edited: