
Dan Clark Expanse Headphone Review

Rate this headphone:

  • 1. Poor (headless panther): 10 votes (2.7%)
  • 2. Not terrible (postman panther): 13 votes (3.4%)
  • 3. Fine (happy panther): 66 votes (17.5%)
  • 4. Great (golfing panther): 288 votes (76.4%)
  • Total voters: 377
@solderdude, to interject slightly into the conversation you're having with GaryH: I know the argument for following the Harman Curve closely boils down to "why build any more inaccuracies into the process". We know that measurements can deviate depending on methodology (on the same measuring rig), from unit-to-unit variation, from specific interactions the headphone has with your own head vs the measurement rig, and even from pad wear as the pads age. But the counter-argument is the same: "why build any more inaccuracies into the process" by not following the Harman Curve closely, both when EQ'ing to the target and in headphone assessment/reviews (assuming, of course, that the listener or reviewer has a Harman Curve preference). Hell, even if you don't EQ to the Harman Curve and instead EQ to another target of your choice, it would be silly not to EQ accurately from measurements to that target (while still obeying EQ best practice: no sharp filters too far up the frequency range, no excessive boosts, etc.). As long as the EQ filters for the portion of the frequency response you're scrutinising add up to an audible difference, you may as well EQ accurately to your target curve of choice, at least as a starting point.
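To make the "EQ accurately to the target" step concrete, here's a minimal sketch (all arrays and values are hypothetical, and it assumes the measurement and target share one frequency grid): the required correction is simply target minus measured, which you'd then approximate with a handful of well-behaved parametric filters.

```python
import numpy as np

def eq_correction(measured_db, target_db, max_boost_db=6.0):
    """Gain (dB) needed at each frequency to move a measured FR onto a target.

    Boosts are capped, reflecting the 'don't boost areas too much' best
    practice mentioned above; cuts are left unlimited.
    """
    correction = np.asarray(target_db) - np.asarray(measured_db)
    return np.minimum(correction, max_boost_db)

# Hypothetical 5-point example: measured response vs. a flat 0 dB target.
measured = np.array([-2.0, 1.0, 0.0, 8.0, -10.0])
target = np.zeros(5)
# Cuts the +8 dB peak fully, but caps the +10 dB boost at +6 dB.
print(eq_correction(measured, target))
```

The cap is my own illustrative choice; in practice you'd pick it per headphone and keep the filters gentle up top, as discussed above.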

Let me ask you something this way... IF you had the perfect headphone, one whose response did not even change with seating, and you measured it with the fixture on which the Harman curve was created... do you believe the measured trace would perfectly overlap the current Harman curve/compensation for that particular fixture, or would it have deviations above a few kHz?

My point is that the 'smoothness' of the target curve does NOT accurately represent the actual transfer function of the HATS used.
 
it would make more sense to represent the FR measurement with error bars, or perhaps a thick line showing +/- 1SD across multiple measurement trials with slight earcup repositioning.

This is what Oratory does b.t.w. and is handy to have alongside other measurements.
 
This is what Oratory does b.t.w. and is handy to have alongside other measurements.
Oh you're right! Well I definitely approve then. :)

I haven't checked the Oratory site in a while, but I remember trying a bunch of his EQ settings for a variety of headphones, and I hated all of them. I was curious why that might be, and it dawned on me that his measurement rig wasn't the same as the Harman rig, which meant that unless he had a way to adjust/correct the Harman target curve to the equivalent curve on his rig, his proposed EQ corrections wouldn't be completely valid. Perhaps I'm not thinking about this correctly.
 
There are people that don't like Ora's EQ and those that swear by it.
There are those that don't like the Harman target and there are those that swear by it.
There are those that swear by Optimum Hifi target, DF target, certain 'room' targets or a target invented by some manufacturer or measurement guy.
There are people that use EQ which was simply 'winged' (turn up the bass and treble) or do some random pad swaps or uncontrolled modifications and find it great.
Also, when looking at the 'what headphone do you own' thread, you see totally opposite preferences between owners of the same headphone, ranging from bass-shy tunings to bass-head ones.

This tells me something. :)
It may have something to do with the word 'preference': not everyone seems to have exactly the same preference... maybe that's why it's called preference, and nearly all targets are also based on preference, just a different one than the others.
 
For instance, due to the amount of imprecision introduced through small/subtle differences in earcup positioning, it would make more sense to represent the FR measurement with error bars, or perhaps a thick line showing +/- 1SD across multiple measurement trials with slight earcup repositioning.
I wouldn't introduce error bars to measurements, as the errors are not quantified and measured well enough. It's better to leave it as it is; they are not constant either, as new measurement gear keeps appearing. Knowledge is needed for measurement interpretation anyway.
 
I wouldn't introduce error bars to measurements, as the errors are not quantified and measured well enough.
Not sure what you're saying. Also, error bars would not be used to represent "error" in the measurement device, per se, but rather the uncertainty or imprecision in the overall measurement technique. For instance, if you remove, reattach, and remeasure the headphones repeatedly (say 10 times), how different are the FR curves each time? And how might you convey that variation when you display the data? Certainly you could present a simple average, but then you're not presenting the whole story. Error bars, or a shaded region of uncertainty, convey more information. I believe Harman has presented data in this manner as well (IIRC, with their target curves for a home theater).
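As a sketch of what that display could summarize numerically (entirely hypothetical data, assuming N re-seated measurements on one shared frequency grid), the per-frequency mean becomes the plotted line and ±1 SD becomes the shaded band:

```python
import numpy as np

def summarize_trials(trials_db):
    """Per-frequency mean and standard deviation across repeated seatings.

    trials_db: shape (n_trials, n_freqs), one re-seated measurement per row.
    Returns (mean, sd); a plot would draw the mean as the line and
    mean +/- 1 SD as the shaded uncertainty band.
    """
    trials = np.asarray(trials_db, dtype=float)
    return trials.mean(axis=0), trials.std(axis=0, ddof=1)

# Hypothetical: 10 seatings of a 4-point response, with more seating-to-seating
# wiggle in the last (treble-like) bin, as positioning sensitivity grows up top.
rng = np.random.default_rng(0)
base = np.array([0.0, 1.0, -2.0, 5.0])
trials = base + rng.normal(0.0, [0.2, 0.3, 0.5, 2.0], size=(10, 4))
mean, sd = summarize_trials(trials)
print(mean)
print(sd)  # spread should be clearly larger in the last bin than the first
```

This is only bookkeeping for the idea in the post above, not anyone's actual pipeline.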
 
Because, for example, as has already occasionally been shown (and Harman is in the process of gathering a lot more data on the subject, which hopefully will be published), the measurement itself builds an inaccuracy into the process if there isn't a constant transfer function between all headphones and the average listener.
I.e., EQ'ing headphones A to the target might indeed bring them closer to delivering the target to most listeners, but doing so for headphones B might bring them less close to the target, or at least merely shuffle the error curve around.
Yes, I agree, but your argument doesn't exclude mine. You would certainly EQ accurately to the Harman Curve as a starting point (which I did mention), and then you'd tune it by ear if you can.....because you can't know in advance the deviations that you mention.
 
Agree, and I think that also speaks to the degree of imprecision in measuring headphones, let alone trying to compared their measured FR to a target curve. For instance, due to the amount of imprecision introduced through small/subtle differences in earcup positioning, it would make more sense to represent the FR measurement with error bars, or perhaps a thick line showing +/- 1SD across multiple measurement trials with slight earcup repositioning.
In my experience measuring my various headphones on my miniDSP EARS rig, it's actually not that hard to get reliable measurements. It has a flat cheek, so in that respect it's similar to the GRAS rig Amir uses: the flat cheek makes reliable bass seals, and therefore reliable measurements, easier to achieve. I found that headphone positioning differences weren't really a big problem on the EARS. Some headphones varied more than others with slightly different positions, and of course above 10kHz the variation increased with position, but on the whole the measurements were quite reliable.

From that experience I settled on a roughly centralised placement on the rig as the best methodology, generally positioning the headphone so the pinna isn't deformed, which is how you'd place headphones on your own head: roughly central, then moved around a little until your ears aren't touching any part of the earcup (assuming your ears aren't too large or the earcups too small). I then do around 5 to 10 measurements of the headphone (depending on how much time I want to dedicate), removing and roughly replacing the headphone on the rig between each measurement. Occasionally, if a headphone is less reliable in its ability to get a good seal (just the NAD HP50 out of my headphones), I have to remove the odd outlier that is obviously wrong because it isn't sitting on the rig properly. I then average those measurements, and this last step goes a long way towards more accurately representing the general energy above 10kHz, where placement matters most, while also averaging out the smaller deviations below 10kHz. So I think the most value lies in doing multiple measurements at a roughly centralised placement and creating an average, which you would then EQ from (or publish if you were a reviewer).

It is interesting to see the spread of the measurements, though, as it shows how sensitive a particular headphone is to placement differences, so that would be a useful thing to include in a review. In my experience the miniDSP EARS was not imprecise at all; it was quite easy to get reliable and precise results. For example, below are three units of my HD560s, each graph showing all individual measurements so you can see the kind of deviations I got between measurements for each unit; the last (4th) graph plots each unit's average against the others, showing unit-to-unit variation. So, beyond the limitations I described above, I don't find headphone measurements imprecise in my experience with the miniDSP EARS:
[Attached images: HD560s Unit 1 All Measurements.jpg, HD560s Unit 2 All Measurements.jpg, HD560s Unit 3 All Measurements.jpg, HD560s all units AVG of left & right channel.jpg]
I would think the same level of precision could be attained on the GRAS flat-cheek rig that Amir has, but as I don't have any experience using a GRAS rig I can't say for sure; I think it should be the case, though.
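The "drop the odd bad-seal outlier, then average" step described above could be sketched like this (hypothetical data; the rejection rule is my own simple median-distance criterion, not a claim about how anyone's actual workflow operates):

```python
import numpy as np

def average_seatings(trials_db, reject_db=6.0):
    """Average repeated seatings after dropping obvious bad-seal outliers.

    A trial is rejected when its mean absolute deviation from the
    per-frequency median exceeds reject_db, a crude stand-in for
    'obviously not sitting on the rig properly'.
    Returns (average curve, number of rejected trials).
    """
    trials = np.asarray(trials_db, dtype=float)
    median = np.median(trials, axis=0)
    deviation = np.abs(trials - median).mean(axis=1)
    kept = trials[deviation <= reject_db]
    return kept.mean(axis=0), int(len(trials) - len(kept))

# Hypothetical 2-band curves: five good seatings plus one with a broken seal
# (bass collapsed), like the occasional HP50-style outlier mentioned above.
good = [[10.0, 0.0], [10.5, 0.2], [9.8, -0.1], [10.2, 0.1], [9.9, 0.0]]
bad = [[-15.0, 0.3]]  # lost the bass seal
avg, n_rejected = average_seatings(good + bad)
print(avg, n_rejected)  # the collapsed-bass seating is excluded from the average
```

The threshold and data are invented purely for illustration of the averaging idea.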
 
Yes, I agree, but your argument doesn't exclude mine. You would certainly EQ accurately to the Harman Curve as a starting point (which I did mention), and then you'd tune it by ear if you can.....because you can't know in advance the deviations that you mention.

You're right, but I think that only applies to headphones that deviate sufficiently from a target, and of a type for which the inconsistent fixture-to-average-human deviations may not be that significant. In the case of the DCA Stealth and Expanse, the uncertainty is too great versus their error curve relative to Harman to consider EQ'ing them to the target worthwhile in the first place. For these, the starting point should probably be the headphones as they are, in my opinion.

Also, not knowing these deviations =/= making them disappear :D. This is a problem with headphone testing ATM: we simply need to test for them on a per-headphone basis, to assess whether or not a particular model will actually deliver a particular target when EQ'd to it according to ear-simulator measurements. A pair of headphones that is reasonably consistent across listeners but deviates from the average transfer function between fixture A and the average listener (the DCA Noire could potentially be an example of that kind) should not then be criticised for deviating from the target, intentionally or not, if it turns out that the measured deviation actually enables it to get closer to delivering the target once it's on people's heads. Meanwhile, headphones that stick well to the target on a fixture but deviate from it on people's heads, either because of a highly inconsistent coupling or because their transfer function is mismatched versus the average (or at least versus the sort of headphones that were used to test the target), should be heavily criticised, even though they'd look ideal on a fixture and get glowing reviews as a result.
 
Let me ask you something this way... IF you had the perfect headphone, one whose response did not even change with seating, and you measured it with the fixture on which the Harman curve was created... do you believe the measured trace would perfectly overlap the current Harman curve/compensation for that particular fixture, or would it have deviations above a few kHz?

My point is that the 'smoothness' of the target curve does NOT accurately represent the actual transfer function of the HATS used.
I'm not sure I fully understand the inferences you're making, but I get the impression that they wouldn't be relevant, for the simple reason that the Harman study was based on actually EQ'ing a headphone accurately to a "baseline eardrum measurement", after which the participants altered the bass and treble to taste to create the Harman Headphone Curve. The relevance of the Harman Curve's statistical preferability amongst the participants was therefore dependent on the headphone being accurately EQ'd to the Harman Curve, so if you purposefully deviate from that as a starting point (i.e. don't EQ accurately to the curve), you're already introducing errors relative to the research.

Ah, ok, I've thought about what you were saying. Yes, I think that if a perfect Harman headphone had been manufactured that magically didn't vary with seating position, and you measured it on the original Harman rig, it would follow the Harman Curve exactly (assuming no other variables like unit-to-unit variation), the reason being that the perfect Harman headphone would be designed to follow the Harman Curve. The one exception is the 10kHz notch: you'd still want that, as it isn't pictured as part of the Harman Curve.
 
You're right, but I think that only applies to headphones that deviate sufficiently from a target, and of a type for which the inconsistent fixture-to-average-human deviations may not be that significant. In the case of the DCA Stealth and Expanse, the uncertainty is too great versus their error curve relative to Harman to consider EQ'ing them to the target worthwhile in the first place. For these, the starting point should probably be the headphones as they are, in my opinion.

Also, not knowing these deviations =/= making them disappear :D. This is a problem with headphone testing ATM: we simply need to test for them on a per-headphone basis, to assess whether or not a particular model will actually deliver a particular target when EQ'd to it according to ear-simulator measurements. A pair of headphones that is reasonably consistent across listeners but deviates from the average transfer function between fixture A and the average listener (the DCA Noire could potentially be an example of that kind) should not then be criticised for deviating from the target, intentionally or not, if it turns out that the measured deviation actually enables it to get closer to delivering the target once it's on people's heads. Meanwhile, headphones that stick well to the target on a fixture but deviate from it on people's heads, either because of a highly inconsistent coupling or because their transfer function is mismatched versus the average (or at least versus the sort of headphones that were used to test the target), should be heavily criticised, even though they'd look ideal on a fixture and get glowing reviews as a result.
I'd agree that there are unknowns in headphone measurement, such as the main topic in your post (transfer functions can differ between headphones), but the problem is that you can't really know how they deviate, in which direction, or by how much. With that kind of uncertainty, I see the best course as EQ'ing to the target curve and then tuning roughly by ear if you can; I don't know how you can have enough information to reliably do otherwise. It might be similar to the "Moore's Law is Dead" meme, inasmuch as we may be hitting the limits of what fixed target curves and measurement rigs alone can do to enhance our listening accuracy/experience. You're then into the individualised HRTF DSP processing of the Smyth Realizer and the Impulcifier Project as the next evolution/improvement. We kind of have to work with what we've got; fortunately the Harman Curve is a good solution in my experience, but it has the limitations we've all mentioned.

I'm unsure how we can get more information and clarity out of it. How would you go about reviewing headphones, measuring them, and making EQ decisions, combined with GRAS measurements, in terms of ironing out the transfer-function variable in particular (and perhaps even creating a modified Harman target that takes a headphone's unusual transfer function into account)?

EDIT: I thought about it briefly and came up with an idea of how you could theoretically account for peculiarities in the transfer functions of some headphones, i.e. large differences when worn on a person's head versus the GRAS measurement. It could look like the following crazy process:
Step 1: Take the headphone used in the Harman research during the creation of the Harman Curve (it might have been the HD800s, IIRC). EQ it accurately to the Harman Curve. Get a representative sample of the population (20 people who best represent humanity?). Fit them with in-ear mics, measure the EQ'd headphone on their heads, and create an average measurement from that. This is now your Real World Harman Target for the representative population.
Step 2: Say you're reviewing headphone model X. You measure it on the GRAS and create your average measurement, then EQ the headphone accurately to the Harman Curve. You then measure EQ'd headphone X on your 20 people (the representative sample from Step 1) and average those measurements. Comparing this real-world on-head average with the Real World Harman Target from Step 1, the difference between them is the difference in transfer function between the two headphones.
Step 3: You then apply the transfer-function difference determined at the end of Step 2 to the Harman Curve, creating a modified Harman Curve specific to that model (headphone X). People would then be able to EQ headphone X accurately to the originally intended Harman Curve, because the transfer-function variable has been removed. You'd have to do this with every model you reviewed and publish a modified Harman Curve for each, building a database of reviewed headphones with the transfer-function variable effectively removed. Wow, what a palaver; that's not going to happen, but it would remove the variable we've been talking about! It would also show how significant this variable is and how widespread the differences are: perhaps isolated to a few strange models, perhaps widespread across many. Maybe there would be trends of variability based on overall design, earcup size, angled drivers, closed vs open back, etc. There could be a lot to learn, but it's a big undertaking.
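The arithmetic behind Step 3 of that hypothetical process can be written down in a few lines (all curves invented for illustration; this is just the bookkeeping of the proposed idea, not an established method):

```python
import numpy as np

def modified_target(harman_target_db, real_world_harman_db, real_world_x_db):
    """Per-model target from the hypothetical Steps 1-3 above.

    real_world_harman_db: average in-ear measurement of the reference
        headphone EQ'd to Harman, on the panel (Step 1).
    real_world_x_db: same panel measurement for headphone X EQ'd to
        Harman on the fixture (Step 2).
    Their difference is headphone X's transfer-function offset, which
    Step 3 subtracts from the target so that hitting the modified curve
    on the fixture lands on the real target on heads.
    """
    offset = np.asarray(real_world_x_db) - np.asarray(real_world_harman_db)
    return np.asarray(harman_target_db) - offset

# Hypothetical 3-band example: X arrives 2 dB hot in the top band on real
# heads, so its per-model fixture target is pulled down 2 dB there.
print(modified_target([0.0, 4.0, 8.0], [0.0, 4.0, 8.0], [0.0, 4.0, 10.0]))
```

The sign convention (subtracting the offset) is the key design choice: a headphone that over-delivers on heads gets a lower fixture target, and vice versa.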
 
Not sure what you're saying. Also, error bars would not be used to represent "error" in the measurement device, per se, but rather the uncertainty or imprecision in the overall measurement technique. For instance, if you remove, reattach, and remeasure the headphones repeatedly (say 10 times), how different are the FR curves each time? And how might you convey that variation when you display the data? Certainly you could present a simple average, but then you're not presenting the whole story. Error bars, or a shaded region of uncertainty, convey more information. I believe Harman has presented data in this manner as well (IIRC, with their target curves for a home theater).
Well, what you presented is the easiest kind of uncertainty to measure. What about the differences between measurements on different HATS? Even if someone quantified the differences across all existing GRAS units and other HATS (good luck with that), would you adjust those bars, or shaded regions of uncertainty, every time new measuring gear appears and deprecates them?
 
Yes, I think that if a perfect Harman headphone had been manufactured that magically didn't vary with seating position, and you measured it on the original Harman rig, it would follow the Harman Curve exactly (assuming no other variables like unit-to-unit variation), the reason being that the perfect Harman headphone would be designed to follow the Harman Curve. The one exception is the 10kHz notch: you'd still want that, as it isn't pictured as part of the Harman Curve.

Chicken-and-egg story. Unfortunately, the compensation curve used for a HATS is just 'an average' based on certain calibration standards; the fixture's actual required compensation would never be that 'smooth'. For this reason alone, following that averaged, smoothed line cannot possibly be correct, and not only at 10kHz, for that one specific fixture.

You are free to believe that the actual compensation needed for a fixture is such a smooth 'curve', but that is just an act of faith.
Scientific studies have been made of HATS, and what has been made public does not support your belief.
The compensation curve + target for a specific fixture simply cannot be a smooth line. Only by averaging many, many different measurements and smoothing the result does one reach a compensation curve + target (which are two different things) like what is used as a reference today.

That 'target' curve (tonal preference curve + compensation for the changes made by the acoustic filters: fake pinna, fake ear canal, and the mic's response) will in reality differ from that smooth line, and so will each headphone and each seating, somewhat to substantially. Even when one averages over multiple seatings and ends up with a 'squigly', that squigly is only that: an average, which may not be how most people hear it. And that averaged, only 1/6- or 1/12-octave-smoothed squigly needs to be smoothed much, much further before it can be compared against the overall target/tonal balance.
Yet all that smoothing and averaging hides substantial real-world variations that alter how headphones sound, even while the average (smoothed as heavily as the well-known target curve) may well 'hug' the curve.

So now the question arises: what smoothing is good enough? How do we handle peaks that are flattened by smoothing, or by fixture 'errors', yet contribute to the sound character?

Let's face it: regardless of how good and expensive measurement gear is, it will always be inaccurate, and so that curve cannot be 'accurate' even if we want it to be.
Headphone measurement is NOT as exact a science as users of that gear purport it to be.
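On "what smoothing is good enough?": a minimal numeric sketch of fractional-octave smoothing (hypothetical data; real analyzers smooth power with tapered windows, this uses a naive box average in dB) showing how a narrow peak survives 1/12-octave smoothing but nearly vanishes under 1-octave smoothing:

```python
import numpy as np

def octave_smooth(freqs, db, fraction):
    """Naive 1/fraction-octave smoothing: box average in a log-frequency window.

    Only meant to show how window width trades peak detail for readability.
    """
    freqs = np.asarray(freqs, dtype=float)
    db = np.asarray(db, dtype=float)
    half = 2.0 ** (1.0 / (2.0 * fraction))  # half-window as a frequency ratio
    out = np.empty_like(db)
    for i, f in enumerate(freqs):
        mask = (freqs >= f / half) & (freqs <= f * half)
        out[i] = db[mask].mean()
    return out

# Hypothetical response: flat, with one very narrow +6 dB peak near 9 kHz.
freqs = np.geomspace(20, 20000, 400)
db = np.where(np.abs(np.log2(freqs / 9000)) < 0.02, 6.0, 0.0)
fine = octave_smooth(freqs, db, 12)   # 1/12 octave: peak still clearly visible
coarse = octave_smooth(freqs, db, 1)  # 1 octave: peak almost averaged away
print(fine.max(), coarse.max())
```

It illustrates the point above: a peak that contributes to the sound character can be all but erased by the heavy smoothing needed to compare a squigly against a smooth target.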
 
Chicken-and-egg story. Unfortunately, the compensation curve used for a HATS is just 'an average' based on certain calibration standards; the fixture's actual required compensation would never be that 'smooth'. For this reason alone, following that averaged, smoothed line cannot possibly be correct, and not only at 10kHz, for that one specific fixture.

You are free to believe that the actual compensation needed for a fixture is such a smooth 'curve', but that is just an act of faith.
Scientific studies have been made of HATS, and what has been made public does not support your belief.
The compensation curve + target for a specific fixture simply cannot be a smooth line. Only by averaging many, many different measurements and smoothing the result does one reach a compensation curve + target (which are two different things) like what is used as a reference today.

That 'target' curve (tonal preference curve + compensation for the changes made by the acoustic filters: fake pinna, fake ear canal, and the mic's response) will in reality differ from that smooth line, and so will each headphone and each seating, somewhat to substantially. Even when one averages over multiple seatings and ends up with a 'squigly', that squigly is only that: an average, which may not be how most people hear it. And that averaged, only 1/6- or 1/12-octave-smoothed squigly needs to be smoothed much, much further before it can be compared against the overall target/tonal balance.
Yet all that smoothing and averaging hides substantial real-world variations that alter how headphones sound, even while the average (smoothed as heavily as the well-known target curve) may well 'hug' the curve.

So now the question arises: what smoothing is good enough? How do we handle peaks that are flattened by smoothing, or by fixture 'errors', yet contribute to the sound character?

Let's face it: regardless of how good and expensive measurement gear is, it will always be inaccurate, and so that curve cannot be 'accurate' even if we want it to be.
Headphone measurement is NOT as exact a science as users of that gear purport it to be.
I'm not sure what you're talking about, it might be the terms you're using. Don't use the word "compensation" as that isn't meshing in my head.

As I said before, the Harman study was based on actually EQ'ing a headphone accurately to a "baseline eardrum measurement", after which the participants altered the bass and treble to taste to create the Harman Headphone Curve; the relevance of the Harman Curve's statistical preferability amongst the participants was therefore dependent on the headphone being accurately EQ'd to their "smoothed target". Of course headphone measurements can differ higher up the frequency range with positioning, which is where multiple measurements and averaging come in. And of course the "predictability" of a headphone's transfer function can differ between humans and the GRAS fixture for particular models, which is the transfer-function variable I've been discussing with MayaTlab. I'm not certain we're actually disagreeing, but in the absence of other information I think it makes sense to EQ headphones accurately to the Harman Curve as a starting point in EQ, which I believe is where this discussion originated. What we've discussed doesn't really negate that.
 
In order to still stay on topic... the DCA Expanse, for a passive headphone, measures remarkably close to the 'reference' used to calibrate Amir's fixture, which includes the research Harman did on 'emulating' the tonal balance of a very good speaker in a 'standard' Harman listening room (not your living room without room correction). But there are some small deviations not everyone prefers. The majority of people are likely to like the sound, but that same majority can probably not afford it, or are not willing to pay as much.

Below is a long and probably boring attempt to visualize why 'THE Harman target' is only a 'guideline' and not a 'hard fact that must be followed exactly'.

The Harman study is only about finding the tonality the majority of people seem to prefer, in order to educate their own engineers and ensure better sales in the future.

It may confuse you, but the 'well-known Harman target' you see used is fixture dependent, and it differs from Harman's own, as they used a different pinna.
The familiar 'Harman curve' (the one with the bump at 3kHz) actually consists of a correction for the fixture itself, with the 'EQ settings found in the Harman research' added on top of that correction curve for that specific fixture.
As an engineer, I see the 'Harman curve' as the EQ settings they arrived at during the research minus the actual response of the headphone used (which was measured on a non-standard fixture with their own fake pinna). As not every listener arrived at the same settings (and they were probably song dependent as well), an 'average' was created.

I see it as measuring the output of a DAC (which is linear in response) with an ADC that has a bunch of filters in front of it, filters whose response differs depending on which DAC you connect and which also differ at different times of day.

Translate 'different response' as: different pinna activation, seal, positioning, product variance.

Now, if we measure 10 different DACs, all of them will measure differently, because someone is messing with the filters before the measurement (within a certain range for each filter).
If we looked closely, and even listened to each DAC, the measurement squigly of each DAC would differ. Now, if I were a manufacturer of said 'measurement device', I would want those filters standardized, and ideally no guy inside changing the settings depending on which DAC he sees is about to be measured.

The goal is to measure as closely as possible to equal amplitude at all frequencies. This, however, is not possible because of the filters in front of the otherwise linear ADC.
One HAS to use filters that change here and there, so the technical solution is to add a compensation for the (varying) filters.
The manufacturer does this by performing a shitload of (standard) measurements using a well-characterized 'reference'.
They get a bunch of different squiglies and average them out. This 'average' of the filters used is what is offered as 'compensation'.
The idea is that all similar-performing DACs measured on that device will have a very similar, but not identical, response. The average is all that is available.
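That "average a shitload of squiglies into one compensation" idea can be sketched numerically (entirely hypothetical numbers): each measurement's filter response differs, the published compensation is their mean, and applying that mean to any single measurement still leaves a residual error, which is the whole point being made here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_freqs = 50, 6

# Hypothetical: each measurement's acoustic path = a shared filter shape
# plus a per-measurement wiggle (seal, positioning, product variance).
shared = np.array([0.0, 2.0, 5.0, -3.0, 1.0, -6.0])
trials = shared + rng.normal(0.0, 1.5, size=(n_trials, n_freqs))

compensation = trials.mean(axis=0)  # the 'average' offered as compensation

# Residual after compensating each individual measurement: zero on average,
# but never zero for any single squigly.
residual = trials - compensation
print(np.abs(residual).max())
```

So even a perfectly computed average compensation leaves dB-scale uncertainty on any individual measurement, which is the argument the post above makes in words.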

Now... with speakers in a dead room at one single standard position this works fairly well, and the guy altering the filters only adjusts them slightly.

No one likes to listen in a dead room, nor do they own one. Such a speaker sounds different in a 'standard' room (which differs from your own room with 100% certainty).
So people measure how the overall 'tonal balance' changes and correlate that with what they hear.
Then they do the same but with headphones measured on an 'uncertain' device.
The researchers then let people set the 'filters' they think are the 'correct' ones needed to make the sound they like.
They find settings that differ within a range. They select an average (probably by excluding outliers and averaging the medians) and arrive at the EQ needed to mimic the flat-measuring speaker in a room.

To me, that EQ relative to the sound of the speaker measured in the dead room is the 'Harman target': it's what needs to be done to the 'flat in the dead room' device.

The 'flat in the dead room' device is the equivalent of the DAC measurement device with the changing filters plus the 'average' compensation they found.
To make that sound good (preferred by the majority of people, not universally nor by everyone), that found and averaged EQ needs to be added to the result of the not-so-trustworthy measurement device, in order to arrive at a squigly that correlates highly with how we would perceive a good speaker in a 'standard' room.

Now... all DAC measurement device manufacturers use somewhat to substantially different filters, and all arrive at their own 'average but not exact' compensations.
The 'Harman target' (the found average EQ) is always the same.
The result: the 'raw measured squigly' differs between all DAC measurement devices, and each adds the same 'Harman target' to it in order to arrive at the 'most ideal sound'.

For this reason, the DAC measurement device manufacturers, who all add the same 'Harman target', end up with different raw squiglies while each attempting to use the 'average' filter response they found.

All these manufacturers build around a 'standard' DAC. The makers of the DAC measurement gear do their best to get the guy who sets the filters randomly to set them in a specific way, so that when that standard DAC is used, the result (DAC + alteration + compensation) shows a 'flat' response (an agreed-upon 'standard' speaker in the dead room at the same distance). They then add the 'Harman correction' to it and say: this is the industry standard.
And indeed, when the measurement gear from manufacturer A goes to manufacturer B, the results are remarkably similar under the same exact conditions (same DAC).

Yet... secretly, the guy that comes with the DAC measurement gear is allowed to play with the filter settings at will as soon as another DAC is measured (different headphone, different conditions).
So while the 'standard' DAC measures a nearly perfect response (after compensation), that won't be the case with other DACs. Some come closer than others, yet we don't really know which, as the guy changing the settings keeps that to himself.

That's why the 'Harman target + correction curve' (often referred to as 'THE Harman curve') is a fine starting point, but there IS an uncertainty we do not know, and it could be substantial (several dB) at certain frequencies.

So I agree with this:
I think it makes sense to EQ headphones accurately to the Harman Curve as a starting point in EQ

It does make sense. What I am trying to convey here is that the 'Harman curve' is not the exact curve to follow for each headphone when the goal is 'perfect sound quality', and that the 'average' line we see and love in all the plots of the raw squiggly is not necessarily 'correct' as in 'exact'.
Getting 'close to that average' will, in general, bring the tonal balance closer to what was found in the research (in this case Harman's).
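The "EQ toward the target as a starting point" idea boils down to simple arithmetic per frequency: correction = target minus measured, ideally with conservative limits on boosts (as mentioned earlier in the thread about EQ best practice). A toy sketch, with all frequencies and dB values invented:

```python
# Hypothetical illustration of EQ'ing toward a target: the correction at
# each frequency is target minus measured, clamped so we don't boost
# aggressively. Numbers are invented for the example.
freqs_hz         = [100,  1000, 3000, 6000, 10000]
measured_db      = [2.0,   0.0,  5.0, -4.0,   1.0]
harman_target_db = [4.0,   0.0,  8.0,  0.0,   0.0]

MAX_BOOST_DB = 3.0  # conservative cap on positive gain; cuts are left alone

eq_gains_db = []
for meas, tgt in zip(measured_db, harman_target_db):
    gain = tgt - meas               # what would move us exactly onto the target
    gain = min(gain, MAX_BOOST_DB)  # limit boosts
    eq_gains_db.append(gain)
# eq_gains_db -> [2.0, 0.0, 3.0, 3.0, -1.0]  (the 6 kHz boost got capped)
```

Of course, this only moves you onto the target *as measured on one rig, one seating*, which is exactly the uncertainty being discussed.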

Now consider that aside from the guy that secretly operates the filters, and the 'Harman EQ' being added, there is a second guy (your ear canal and pinna) playing with yet another set of filters that only somewhat resemble the ones in the DAC measurement device. And there is an automated device (your brain) inside your head 'undoing' that second guy's filtering, based on an ever (slowly) changing reference built from your perception of real sounds around you.

This means headphone measurements and actual perception have some correlation, but not an exact one (two sets of filters, one 'correction' being fixed and the other semi-fixed). It is a mess, and the somewhat randomized, highly averaged measurement squiggly sitting between the DAC and the brain is not an 'exact' one. It is merely an indication.
Some measurements are more accurate than others, within a certain (limited) frequency range.

So don't put too much faith in a highly averaged correction + EQ 'line' as being holy and final. It isn't, though most people assume it is (or has to be).

Headphone measurements are indicative at best. They are not as accurate as measuring a DAC directly, without filters in between that cannot be accurately compensated for.
So 'THE Harman curve' for a specific HATS or other fixture is also just an average, far from an exact curve to be followed.

That is the point I am trying to get across. Not to discredit standards, nor to question Harman's research. Just that following 'THE Harman curve', and an EQ based on that particular fixture, while at least giving improvements over none (in most cases), does not mean the result is 'exactly' correct. Most likely it isn't, even though a nice plot shows it is, based on potentially not-correct 'measurements'.
 
Well, what you presented is the easiest-to-measure kind of uncertainty. What about the differences between measurements on different HATS? Even if someone were to quantify the differences across all existing GRAS rigs and other HATS (good luck with that), would you adjust these bars, or shaded regions of uncertainty, when new measuring gear appears and makes them deprecated?
I think you might be describing a different use case. I'm simply explaining why I think it would be more elegant to depict the variation and range of uncertainty with a single set of headphones on a single rig. That's it.
It sounds like oratory figured that out (as pointed out by solderdude).
 
Tyll also showed several measurements in one plot to show response at various positions many, many years ago but without a target.
The way Oratory makes this visible is more intuitive, but it only shows the extremes and the band in between.
[screenshot: Oratory's plot showing the min/max band across measurements]
You don't know how much variation to expect when worn 'normally' with a good seal.
Plots like this also are enlightening. Notice how above 5kHz the variations can be +/- 3dB or more.
[attached plot: response variation across seating positions]
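The min/max band idea is easy to sketch: reseat the same headphone several times and look at the per-frequency spread. A toy example (my own invented numbers, chosen only to echo the observation that the spread grows above a few kHz):

```python
# Sketch of the reseat-variation band: measure the same headphone with
# several different seatings and compute the spread per frequency.
# All values are invented for illustration.
reseats_db = {  # frequency (Hz) -> level (dB) for each reseat
    1000:  [0.1, -0.2, 0.0,  0.2],
    6000:  [2.5, -1.0, 0.8, -2.8],
    10000: [3.0, -3.2, 1.5, -0.5],
}

spread_db = {f: max(v) - min(v) for f, v in reseats_db.items()}
# spread at 1 kHz is fractions of a dB; at 6-10 kHz it reaches ~5-6 dB,
# i.e. roughly +/- 3 dB around the average, as in the plot above
```

A band like this makes it obvious why a single treble squiggly should not be read as exact.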
 
Tyll also showed several measurements in one plot to show response at various positions many, many years ago but without a target.
... I miss Tyll. And I forgot he actually did that. My wife was always asking me why I am watching this Santa Claus person.
 
In order to still stay on topic: the DCA Expanse, for a passive headphone, measures remarkably close to the 'reference' used to calibrate Amir's fixture, which incorporates Harman's research on 'emulating' the tonal balance of a very good speaker in a 'standard' Harman listening room (not your living room without room correction). But there are some small deviations not everyone prefers. The majority of people are likely to like the sound, but that same majority probably cannot afford it, or is not willing to pay that much.

Below is a long and probably boring attempt to visualize why 'THE Harman target' is only a guideline and not a hard fact that must be followed exactly.

The Harman study is only about finding the preferred tonality the majority of people seem to like, in order to educate their own engineers and ensure better sales in the future.

It may confuse you, but the well-known 'Harman target' you see used is fixture dependent. And it differs from Harman's own, as they used a different pinna.
The familiar 'Harman curve' (the one with the bump at 3kHz) actually consists of a correction for the fixture itself, with the 'found EQ settings from the Harman research' added on top of that correction curve for that specific fixture.
For me (an engineer), the 'Harman curve' is the EQ settings they arrived at during the research relative to the actual response of the headphone used (which was measured on a non-standard fixture with their own fake pinna). As not every listener arrived at the same settings (and they were probably song dependent as well), an 'average' was created.

I see it as measuring the output of a DAC (which is linear in response) through an ADC with a bunch of filters in between, filters whose response differs depending on which DAC you connect and also differs at different times of day.

Translate 'different response' as: different pinna activation, seal, positioning, product variance.

Now if we were to measure 10 different DACs, all of them would measure differently, because someone is messing with the filters before the measurement (within a certain range for each filter).
If we looked closely, and even listened to each DAC, the measurement squiggly of each would differ. Now... if I were a manufacturer of said measurement device, I would want those filters standardized, and ideally not have a guy inside changing the settings based on which DAC he sees is about to be measured.

The goal is to measure as closely as possible to equal amplitude at all frequencies. This, however, is not possible because of the filters in front of the otherwise linear ADC.
One HAS to use filters that change here and there. The technical solution is to add a compensation for the (varying) filters.
The manufacturer does this by performing a shitload of (standard) measurements using a well-characterized 'reference'.
They get a bunch of different squigglies and average them out. This average of the filters used is offered as the 'compensation'.
The idea is that all similarly performing DACs measured on that device will have a very similar, but not identical, response. The average is all that is available.
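The compensation step in the analogy can be sketched in a few lines. This is my own toy illustration of the principle, not any manufacturer's actual calibration procedure, and every number is invented:

```python
# Sketch of the compensation idea: measure a well-characterized reference
# many times, average the results, and subtract that average from every
# later raw measurement. Values are invented for illustration.
from statistics import mean

# Repeated reference measurements at one frequency bin (dB):
reference_runs_db = [1.2, 0.8, 1.1, 0.9]
compensation_db = mean(reference_runs_db)   # the 'average' offered as compensation

raw_measurement_db = 4.0                    # some later device, same bin
compensated_db = raw_measurement_db - compensation_db

# Any single run can still deviate from the average, so the compensated
# value is only as trustworthy as that average.
```

The residual spread of the individual runs around that average is exactly the uncertainty that never leaves the chain.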

Man, you must have written that spoiler as a joke! Nice one!
 
There are people that don't like Ora's EQ and those that swear by it.
There are those that don't like the Harman target and there are those that swear by it.
There are those that swear by the Optimum HiFi target, the DF target, certain 'room' targets, or a target invented by some manufacturer or measurement guy.
I wonder how many of these differences in target-curve preference are actually merely a function of differences in the measurement rig used to create the target.
Let's say Harman created headphones that precisely matched the Harman target as measured on a Harman rig.
If you then took those same headphones and measured them on the Oratory rig, they would measure differently. Oratory would then issue EQ settings that supposedly "correct" the headphones so that they match the Harman target. But in reality, the headphones with 0 EQ are, by definition, already matching the Harman target.
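That scenario is easy to put in numbers. A toy sketch with invented values: a headphone that perfectly matches the target on rig A will, on rig B, appear to miss the target by exactly the difference between the two rigs, and the 'correction' issued from rig B is that rig difference, not a flaw in the headphone.

```python
# Toy numbers for the rig-mismatch scenario; all values are hypothetical.
target_db = 8.0          # target level at some frequency (say around 3 kHz)
measured_on_rig_a = 8.0  # perfect match on the rig the target was made on
rig_b_minus_rig_a = 2.5  # how much rig B's response differs from rig A's here

measured_on_rig_b = measured_on_rig_a + rig_b_minus_rig_a
eq_issued_from_rig_b = target_db - measured_on_rig_b
# eq_issued_from_rig_b == -2.5 : purely an artifact of the fixture
# difference, 'correcting' a headphone that needed no correction
```

So two people EQ'ing "to the Harman target" from two different rigs can end up with audibly different results, both believing they hit the target exactly.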
 