Dan Clark E3 Headphone Review

Rate this headphone:

  • 1. Poor (headless panther): 4 votes (1.6%)
  • 2. Not terrible (postman panther): 11 votes (4.3%)
  • 3. Fine (happy panther): 38 votes (14.9%)
  • 4. Great (golfing panther): 202 votes (79.2%)
  • Total voters: 255
Associating with reviewers is one thing; associating with those who engage in unethical behaviour is another, especially when they are known bad actors.
It's publicity. Some people obviously trust these reviewers and thus a positive review will sell units. That's the bottom line. Me? I don't waste my time looking at youtube reviewers.
 

No, the bottom line is that a company doesn't look reputable associating with unethical partners; whether or not you use the services of those unethical partners does not change the matter.

In most developed countries false advertising is illegal.

And I'm not saying DCA is falsely advertising, but there is a person making a review video on one of their products, a product they seem to have gotten before release, and in the same video that person is both making outrageous claims about other products and outright lying in some instances. That person might be lying for a variety of reasons, and profit is one of them. This is concerning.

You might not care about this matter, nor its implications, and that's OK, but I do, and some other people seem to care too.
 
I'm not really a fan of flashy products either, but I like the stitching on the various DCA products like The Stealth, The Expanse, and the E3 here. Flashy products tend to look cheap or chintzy to me, but I don't get that impression with the DCA products; they just look polished and well finished to me.
I can understand the appeal of the stitching for some people. My preference would be unbranded and without striking/bold stitching. DCA definitely is within the realm of reasonable and not cheap/chintzy at all. I've swung pretty hard towards unbranded everything. Matte black or other dark color Aeon 2 Noire is kind of where perfect is for me.
 
I've had mine for about 6 months and the left channel is playing at a lower volume for some reason. Changed cable, no difference... Sigh...
In what frequency range? Shine a light inside the cup and look for leaks between the cup and the ear pad. It might not be sealing. If it's the sticky mounting pads, you can refresh the adhesive with some isopropyl alcohol on a cloth.
 
So, that last refrigerator ya bought, did ya take it home and try it out for 3 months before you laid down the cash?
No! And I did not say whether it was good or not on any forum before or after buying it! And that's the reason why I feel the prompt "rate this headphone" is not clear enough. Yes, you can rate the job done by Dan Clark, but you cannot rate the headphone if you did not listen...
 
We are voting based on the measurements @amirm just presented. We aren't voting at ASR on subjective impressions, as most haven't listened to the device in question. I believe you're thinking of Head Fi.
No. I am not Hi-Fi "addicted". ASR is just about the only forum I read every day. I don't need commercial reviews :)
 
And to close my comments: I own a DCA Stealth, therefore I am rather a "supporter" of Dan Clark :)
 
If it were input from owners only, the poll results for a new, expensive product would be based on an extremely small sample, and probably everything would end up with the highest possible score, because most buyers, having forked out a lot of money, will like what they bought.
 
In grand scheme of things, I don't think we need to worry about them especially when Dan tells me that they match the two drivers to close tolerance.
Thank you for the answer. It is a bit unlike you to say "the manufacturer tells me it is fine, so it should be fine", isn't it?

In any case, I was curious, so I went and looked up channel variation for two headphones that I know have good spatial qualities, the ST009S and LCD-XC, and created the graphs below.

These graphs plot (R level - L level) from 700 Hz to 5000 Hz.

[Attached: three graphs of the R - L level difference]


It looks like other headphones with good spatial qualities also vary by ±1 to 1.5 dB in this range.

Graphs look quite different though. Not sure if that means anything at all.
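For anyone wanting to reproduce this kind of plot from their own measurement exports, a minimal sketch follows; the file names and two-column CSV layout (frequency, SPL) are assumptions of mine, not the files behind the graphs above.

```python
# Sketch: plot the right-minus-left level difference of one headphone from
# two frequency-response CSV exports. File names and layout are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

left = np.loadtxt("left_channel.csv", delimiter=",", skiprows=1)   # freq_hz, spl_db
right = np.loadtxt("right_channel.csv", delimiter=",", skiprows=1)

# Interpolate the right channel onto the left channel's frequency grid so
# the two curves can be subtracted point by point.
freqs = left[:, 0]
diff_db = np.interp(freqs, right[:, 0], right[:, 1]) - left[:, 1]

# Restrict to the 700 Hz - 5000 Hz band discussed above.
band = (freqs >= 700) & (freqs <= 5000)

plt.semilogx(freqs[band], diff_db[band])
plt.axhline(0, color="gray", linewidth=0.5)
plt.xlabel("Frequency (Hz)")
plt.ylabel("R - L level (dB)")
plt.title("Channel matching, 700 Hz - 5 kHz")
plt.show()
```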
 
A little (but gentle) dip in the 2-4kHz range is actually preferred by many. It makes the sound less 'edgy', more 'laid-back'.
The reason for this is that people have different ear gain in that frequency range, and the gain of the standard fixture can differ from people's actual ear gain.
So some people may want/like a little more energy in that region where others will like a little less.

Rule of thumb: the more 'smooth' the response is the better the sound quality will be.
Another rule: very narrow peaks and dips may not be very detrimental. One should see how 'ragged' the response is at the ears when using speakers in a room. That is factors worse, yet the brain does not care (due to the way hearing works) unless it becomes really, really bad.

People's ears generally do not conform to standards. There is leeway though.
Measurements should be done to standards so they are comparable and repeatable.
That does not mean ears comply to that standard as well.
Averaged over many, many ears one may actually get close to some standard fixtures, though; that's what the standard is (or should be) made to match.
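To put a rough number on that smoothness rule of thumb, one could score a response by how far it deviates from a smoothed copy of itself. Here is a toy sketch of that idea; the function name, grid density, and smoothing width are my own choices, not any standardized metric.

```python
# Toy sketch: score "raggedness" as the RMS deviation (dB) of a response
# from a fractional-octave-smoothed copy of itself. Narrow peaks and dips
# raise the score; broad, gentle trends mostly survive the smoothing.
import numpy as np

def raggedness_db(freqs_hz, spl_db, points_per_octave=48, smooth_octaves=1/3):
    # Resample onto a log-spaced grid so the smoothing window spans a
    # constant number of octaves everywhere.
    octaves = np.log2(freqs_hz[-1] / freqs_hz[0])
    log_f = np.linspace(np.log2(freqs_hz[0]), np.log2(freqs_hz[-1]),
                        int(octaves * points_per_octave))
    resampled = np.interp(log_f, np.log2(freqs_hz), spl_db)

    # Moving-average smoothing over roughly `smooth_octaves` of bandwidth.
    n = max(3, int(points_per_octave * smooth_octaves) | 1)  # force odd window
    smoothed = np.convolve(resampled, np.ones(n) / n, mode="same")

    return float(np.sqrt(np.mean((resampled - smoothed) ** 2)))
```

Widening `smooth_octaves` makes the score more forgiving of narrow features, which, per the rule above, the brain tends to forgive too; treat it as illustrative only.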
 
Sure, that would be interesting. Ultimately I'd be content with just seeing placement variation and blocked-ear-canal variation measurements between different people for the E3, rather than having to know how the headphone design achieved that goal, but it would be interesting still. We'll have to wait a bit until Rtings and some other people do the work on that, I guess.

I'm not expecting Rtings to review the E3 soon; more expensive HPs of that sort are few and far between on their website. While they are the only publication that has found the means to try to assess that issue, it's quite unfortunate that they present the data the way they do and use in-concha microphones, and I don't think that a sample of five is quite enough (ideally you'd determine the number of individuals beyond which indicators such as the standard deviation no longer significantly change for a cohort of headphones), but it's better than nothing.
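On that sample-size point, one quick way to explore when the cohort standard deviation stabilizes is a bootstrap over increasing cohort sizes. A sketch with synthetic stand-in data (all numbers are fabricated purely for illustration):

```python
# Sketch: how does the estimate of inter-individual spread (std dev, in dB,
# at one frequency) behave as the cohort grows? The "measurements" below
# are synthetic stand-ins, not real in situ data.
import numpy as np

rng = np.random.default_rng(0)
in_situ_db = rng.normal(loc=0.0, scale=2.0, size=40)  # hypothetical 40 listeners

for n in (3, 5, 10, 20, 40):
    # Bootstrap: repeatedly draw cohorts of size n, compute each cohort's std dev.
    stds = [np.std(rng.choice(in_situ_db, size=n, replace=True), ddof=1)
            for _ in range(2000)]
    print(f"n={n:2d}: std dev estimate {np.mean(stds):.2f} +/- {np.std(stds):.2f} dB")
```

Once the spread of those bootstrap estimates stops shrinking meaningfully, adding individuals buys little; with a cohort of five it is typically still wide, which is the worry expressed above.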
 
Absolutely not. I am confident if you ask Sean he will tell you the same thing. You are setting a standard of 100% accuracy and such a thing simply doesn't exist. It doesn't exist for speakers and certainly doesn't exist for headphones, where our measurement abilities are more limited. I have said repeatedly that objective headphone assessments are 60 to 70% prescriptive. That is a heck of a lot more than zero and, at the risk of stating the obvious, it is not 100%.

This is the sort of answer I've seen you write several times already, and it's a bit trite in my view. In situ variation and the transfer function between measurement rigs and individuals are quantifiable, and if sound quality is what you're after, good engineering mandates that the headphones be designed to produce a predictable and stable frequency response on actual individuals. There is literally no point in targeting any target of any sort if the in situ response swings widely and unpredictably.

Some headphones are simply more suited to that task than others, whether because the type is naturally less prone to coupling issues (typically open-back dynamics, for example, or, at lower frequencies, headphones with an effective feedback system), or because conscious efforts were made to solve that problem (or just plain luck; let's face it, I am not certain that most headphone manufacturers are particularly aware of that issue).

And this is what a review at the very least should try to make people aware of: is this pair of headphones more or less properly engineered, so that it can effectively deliver the target it aims at?

You have been linking to Sean's PowerPoint. I have trouble accepting all that is there without a paper to read with all the detail.

The capture method is similar to what they already used years ago in the Harman papers, and I can think of a dozen AES articles on that subject, which you must surely have already read.

Personally, I have always disliked testing that shows the range of responses and then averages them. It is trivial to misposition a headphone on a fixture, or on your head. It makes no sense to show that, nor does it make sense to average it into some kind of final response. Averaging is a lousy low-pass filter anyway. As you probably know, there is no exact science on how you put a headphone on someone's head. There is variability in that very thing. But sure, I can design a headphone with adjustable clamping pressure and soft enough pads to get lower variance.

Some headphones don't require users to be quite as diligent as others in terms of how they position them on their head, and I'd argue that after some expected fiddling with headband extension good headphones should sound good where they naturally fall on one's head.
Besides, if the response significantly varies with positioning on one's head, how is the listener supposed to know how it varies? You're still shooting in the dark, as I wrote.
Again, if sound quality is what you're after, well engineered headphones should simply be capable of delivering a predictable and stable FR.

I don't know how, in the context of going over the literature, we all of a sudden want to rationalize such statements based on totally uncontrolled and anecdotal experience. No way can you assign cause and effect there unless you test the person, and even then there are pitfalls as I mentioned above. I hear those comments day in and day out from owners of products. It is not something I can act on or value in my testing. Can they be right? Sure. But since we can't put any weight behind such subjective remarks, that is that.

The proper way to respond to that is not to simply believe them, but to help them conduct a controlled test with equalization.

I know that you like to think that your method of using EQ to validate whether or not the measurements you've made are representative of your experience is objective, but it's just another form of admittedly enlightened subjective impression, and just as I would not expect anyone to take mine seriously, I don't see why anyone should consider yours seriously either.

Besides, there might be one human among the 8+ billion of us whose head just happens to perfectly match a 45CA, but you're probably not one of them, and I'm ready to bet my entire headphone collection that if we were to make in situ measurements on your head after equalising headphones to the same target on a fixture, we'd see rather wild variations as well... some headphones being much worse offenders than others (and that's the whole point of trying to evaluate which ones are).

Getting back to what I said: "When the response resembles the dashed line, you can have high confidence that you will like the sound. If not, you can apply a bit of EQ but the response should be close."

That is absolutely correct.

Anyone who's seen inter-individual variation graphs of the sort posted above (or read the literature on that subject) should logically conclude that this is not the case. Any other position is stubborn irrationality at this point and shows an incapacity to engage cogently with the data presented.

It even addresses your concern by saying you may have to adjust things, but that you start from a very good starting point that should be close to optimal, as opposed to getting a headphone with a wild frequency response. Surely that headphone doesn't magically get better by moving a few mm here and there on your head.

The Stealth's frequency response is wild... in terms of how it varies on actual individuals. This, alongside the fact that the average in situ response across listeners does not even remotely match the on-fixture response, is why it's absolutely not well suited as a "very good starting point".

Use something like the HD800S instead (for this one we actually have repeated measurements of in situ behaviour from multiple sources, and have some idea of its sample variation).
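To make "the response should be close" concrete, one could quantify the remaining distance to a chosen target after aligning overall level. A minimal sketch; the band limits, function name, and arrays are placeholders of mine, not anyone's actual criterion:

```python
# Sketch: RMS and worst-case deviation (dB) of a measured response from a
# target curve within a band, after removing the overall level offset
# (absolute SPL is arbitrary). Placeholder logic, not an established metric.
import numpy as np

def deviation_from_target(freqs_hz, measured_db, target_db,
                          f_lo=100.0, f_hi=10000.0):
    band = (freqs_hz >= f_lo) & (freqs_hz <= f_hi)
    error = measured_db[band] - target_db[band]
    error = error - error.mean()  # align level before judging the shape
    return float(np.sqrt(np.mean(error ** 2))), float(np.max(np.abs(error)))
```

The disagreement above is then about which measurement to feed it: an on-fixture curve, or in situ curves from actual heads, which can diverge substantially for some models.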
 
The question is what causes the discrepancy between seal issues and the inter-individual measurements.
I see little correlation with seal differences.
Maybe a combination of seal and hair in front of the ear, or something else like ear-driver distance?

Have not heard, nor measured the Stealth myself.
 
The question is what causes the discrepancy between seal issues and the inter-individual measurements.
I see little correlation with seal differences.

I think it's a good question, but I am also not surprised that an evaluation of leakage under controlled conditions + pad compression tests + positional variation may not always line up with inter-individual variation.
A pair of headphones might be very susceptible to seal breach, but might also happen to be designed in a way that ensures that a decent enough seal is achieved on most individuals, for example.
In the absence of actual in situ measurements, I see these tests as very, very useful proxy tests to get some idea of the propensity of a pair of headphones to deliver the target it aims at, provided an "all squares are rectangles, but not all rectangles are squares" logic is applied. I believe that it's reasonable to think that a pair of headphones which shows a stable response in all three tests is also quite likely to show a stable response across individuals, but a pair of headphones that does not perform well in one or several of these tests might still be able to deliver a stable response across individuals... or not.
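As a rough way to fold those three proxy tests into one picture, one could look at the per-frequency spread across the test conditions; a sketch with fabricated numbers:

```python
# Sketch: summarize leakage / pad-compression / positional proxy tests as a
# per-frequency spread (max minus min level across conditions, in dB), as a
# crude stand-in for inter-individual stability. Data below is fabricated.
import numpy as np

def stability_spread_db(responses_db):
    responses_db = np.asarray(responses_db, dtype=float)
    return responses_db.max(axis=0) - responses_db.min(axis=0)

# Three hypothetical conditions measured at four frequency bins:
conditions = [[0.0, -0.5, 1.0, 0.2],
              [0.1, -0.3, 2.5, 0.0],
              [-0.2, -0.6, 0.8, 0.3]]
print(stability_spread_db(conditions))  # large values flag unstable bins
```

Per the "rectangles and squares" caveat above, a small spread here is encouraging, while a large one is suggestive but not conclusive proof of instability across individuals.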
 
I believe that it's reasonable to think that a pair of headphones which shows a stable response in all three tests is also quite likely to show a stable response across individuals, but a pair of headphones that does not perform well in one or several of these tests might still be able to deliver a stable response across individuals

Yep, that seems a logical thing.

As you say, not many people test for this, nor are there protocols for it, so repeatability is out the door and the results become merely indicative. But better to have some indication than none (such as the seal that can be obtained on flatbed or head-shaped fixtures), especially when the ear pads have a very large inner diameter.
For this reason I use real glasses to simulate the issue instead of tubes, holes or solid shapes, as the reaction of the pads differs from those 'more consistent' ones.

For some headphones seal (hair/glasses) is a substantial issue; for others, almost none.
 
In situ variation and the transfer function between measurement rigs and individuals are quantifiable, and if sound quality is what you're after, good engineering mandates that the headphones be designed to produce a predictable and stable frequency response on actual individuals.
You can quantify it trivially, but repeating that is anything but trivial.

And no, the second part cannot be a mandate because it can compromise comfort. And I can never be in the position of objectively judging comfort.
 
There is literally no point in targeting any target of any sort if the in situ response swings widely and unpredictably.
Any such large variations would show on my test fixture and then on my head. I routinely remark on the ease of getting reliable measurements (and in listening tests if I have issues with it). As I explained to you in my post, it is the nature of headphone technology to have fitment issues. Despite that, excellent work has been done in Harman's studies to generate a high likelihood of preference for a large set of headphones. Your notion that we should now throw that away because some new work shows variability makes no sense. We have what we have, and it is working very well in the context of the variability of the situation.
 
I can understand the appeal of the stitching for some people. My preference would be unbranded and without striking/bold stitching. DCA definitely is within the realm of reasonable and not cheap/chintzy at all. I've swung pretty hard towards unbranded everything. Matte black or other dark color Aeon 2 Noire is kind of where perfect is for me.
Yeah, people have their preferences.
Use something like the HD800S instead (for this one we actually have repeated measurements of in situ behaviour from multiple sources, and have some idea of its sample variation).
As an aside (which we don't have to discuss deeply, but it is related), the HD800 was the original headphone used in the Harman research that created the Harman target, so I'd imagine the HD800 and HD800S would be the most relevant headphones to get if you wanted to experience what the Harman target should really sound like, though of course you'd use a Harman EQ on them.
The following pic shows one part of the process where the HD800 was used in the Harman target's creation:
[Image: Harman target baseline, GRAS 45CA]
 
And this is what a review at the very least should try to make people aware of : is this pair of headphones more or less properly engineered so that it can effectively deliver the target it aims at ?
The reviews have been shown to accomplish this goal. That you have seen a study showing some variation on test fixtures or on human heads is known and does not impact the work we are doing. We have no data on preference with respect to directivity of speakers; by your logic, we should stop measuring the rest of a speaker's performance because we don't have that data. It makes no sense.
 