
Dan Clark E3 Headphone Review

Rate this headphone:

  • 1. Poor (headless panther): 4 votes (1.6%)
  • 2. Not terrible (postman panther): 11 votes (4.3%)
  • 3. Fine (happy panther): 38 votes (14.9%)
  • 4. Great (golfing panther): 202 votes (79.2%)
  • Total voters: 255
I know you like to think that your method of using EQ to validate whether the measurements you've made are representative of your experience is objective, but it's just another form of admittedly enlightened subjective impression. Just as I would not expect anyone to take mine seriously, I don't see why anyone should take yours seriously either.
I never refer to my listening tests as "objective." Measurements are objective; all listening tests are subjective. You are mistaken about the second part as well: my listening tests are in service of the measurements. I develop filters and perform A/B tests against no filter, often blind, to assess whether the correction results in higher preference or not. This is why I always like to measure first and then listen. Otherwise I would be working blind, handing out unreliable assessments. So if that is all you are doing, then yes, your listening tests are of little value. Mine are not.
 
You can quantify it trivially, but repeating that is anything but trivial.

? The whole point of a well-designed pair of headphones that delivers a predictable and stable FR is that quantifying it is repeatable. As it turns out, that's exactly what several articles and publications found for the HD800S.

And no, the second part cannot be a mandate because it can compromise comfort. And I cannot ever be in a position to objectively judge comfort.

Why so?

Any such large variations would show on my test fixture and then on my head.

Not necessarily; one reason among others being the flat-plate design of the hammerhead fixture you use, for example.

I routinely remark on the ease of getting reliable measurements (and in listening tests if I have issues with it). As I explained to you in my post, it is the nature of headphone technology to have fitment issues. Despite that, excellent work has been done in Harman's studies to generate a high likelihood of preference for a large set of headphones. Your notion that we should throw that away now because some new work shows variability makes no sense. We have what we have, and it is working very well in the context of the variability of the situation.

This isn't "new work", it's been known for decades already, and Harman themselves already measured it years ago. There is little point in invoking preference scores when we already have the measured in situ data anyway : if a pair of headphones produces a 10+db spread across individuals in parts of the spectrum, it's just logical to conclude that initial statement I was responding to is just incorrect for these headphones. But it will be a lot more so for others. And that's what we need to know.

The reviews have been shown to accomplish this goal. That you have seen a study showing some variation on test fixtures or on human heads is known and does not impact the work we are doing. We have no data on preference with respect to the directivity of speakers. By your logic, we should stop measuring the rest of speaker performance because we don't have that data. It makes no sense.

No, because we do have data on in-situ behaviour. We've had it for a while.
The other thing is that, to the extent of our current knowledge, the main factor dictating headphone preference is univariate: basically, the delivered FR at the eardrum. If a pair of headphones fails to deliver the target it aims at on a fixture for a collection of individuals, it's quite logical to conclude that the fixture's measurements are more or less irrelevant, and actually misleading, for those individuals.
 
What could be an interesting test is to create some acoustic leakage (stick a 6 or 10 mm wooden dowel between the pads and the fixture) for the E3 and Stealth and get an indication of what seal breakage does.
I know it isn't a standardized test, but it could give some clues.
 
Allegedly you can buy alternate screws for the Stealth, Expanse, and E3 hinges that give you more clamping force, lessening the potential seal issue.
However, I haven't found alternate screws on their website, or any mention of this. @Dan Clark, can you confirm, please?

Set screw claim
 
I never refer to my listening tests as "objective." Measurements are objective; all listening tests are subjective. You are mistaken about the second part as well: my listening tests are in service of the measurements. I develop filters and perform A/B tests against no filter, often blind, to assess whether the correction results in higher preference or not. This is why I always like to measure first and then listen. Otherwise I would be working blind, handing out unreliable assessments. So if that is all you are doing, then yes, your listening tests are of little value. Mine are not.

Oh, very cool. So the A/B testing with the filter is blinded? How do you do that? I would like to set something like that up for myself.

Thanks again in advance!
 
Oh, very cool. So the A/B testing with the filter is blinded? How do you do that? I would like to set something like that up for myself.

Thanks again in advance!
I've done something similar in the past. While listening to music, you hover your mouse cursor over the on/off switch for the filter and click the mouse button repeatedly with your eyes shut, so you don't know whether it ends up on or off. You then listen to a reference track for a short time, click the switch again with eyes closed, listen for a while, and perhaps flip back again with eyes closed. Finally, you decide which version you like better, open your eyes, and see whether the filter is on or off.
 
I've done something similar in the past. While listening to music, you hover your mouse cursor over the on/off switch for the filter and click the mouse button repeatedly with your eyes shut, so you don't know whether it ends up on or off. You then listen to a reference track for a short time, click the switch again with eyes closed, listen for a while, and perhaps flip back again with eyes closed. Finally, you decide which version you like better, open your eyes, and see whether the filter is on or off.

This is what I was planning on doing; I was just hoping there was something more elegant and automated, like a random number generator setting the filter on or off while you are blinded to that state. You listen to the music for a little bit, then press the space key and it switches to the other state. Then you compare, and you decide at the end.

It'll be embarrassing for me when I actually can't tell the difference.
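
A minimal sketch of that randomized, self-blinded toggle, assuming two pre-rendered, level-matched files ("track_flat.wav" and "track_eq.wav" are placeholder names) and the Python sounddevice and soundfile packages; pressing Enter switches the hidden state, and nothing is revealed until you commit to a preference:

```python
# Blind EQ on/off comparison: the starting state is random and hidden.
import random
import sounddevice as sd
import soundfile as sf

# Placeholder file names: one unprocessed render, one with the EQ baked in.
clips = {state: sf.read(path) for state, path in
         [("off", "track_flat.wav"), ("on", "track_eq.wav")]}

state = random.choice(["off", "on"])      # hidden starting condition
while True:
    data, fs = clips[state]
    sd.play(data, fs)                     # non-blocking playback
    cmd = input("[Enter] switch / [p] prefer what's playing: ")
    sd.stop()
    if cmd.strip().lower() == "p":
        print(f"Revealed: your preferred state was EQ {state}.")
        break
    state = "on" if state == "off" else "off"
```

One caveat: restarting the clip from the beginning on every switch makes comparisons slower than a seamless toggle, so short excerpts work best.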
 
This is what I was planning on doing; I was just hoping there was something more elegant and automated, like a random number generator setting the filter on or off while you are blinded to that state. You listen to the music for a little bit, then press the space key and it switches to the other state. Then you compare, and you decide at the end.
You can choose a specific track, duplicate it, and bake your EQ preset into it (for example, using the Audacity loopback recording feature).

Then load both files into the ABX Comparator for foobar2000.
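
If you'd rather render the EQ'd copy offline than loopback-record it, here's a rough sketch using a single peaking biquad from the RBJ Audio EQ Cookbook; the file names and filter settings are placeholders, not anyone's actual preset:

```python
# Render an EQ'd duplicate of a track for ABX testing.
import numpy as np
import soundfile as sf
from scipy.signal import lfilter

def peaking(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking EQ (RBJ Audio EQ Cookbook)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

data, fs = sf.read("track.wav")                   # placeholder input file
b, a = peaking(fs, f0=3000, gain_db=-4.0, q=1.4)  # example filter only
sf.write("track_eq.wav", lfilter(b, a, data, axis=0), fs)
```

Chain additional lfilter calls for multi-band presets; the result can go straight into the foobar2000 ABX Comparator.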
 
I've done something similar in the past. While listening to music, you hover your mouse cursor over the on/off switch for the filter and click the mouse button repeatedly with your eyes shut, so you don't know whether it ends up on or off. You then listen to a reference track for a short time, click the switch again with eyes closed, listen for a while, and perhaps flip back again with eyes closed. Finally, you decide which version you like better, open your eyes, and see whether the filter is on or off.

This is just me, but doing this I found it just a little too easy to know which EQ was which, particularly since I made them myself. Something I found a bit more effective at throwing me off was to create 4-5 sets of filters, including a low anchor and a number of filter sets close to each other, apply them to sections of different tracks, level-match them (Audacity can help), and then compare them. But I'm still searching for a proper ranking software for macOS, or a website, that doesn't have issues limiting what I can get out of this.
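
For the level-matching step, a quick RMS match in Python is one alternative to doing it by hand in Audacity; the file names below are placeholders, and note that an RMS match is cruder than a proper loudness match (e.g. ITU-R BS.1770):

```python
# Scale each candidate file so its RMS matches the reference.
import numpy as np
import soundfile as sf

ref, _ = sf.read("track_ref.wav")            # placeholder reference file
target_rms = np.sqrt(np.mean(ref ** 2))

for name in ["eq_a.wav", "eq_b.wav", "low_anchor.wav"]:
    data, fs = sf.read(name)
    gain = target_rms / np.sqrt(np.mean(data ** 2))
    sf.write("matched_" + name, data * gain, fs)  # watch for clipping above 0 dBFS
```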
 
You can choose a specific track, duplicate it, and bake your EQ preset into it (for example, using the Audacity loopback recording feature).

Then load both files into the ABX Comparator for foobar2000.
This is just me, but doing this I found it just a little too easy to know which EQ was which, particularly since I made them myself. Something I found a bit more effective at throwing me off was to create 4-5 sets of filters, including a low anchor and a number of filter sets close to each other, apply them to sections of different tracks, level-match them (Audacity can help), and then compare them. But I'm still searching for a proper ranking software for macOS, or a website, that doesn't have issues limiting what I can get out of this.
Fair enough, good options, but I'm likely to use the "multiple clicking" technique if I want to compare filter on vs. off; it's just less hassle. To be honest, most of the time I do sighted comparisons and just try to be a blank slate of evaluation when I compare EQs. I do tend to keep gravitating back to my long-time favourite EQs, so I think there's something in that. EDIT: I feel I'm skirting close to off-topic!
 
@amirm Like the Stealth, the E3 is said to be a closed-back headphone that does not sound like a closed back. My own subjective listening confirms that about the Stealth. What measurement demonstrates it? Would there need to be a waterfall graph to see it, or is there something to look for in the frequency response graph?
 
All listening tests are subjective.
Is it possible to directly compare the sound of headphones to reference speakers? From some research posted here it appears that yes, and if so, would it not make listening tests a bit more scientific? I attempted such comparisons myself some 30 years ago, when measurements were not really available, in order to apply EQ for studio work, and I think it worked quite well.
I really do not understand involving "preference" in listening tests; in my understanding it should simply be: sounds like a reference speaker or not.
You kind of said it sounds like a studio monitor, which is great, but it's unclear whether that was just an impression or an actual comparison. Thanks.
 
One thing I found really interesting about the E3 is something Jude mentions in his video: they adapted their Klippel driver-matching system to help them match earpads, so the earpads are carefully matched as well as the drivers. He doesn't mention tolerances, but it still seems like yet another smart idea. I'm not sure if this has been done before, but it certainly seems like another point of variability being removed from the equation, which I can respect.

I've had a few sets of Ananda-shaped headphones over the years, and the pads seemed very different from pair to pair, or even left to right. I realize this does not account for different head and ear shapes, but it seems a step in the right direction.

I'm still enjoying the Ether CX I've had since back when the company was still MrSpeakers, but I'm very tempted by the E3. I'm not totally sure about the looks, so I'd probably cover the glass with colored or patterned vinyl; that seems like a fun way to personalize them and experiment with different designs.
 
@amirm Like the Stealth, the E3 is said to be a closed-back headphone that does not sound like a closed back. My own subjective listening confirms that about the Stealth. What measurement demonstrates it? Would there need to be a waterfall graph to see it, or is there something to look for in the frequency response graph?
There are other closed headphones that sound open.
All closed headphones with a smooth flat response sound like open headphones.
It is only the crappy closed headphones (with wonky frequency response) that have a typical 'closed and honky' sound.
 
The Ztella should output 1 Vrms with an output impedance of 2 ohms. Using Amir's measurements, I calculated the expected dB SPL from the Ztella for some of the DCA headphones:
  • Dan Clark E3: 99.51 dB SPL
  • Dan Clark Stealth: 98.85 dB SPL
  • Dan Clark Aeon 2 Noire: 102.42 dB SPL
  • Dan Clark Aeon RT: 96.77 dB SPL

I've looked at this again. I note that on the Zorloo Ztella measurements there is clipping at 58 mW into a 33-ohm load. I don't know if that's current or power protection (?), but if I assume a maximum of 42 mA, then the low-impedance Noire is limited substantially, down to around 98.36 dB SPL, while the E3 could reach 101.21 dB SPL thanks to the higher-than-rated measured voltage.

So the overload protection makes things more complicated again. Can anyone recommend a reasonably compact portable amp that could do a better job, ideally with no worries about clipping?
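
For anyone who wants to reproduce the arithmetic, here's a small sketch of both calculations: a voltage divider into the load for the unclipped case, and a fixed current ceiling for the clipping case. The impedance and sensitivity values in the example calls are placeholders, not measured figures:

```python
import math

def spl_from_source(v_src, z_out, z_hp, sens_db_mw):
    """SPL from a source of v_src volts (output impedance z_out, ohms)
    into a headphone of impedance z_hp with sensitivity in dB SPL/mW."""
    v_hp = v_src * z_hp / (z_out + z_hp)   # voltage divider at the jack
    p_mw = 1000 * v_hp ** 2 / z_hp         # power delivered to the load, mW
    return sens_db_mw + 10 * math.log10(p_mw)

def spl_current_limited(i_max, z_hp, sens_db_mw):
    """SPL ceiling if the source clips at i_max amps.
    0.042 A into 33 ohms gives ~58 mW, matching the clipping point above."""
    p_mw = 1000 * i_max ** 2 * z_hp
    return sens_db_mw + 10 * math.log10(p_mw)

# Hypothetical example: 1 Vrms source with 2 ohm output impedance
# into a 27 ohm headphone rated at 92 dB SPL/mW.
print(spl_from_source(1.0, 2.0, 27.0, 92.0))
print(spl_current_limited(0.042, 27.0, 92.0))
```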
 
You mean like how Amir got these headphones directly from Dan Clark? It is an essential part of marketing to get your products out to reviewers who will give you good reviews. At least on ASR there are numbers to back up any hyperbole, and thus you tend to think that Amir is an honest reviewer.

I think the difference with Amir is that:

1. He’s not dependent on these relationships to make a living.

2. It doesn’t matter how much he likes the manufacturer, if the product is shite, he says it’s shite.
 
I think the difference with Amir is that:

1. He’s not dependent on these relationships to make a living.

2. It doesn’t matter how much he likes the manufacturer, if the product is shite, he says it’s shite.
Yeah, as I said:
At least on ASR there are numbers to back up any hyperbole, and thus you tend to think that Amir is an honest reviewer.
 
There are other closed headphones that sound open.
All closed headphones with a smooth flat response sound like open headphones.
It is only the crappy closed headphones (with wonky frequency response) that have a typical 'closed and honky' sound.

I can only comment on my subjective experience, but I have found there to be a significant difference between various open-sounding closed backs. I also own the Focal Stellia, which is praised as an open-sounding closed back. Side by side, the Stealth sounds more open. Same for the DCA Aeon Flow Closed.

Perhaps more importantly, I have found some open backs to sound more open than others. For example, the STAX SR-009 gives the impression of openness far more than the DCA Expanse. To me, that backs the idea that internal reflections are the cause. Is that not the case? I know many open backs sound closed if I cup my hands over the backs of the ear cups.

Are you saying that this is not due to internal reflections, or that the reflections are accounted for by the smooth, flat response?

Are there specific areas in the frequency response we should be looking at for open vs closed sounding?
 