
ASR Getting Into Measuring Headphones!

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
Yeah, sounds good. It's worth doing to validate that the EQ is actually "taking", as per your minimum-phase comments. It's probably quite a bit of extra work due to all the extra measurements you'd need to take to compare averages between the two, so it might be best to only do this for headphones that look like they have non-minimum-phase issues in the frequency response.

You'd want to keep all other variables constant so it would be best to just take one post-EQ measurement straight after the last pre-EQ measurement (which should be the 'normal' measurement position with good seal and headphones positioned as centrally as possible over the artificial pinnae), without touching the headphones or rig at all between the two measurements.
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
There were 2 other $300 headphones as well. How did you know it was the Hifiman and not the others?

I don't, it was just an educated guess. The frequency response of the others didn't look as good as the HE400S, and the HE4XX which has a similar response has a high 88/100 predicted preference rating (I believe the graph was scaled up so the best rated headphone scored 100, meaning its actual received rating by listeners was likely lower and so closer to the above score).
 
OP
amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,368
Likes
234,386
Location
Seattle Area
As EQ is just an amplitude change at a given frequency, it will likely just increase (or decrease) the absolute distortion as if the SPL (at each given frequency) was increased to that amplitude, but it would be good to check this.
Level/SPL changes don't have a linear relationship with distortion. I tested and confirmed this for speakers, hence the reason I show two distinct levels. If distortion is very low, the relationship will be closer to linear than when distortion is taking off. Will have to confirm this for headphones.
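A quick way to see why distortion need not scale linearly with level is to simulate a toy nonlinearity and compute THD at two drive levels. This is only a sketch, not Amir's actual test procedure; `tanh` here is a stand-in compression curve, not a model of any real driver:

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=5):
    """Ratio of harmonic energy to the fundamental, read off FFT bins."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    amp = lambda f: spectrum[np.argmin(np.abs(freqs - f))]
    harmonics = np.sqrt(sum(amp(k * f0) ** 2 for k in range(2, n_harmonics + 1)))
    return harmonics / amp(f0)

fs, f0 = 48000, 1000
t = np.arange(fs) / fs            # one second of signal
driver = np.tanh                  # toy nonlinearity, NOT a real driver model

for level in (0.1, 0.5):          # drive levels ~14 dB apart
    y = driver(level * np.sin(2 * np.pi * f0 * t))
    print(f"drive {level}: THD ~ {100 * thd(y, fs, f0):.3f}%")
```

Raising the drive 5x (about 14 dB) multiplies THD by much more than 5x in this toy model, which is the non-linear behaviour being described: near the compression region, distortion "takes off" faster than the level increase.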
 

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,554
Likes
1,701
Location
California
Saw this in a post here by @flipflop

[attached image: scatter plot of predicted preference ratings vs. price]

Does anyone know where it came from?
 

bobbooo

Major Contributor
Joined
Aug 30, 2019
Messages
1,479
Likes
2,079
Level/SPL changes don't have a linear relationship with distortion. I tested and confirmed this for speakers, hence the reason I show two distinct levels. If distortion is very low, the relationship will be closer to linear than when distortion is taking off. Will have to confirm this for headphones.

Yeah, maybe I didn't word that clearly. I meant a headphone's distortion at a given frequency (say 20 Hz), when that frequency is EQ'd up by, say, 5 dB, will likely be the same as if you just turned the total volume up by 5 dB (although intermodulation distortion differences might come into play).
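The intermodulation caveat can be illustrated with a toy model (`tanh` as a stand-in nonlinearity, not a measured driver). With a single tone, a +5 dB EQ boost at that frequency and a +5 dB master volume change feed the driver the identical signal, so distortion is identical by construction; with two tones present they don't, so IMD products can differ:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
g = 10 ** (5 / 20)                     # +5 dB as a linear gain
tone20 = 0.3 * np.sin(2 * np.pi * 20 * t)
tone1k = 0.3 * np.sin(2 * np.pi * 1000 * t)
driver = np.tanh                       # toy stand-in nonlinearity

# With only the 20 Hz tone playing, "+5 dB EQ at 20 Hz" and "+5 dB volume"
# both deliver g * tone20 to the driver, so the distortion is identical.

# With two tones, EQ scales only the 20 Hz component while volume scales both:
eq_version  = driver(g * tone20 + tone1k)
vol_version = driver(g * (tone20 + tone1k))
print("identical?", np.allclose(eq_version, vol_version))  # -> identical? False
```

Since the nonlinearity sees two different input signals in the two-tone case, the intermodulation products it generates differ, exactly the caveat noted above.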
 

Francis Vaughan

Addicted to Fun and Learning
Forum Donor
Joined
Dec 6, 2018
Messages
933
Likes
4,697
Location
Adelaide Australia
Saw this in a post here by @flipflop

Where's the post?
Curious plot. Any attempt to draw a single regression line through that is near invalid. But there is a very interesting bi-modal distribution. Are these preference ratings from listening tests or someone's attempt at generated values from measurements?
 

solderdude

Grand Contributor
Joined
Jul 21, 2018
Messages
15,891
Likes
35,912
Location
The Netherlands
From the preference plot... does anyone know if these are REAL headphones or all simulated ones?

The NightOwl (very similar to the NightHawk) being at the very bottom (I would put it there too, b.t.w.) is somewhat strange, given the many folks who swear the very dark NightOwl/Hawk (much darker than the Meze 99) gets the most love from its owners. Supposedly that's because of low distortion, but that doesn't make any sense: they prefer the dark sound, which is not reflected in the plot.

Wow HD800S is so high up there, even better than HD650?

To me the HD800S is better in many aspects, except the price, but there are tons of people who think otherwise.
With a bit more sub-lows and the still-present treble peak removed, it would be even better.
 

bidn

Active Member
Joined
Aug 16, 2019
Messages
195
Likes
821
Location
Kingdom of the Netherlands
I’m rooting for a headphone ranking chart based on distortion/ sinad. So far it’s been great with everything else. But recommendations based on when everything else is also factored in.

This would be interesting, but I fear we should be very careful because there are major differences between electronics and transducers, esp. headphones.

Solid-state electronics allows for near-perfect results, with insignificant distortion and noise floor levels, making SINAD ranking charts very meaningful.

Transducers, on the other hand, are very far from such perfection. All will have at least a few significant shortcomings, esp. re. the most fundamental thing for high fidelity, their neutrality, i.e. their adherence to a target curve.

So problem #1 is:
How could a SINAD measurement be meaningful if a headphone (e.g. one from Audeze) already fails miserably at producing a high-fidelity FR?

Problem #2:
There is a lot of variation re. the frequency ranges at which the shortcomings occur. Should they be given the same importance? I think not. Take an example: the human ear is optimised (see e.g. hearing thresholds) for the presence region (vocal communication); should a shortcoming occurring in the presence region be given the same weight as one occurring in the 19-20 kHz range? Clearly no, the first one is much more important.
Then there should be a frequency-based weighting function. But which one? I don't think there is an easy answer to this.
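As a sketch of what such a weighting could look like: the band edges and weights below are invented purely for illustration (they are not a validated psychoacoustic model), but they show how a frequency-weighted deviation score would differ from a flat one:

```python
import numpy as np

# Hypothetical measurement points and their absolute deviation from a target (dB)
freqs = np.array([20, 100, 1000, 3000, 10000, 19000])    # Hz
deviation_db = np.array([3.0, 1.0, 0.5, 2.0, 2.0, 6.0])  # |measured - target|

def weight(f):
    """Toy weighting: full weight in the presence band, tapering at the extremes."""
    return np.where((f >= 2000) & (f <= 5000), 1.0,
           np.where(f >= 15000, 0.1, 0.5))

w = weight(freqs)
weighted_rms = np.sqrt(np.sum(w * deviation_db ** 2) / np.sum(w))
print(f"weighted RMS deviation: {weighted_rms:.2f} dB")
```

With this (arbitrary) weighting, a 2 dB error at 3 kHz counts for more than a 6 dB error at 19 kHz, which is the intuition above; the hard, unsolved part is choosing the weighting function itself.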

Problem #3:
And the same can be said of the shape of the deviation peaks and dips: a peak with a short base but a very large amplitude above the local baseline will be more detrimental than one with a larger base but less amplitude. How could we appropriately weight these differences?
I am not aware of any easy scientific answer.

So I don't see how purely measurement-based headphone rankings would be very meaningful. I am not convinced by a site like rtings.com, where you can have measurements and rankings based on distortion, like you want, but also on very debatable things like soundstage...

So while I am in favor of measurements only, and measurement-based rankings, for electronics,
for transducers I think measurements are still essential but should be complemented by the discretionary evaluation of a serious, competent and honest reviewer (avoiding all the subjectivist, audiophile craze) who makes a ranking at his discretion. This was Tyll's approach, and it is also Crinacle's approach.

This leads to imperfection, which is why reviewer redundancy is actually important: the more diversity the better. I find it great to have not only Crinacle but also Amir measuring and reviewing headphones.
 

flipflop

Addicted to Fun and Learning
Joined
Feb 22, 2018
Messages
927
Likes
1,240
Does anyone know where it came from?
The original plot is from Figure 6 in A Statistical Model that Predicts Listeners’ Preference Ratings of Around-Ear and On-Ear Headphones (Olive et al.).

Any attempt to draw a single regression line through that is near invalid.
That's very much the point:
[attached image: Fig. 6 from the paper]

Are these preference ratings from listening tests or someone's attempt at generated values from measurements?
They're from actual blind listening tests using the 'virtual headphone listening test method'.
 

Herbert

Addicted to Fun and Learning
Joined
Nov 26, 2018
Messages
527
Likes
434
Imagine Henry Rollins on KCRW:
"Fanatic, pleasepleasepleaseplease test the Koss Porta Pro.
They are so incredibly gorgeous and built since - I think - 1984 or so?
Up to the present day? Who can say that of any product
built for some *beep* thirty-six years and sold in the millions?"
 

peanuts

Senior Member
Joined
Apr 26, 2016
Messages
336
Likes
709
Got a bunch of headphones for some reason, and I hate them! They will never sound live regardless of what you do. No soundstage, no physical bass. Boring.
 

Herbert

Addicted to Fun and Learning
Joined
Nov 26, 2018
Messages
527
Likes
434
I remember mixing a no-budget short movie in the nineties, still linear, tape-to-tape
on Betacam SP. One of my first works. 16 Tracks mixed down to 8, mixed down to 2.
Because the noisy machines were in the same room I used headphones
for mixing, some closed Sennheisers.
Well - after some listening over speakers I had to redo the mix,
because subtle sounds that sounded spot-on on the headphones
were almost inaudible over any speaker.
I guess this says it all about "no soundstage, no physical bass, boring".
I still check my dialogue / music editing over headphones, mostly on my
way to work, to be sure there are no flaws or bad edits, though I am working
in quiet environments today...
About the bass: the Koss mentioned are good enough to catch any false movement
by the boom operator, as too-quick movements, or just changing the position of your
hands on the boom pole, can result in some sub-bass noise.
So bass is definitely there :)
 

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,554
Likes
1,701
Location
California

Yes of course, and how were you able to identify each of the points on figure 6 of that study?
You can determine many of the dots by correlating the headphone price on the graph with the headphone price list included in Appendix A, but what about the several headphones priced identically at $300, for example? How did you know which was which?
[attached images: the Figure 6 scatter plot and the Appendix A price list]
 

thewas

Master Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
6,747
Likes
16,180
Yes of course, and how were you able to identify each of the points on figure 6 of that study?
You can determine many of the dots by correlating the headphone price on the graph with the headphone price list included in Appendix A, but what about the several headphones priced identically at $300, for example? How did you know which was which?
View attachment 98278
View attachment 98279
Interesting that Harman's AKG N90Q, with auto-calibration supposedly to the Harman target, did worse than underestimated low-price headphones like the DT 990, SRH1540 and MDR-7506. I would like to know if it was due to the auto-calibration not working well or the "Quincy Jones signature".
 

preload

Major Contributor
Forum Donor
Joined
May 19, 2020
Messages
1,554
Likes
1,701
Location
California
Yikes. The Olive paper isn't quite what I thought it was.
They basically took 31 different headphones, measured the FR of each on a GRAS 45, and then REPLICATED each FR curve using EQ on a single pair of AKG K712 headphones. Each listener "listened" to 8 headphones "virtually" through that K712. The FR match between the actual and "virtual/replicated" headphone was only within +/- 1 dB up to 12 kHz, and above 12 kHz they didn't aggressively EQ. They did validation testing and only found a correlation of 0.85 between actual and virtual ratings.
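The replication step described above amounts to applying the dB difference between the target headphone's response and the playback headphone's response as EQ gain. The curves below are made-up numbers for illustration (not from the paper); the note about not EQing aggressively above 12 kHz is modeled here by simply zeroing the gain there:

```python
import numpy as np

# Hypothetical response curves (dB SPL) at matching frequency points
freqs       = np.array([20, 100, 1000, 3000, 10000, 14000])   # Hz
fr_target   = np.array([98.0, 96.0, 90.0, 94.0, 85.0, 80.0])  # headphone to simulate
fr_playback = np.array([94.0, 95.0, 90.0, 92.0, 88.0, 84.0])  # e.g. the K712 used for playback

# EQ gain = difference of the two curves; leave the top octave untouched
eq_gain = np.where(freqs > 12000, 0.0, fr_target - fr_playback)

# Below 12 kHz the EQ'd playback headphone now matches the target exactly
below = freqs <= 12000
assert np.allclose((fr_playback + eq_gain)[below], fr_target[below])
print(dict(zip(freqs.tolist(), eq_gain.tolist())))
```

The +/- 1 dB tolerance reported in the paper comes from realizing this difference curve with a finite set of filters on a real headphone, which a simple per-point subtraction like this glosses over.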

Based on the paper, I believe the HP that scored 100 was an AKG K712 that had been EQ'd to the Harman AE/OE target curve.

So who has those filter parameters? Imma buy myself some K712's and be done. (Although, that being said, my HD800 SDR-mods and Denon AH-D9200, both eq'd, sound better than anything I'd heard to date).
 

flipflop

Addicted to Fun and Learning
Joined
Feb 22, 2018
Messages
927
Likes
1,240
You can determine many of the dots by correlating the headphone price on the graph with the headphone price list included in Appendix A, but what about for the several headphones priced identically at $300 (for example). How did you know which was which?
I used 3rd-party measurements to figure out which 'HP' from Appendix 2 matched which model in Appendix 1. I could then compare the absolute ratings from Figure 2 with the relative ratings in Figure 6. For example, the HD-650 is HP20 and scored 54.3; the MDR-1000X is HP8 and scored 55.0. Further confirmation comes from the fact that the MDR-1000X is slightly more expensive at $350 compared to the HD-650's $340, placing its dot a couple of pixels higher.
So who has those filter parameters? Imma buy myself some K712's and be done.
https://www.dropbox.com/s/l0ucv96zjjp8cvv/AKG K712.pdf
 