
Master Complaint Thread About Headphone Measurements

Compact_D

Member
Joined
Jun 10, 2021
Messages
47
Likes
17
This depends on what "preferred by listeners" even means.

If listeners prefer accurate reproduction of the timbre of acoustic instruments (as they should), they will value what is above 50Hz more, because there is very little actual musical information below 50Hz in most music.
amirm said:
all of my reference music tracks "translated" to this speaker and sounded just beautiful (from Genelec 8030C review, and there is not much below 50Hz!). Same for those cheap Sony headphones too.

Then there is music that attempts to create a hypnotic effect with sound alone (not as in traditional music), and sub-bass in this case carries significant musical information; but I would argue that it is not possible to accurately reproduce such music at home at all, and certainly not with headphones.

As a very simple example of this, I could mention Wojciech Kilar's "Krzesany", where over some 15 minutes the listener is "conditioned" for a simple folk melody, which after such conditioning produces a shocking and unusual emotional effect. But this effect is completely missing in recordings, no matter how good the reproduction appears to be.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,833
Likes
243,196
Location
Seattle Area
This is not about the Preference Rating Score. This is about answering the question "given a headphone's deviation from the target, which parameters affect whether or not it will be preferred by listeners". As far as I am aware, Amir's eyeballing and Robbo's enjoyment do not correlate with listener preference. You are of course more than welcome to use whatever parameters you want in your assessment. I have no problem with that as long as you do not present that assessment as scientific and objective.

According to the research, two parameters correlate with listener preference: the standard deviation of the error curve and its absolute slope. And, despite all its shortcomings, the model's correlation is r=0.86, which is higher in magnitude than the correlation of the virtual-headphone method for tonality evaluation, which is r=-0.85 for on-ear headphones.
You start by saying this is not about the preference score and finish with the very thing! As I have explained to you multiple times, the preference rating is NOT a figure of merit for headphone fidelity. The model was a fit for the sample of headphones and the limited set of music used in the study. It serves more to show that measurements correlate highly with listener preference than as any kind of figure of merit.

You are also misstating my position. I look at the response relative to the target. If it matches or almost matches it, then it is a win. If it deviates, that is that: it is not compliant and requires EQ to fix. I develop that EQ and then listen again. I rate the sound with and without the EQ and present that in my review. I make no attempt to create intermediate scores as you are attempting to do with the preference score. The research shows major misfires when that is done, as I have explained to you.
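
For readers who want to see what the two parameters quoted above (standard deviation of the error curve and its absolute slope) look like in practice, here is a minimal sketch; the frequency band and the toy data are illustrative assumptions, not the exact procedure from the Harman papers:

Python:
import numpy as np

def error_curve_params(freq_hz, measured_db, target_db, f_lo=50.0, f_hi=10000.0):
    """Std dev of the error curve and the absolute slope of a line fitted to it
    over log-frequency (dB per octave). Band limits are illustrative."""
    freq_hz = np.asarray(freq_hz, dtype=float)
    error = np.asarray(measured_db, dtype=float) - np.asarray(target_db, dtype=float)
    band = (freq_hz >= f_lo) & (freq_hz <= f_hi)   # the published model ignores <50 Hz
    f, e = freq_hz[band], error[band]
    sd = float(np.std(e))
    slope_per_octave, _ = np.polyfit(np.log2(f), e, 1)  # least-squares tilt
    return sd, abs(float(slope_per_octave))

# Toy example: a response 3 dB shy of an (already normalized) target in the bass.
freqs = np.logspace(np.log10(20), np.log10(20000), 200)
target = np.zeros_like(freqs)
measured = np.where(freqs < 200, -3.0, 0.0)
print(error_curve_params(freqs, measured, target))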
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,833
Likes
243,196
Location
Seattle Area
Finally, the research does not find a correlation between preference and frequencies below 50Hz because all headphones rated Fair or higher have a drooping sub-bass response below 50Hz, and the only headphones that follow the curve down to 20Hz also have excess bass up to 500Hz - so the data does not allow for differentiation.
I have explained this before as well, but you keep ignoring it. The study is NOT appropriate for evaluation of sub-bass performance. This is because of the content they used:

[attached image: the program material used in the study]


None of this represents much sub-bass content. For this reason, some headphones were ranked as excellent even though they have shortfalls in sub-bass:

[attached image: headphones ranked excellent despite sub-bass shortfalls]


Such headphones would NOT get an excellent score from me.

In addition, the playback level, while following ITU standards, is too low to appreciate sub-bass response. This is a common critique I have of both speaker and headphone testing from Harman.

Nothing replaces an experienced eye and hands-on EQ fixes in evaluating the sound of a headphone under review. It would be nice to be able to do so, and I realize Sean practices this, but it simply doesn't pass muster with me.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,833
Likes
243,196
Location
Seattle Area
Isn't it a bit hypocritical to tell someone that the tonality they like is objectively not good according to the research, when the research actually allows for that kind of deviation, while at the same time substituting the evaluation methodology the research suggests with another one you find convenient, and calling all of that objective and scientific?
Not if you have real experience with the topic. The main value of the research was the development of a target response. This is the heart of it, and it is the same as flat on-axis response/smooth directivity in speakers. Attempting to quantify gradations below a perfect match is just that: an attempt. Far more research needs to be performed to find a model, if such a model can even be identified.

You seem to have latched onto the model as the main value, which is completely wrong. A human with psychoacoustic knowledge can analyze the response far better than a simple regression. Such an assessment can then be firmed up using EQ testing.

So the only thing silly is putting faith in a simple line instead of the true value of the preference graph. When you do, you get major misclassification of headphone performance.
 

Compact_D

Member
Joined
Jun 10, 2021
Messages
47
Likes
17
Such headphones would NOT get an excellent score from me.
Yet the Genelec 8030C speaker got "excellent", while also not having much of anything below 50Hz. What is the difference; are headphones rated differently?
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,833
Likes
243,196
Location
Seattle Area
Yet the Genelec 8030C speaker got "excellent", while also not having much of anything below 50Hz. What is the difference; are headphones rated differently?
Headphones can generate response down to and even below 20 Hz. Since they can, and readily so, I find them to produce far better performance than just about any speaker system. Even the speaker systems that go down that low wind up being transformed by the room, so I find they never sound as clean as headphones. This is one of the major areas of advantage headphones have over speakers. For this reason, I absolutely rate them on content that bookshelf speakers can't even reproduce, but even low-cost headphones can (often with EQ).

Excluding the response below 50 Hz as the research did means leaving behind this major advantage of headphones. This is why I say it is a hole in the research. I can never stop smiling when I hear these super deep bass notes with incredible clarity. And how disappointed I am when I hear the same out of just about any speaker, distorted and lacking.
 

Compact_D

Member
Joined
Jun 10, 2021
Messages
47
Likes
17
This is why I say it is a hole in the research. I can never stop smiling when I hear these super deep bass notes with incredible clarity.
Thanks, very true,
except that the Harman curve affects the timbre of orchestral instruments because it starts the bass boost too high up. Your EQ of the HD 800 S does it way better (it boosts below 50Hz), and it should be the real target curve, not Harman.
 

IAtaman

Major Contributor
Forum Donor
Joined
Mar 29, 2021
Messages
2,440
Likes
4,283
I have explained this before as well, but you keep ignoring it. The study is NOT appropriate for evaluation of sub-bass performance. This is because of the content they used:

[attached image: the program material used in the study]

None of this represents much sub-bass content. For this reason, some headphones were ranked as excellent even though they have shortfalls in sub-bass:
I thought the Battlestar Galactica theme might, so I bought the FLAC and checked, and indeed the lowest it has is 43Hz, and we don't even know (at least I don't) which 15-25 second loop they chose for testing.

[attached image: spectral analysis of the track]


But they also say: "In a pilot test [11], these three programs produced the most discriminating listener preference ratings among ten different programs used to evaluate a subset of the headphones tested in this paper."

I suspect fit and seal issues might have something to do with the choice of program material as well, but this is my speculation.

In any case, if the research does not allow differentiation of sub-bass performance, I think the objectively right thing to do would be to ignore sub-bass performance in the evaluation of headphone tonal accuracy.
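
As a minimal sketch of the kind of check described above (reading a track and estimating how much of its energy sits below 50 Hz), assuming the third-party soundfile library and a hypothetical file name; this is not the analysis tool used in the paper or in the post:

Python:
import numpy as np
import soundfile as sf  # third-party: pip install soundfile

def sub_bass_fraction(path, cutoff_hz=50.0):
    """Fraction of a track's spectral energy that lies below cutoff_hz."""
    audio, fs = sf.read(path)
    if audio.ndim > 1:                    # fold multichannel to mono
        audio = audio.mean(axis=1)
    power = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    total = power.sum()
    return float(power[freqs < cutoff_hz].sum() / total) if total > 0 else 0.0

# Hypothetical file name, purely for illustration.
print(f"Energy below 50 Hz: {sub_bass_fraction('bsg_theme.flac'):.1%}")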
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
7,062
Likes
6,934
Location
UK
This is not about the Preference Rating Score. This is about answering the question "given a headphone's deviation from the target, which parameters affect whether or not it will be preferred by listeners". As far as I am aware, Amir's eyeballing and Robbo's enjoyment do not correlate with listener preference. You are of course more than welcome to use whatever parameters you want in your assessment. I have no problem with that as long as you do not present that assessment as scientific and objective.

According to the research, two parameters correlate with listener preference: the standard deviation of the error curve and its absolute slope. And, despite all its shortcomings, the model's correlation is r=0.86, which is higher in magnitude than the correlation of the virtual-headphone method for tonality evaluation, which is r=-0.85 for on-ear headphones.

Isn't it a bit hypocritical to tell someone that the tonality they like is objectively not good according to the research, when the research actually allows for that kind of deviation, while at the same time substituting the evaluation methodology the research suggests with another one you find convenient, and calling all of that objective and scientific?

Finally, the research does not find a correlation between preference and frequencies below 50Hz because all headphones rated Fair or higher have a drooping sub-bass response below 50Hz, and the only headphones that follow the curve down to 20Hz also have excess bass up to 500Hz - so the data does not allow for differentiation.

Here is what they say:

The decision to exclude errors below 50 Hz in the model was based on the finding that these errors contributed little to the underlying variance in headphone preferences based on regression analysis. One possible reason for this is that the average response in all sound quality categories in Fig. 4 – except the “poor” category – drops off significantly below 50 Hz. Within the “poor” category of headphones there is excessive energy between 50 Hz and 500 Hz that contributes to their perceived poor sound quality.

PS. Excess energy between 50Hz and 500Hz, which makes a headphone rate poorly, is by the way what might potentially happen if you take a headphone like the HD600 that heavily distorts at the low end and plug in a bass lift at 100Hz because you eyeballed that it would sound good. Distortion at the low end might in fact be one of the reasons why people find the Harman bass lift "muddying", due to the excess energy it creates below 500Hz.
Well, it's true that a headphone can deviate from the target and still be preferred by some listeners (eg myself & anecdotally others), for example in the case of the New Version HD560s, which is a darker headphone than the Old Version HD560s, going by Oratory's measurement of (mostly) Old Version HD560s and also the New Version HD560s I measured myself*. It looks like most people here on ASR are enjoying the New Version more when both are used without EQ (the New Version is the second graph):
[attached images: HD560s Oratory 27.08.23 vs Harman; HD560s New Version AVG converted to GRAS (myOldAVG to OraAVG)]
So the type of deviations you have away from Harman do make a difference to the preference, especially if they're synergistic with each other (the differences can balance out). So the New Version HD560s could sound tonally balanced because the shortfalls complement each other - it's slightly below Harman in the bass, but likewise slightly below Harman from 1-3kHz and above 6kHz, so that helps balance out the tonality (ie a bit short in bass and a bit short in treble), whereas with the Old Version HD560s (graph on left) you could say the treble was quite spot on with Harman but the bass was even further away from Harman than the New Version, so you could say the Old Version HD560s is a bit too bright tonality-wise. So the types of deviation do matter.

I still think you're trying to apply the Preference Score to a situation that is not its design - eg a detailed headphone review. It probably is down to the experienced headphone reviewer and experienced headphone user to look beyond the Preference Score to work out which areas of the frequency response are important and influential, and how different areas of the frequency response balance out with each other in terms of creating a pleasing tonality. Quickly on the subject of bass, because that's your initial point: Harman themselves said that bass response is a very important aspect of user enjoyment, and from a logical perspective there is significant musical content below 50Hz (even more so in some kinds of music), so that's all under the bass umbrella.

*Note: I don't have a GRAS measurement rig; I have a miniDSP EARS rig, but I've measured 3 units of the Old Version HD560s and 2 units of the New Version HD560s on it. I created a conversion curve from miniDSP to GRAS under the broad assumption that the average of my 3 units of Old Version HD560s measures close to Oratory's HD560s measurement, and that is how I converted to a GRAS measurement, so a pinch of salt is required for the graph on the right above.
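
The conversion described in the footnote amounts to taking the difference between a GRAS reference measurement and the EARS average of the same model, then adding that offset to other EARS measurements. A minimal sketch under that assumption, with toy arrays standing in for the real measurements (not Robbo's actual procedure):

Python:
import numpy as np

def to_common_grid(freq_hz, level_db, grid_hz):
    """Interpolate a measured response onto a common log-spaced frequency grid."""
    return np.interp(np.log10(grid_hz), np.log10(freq_hz), level_db)

def conversion_curve(ref_freq, ref_db, own_freq, own_db, grid_hz):
    """dB offset that maps 'own rig' measurements onto the reference rig,
    assuming both rigs measured the same headphone (the broad assumption above)."""
    return to_common_grid(ref_freq, ref_db, grid_hz) - to_common_grid(own_freq, own_db, grid_hz)

# Toy data standing in for real measurements (purely illustrative).
grid = np.logspace(np.log10(20), np.log10(20000), 96)
ref_db = 2.0 * np.sin(np.log10(grid))    # pretend GRAS/Oratory curve of the old model
own_old_db = ref_db - 1.5                # same old model measured on the EARS rig
own_new_db = ref_db - 0.5                # a new-version unit measured on the EARS rig

offset = conversion_curve(grid, ref_db, grid, own_old_db, grid)
estimated_new_on_ref = to_common_grid(grid, own_new_db, grid) + offset  # "converted to GRAS"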
 

IAtaman

Major Contributor
Forum Donor
Joined
Mar 29, 2021
Messages
2,440
Likes
4,283
So the type of deviations you have away from Harman do make a difference to the preference, especially if they're synergistic with each other (the differences can balance out). So the New Version HD560s could sound tonally balanced because the shortfalls complement each other - it's slightly below Harman in the bass, but likewise slightly below Harman from 1-3kHz and above 6kHz, so that helps balance out the tonality (ie a bit short in bass and a bit short in treble), whereas with the Old Version HD560s (graph on left) you could say the treble was quite spot on with Harman but the bass was even further away from Harman than the New Version, so you could say the Old Version HD560s is a bit too bright tonality-wise. So the types of deviation do matter.
You are saying you disagree with me, but the paragraph I quoted above is exactly what I would say and agree with as well. As long as you are maintaining a certain balance, a bit short on bass and a bit short on highs might not be an issue, as is the case with the HD560S. Then the question becomes: how do we know a bit here and a bit there is not too much, and how do we know what is balanced? For that I think the right thing to do is to refer back to the research. You might not like the equation that spits out numbers, but I don't think one can ignore the parameters that correlate with preference when evaluating tonal accuracy and balance. If a headphone's error curve has a low standard deviation and a relatively flat tilt, I think that headphone would be objectively tonally balanced. Eyeballing a fit to the target curve might be a valid and effective way to evaluate tonal accuracy, but it would no longer be scientific or objective. Do you disagree with this?
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
7,062
Likes
6,934
Location
UK
You are saying you disagree with me, but the paragraph I quoted above is exactly what I would say and agree with as well. As long as you are maintaining a certain balance, a bit short on bass and a bit short on highs might not be an issue, as is the case with the HD560S. Then the question becomes: how do we know a bit here and a bit there is not too much, and how do we know what is balanced? For that I think the right thing to do is to refer back to the research. You might not like the equation that spits out numbers, but I don't think one can ignore the parameters that correlate with preference when evaluating tonal accuracy and balance. If a headphone's error curve has a low standard deviation and a relatively flat tilt, I think that headphone would be objectively tonally balanced. Eyeballing a fit to the target curve might be a valid and effective way to evaluate tonal accuracy, but it would no longer be scientific or objective. Do you disagree with this?
Did give you a like for this because I agree with a fair amount of it. Yes, if a headphone has a flat tilt and a low standard deviation of error to the target then it should sound good, unless it has some unlucky frequency response deviations that slip through the net to create a more unpleasant experience. I don't think the research really looks at the frequency response at a fine grain, so things can slip through the net: it's possible for two headphones to have the same Preference Score but look really quite different in their measurements and also sound quite different - I think I remember @staticV3 once posting an example of some measurements showing this, but I could be imagining that! The headphone research is not fine-tuned enough to give a "scientific rating" to all aspects of a headphone's frequency response, so you do have to look to experienced reviewers like Amir (and likewise experienced headphone ASR users, & yourself included too probably) to add their own final analysis of a headphone - so of course that final element of judging is not going to be directly proven in the research, but that does not mean that such analysis is unscientific or illogical. Amir & others would pool all their knowledge of measurements/research and psychoacoustics (& psychoacoustic experience with EQ) to draw their final conclusions, so that can still be logical & scientifically based.
 

IAtaman

Major Contributor
Forum Donor
Joined
Mar 29, 2021
Messages
2,440
Likes
4,283
Did give you a like for this because I agree with a fair amount of it. Yes, if a headphone has a flat tilt and a low standard deviation of error to the target then it should sound good, unless it has some unlucky frequency response deviations that slip through the net to create a more unpleasant experience. I don't think the research really looks at the frequency response at a fine grain, so things can slip through the net: it's possible for two headphones to have the same Preference Score but look really quite different in their measurements and also sound quite different - I think I remember @staticV3 once posting an example of some measurements showing this, but I could be imagining that! The headphone research is not fine-tuned enough to give a "scientific rating" to all aspects of a headphone's frequency response, so you do have to look to experienced reviewers like Amir (and likewise experienced headphone ASR users, & yourself included too probably) to add their own final analysis of a headphone - so of course that final element of judging is not going to be directly proven in the research, but that does not mean that such analysis is unscientific or illogical. Amir & others would pool all their knowledge of measurements/research and psychoacoustics (& psychoacoustic experience with EQ) to draw their final conclusions, so that can still be logical & scientifically based.
Great, I think we understand where we agree and where we don't then.

In my opinion, the key parameters that correlate with preference should be calculated and presented with the review, and if the reviewer's listening observations deviate from what those parameters suggest, that should be made known to the reader, and the reviewer should be able to explain why what they hear does not align with what the measurements show. This would help improve the transparency of the reviews and would help distinguish facts from Amir's opinions and beliefs on the matter, no matter how educated those opinions might be.
 

IAtaman

Major Contributor
Forum Donor
Joined
Mar 29, 2021
Messages
2,440
Likes
4,283
I have explained this before as well, but you keep ignoring it. The study is NOT appropriate for evaluation of sub-bass performance. This is because of the content they used:
I think you are confusing me with someone else; I don't think we have discussed this before. :)

You are also misstating my position. I look at the response relative to the target. If it matches or almost matches it, then it is a win. If it deviates, that is that: it is not compliant and requires EQ to fix. I develop that EQ and then listen again. I rate the sound with and without the EQ and present that in my review. I make no attempt to create intermediate scores as you are attempting to do with the preference score. The research shows major misfires when that is done, as I have explained to you.
No, I don't think I do. What you explained is what I believe is causing the problem, because according to the research people do prefer headphones that deviate from the target, but the way you evaluate it does not allow for that.

[attached image: the familiar preference prediction graph with several headphones circled]


Look at the headphones I have circled in the familiar graph above. Those headphones score in the 70-90 range in the prediction model, meaning that they have relatively large deviations from the target, especially the ones in the left group of three. Yet, according to the research, they are highly preferred by people. In your method, these headphones would get a fail because they deviate from the target too much, and in my opinion that is in direct conflict with the research.

That is why I am suggesting expanding the limits of what is acceptable to be in line with the research. Then the question becomes: what deviations are "acceptable"? I am suggesting that what is acceptable should be a low standard deviation of the error curve and a flat tilt that maintains a balance. You keep saying I am advocating for the formula that spits out numbers - I am not. I am suggesting you need to align your acceptance criteria with that of the research, and I thought the parameters the research suggested would be a good start.
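
To make that suggested acceptance criterion concrete, here is a minimal sketch; the 50 Hz-10 kHz band and the pass/fail thresholds are purely illustrative placeholders, since the research itself does not prescribe such limits:

Python:
import numpy as np

# Illustrative thresholds only; the research does not prescribe pass/fail limits.
MAX_SD_DB = 2.0          # hypothetical cap on the std dev of the error curve
MAX_SLOPE_DB_OCT = 0.5   # hypothetical cap on the absolute tilt

def is_tonally_balanced(freq_hz, error_db, f_lo=50.0, f_hi=10000.0):
    """Crude acceptance check: low std dev and near-flat tilt of the error curve."""
    freq_hz = np.asarray(freq_hz, dtype=float)
    error_db = np.asarray(error_db, dtype=float)
    band = (freq_hz >= f_lo) & (freq_hz <= f_hi)
    f, e = freq_hz[band], error_db[band]
    slope, _ = np.polyfit(np.log2(f), e, 1)   # dB per octave
    return np.std(e) <= MAX_SD_DB and abs(slope) <= MAX_SLOPE_DB_OCT

freqs = np.logspace(np.log10(20), np.log10(20000), 200)
print(is_tonally_balanced(freqs, np.where(freqs < 200, -2.0, 0.0)))   # mild bass shortfall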
 

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
7,062
Likes
6,934
Location
UK
Great, I think we understand where we agree and where we don't then.

In my opinion the key parameters that are correlated with preference should be calculated and presented with the review, and if the listening observations of the reviewer is deviating from what those parameters suggests, then it should be known to the reader, and the reviewer should be able to explain why what they hear does not align with what the measurements show. This would help improve the transparency of the reviews, and will help to distinguish facts from Amir's opinions and beliefs on the matter, no matter how educated his opinions might be.
As far as I'm aware, though, it's only the Preference Score along with the std dev that could be calculated & presented, at which point it would be down to Amir to put the score & deviation into perspective; however, if you don't have much faith in the fine-grained accuracy of the score & std dev, then it could become an added complication that adds to confusion within a review. It's arguable that showing the raw data and eyeballing the frequency response is a pretty intuitive way of seeing the performance of a headphone - for me it certainly is, whereas the Preference Score & std dev are not particularly intuitive or telling as to the nuances of the headphone. Amir himself would have to weigh up the pros & cons of including the Preference Score & std dev within his reviews, but I'm inclined to think he wouldn't be positive about it; I won't speak for him though. Ultimately, if there are people that are fans of the Preference Score then they can go through AutoEQ's database of headphones, just on a practical level for people that want to use the Preference Score as a means for choosing a headphone (it also includes std dev & slope, which is useful in that context), see the following link:
 

Xicu

New Member
Joined
Dec 28, 2023
Messages
1
Likes
0
Headphones can generate response down to and even below 20 Hz. Since they can, and readily so, I find them to produce far better performance than just about any speaker system. Even the speaker systems that go down that low wind up being transformed by the room, so I find they never sound as clean as headphones. This is one of the major areas of advantage headphones have over speakers. For this reason, I absolutely rate them on content that bookshelf speakers can't even reproduce, but even low-cost headphones can (often with EQ).

Excluding the response below 50 Hz as the research did means leaving behind this major advantage of headphones. This is why I say it is a hole in the research. I can never stop smiling when I hear these super deep bass notes with incredible clarity. And how disappointed I am when I hear the same out of just about any speaker, distorted and lacking.
What would be the headphone that most closely resembles your target frequency response without EQ? I saw in the reviews that most of them lack sub-bass.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,833
Likes
243,196
Location
Seattle Area
What would be the headphone that most closely resembles your target frequency response without EQ? I saw in the reviews that most of them lack sub-bass.
Multiple IEMs and three headphones. The latter are the Dan Clark E3, Stealth and Expanse. They sound incredibly close to the professional monitors I have tested as well.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,833
Likes
243,196
Location
Seattle Area
Look at the headphones I have circled in the familiar graph above. Those headphones score in the 70-90 range in the prediction model, meaning that they have relatively large deviations from the target, especially the ones in the left group of three. Yet, according to the research, they are highly preferred by people. In your method, these headphones would get a fail because they deviate from the target too much, and in my opinion that is in direct conflict with the research.
Your assertion about me is wrong. I don't ever say a headphone has "failed." I say there is either good enough compliance with the target or there is not. If not, I develop filters to compensate. If the sound improves, which it almost always does, I give the headphone a higher recommendation than without EQ. I give no numeric value like the ones you are quoting, as that is simply not defensible.

In my book, you either get a headphone that needs no EQ, or you get what you want and EQ it to the same target. I have no interest in saying your headphone scores 77. The purpose and real reason for the research's existence was to define the target curve. Once there, it enables me to do what I just stated. Anyone using a "good" headphone as is, or even one in the "excellent" category based on numerical analysis, is doing it wrong and is going against the research. Five years of research went toward creating the target, not the linear model.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,833
Likes
243,196
Location
Seattle Area
That is why I am suggesting expanding the limits of what is acceptable to be in line with the research. Then the question becomes: what deviations are "acceptable"?
Once more, "the research" is about finding a target response and getting the entire industry to rally around it so that we can finally have a proper standard for production and playback. Rating deviations is just an exercise after the fact to say, "look, the target was so useful that we can even build an overly simplified model and have it produce decent predictions about preference." You are throwing out all the work to develop the target based on listening tests and hanging your hat on numerical assessments of deviations. This is not remotely "the research."

My job is to review a headphone and not just make a measurement. If I go by the simplistic ideas you are proposing and say this headphone is "good," what do I do when someone asks me how it sounds to me? What if it sounds better than another that is rated "excellent"? Lie? Or say that the ratings assigned by the method you propose are wrong, based on what I am hearing?

The only rational way to do this is the method I have chosen: measure, then quantify differences from the target with equalization, performing AB tests just as the research did using surrogate headphones. If my EQ improves the sound, I report that. Sometimes I can't decide on a filter and I report that as well. My testing can easily be replicated, and members can then decide what the data/review means to them.

Net net, there is no such thing as "what is acceptable." You are either at the target, or you get there with EQ, with some refinement for your tastes and what you listen to.
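
As a minimal sketch of the kind of corrective filter this EQ-to-target workflow relies on, here is one RBJ audio-EQ-cookbook peaking biquad applied with SciPy; the centre frequency, gain and Q are hypothetical values for illustration, not filters from any review:

Python:
import numpy as np
from scipy.signal import lfilter

def peaking_biquad(fs, f0, gain_db, q):
    """RBJ audio-EQ-cookbook peaking filter; returns normalized (b, a) coefficients."""
    a_gain = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return b / a[0], a / a[0]

fs = 48000
# Hypothetical correction: +6 dB at 40 Hz with Q = 0.7 (illustrative values only).
b, a = peaking_biquad(fs, f0=40.0, gain_db=6.0, q=0.7)

# Apply it to one second of noise standing in for program material.
x = np.random.default_rng(0).standard_normal(fs)
y = lfilter(b, a, x)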

The above fully complies with the spirit of the research, as pointed out by the author himself:
The Perception and Measurement of Headphone Sound Quality: What Do Listeners Prefer?
Sean E. Olive

"The reaction from the headphone industry to this new
research has been largely positive. There is evidence
that the Harman target curve is widely influencing the
design, testing, and review of many headphones from
multiple manufacturers, providing a much needed
new reference or benchmark for testing and evaluating
headphones. Several headphone review sites provide
frequency response measurements of headphones showing
the extent to which they comply with the Harman
target (Vafaei, 2018; Audio Science Review, 2020); in

cases where they fall short, corrective equalizations are
often provided."
 

IAtaman

Major Contributor
Forum Donor
Joined
Mar 29, 2021
Messages
2,440
Likes
4,283
In my book, you either get a headphone that needs no EQ, or you get what you want and EQ it to the same target. I have no interest in saying your headphone scores 77. The purpose and real reason for the research's existence was to define the target curve. Once there, it enables me to do what I just stated. Anyone using a "good" headphone as is, or even one in the "excellent" category based on numerical analysis, is doing it wrong and is going against the research. Five years of research went toward creating the target, not the linear model.
Well, five years of research went towards creating three targets if we are being accurate, not one, and who knows: if another five years were spent, another three targets might emerge. In any case, my intention is not to speculate on what the findings of further research might be, but to point out to you that what is fundamental and is being researched is the preference, not the target; and the target is not very different from the preference scoring system in its utility, in that they are both tools to estimate what people might prefer.
 
OP
amirm

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,833
Likes
243,196
Location
Seattle Area
Well, five years of research went towards creating three targets if we are being accurate, not one, and who knows: if another five years were spent, another three targets might emerge.
Nope. I have explained this already. Five years were spent finding the right target. That was the most important thing, as it has the potential to end the circle of confusion that researchers lament all the time. Grading headphones into categories is an exercise. It is not one that I believe in or follow, nor is it remotely on solid ground like the target is. You are welcome to wait for more grading techniques. But meanwhile, we are absolutely using the science to motivate the industry to move toward a single standard. And failing that, providing EQ to get there.

but to point out to you that what is fundamental and is being researched is the preference, not the target; and the target is not very different from the preference scoring system in its utility, in that they are both tools to estimate what people might prefer.
Forgive me for being blunt, but this is just nonsense, especially when I quoted directly from Sean's paper just above your post. Here it is again:

"The reaction from the headphone industry to this new
research has been largely positive. There is evidence
that the Harman target curve is widely influencing the
design, testing, and review of many headphones from
multiple manufacturers, providing a much needed
new reference or benchmark for testing and evaluating
headphones. Several headphone review sites provide
frequency response measurements of headphones showing
the extent to which they comply with the Harman
target (Vafaei, 2018; Audio Science Review, 2020); in

cases where they fall short, corrective equalizations are
often provided."
 