
New white paper on the PEQdB tool with interesting results is out


RogerSmith

Saurav Chala (aka Sharur) has published a white paper on a statistical evaluation of several hundred users of his PEQdB tool (https://peqdb.com/wiki/PEQdB-White-Paper.pdf). The resulting target curve, which is practically identical for over-ear and in-ear headphones, has some quite interesting deviations from the Harman curves. Not only is the outcome quite impressive, but so is the tool itself, as it allows quick and easy investigation of one's own preferences in a much broader way than the Harman research did (large database of headphones and IEMs, many more filters).
Please, no discussion of the author as a person. I know he is not well respected here by most, but that should not detract from his findings and contributions.
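
To make concrete what a "filter" means here, a single peaking parametric-EQ band can be sketched roughly like this (a generic RBJ audio-EQ-cookbook biquad; the function name and parameters are my own illustration, not PEQdB's actual implementation):

```python
# A minimal sketch of one peaking parametric-EQ band (RBJ audio-EQ-cookbook
# biquad). This is NOT PEQdB's code, just an illustration of the kind of
# filter such a tool lets listeners adjust during an A/B test.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q):
    """Apply one peaking EQ band to signal x sampled at fs Hz."""
    A = 10 ** (gain_db / 40)            # amplitude from dB gain
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

# Example: boost 100 Hz by 4 dB with Q = 1 on one second of noise at 48 kHz.
fs = 48000
noise = np.random.randn(fs)
boosted = peaking_eq(noise, fs, f0=100, gain_db=4.0, q=1.0)
```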

 
Where is the arrogance? To me, the writing sounds pretty typical of research papers (boring and filled with passive voice).
There is absolutely no recognition of the preceding research (namely the Harman research) and nothing but complaints about issues. The research by Harman is the de facto standard in the field. Not recognizing this properly and instead doing nothing but throwing shade at the research is simply arrogant and disingenuous. The only goal is to discredit the research done by Harman.
The conclusion of the research is also articulated in a grandly arrogant tone by claiming in absolutes that their research is the pinnacle of research in their field. If you think this is standard practice, I don't want to know the standard of quality in the research you regularly read.

I frankly don't know who the author is and I really don't want to know. Reading the paper tells me all I need to know. These sorts of "researchers" are a cancer to the scientific community and the reason why many choose not to go into research, or drop out after starting.
 
Nice study! Some observations:

1. It's a self-published white paper, meaning it's not peer reviewed and has not been submitted for publication. I think it is definitely worthy of consideration for publication.

2. Discussion of the methodology is poor. In particular, the program was released to the public, and there were 266 different models of headphones used for analysis. This means he had to use third party measurements (if they exist) for his analysis. Headphone measurements are imprecise at best, depending on test fixture, clamping pressure, methodology, averaging, etc. This is one order of uncertainty.

3. Another order of uncertainty: no control for IEM eartips. Not even discussed. These have a profound influence on the freq response, especially bass.

4. No statistical analysis.

These are inherent weaknesses of a self-funded study. If he wanted to address these problems, he would actually need funding. So he did the best that he could with his resources. I don't begrudge him that; maybe that's why he didn't submit it for publication.

Regardless, he managed to replicate the general trend of Harman's findings within a couple of dB. IMO this study does not invalidate Harman's findings, but it is another reminder of how wide the confidence intervals are when doing studies of this kind.
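
To make the point about confidence intervals concrete, here is a rough sketch (with made-up placeholder numbers, not the paper's data or code) of how a group target and its 95% interval could be computed from per-listener preferred gains:

```python
# Hypothetical illustration: given each listener's preferred gain in dB at a
# set of probe frequencies, estimate the group target as the per-frequency
# mean with a 95% confidence interval, and compare it to a reference target.
import numpy as np

freqs = np.array([20, 60, 200, 1000, 3000, 8000, 16000])   # Hz, made up
prefs = np.random.normal(0, 3, size=(750, len(freqs)))      # placeholder data

mean_curve = prefs.mean(axis=0)
sem = prefs.std(axis=0, ddof=1) / np.sqrt(prefs.shape[0])
ci95 = 1.96 * sem                                            # normal approximation

reference = np.zeros(len(freqs))                             # stand-in for a Harman-style target
for f, m, c, r in zip(freqs, mean_curve, ci95, reference):
    print(f"{f:>6} Hz: {m:+.1f} dB ± {c:.1f} (ref {r:+.1f}, diff {m - r:+.1f})")
```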
 
Please, no discussion of the author as a person. I know he is not well respected here by most, but that should not detract from his findings and contributions.
Why not? The paper spends an inordinate amount of time discussing other researchers by name, in an unflattering light, rather than saying it provided a refinement, new insight, lifted a veil or two, or whatever.

Despite the tone of the paper, it does more to confirm the Harman target than provide new info. Also, since the data is not presented or discussed completely, there is no way to know whether Sharur's study is systematically different due to method rather than a real cause.

I just looked him up.
 
Nice study! Some observations:

1. It's a self-published white paper, meaning it's not peer reviewed and has not been submitted for publication. I think it is definitely worthy of consideration for publication.

2. Discussion of the methodology is poor. In particular, the program was released to the public, and there were 266 different models of headphones used for analysis. This means he had to use third party measurements (if they exist) for his analysis. Headphone measurements are imprecise at best, depending on test fixture, clamping pressure, methodology, averaging, etc. This is one order of uncertainty.

3. Another order of uncertainty: no control for IEM eartips. Not even discussed. These have a profound influence on the freq response, especially bass.

4. No statistical analysis.

These are inherent weaknesses of a self-funded study. If he wanted to address these problems, he would actually need funding. So he did the best that he could with his resources. I don't begrudge him that; maybe that's why he didn't submit it for publication.

Regardless, he managed to replicate the general trend of Harman's findings within a couple of dB. IMO this study does not invalidate Harman's findings, but it is another reminder of how wide the confidence intervals are when doing studies of this kind.
1) True, I don't know if it will be submitted for publication somewhere, but it already serves a good purpose in explaining the results achieved with the PEQdB tool across several hundred listeners.
2) As is explained in the paper, and detailed in figures 16 and 17, a comparison was made across measurements from different people, rigs (clone couplers), and even different units of the same IEMs, and no significant difference was found.
3) See 2); third-party measurements were used.
4) I think it was done, but it is not detailed in the article; this really should be spelled out in more detail.
 
Why not? The paper spends an inordinate amount of time discussing other researchers by name, in an unflattering light, rather than saying it provided a refinement, new insight, lifted a veil or two, or whatever.

Despite the tone of the paper, it does more to confirm the Harman target than provide new info.
If you had read the article properly, you would have noticed that the result is a new target curve, based on many more users than the Harman research ever used.

Also, since the data is not presented or discussed completely, there is no way to know whether Sharur's study is systematically different due to method rather than a real cause.
I think the resulting target curve is just the average over all users, of which there are almost 1,000.
 
There is absolutely no recognition of the preceding research (namely the Harman research) and nothing but complaints about issues. The research by Harman is the de facto standard in the field.
Nobody, including the author, denies this.
Not recognizing this properly and instead doing nothing but throwing shade at the research is simply arrogant and disingenuous.
Many have doubts about the resulting target curve, especially for IEMs. Only two filters were available, and just 20-30 people participated.
The only goal is to discredit the research done by Harman.
No, the goal is to provide a better target curve, which was achieved. Also, a great tool for doing this, which is free to use, is provided.
The conclusion of the research is also articulated in a grandly arrogant tone by claiming in absolutes that their research is the pinnacle of research in their field. If you think this is standard practice, I don't want to know the standard of quality in the research you regularly read.
Well, actually, since I have published research myself (in theoretical physics) in the past, I know for sure that to get published in the best journals you have to use a lot of hyperbole to get the needed attention.
I frankly don't know who the author is and I really don't want to know. Reading the paper tells me all I need to know. These sorts of "researchers" are a cancer to the scientific community and the reason why many choose not to go into research, or drop out after starting.
So what have you contributed?
 
I really don't understand or like the hostility towards the author. True, he uses quite a lot of hyperbole, but it is easy to look past this, acknowledge the results, and, for example, use his tool. The Harman research, as important as it is, did not provide a way of discovering your own preferred target curve at home, with basically any IEM you want to use and a song of your choice. What is not to like about this?
 
If you had read the article properly, you would have noticed that the result is a new target curve, based on many more users than the Harman research ever used.
I read it fine. I just didn't take it the way you wanted me to. :cool:

And no, you misread my comment. The paper is a proposed refinement. That's all. No way to tell if the data is systematically skewed despite your speculation below:
I think the resulting target curve is just the average over all users, of which there are almost 1,000.
I have no doubt the Harman curve can be improved. My own preference is different, for instance; there is a very wide distribution. There may be something here, but it is hard to know; the paper just skims the surface.
 
I read it fine. I just didn't take it the way you wanted me to. :cool:

And no, you misread my comment. The paper is a proposed refinement. That's all. No way to tell if the data is systematically skewed despite your speculation below:

I have no doubt the Harman curve can be improved. My own preference is different, for instance; there is a very wide distribution. There may be something here, but it is hard to know; the paper just skims the surface.
OK, so you should agree with me that the Harman IEM curve is not such a holy grail as many take it to be. I also would not claim that the resulting target from this study is the most essential finding; rather, it is the tool, which lets everybody find out their own proper target quickly and easily with any IEM they want to use.

I also agree that there should be a more robust statistical analysis, with age group, gender, and listening skill controlled for, as was done in the Harman research. But I think that can still be added.
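
Even a very simple breakdown of that kind would already help; something like this sketch, where the column names ("age_group", "gender", "bass_gain_db") are hypothetical and not fields from the actual study:

```python
# Hypothetical sketch of the demographic breakdown being asked for: average
# preferred bass gain per age group and gender. The data frame is invented
# placeholder data, not anything from the paper.
import pandas as pd

df = pd.DataFrame({
    "age_group":    ["18-29", "18-29", "30-44", "30-44", "45+"],
    "gender":       ["m", "f", "m", "f", "m"],
    "bass_gain_db": [6.2, 5.1, 4.0, 3.4, 2.1],
})

summary = df.groupby(["age_group", "gender"])["bass_gain_db"].agg(["mean", "count"])
print(summary)
```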
 
Just to be clear, this is not a research paper. It is a white paper, which tends to be an amorphous combination of marketing and technical description of a product or procedure. It is not, nor should it be, intended to stand up to the scrutiny applied to peer-reviewed literature. The paper does not follow standardized formatting for a scientific publication. The design and methodology of data collection and analyses likewise do not adhere to accepted scientific standards.

You have a choice to accept or reject the information presented in the paper, and the conclusions derived by the author(s). You also have a choice to make personal use of the product or procedure being discussed. In and of itself, the paper neither supports nor refutes findings from the existing literature. It is the presentation of someone’s idea.
 
Just to be clear, this is not a research paper. It is a white paper, which tends to be an amorphous combination of marketing and technical description of a product or procedure. It is not, nor should it be, intended to stand up to the scrutiny applied to peer-reviewed literature.
It definitely does stand up to scrutiny.
The paper does not follow standardized formatting for a scientific publication.
I am not quite sure if you have ever published a peer-reviewed paper, but this one here is formatted EXACTLY like a scientific paper. It would only have to be submitted in preprint format, but that is just a flag in the typesetting system (usually LaTeX).
The design and methodology of data collection and analyses likewise do not adhere to accepted scientific standards.
So what is missing, besides maybe a more detailed description of the actual algorithm or a more profound statistical analysis? I have read tons of PUBLISHED scientific papers that are weaker on all fronts.
You have a choice to accept or reject the information presented in the paper, and the conclusions derived by the author(s). You also have a choice to make personal use of the product or procedure being discussed. In and of itself, the paper neither supports nor refutes findings from the existing literature. It is the presentation of someone’s idea.
New ideas are what scientific progress is all about.
 
Since you ask for specific agreement:
OK, so you should agree with me that the Harman IEM curve is not such a holy grail
I agree
as many take it to be.
I do not agree with this part of your sentence, vague as it is. Olive specifically cautions against the interpretation you just made, as does Amir, to give a couple of specific examples. In fact, many caution against this misinterpretation, including myself.

I have no idea if this dude's company has mined some data and provided an improvement, or if the data and/or analysis his method uses leads to systematic offsets from Harman. Otherwise his study is another that either falls within, or slightly modifies, previous studies. Wide error bars, as stated by Olive:
We found that the majority of people generally prefer this curve, and if they didn't prefer it, they preferred it with some slight modifications to the bass and treble. In general, 64% of the people liked it as it is without any alterations, then there was a 16% segment who liked it with more bass – 4 to 6dB. Then there was a segment of around 22% of people who prefer it with less bass – 2 to 3 dB less bass and slightly less treble, or slightly more treble.

We think this could be related to hearing loss, particularly because as people age, they tend to lose hearing at high frequencies, and by reducing the bass and boosting the treble they may in fact be compensating for this hearing loss and making voices and instruments more intelligible. It's based on a preferred sound quality over headphones, which can be traced back to a frequency response.

Given the fact Sharur didn't control for any of the important things that affect large studies of audio, the best I can say is that he re-verified (for the umpteenth time) Olive's research. And he does come across as disrespectful to Olive specifically, which is unnecessary and distracts from how small the differences he found are.
 
"CONCLUSION The PEQdB over-ear, in-ear, and generic magnitude frequency response target curves are the most statistically optimal headphone target curves created and should be the industry standard for tonality."

I'm trying not to sound too critical, but this conclusion appears misaligned with the scientific standards established by Sean Olive and may reflect a lack of methodological self-awareness. The author has not provided any rigorous statistical validation or direct comparison to the preceding Harman curves, which he critiques in a largely anecdotal manner. That is problematic given that the Harman targets were developed through controlled testing and statistical validation. That's a significant difference, unless credibility isn't the goal and it's intended for entertainment or engagement farming. The latter is not in the spirit of ASR rules, but granted, the PDF doesn't link directly to monetized content.
EDIT: Also, why is this white paper formatted like a research paper? White papers typically focus on persuading the reader with the author's perspective and solutions, rather than fixating on criticizing another author's work. It comes off like an attempt to piggyback on Sean Olive's name. Compare it to this article on the USound target, which basically follows white-paper conventions: https://usound.com/how-headphones-should-sound
 
I do not agree with this part of your sentence, vague as it is. Olive specifically cautions against the interpretation you just made, as does Amir, to give a couple of specific examples. In fact, many caution against this misinterpretation, including myself.
Well, here on ASR many take it as gospel.
I have no idea if this dude's company has mined some data and provided an improvement, or if the data and/or analysis his method uses leads to systematic offsets from Harman.
I mean, that is the whole point of the paper: there are about 750 listeners whose targets, determined by Bayesian methods from A/B testing with different EQ settings, contribute to the final presented one.
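
The paper's exact algorithm is not reproduced here, but the general idea of Bayesian updating from A/B choices can be sketched like this toy model (flat prior over one band's preferred gain, logistic choice rule; all names and numbers are my own assumptions, not the author's method):

```python
# Toy illustration of Bayesian preference estimation from A/B trials:
# maintain a posterior over a listener's preferred gain at one band and
# update it after each choice, assuming the listener is more likely to pick
# the option closer to their true preference.
import numpy as np

grid = np.linspace(-12, 12, 241)              # candidate preferred gains in dB
posterior = np.ones_like(grid) / grid.size    # flat prior

def update(posterior, gain_a, gain_b, chose_a, slope=1.0):
    """Bayes update after one A/B trial between two candidate gains (dB)."""
    # Logistic choice model: the option closer to the true preference is
    # chosen more often.
    delta = np.abs(grid - gain_b) - np.abs(grid - gain_a)
    p_choose_a = 1.0 / (1.0 + np.exp(-slope * delta))
    like = p_choose_a if chose_a else (1.0 - p_choose_a)
    post = posterior * like
    return post / post.sum()

# Example: listener picked +6 dB over 0 dB, then +6 dB over +9 dB.
posterior = update(posterior, 6.0, 0.0, chose_a=True)
posterior = update(posterior, 6.0, 9.0, chose_a=True)
print("posterior mean preferred gain:", (grid * posterior).sum())
```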
Otherwise his study is another that either falls within, or slightly modifies, previous studies. Wide error bars, as stated by Olive:


Given the fact Sharur didn't control for any of the important things that affect large studies of audio, the best I can say is that he re-verified (for the umpteenth time) Olive's research. And he does come across as disrespectful to Olive specifically, which is unnecessary and distracts from how small the differences he found are.
Most, if not all, other targets presented as alternatives to the Harman one(s), be it USound, Soundguys, or the hyped "New Meta", have no theoretical basis and are also backed by no listening tests whatsoever, in contrast to the one presented here.
 
"CONCLUSION The PEQdB over-ear, in-ear, and generic magnitude frequency response target curves are the most statistically optimal headphone target curves created and should be the industry standard for tonality."

I'm trying not to sound too critical, but this conclusion appears misaligned with the scientific standards established by Sean Olive and may reflect a lack of methodological self-awareness. The author has not provided any rigorous statistical validation or direct comparison to the preceding Harman curves, which he critiques in a largely anecdotal manner. That is problematic given that the Harman targets were developed through controlled testing and statistical validation. That's a significant difference, unless credibility isn't the goal and it's intended for entertainment or engagement farming.
There were about 750 listeners contributing, many more than in all the Harman studies combined. I agree that more statistical analysis would be needed, but the results already point strongly to the validity of the arguments made.

The advantage of the method using the PEQdB tool is that anybody can easily find out whether their preferred target is close to Harman, just by running the A/B test and comparing the outcome to the Harman target at the end. I guess many would be quite surprised by the outcome.
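
That final comparison could be as simple as the RMS deviation between the two curves on a log-frequency grid, something like this sketch (the curves and the helper name rms_deviation are placeholders of mine, not actual Harman data or anything from the tool):

```python
# Rough sketch: interpolate a personal target and a reference target onto a
# common log-spaced frequency grid and report the RMS difference in dB.
import numpy as np

def rms_deviation(f_a, g_a, f_b, g_b, fmin=20.0, fmax=20000.0, n=200):
    """RMS dB difference between two targets given as (freq_Hz, gain_dB) arrays."""
    grid = np.geomspace(fmin, fmax, n)
    a = np.interp(np.log10(grid), np.log10(f_a), g_a)
    b = np.interp(np.log10(grid), np.log10(f_b), g_b)
    return np.sqrt(np.mean((a - b) ** 2))

# Placeholder curves, not real targets.
personal  = (np.array([20, 200, 1000, 3000, 10000]), np.array([8.0, 1.0, 0.0,  9.0, 2.0]))
reference = (np.array([20, 200, 1000, 3000, 10000]), np.array([9.0, 2.0, 0.0, 11.0, 1.0]))
print(f"RMS deviation: {rms_deviation(*personal, *reference):.1f} dB")
```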
 
It definitely does stand up to scrutiny.

I am not quite sure if you have ever published a peer-reviewed paper, but this one here is formatted EXACTLY like a scientific paper. It would only have to be submitted in preprint format, but that is just a flag in the typesetting system (usually LaTeX).

So what is missing, besides maybe a more detailed description of the actual algorithm or a more profound statistical analysis? I have read tons of PUBLISHED scientific papers that are weaker on all fronts.

New ideas are what scientific progress is all about.

Yes, I have published many peer-reviewed papers in medical and scientific journals and co-authored national guidance documents. I have been a peer-reviewer for many papers and national guidance documents. I served as an Editorial Board Member for a medical journal.

Not sure why any of that is relevant for identifying shortcomings in a technical document.

Yes, this white paper superficially follows a journal format in the naming of the various sections, but the text contained within those sections does not. The paper gives the appearance of self-promotion aided by criticism of existing work in the form of a diatribe. It is not presented as a well-designed and executed study intended to explore a hypothesis and contribute to the scientific literature in a defensible and reproducible manner.
 