
Resolve's B&K 5128 Headphone Target - you can try the EQ's.....

Status
Not open for further replies.

usern

Senior Member
Joined
May 31, 2020
Messages
491
Likes
500
Does Peace allow exporting presets as native Equalizer APO presets? It's quite annoying to enter these filters by hand.
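For reference, a native Equalizer APO preset is just a plain-text file of "Preamp" and "Filter" lines, so filters can also be written out programmatically rather than entered by hand. A minimal Python sketch, with made-up placeholder filter values (whether Peace can export this format directly is a question for Peace's documentation):

```python
# Minimal sketch: write a list of parametric EQ filters in Equalizer APO's
# plain-text config syntax. The filters below are made-up placeholders,
# not a published preset; verify the exact syntax against your APO version.

filters = [
    # (type, centre frequency in Hz, gain in dB, Q) -- hypothetical values
    ("PK", 105, -2.0, 1.41),
    ("PK", 3000, 3.0, 2.00),
    ("PK", 10000, -4.0, 0.71),
]
preamp_db = -4.0

lines = [f"Preamp: {preamp_db:.1f} dB"]
for i, (ftype, fc, gain, q) in enumerate(filters, start=1):
    lines.append(f"Filter {i}: ON {ftype} Fc {fc} Hz Gain {gain:.1f} dB Q {q:.2f}")

with open("my_eq_preset.txt", "w") as f:
    f.write("\n".join(lines) + "\n")

print("\n".join(lines))
```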
 

markanini

Major Contributor
Joined
Feb 15, 2019
Messages
1,785
Likes
1,833
Location
Scania
Being the nerd that I am, as well as a truth seeker, I've tried taking in as much information as possible about the significance of the B&K 5128 and its adoption by reviewers. We're reaching a point where objectivity in audio is getting more mainstream consideration. At the same time, I can't ignore human nature and incentives: any tool that benchmarks objective metrics can be used to push gear with questionable performance and value by any other metric. It's the parallel of blatant career politicians talking the language of grassroots activists. I see no reason to believe a parallel doesn't exist in audio; virtually every area of audio publicity is touched by incentives, and these people have elected themselves as the ones to trust for audio recommendations. It's not my intention to black-pill anyone, just to encourage others to see things as a dense mix of positive and negative outcomes coming out of subjectivist/incentivist reviewers adopting objectivist tools and language.
 
Last edited:
OP
Robbo99999

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,996
Likes
6,864
Location
UK
Being the nerd that I am, as well as a truth seeker, I've tried taking in as much information as possible about the significance of the B&K 5128 and its adoption by reviewers. We're reaching a point where objectivity in audio is getting more mainstream consideration. At the same time, I can't ignore human nature and incentives: any tool that benchmarks objective metrics can be used to push gear with questionable performance and value by any other metric. It's the parallel of blatant career politicians talking the language of grassroots activists. I see no reason to believe a parallel doesn't exist in audio; virtually every area of audio publicity is touched by incentives, and these people have elected themselves as the ones to trust for audio recommendations. It's not my intention to black-pill anyone, just to encourage others to see things as a dense mix of positive and negative outcomes coming out of subjectivist/incentivist reviewers adopting objectivist tools and language.
Nonsense conspiracy; that's not the motivation for the B&K Target. How could another target mean any kind of preferential treatment for a certain headphone manufacturer? Is there really a manufacturer out there that currently, "inadvertently", targets the B&K Target, and why would a reviewer align themselves with just that one even if that were the case? It's just nonsense. I shouldn't get personal, but in the past months, each time you post, I can't shake the similarity between the last part of your username (in British English) and an imagined character trait of yours. DOH, I'm going to leave it there; I won't expand on that point.
 

_thelaughingman

Major Contributor
Forum Donor
Joined
Jan 1, 2020
Messages
1,363
Likes
2,045
Being the nerd that I am, as well as a truth seeker, I've tried taking in as much information as possible about the significance of the B&K 5128 and its adoption by reviewers. We're reaching a point where objectivity in audio is getting more mainstream consideration. At the same time, I can't ignore human nature and incentives: any tool that benchmarks objective metrics can be used to push gear with questionable performance and value by any other metric. It's the parallel of blatant career politicians talking the language of grassroots activists. I see no reason to believe a parallel doesn't exist in audio; virtually every area of audio publicity is touched by incentives, and these people have elected themselves as the ones to trust for audio recommendations. It's not my intention to black-pill anyone, just to encourage others to see things as a dense mix of positive and negative outcomes coming out of subjectivist/incentivist reviewers adopting objectivist tools and language.
I think the more objective avenues of data there are for this hobby, the better it gets. Progress in technology, and the ability to have an almost anatomically similar substrate to the human ear, are the reasons the B&K 5128 is being adopted by many. You have your views and validity to back up your belief, but at the end of the day this new technology benefits many. Being a skeptic is not a bad thing, but not everyone is motivated by monetary gain, and I can't draw a parallel to this being intended as a new paradigm that everyone needs to believe in.
 

markanini

Major Contributor
Joined
Feb 15, 2019
Messages
1,785
Likes
1,833
Location
Scania
I think the more objective avenues of data there are for this hobby, the better it gets. Progress in technology, and the ability to have an almost anatomically similar substrate to the human ear, are the reasons the B&K 5128 is being adopted by many. You have your views and validity to back up your belief, but at the end of the day this new technology benefits many. Being a skeptic is not a bad thing, but not everyone is motivated by monetary gain, and I can't draw a parallel to this being intended as a new paradigm that everyone needs to believe in.
Belief is not mutually exclusive with choice; otherwise it's a fallacy. Anyway, much of the reviewer space, and close to 100% of the IEM space, measures every reviewed product and then some. Still, recommendations are given for questionable sets, and good sets are undersold. The other week, one of the reviewers who measures his products was reading a carbon copy of a company's marketing script about a claimed tuning-circuit feature that independent engineers broke down as bunk; he never announced his association with the brand in question on his social media channels, while at the same time he's a mod on the company's Discord server. You can hope for a lot with improved objective tools, but don't expect them to solve everything; just keep a reasonable level of skepticism, please.
 
Last edited:

_thelaughingman

Major Contributor
Forum Donor
Joined
Jan 1, 2020
Messages
1,363
Likes
2,045
Belief is not mutually exclusive with choice; otherwise it's a fallacy. Anyway, much of the reviewer space, and close to 100% of the IEM space, measures every reviewed product and then some. Still, recommendations are given for questionable sets, and good sets are undersold. The other week, one of the reviewers who measures his products was reading a carbon copy of a company's marketing script about a claimed tuning-circuit feature that independent engineers broke down as bunk; he never announced his association with the brand in question on his social media channels, while at the same time he's a mod on the company's Discord server. You can hope for a lot with improved objective tools, but don't expect them to solve everything; just keep a reasonable level of skepticism, please.
Now that's a more refined explanation of the reasoning behind your previous post. Thank you. I agree that nothing on the interwebs, and nothing from sponsored reviewers, can be taken at face value. I myself am skeptical when I see something touted as the new holy grail, but if there is substantial evidence to back up the claims then I'll make an educated decision to believe them.
 
Last edited:

isostasy

Senior Member
Joined
Oct 19, 2022
Messages
354
Likes
637
Taking multiple measurements of one unit ("spatial averaging") is how I obtain the result of one unit.

The graph you linked to shows the results of multiple units averaged (the result of each individual unit obtained in the above way).
That's 21 units of different ages, some of them very old, some of them from 2022.
No observable trend.


Not completely different but yes, the difference would be audible in ABX tests.
Forgive me, but if some of them were 'very old', what purpose does this graph serve, given the HD650 has gone through various revisions over the years? Many others and I have demonstrated that simply changing the screen used (black paper, silk, or the current silver screen) changes the frequency response, and the driver itself has probably changed over the last 20 years. Not to mention the current pads are different. I'm not disagreeing with your process: averaging different units in different positions sounds reasonable. But a current silver-screen HD650 and an older black-paper HD650, for example, should be considered different models; averaging their responses is counterproductive. The HD800 and HD800S are given different names, yet the difference is smaller than the difference between some HD650 and HD600 variants. You wouldn't average HD800 and HD800S measurements together.
In my view, it is fool's gold to try to get to high accuracy in headphone measurements. Nothing about it is precise. Targets are averaged. Fixtures comply with some average. Position variations, part variations, etc. all work to make the actual resolution of the data far lower than 100%.

The measurements give us a guide to follow and confirm. This is what I do with EQ testing and listening tests in tandem. I deviate from measurements as needed to get pleasant sound.

A key goal of the target seems to be lost in all of this: in some ways, it doesn't matter what the target is. We just need one. Not five, but one. If every headphone complied with it, both in the production of music and in consumption, then we as consumers could EQ to taste and be done. With multiple people chasing some target with different fixtures, we lose this. For this reason, I am disappointed to see a couple of reviewers jumping on the 5128 bandwagon. Why on earth would you do this? Is it some kind of race to keep up with Head-Fi? Why on earth would you adopt a fixture that research shows needs a well-researched target to produce a correct target? Makes no sense at all.
Thanks Amir, this puts succinctly what I was trying to communicate to Resolve earlier. The 5128 may give the illusion of higher resolution, but in reality the same limitations of position variation, unit variation, etc. still apply.

This is why I wish more effort was put into measuring these attributes in particular. If I were a reviewer with a GRAS system, rather than spend thousands on the new 5128 I would spend it on obtaining many units of different popular headphones to measure. From a consumer point of view, it is much more valuable to know that the headphone you buy is very likely to be the same as the one the reviewer actually reviewed than to be able to see precisely where a single unit deviates from an unresearched graph taken on a brand-new measurement rig.

@markanini I find your theory intriguing. Why, however, would dishonest reviewers need a new multi-thousand-dollar system to continue doing what they've already been doing for years? I'm broadly with @_theLaughingman; we still have to exercise the same scepticism we always have. I don't think the development and adoption of the B&K 5128 is inherently bad (it may be good), but I do agree with you that many will likely use the guise of some kind of new objectivism to make money.
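As a sketch of the averaging being discussed: spatial averaging of one unit and pooling across units are the same arithmetic (a per-frequency mean in the dB domain), which is also why pooling units from different revisions can smear a real, systematic difference. The data below is synthetic, not actual HD650 measurements:

```python
import numpy as np

# Synthetic placeholder data: rows = individual measurements (reseatings of
# one unit, or different units), columns = frequency bins, values in dB.
rng = np.random.default_rng(0)
freqs = np.logspace(np.log10(20), np.log10(20000), 200)
base = 90 - 3 * np.log2(freqs / 1000)                  # made-up smooth response
reseatings_db = base + rng.normal(0, 1.0, size=(5, freqs.size))

# "Spatial average" of one unit: per-frequency mean across reseatings.
spatial_avg_db = reseatings_db.mean(axis=0)

# Pooling across units uses the same arithmetic, which is why averaging two
# different revisions hides a real, systematic difference between them:
revision_a = base
revision_b = base + np.where(freqs > 5000, 2.5, 0.0)   # hypothetical screen change
pooled_avg = np.mean([revision_a, revision_b], axis=0) # sits between the two

print(pooled_avg[-1] - revision_a[-1])  # 1.25 dB: neither revision's real response
```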
 

markanini

Major Contributor
Joined
Feb 15, 2019
Messages
1,785
Likes
1,833
Location
Scania
@isostasy Assuming they are dishonest, as you say, why would they say no to plausible deniability? If it came in the form of putting on an objectivist act, and someone helped fund the required equipment, they would be inclined to embrace it.
 

Mad_Economist

Addicted to Fun and Learning
Audio Company
Joined
Nov 29, 2017
Messages
543
Likes
1,618
Now that's a more refined explanation of the reasoning behind your previous post. Thank you. I agree that nothing on the interwebs, and nothing from sponsored reviewers, can be taken at face value. I myself am skeptical when I see something touted as the new holy grail, but if there is substantial evidence to back up the claims then I'll make an educated decision to believe them.
I should note, the 5128 is not a "holy grail" - it is, however, the best head and torso simulator you can buy, and its more accurate ear impedance (Z) is potentially quite interesting. Headphones are far from load-invariant, and the 60318-4 standard does only an adequate job of approximating the load of a human ear. I'd like to push further in what we assess with it, and ideally incorporate additional measures to give us a better view of headphone behavior than just frequency response with an average human ear as the load.
 

Mad_Economist

Addicted to Fun and Learning
Audio Company
Joined
Nov 29, 2017
Messages
543
Likes
1,618
I explained this at length when I first started to measure headphones. Averaging is a type of low-pass filter. It is used to gain insight into data that seems random to humans, or hard to quantify. It is also highly sensitive to extreme values (the geometric mean is better in this regard).

In the case of headphone measurements, the graph is not hard to understand at all. I give you two instances in stereo measurements. Your brain can easily eyeball what the average of those two is, and you are welcome to average them if you like.

In my view, it is fool's gold to try to get to high accuracy in headphone measurements. Nothing about it is precise. Targets are averaged. Fixtures comply with some average. Position variations, part variations, etc. all work to make the actual resolution of the data far lower than 100%.

The measurements give us a guide to follow and confirm. This is what I do with EQ testing and listening tests in tandem. I deviate from measurements as needed to get pleasant sound.

A key goal of the target seems to be lost in all of this: in some ways, it doesn't matter what the target is. We just need one. Not five, but one. If every headphone complied with it, both in the production of music and in consumption, then we as consumers could EQ to taste and be done. With multiple people chasing some target with different fixtures, we lose this. For this reason, I am disappointed to see a couple of reviewers jumping on the 5128 bandwagon. Why on earth would you do this? Is it some kind of race to keep up with Head-Fi? Why on earth would you adopt a fixture that research shows needs a well-researched target to produce a correct target? Makes no sense at all.
Would you be amenable to having a more extended discussion about that (for example, a round-table livestream)? The Headphones.com team has some internal splits on those questions as well, and I think there's a range of reasonable stances there; a public dialogue could allow us to address each other's points in a single space we can point people to. I doubt we'll all agree completely in the end, but it'd be useful to have a summary of our dissent and, particularly, our responses to each other's positions on this topic!
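On the averaging point quoted above, one detail worth spelling out: averaging responses in dB is equivalent to taking the geometric mean of the linear magnitudes, which is indeed less sensitive to a single extreme value than a linear-domain arithmetic mean. A tiny sketch with made-up numbers:

```python
import numpy as np

# Made-up linear magnitudes at one frequency bin, with one extreme value.
mags = np.array([1.0, 1.1, 0.9, 1.05, 4.0])

arith_mean = mags.mean()                    # ~1.61, pulled toward the outlier
geo_mean = np.exp(np.log(mags).mean())      # ~1.33
db_mean = (20 * np.log10(mags)).mean()      # averaging in dB...

print(arith_mean, geo_mean)
print(10 ** (db_mean / 20))                 # ...equals the geometric mean
```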
 

TheTalbotHound

Active Member
Forum Donor
Joined
Nov 23, 2020
Messages
182
Likes
130
While I am not expecting major new insights from using the 5128 for over-ears, for IEMs this is a different story. It can quickly become a bit of a moot point whether Harman's IEM target is well researched or not if 711 couplers introduce inaccuracies when, for example, comparing active IEMs with a feedback mechanism and passive ones.

In both cases, however, we'll still be limited to testing headphones on a single fixture and not on a system that reliably and repeatably reproduces the sort of variation we can expect across a cohort of real humans (and then leaves us to ponder whether or not these variations are desirable, at least at higher frequencies), which in my opinion is by and large the main issue (with over-ears at least), alongside sample variation and wear.
How hard would it be to design a measurement system similar to the 5128 but modular enough for users to swap on different pinnae and canal shapes/sizes in order to produce sets of measurements that might more accurately represent the variation of real humans?
 

MayaTlab

Addicted to Fun and Learning
Joined
Aug 15, 2020
Messages
956
Likes
1,593
How hard would it be to design a measurement system similar to the 5128 but modular enough for users to swap on different pinnae and canal shapes/sizes in order to produce sets of measurements that might more accurately represent the variation of real humans?

No idea, but I think you'd also need to be able to change the head shape and the geometry around the pinna, not just the pinna and canal themselves.
 

Mad_Economist

Addicted to Fun and Learning
Audio Company
Joined
Nov 29, 2017
Messages
543
Likes
1,618
How hard would it be to design a measurement system similar to the 5128 but modular enough for users to swap on different pinnae and canal shapes/sizes in order to produce sets of measurements that might more accurately represent the variation of real humans?
Fun fact: This was, if my memory serves, part of the pitch for the 5128 - its design allows quick and relatively painless swapping of pinnae (and moderately worrying but quick swapping of ear sims), and B&K specifically highlighted that different ear sims and pinnae representing different population segments could be created.

I don't think there's been enough demand for that yet, but it's something I'd love to see B&K bring to market, because getting a wider sampling of "how this device interacts with human subgroups" is quite important as audio measurements mature IMO.
 
OP
Robbo99999

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,996
Likes
6,864
Location
UK
Fun fact: This was, if my memory serves, part of the pitch for the 5128 - its design allows quick and relatively painless swapping of pinnae (and moderately worrying but quick swapping of ear sims), and B&K specifically highlighted that different ear sims and pinnae representing different population segments could be created.

I don't think there's been enough demand for that yet, but it's something I'd love to see B&K bring to market, because getting a wider sampling of "how this device interacts with human subgroups" is quite important as audio measurements mature IMO.
Although if that were to happen, with the swapping of pinnae, I suppose there's nothing to say we should be striving for headphones that create exactly the same frequency response at the eardrum for each of those pinnae. I'm thinking each person's pinna stamps its own individuality on the frequency response received at the eardrum, as part of the overall "transfer function", even when listening to speakers in a room. Maybe it would still be possible to notice particular frequency-response aberrations between different headphones used with different pinnae that can't be accounted for as a valid part of the pinna's natural transfer function when, say, receiving sound from a speaker system vs a headphone. I'm just thinking it through with my limited knowledge of this, so I might have made some wrong suppositions. I can imagine it getting very complicated with swapping pinnae; part of me thinks it would mean a different target curve for each one, and then you'd have to assess your headphone on the various pinnae to see how it deviated from the expected target curve customised for that pinna.
 

Mad_Economist

Addicted to Fun and Learning
Audio Company
Joined
Nov 29, 2017
Messages
543
Likes
1,618
Although if that were to happen, with the swapping of pinnae, I suppose there's nothing to say we should be striving for headphones that create exactly the same frequency response at the eardrum for each of those pinnae. I'm thinking each person's pinna stamps its own individuality on the frequency response received at the eardrum, as part of the overall "transfer function", even when listening to speakers in a room. Maybe it would still be possible to notice particular frequency-response aberrations between different headphones used with different pinnae that can't be accounted for as a valid part of the pinna's natural transfer function when, say, receiving sound from a speaker system vs a headphone. I'm just thinking it through with my limited knowledge of this, so I might have made some wrong suppositions. I can imagine it getting very complicated with swapping pinnae; part of me thinks it would mean a different target curve for each one, and then you'd have to assess your headphone on the various pinnae to see how it deviated from the expected target curve customised for that pinna.
This is part of the reason Headphones.com is basing a target off *a* diffuse-field HRTF, rather than specifically "this fixture's HRTF", in fact. We know per Theile that the perceived response of headphones is modified by a "directionless"/"cue-less" HRTF (without head tracking and HRTF processing, at least), and we know from the perception of speakers that the perceived timbre of a sound is based on the perceived source. Thus, for two humans with different HRTFs, different eardrum SPL will yield the same perceived tone.

In principle, an ideal headphone (from a fidelity/quality standpoint) should have an eardrum response that matches the DF HRTF of the head it is worn by multiplied by the "preferred room curve", in the same way that an ideal speaker has an eardrum response matching the HRTF of the head in its sound field in its room (which will be a sum of direct and indirect sound, but I know you know Chris's paper on abstracting that) multiplied by said preferred in-room curve/response. Making such a headphone is acoustically challenging at best, if not wholly impossible, but this is very likely a parameter where headphone designs differ, and I suspect it's one of the "x factors" that make up the difference between @Sean Olive's highly predictive statistical models and a complete picture of what influences sound-quality perceptions.
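A toy numeric version of that "DF HRTF times preferred room curve" framing, with entirely synthetic curves standing in for real HRTF data (multiplication of linear responses becomes addition in dB):

```python
import numpy as np

freqs = np.logspace(np.log10(20), np.log10(20000), 200)

# Hypothetical diffuse-field HRTFs for two listeners, in dB (placeholders,
# roughly an ear-gain peak near 3 kHz).
dfhrtf_a = 10 * np.exp(-(np.log2(freqs / 3000) ** 2))
dfhrtf_b = 12 * np.exp(-(np.log2(freqs / 3400) ** 2))

# A "preferred room curve" approximated here as a constant downward tilt.
tilt_db = -1.0 * np.log2(freqs / 20)   # about -1 dB/octave referenced to 20 Hz

# In this framing, each listener's ideal eardrum response is their own DF HRTF
# plus the shared preferred tilt: same perceived tone, different eardrum SPL.
target_a = dfhrtf_a + tilt_db
target_b = dfhrtf_b + tilt_db
```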
 

MayaTlab

Addicted to Fun and Learning
Joined
Aug 15, 2020
Messages
956
Likes
1,593
Although if that were to happen, with the swapping of pinnae, I suppose there's nothing to say we should be striving for headphones that create exactly the same frequency response at the eardrum for each of those pinnae. I'm thinking each person's pinna stamps its own individuality on the frequency response received at the eardrum, as part of the overall "transfer function", even when listening to speakers in a room. Maybe it would still be possible to notice particular frequency-response aberrations between different headphones used with different pinnae that can't be accounted for as a valid part of the pinna's natural transfer function when, say, receiving sound from a speaker system vs a headphone. I'm just thinking it through with my limited knowledge of this, so I might have made some wrong suppositions. I can imagine it getting very complicated with swapping pinnae; part of me thinks it would mean a different target curve for each one, and then you'd have to assess your headphone on the various pinnae to see how it deviated from the expected target curve customised for that pinna.

I share that sentiment, but I believe you don't need to know what the ideal response for each listener would be to already reach some interesting conclusions. This is a bit of a copy-and-paste of an earlier post:

Pick two listeners, A and B, whose DF HRTFs you know (we'll assume that DF HRTF + tilt or shelf is the reference for "sounds good to individual A or B" for the sake of the argument). Let's imagine a theoretical pair of headphones that varies across listeners in a way that perfectly matches their DF HRTF differences. It then means that if you've measured the response in situ for listener A, you can calculate the response in situ for listener B - you don't need to measure it.

Now pick two such “ideal” headphones. It also means that, regardless of their basal FR (we’ll consider for the example that they’re different), the variation between listeners will be constant across both of them.

Now, let’s imagine that you pick a selection of six over-ears, but this time you don’t know whether or not any one of them can perfectly vary across listeners in a way that matches the difference in their DF HRTF. If they all are inconsistent in terms of how they vary across listeners, then it means that at least 5 out of 6 are incapable of varying across listeners in a way that matches their variance in DF HRTF - if not all of them.

You’re none the wiser in terms of knowing which headphones captured the DF HRTF variance best (a very interesting question indeed), but you can at least rule out that this is a common characteristic, even for these large, open over-ears - and possibly even start to quantify how incapable they are of doing just that.
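A small numeric sketch of that argument, using entirely hypothetical curves: if a headphone's in-situ response tracks listeners' DF HRTF differences, the between-listener delta equals the HRTF delta, so a measurement on listener A predicts listener B; comparing each headphone's actual between-listener delta against the HRTF delta is the consistency check described above:

```python
import numpy as np

freqs = np.logspace(np.log10(20), np.log10(20000), 200)

# Hypothetical DF HRTFs (dB) for listeners A and B -- placeholders only.
dfhrtf_a = 10 * np.exp(-(np.log2(freqs / 3000) ** 2))
dfhrtf_b = 12 * np.exp(-(np.log2(freqs / 3400) ** 2))
hrtf_delta = dfhrtf_b - dfhrtf_a

# The "ideal" headphone of the thought experiment: measured on A, its in-situ
# response on B is the A measurement plus the HRTF delta -- no need to measure B.
insitu_a = dfhrtf_a - 1.0 * np.log2(freqs / 20)   # placeholder in-situ response on A
insitu_b_predicted = insitu_a + hrtf_delta

# For real headphones: compare each one's measured A-to-B variation against
# hrtf_delta. If the six headphones disagree with each other, at most one of
# them (possibly none) can be tracking the listeners' DF HRTF difference.
def tracks_hrtf(measured_delta, tol_db=1.0):
    return np.max(np.abs(measured_delta - hrtf_delta)) < tol_db
```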
 
OP
Robbo99999

Robbo99999

Master Contributor
Forum Donor
Joined
Jan 23, 2020
Messages
6,996
Likes
6,864
Location
UK
I share that sentiment, but I believe you don't need to know what the ideal response for each listener would be to already reach some interesting conclusions. This is a bit of a copy-and-paste of an earlier post:

Pick two listeners, A and B, whose DF HRTFs you know (we'll assume that DF HRTF + tilt or shelf is the reference for "sounds good to individual A or B" for the sake of the argument). Let's imagine a theoretical pair of headphones that varies across listeners in a way that perfectly matches their DF HRTF differences. It then means that if you've measured the response in situ for listener A, you can calculate the response in situ for listener B - you don't need to measure it.

Now pick two such “ideal” headphones. It also means that, regardless of their basal FR (we’ll consider for the example that they’re different), the variation between listeners will be constant across both of them.

Now, let’s imagine that you pick a selection of six over-ears, but this time you don’t know whether or not any one of them can perfectly vary across listeners in a way that matches the difference in their DF HRTF. If they all are inconsistent in terms of how they vary across listeners, then it means that at least 5 out of 6 are incapable of varying across listeners in a way that matches their variance in DF HRTF - if not all of them.

You’re none the wiser in terms of knowing which headphones captured the DF HRTF variance best (a very interesting question indeed), but you can at least rule out that this is a common characteristic, even for these large, open over-ears - and possibly even start to quantify how incapable they are of doing just that.
I think I'm just about understanding your paragraphs there after rereading them multiple times - difficult concepts. On your last paragraph: I think you would be able to quantify how far various over-ear headphones are from replicating the "DF HRTF of the individual with added room curve". Say there were future plans to have multiple swappable pinnae (representing different population groups) for the B&K, and each of those pinnae came with its DF HRTF curve supplied by B&K; then when you measured all your different over-ear headphones on it, you'd see how far away each headphone was from the "DF HRTF of that pinna with added room curve". You'd be able to see which headphones got closest to the target curves of the various swappable pinnae - some would be closer and some further away - and the headphone that got closest on average across the swappable pinnae's DF HRTFs (with added room curve) would be the one that matches the targets best and also shows the least variance between individuals from their ideal curve (i.e. it's more likely to sound the same to various individuals). But that's a whole lot of pain-in-the-ass work, and who knows if there would be strong patterns, with some headphone designs able to do this better than others; I don't know if you can really ask that of a simple headphone. (And it sounds like B&K haven't created a load of different representative pinnae anyway.)

(It's an interesting side tangent to the topic of this thread, and I don't mind it being discussed, but I still welcome feedback from people who have tried Resolve's EQs, and I'd like them to also report back in his thread over on his website - which is the main purpose of this thread.)
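A rough sketch of the scoring idea described above, assuming you had per-pinna targets and per-pinna measurements on hand (all inputs below are placeholders): score each headphone by its average RMS deviation from each pinna's customised target, and by how much that deviation varies from pinna to pinna:

```python
import numpy as np

def score_headphone(measured_by_pinna, target_by_pinna):
    """Both arguments: dict of pinna name -> response in dB on a shared
    frequency grid. Returns (mean RMS deviation, spread of RMS deviation)."""
    errors = []
    for pinna, target in target_by_pinna.items():
        deviation = measured_by_pinna[pinna] - target
        errors.append(np.sqrt(np.mean(deviation ** 2)))  # RMS error for this pinna
    errors = np.array(errors)
    # Lower mean  = closer to the per-pinna targets on average;
    # lower spread = more consistent from listener to listener.
    return errors.mean(), errors.std()

# Hypothetical usage with placeholder arrays:
freqs = np.linspace(20, 20000, 50)
targets = {"pinna_small": np.zeros_like(freqs), "pinna_large": np.ones_like(freqs)}
measured = {"pinna_small": 0.5 * np.ones_like(freqs), "pinna_large": 0.5 * np.ones_like(freqs)}
print(score_headphone(measured, targets))
```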
 
Last edited:

thewas

Master Contributor
Forum Donor
Joined
Jan 15, 2020
Messages
6,900
Likes
16,905
but I still welcome feedback from people who have tried Resolve's EQs, and I'd like them to also report back in his thread over on his website - which is the main purpose of this thread
Nowadays I very rarely listen with headphones, but out of curiosity I tried his HD 600 EQ a few days ago and didn't like it as much as, for example, Oratory's. I don't think the reason for this is his 5128, though, but rather his current constant-slope target (which he is considering changing to two shelving filters, like Harman).

What about you - what is your preference with your headphones? (I apologise in advance if you already posted it in this thread and I missed it.)
 

Resolve

Active Member
Reviewer
Joined
Jan 20, 2021
Messages
212
Likes
531
Nowadays I very rarely listen with headphones, but out of curiosity I tried his HD 600 EQ a few days ago and didn't like it as much as, for example, Oratory's. I don't think the reason for this is his 5128, though, but rather his current constant-slope target (which he is considering changing to two shelving filters, like Harman).

What about you - what is your preference with your headphones? (I apologise in advance if you already posted it in this thread and I missed it.)
To be clear, I'm not quite advocating for a two-shelf approach like Harman's, but I do advocate for a variable slope - or at least a boundary window around the slope in key regions. This is sort of what I was getting at in the interview we did with Dr. Olive, where I suggested a three-point preference adjustment would be good, with one point specifically at the highest point of the ear gain around 3 kHz. This is in part due to the additional adjustment around there in the later research, which he confirmed was based on listener feedback as well. He seemed to think it was a good idea, but also made a good point that if you adjust bass and treble you effectively also adjust the midrange. So maybe my issue could alternatively be solved by shifting the treble adjustment up slightly.
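To illustrate the difference being discussed (a constant-slope tilt versus a Harman-style pair of shelves), here is an idealised sketch; the shelf shapes are simple first-order approximations with made-up corner frequencies and gains, not Resolve's or Harman's actual filters:

```python
import numpy as np

freqs = np.logspace(np.log10(20), np.log10(20000), 400)

def low_shelf_db(f, fc, gain_db):
    # First-order low shelf: ~gain_db well below fc, ~0 dB well above it.
    g = 10 ** (gain_db / 20)
    w = f / fc
    return 20 * np.log10(np.sqrt(w**2 + g**2) / np.sqrt(w**2 + 1))

def high_shelf_db(f, fc, gain_db):
    # First-order high shelf: ~0 dB well below fc, ~gain_db well above it.
    g = 10 ** (gain_db / 20)
    w = f / fc
    return 20 * np.log10(np.sqrt((g * w)**2 + 1) / np.sqrt(w**2 + 1))

# Constant-slope alternative: a fixed dB-per-octave tilt across the band.
tilt_db = -0.8 * np.log2(freqs / 1000)            # hypothetical -0.8 dB/octave

# Shelf-based alternative: bass shelf up, treble shelf down (values made up).
shelves_db = low_shelf_db(freqs, 105, 6.0) + high_shelf_db(freqs, 2500, -2.0)
```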
 