
Crossfeed for headphones

I apologize for the delayed response, as I have been attending my grandparents' funeral over the past few days. Thank you for your understanding.

Not wishing to disagree with you, but based on my own listening, I have not found a single HRTF-based solution (freeware or paid) that sounds correct.
I also have no intention of arguing with you. As I mentioned earlier, I am already listening to various BRIRs based on personalized recordings I've made, and I do not dispute their validity. However, as I was exploring ways to provide better crossfeed solutions for crossfeed users, I came across your statement in this thread: "I do not think the delayed signal from one speaker to the opposite ear is a significant aspect of the emulation." That is what led me to join this thread.

If you want to try basic virtualization (not just crossfeed), the most common tool would be HeSuVi (free software that runs on EQ APO).

If you have a sufficient budget and require compatibility with various formats alongside easy adaptability without the need for detailed editing, you might consider the Realiser.
If you’re looking for minimal virtualization (yet superior to most generic HRTFs) with the potential to create virtual spaces through impulse manipulation, there’s also Impulcifer.

I highly appreciate the theoretical side, but at the end of the day, it's all about the results. In my case, the tools that attempt to do the most sound the worst. Most of these seem to be the commercial ones, which, sorry to say, are all just black boxes: somewhat snake-oily, sold on "reputation", with no one really knowing what's going on inside them. With enough of a herd mentality, just like some revered headphones, IEMs, and speakers, they develop a cult following. I've also been down that road.
Yes, the important thing is the results.
And the software I mentioned is transparently open and can all work well.

Whether it’s typical crossfeed or an attempt at BRIR, the path diverges accordingly.
Crossfeed is, in essence, just a simple attempt to emulate a BRIR.

I respect and highly value the attempts and experiences you’ve had.
However, if you want to experience something closer to real speakers, consider the three options I listed above. Except for the Realiser, the others are free, and HeSuVi doesn’t even require recording. (Of course, the presets in HeSuVi fall far short in terms of imaging and realism compared to the BRIRs I’ve personally recorded. However, some of the presets, like those from DTS, are reportedly the result of loopback recordings by users or developers. They’re not bad.)

So, once again, I have no intention or reason to argue with you.
I also think positively of the attempts you’ve made and the results and experiences that came from them.
However, excluding HeSuVi (its built-in presets), what I’ve suggested isn’t something that might sound good to some and not to others. Rather, I’m suggesting the basics of BRIR, which is fundamentally the same as the way you listen.
Therefore, because it reproduces "the way you listen to speakers" exactly, the method I’ve suggested will undoubtedly work, barring personal preferences regarding your specific space. (If you manipulate the spatial impulse based on acoustic theory, you can even overcome the limitations of the space itself.)
To repeat once more: while BRIRs depend on the quality of the space and on HRTFs that match you (especially those recorded by yourself), the role of the HPCF (headphone compensation filter), which correctly equalizes the playback device (headphones/IEMs), is equally critical.
Thus, when you experience discomfort with virtual emulations or HRTFs, it’s crucial to determine whether the discomfort stems from a mismatch with the HRTFs themselves or from the headphone compensation filter not being properly implemented for your device.
From my own listening, loopback tests, and verifying other people's setups, as well as retesting and re-measuring, I’ve found that in most cases, the influence of the headphone compensation filter was dominant. (In other words, even if the HRTF isn’t perfectly matched to you, if the HPCF is properly applied to your playback device, it can still be quite listenable.)
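As a minimal illustration of the HPCF idea described above: the correction is simply the target response minus the measured response in each band, usually with the amount of boost capped. The function name, the boost cap, and the 1/3-octave numbers below are all hypothetical, not taken from any particular tool.

```python
def hpcf_gains(measured_db, target_db, max_boost_db=6.0):
    """Per-band correction in dB: target minus measured, with the boost
    capped so deep nulls are not over-driven. Names and cap are hypothetical."""
    return [min(t - m, max_boost_db) for m, t in zip(measured_db, target_db)]

# Hypothetical 1/3-octave deviations of one headphone from a flat target.
measured = [2.0, 0.0, -3.0, 5.0, -8.0]
target = [0.0, 0.0, 0.0, 0.0, 0.0]
correction = hpcf_gains(measured, target)  # [-2.0, 0.0, 3.0, -5.0, 6.0]
```

Note how the -8 dB null is only boosted by the capped 6 dB: over-correcting narrow nulls is a common way a compensation filter goes wrong.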
I really enjoy subjective discussions.
I also enjoy objective discussions.
And I love the process of connecting what I’ve felt (or what others subjectively perceive) to objective measurements and data, discovering how objective theories manifest and influence my subjective impressions.
 
My condolences on your loss.

I have had a rethink on this issue. I spent the last week, from about Christmas Day until yesterday, revisiting my setup, but this time I had the luxury of a few IEMs. In previous reviews, I tended to use one headphone to review many crossfeed options.

I have come across HeSuVi in my "journeys" on the subject, crawling through the Internet, but I have not used this tool yet. It will be one for further action on my part, as time allows. It just might be the holy grail of crossfeed/room-simulation tools. I will give you feedback when I've tried it out.

To cut a long story short: in some circumstances, with some IEMs/headphones, my current conclusion is that one can just about get by without crossfeed, but crossfeed adds a subtle yet valuable augmentation to the listening experience on all such devices. When done well, for the specific device one is listening to, it definitely adds something that makes listening far less fatiguing than listening without crossfeed.

The challenge, I think, from my own recent experiments, is the choice of which crossfeed solution to use. Furthermore, I wonder whether most people have the patience and time to sift through and compare the various solutions, to identify the one that confers on each head-worn listening device the best illusion of being in some kind of enhanced space.

I find that each device benefits from a different crossfeed solution. Why, I do not know; maybe it's the frequency response, or the distance from the eardrum. Sincerely, no clue, but that is what I've observed. So, for example, I have about 14 presets from about 6 different crossfeed tools, running as VST plugins in my DAW, Reaper. For each IEM, I cycle through all the options and narrow down to what I perceive as the most cohesive virtual solution, the one that delivers the most natural result (hard to define in words, because I cannot measure it). So now each IEM has its own set of preferred crossfeed solutions.

I recently also revisited the approach of using the following chain: stereo-to-ambisonic, followed by an ambisonic-to-binaural transform. For a few years I have had good success with the plugin chain of the IEM encoder followed by abDecoder Light, which gives very acceptable results perceptively, with ALL devices. But in truth it is only a perception, because abDecoder Light is "cheating": I've always known that it does not implement the high-frequency roll-off on the opposite side, so it's a bit of a clever approximation, which I accept is faulty, but it was the solution that left me without any obvious negative artefacts that I could hear.

I was finally able to revisit an audio chain which replaces abDecoder Light with one of the ambisonic-to-binaural plugins from the SPARTA suite. Available here


It definitely delivered an interesting result with its default HRTF, and (I think) provides the option to import any other HRTF, but I did not go that far. Unfortunately it does not support processing at 96 kHz, which has been my default sample rate for a rather long time, a bad habit from the days when many audio plugins had terrible aliasing when run at 44.1 or 48 kHz. The suite has a few ambisonic-to-binaural transform tools, and as time permits, I'll explore further.

But like wine (which I no longer drink), this is an acquired taste, so it would take me a while to adjust; besides, the inability to run at 96 kHz is a bummer. I think I could wrap it in another plugin host and run that at 48 kHz inside a 96 kHz session, so the 48 kHz restriction may not be a showstopper. My current reservation is more related to the variance of the SPARTA result from most of my other tools. I suspect it is "correct", but I need more time to evaluate it through extensive listening.

What I have found quite acceptable, truly acceptable, was to switch to one of the crossfeed solutions based on the Meier algorithm, which is NOT based on any ambisonic-to-binaural transform. It is pure classic crossfeed: a time delay plus frequency shaping on each side, with attenuation of the higher frequencies that are fed to the opposite ear. This has since become my preferred solution for some of the IEMs I was using in the test, and I am narrowing down which setting works best for each IEM. What led me here was the acknowledgment that NOT implementing a frequency roll-off in the opposite ear, which is abDecoder Light's own approach, cannot be an optimal solution.
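The classic recipe described above (a delayed, attenuated, low-passed feed of each channel into the opposite ear) can be sketched in a few lines. This is a toy illustration, not Case's actual plugin code; the gain, delay, and cutoff defaults are hypothetical ballpark values.

```python
import math

def crossfeed(left, right, fs=48000, gain=0.3, delay_ms=0.3, cutoff_hz=700):
    """Classic crossfeed sketch: each output channel is the direct signal
    plus a delayed, attenuated, low-passed copy of the opposite channel.
    Parameter defaults are hypothetical."""
    d = int(fs * delay_ms / 1000)                # interaural delay, in samples
    a = math.exp(-2 * math.pi * cutoff_hz / fs)  # one-pole low-pass coefficient

    def opposite_feed(src):
        out, state = [], 0.0
        for x in src:
            state = (1 - a) * x + a * state      # low-pass the opposite channel
            out.append(gain * state)             # attenuate it
        return [0.0] * d + out[:len(src) - d]    # then delay it

    fl, fr = opposite_feed(right), opposite_feed(left)
    return ([s + c for s, c in zip(left, fl)],
            [s + c for s, c in zip(right, fr)])

# A click in the right channel leaks into the left output: delayed,
# quieter, and with the treble rolled off.
out_l, out_r = crossfeed([0.0] * 100, [1.0] + [0.0] * 99)
```

A single intensity slider, as on the Case plugin, would typically scale `gain` (and possibly the cutoff) from one control.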

I'm using a plugin by a developer known as Case, so searching for "Meier Case" should lead one to it.

The chain involving abDecoder is next in line, and I would place third the BS2R crossfeed implementation (the version coded by Liqube for Windows), which does a proper delay and high-frequency roll-off in the opposite ear. BS2R tends toward a slightly darker, warmer sound. It sounds great, but I prefer the results I'm getting from the Meier-based plugin developed by Case. At this time, I am extremely pleased with it. Really satisfied.

So that's progress. I now strongly suggest crossfeed as a desirable bonus for any head-worn listening device, wherever it can be easily achieved. The plugin is free to download and has only one control, a slider for changing the intensity of the crossfeed, so it also meets ideal usability requirements.

I'll look into HeSuVi, and thanks for suggesting it, but it may take a while to get back to you here or directly via in-mail. I've "spent" far too much time over recent years on these audio "experiments", so having rediscovered something that works well enough to remove the extreme stereo impact of headphones, I'm taking a bit of a break from audio research to get 2025 in motion. As soon as I have achieved a few critical personal projects for the year, I can devote some time to revisit this opportunity and post anything I discover here.

Do keep well. Best wishes for 2025 and beyond.
 
I fully appreciate that crossfeed is only one part of the ideal virtualisation solution for listening on headphones/IEMs. I'm just so glad I've finally figured out something I'm happy with, and pleased to now concentrate on actually enjoying the listening for a while, revisiting head-worn listening, in my case predominantly on IEMs.

It would be great, though, to arrive at a solution, perhaps HeSuVi, that adds enough of the missing "spatial room cues", a requirement which crossfeed alone obviously cannot meet.

Crossfeed delivers enough out-of-head localisation that the sound is no longer coming from inside my head but from somewhere outside it, at least enough to remove the "congested" in-head experience of listening without crossfeed, making listening on IEMs more "comfortable". It will be interesting to discover how much more is out there along this path.
 
I find that, each device benefits from a different crossfeed solution.
Even if a crossfeed simulation is highly precise and personalized—created using ILD and ITD derived from recordings made with your own ears—the way each device (headphones/IEMs) interacts with your ears is inherently different, which is why it sounds different.
For example, when I generated my own crossfeed, or used pre-made crossfeed scripts or crossfeed VSTs, I noticed, much like your experience, that the results were heavily dependent on the response of the device.
When I used an IEM with well-preserved ear gain, combined with crossfeed and a bit of added early reflections, it felt almost like a speaker-based BRIR, even if it wasn’t entirely accurate. However, applying the exact same settings to another pair of headphones destroyed that illusion entirely, leaving me with nothing but a muffled treble response.
So, to achieve a "consistent" experience even with crossfeed, each headphone/IEM must be aligned to a specific or personalized target, much like a BRIR.
The unique characteristics of each headphone/IEM need to be minimized as much as possible, so they purely serve the role of a "playback device." This process is identical to the role of the previously mentioned headphone compensation filter (HPCF).

To cut a long story short, while in some circumstances, with some IEM's/headphones, my current conclusion is, one can just about get by without crossfeed, but crossfeed adds however subtle, a valuable augment to the listening experience for all such devices. When done well, for the specific device one is listening to, it definitely adds something that makes listening no longer as fatiguing as the non crossfeed listening.

In reality, because plain headphone stereo feeds each ear only its own channel, it can cause a sense of pressure on the ears, and the tonal balance can sound very unnatural (this is not merely an issue of FR).
No matter how inaccurate a crossfeed may be, our brain tries to find and imagine something from the small cues that crossfeed provides.
This interacts with the response of the device itself and, at times, can result in a more comfortable listening experience.

The challenge I think is, from my own recent experiments, the choice of which crossfeed solution to use. Furthermore I wonder if most people have the patience and time, to sift through and compare the various solutions, to identify the one that confers on each head worn listening device, the best illusion of being in some kind of enhanced space.
It’s the same as the story above.
If we focus solely on the simple purpose of Crossfeed, the software or settings for Crossfeed don’t hold much significance.
They simply follow the response of the IEM or headphone device we wear on our ears.
Also, this is a slightly different topic, but I have previously shared my link in this thread.
It’s a crossfeed that combines decorrelated noise with the default presets in Peace. While it’s a simple crossfeed, it offers a unique experience compared to conventional crossfeeds. Give it a try sometime.

I have come across HeSuVi in my "journeys" on the subject, crawling through the Internet, but I have not used this tool yet. This will be one for further action, on my part, as time avails. It just might be the holy grail of crossfeed/roomish simulation tools. Will give you feedback when I've tried it out.
What HeSuVi primarily does is enable users to conveniently perform convolution in EQ APO for setups ranging from stereo (4-channel true stereo) to 7-channel.
However, not all users have personalized BRIRs, and setting aside questions about the realism of non-personalized rendering, public virtualization presets that are more likely to work than simple crossfeed have value for this purpose. This is why HeSuVi includes loopback-recorded presets tailored to this goal (e.g., DTS, Dolby Headphone, etc.).
Whether they are good or bad is not something that can be definitively stated, but at the very least, those presets are not just regular crossfeed—they are BRIR virtualizations.
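For the curious, the "true stereo" convolution such presets perform can be sketched as follows: four impulse responses, one per speaker/ear path, each convolved with the corresponding input channel. This is a naive direct-form sketch under my own naming, not HeSuVi's or EQ APO's actual implementation.

```python
def conv(x, h):
    """Naive direct-form convolution, truncated to len(x)."""
    y = [0.0] * len(x)
    for i, xv in enumerate(x):
        for j, hv in enumerate(h):
            if i + j < len(y):
                y[i + j] += xv * hv
    return y

def true_stereo(in_l, in_r, ir_ll, ir_lr, ir_rl, ir_rr):
    """4-IR 'true stereo' rendering: ir_ll is left speaker to left ear,
    ir_lr is left speaker to right ear, and so on. Each ear hears both
    virtual speakers, which is exactly what plain crossfeed approximates."""
    out_l = [a + b for a, b in zip(conv(in_l, ir_ll), conv(in_r, ir_rl))]
    out_r = [a + b for a, b in zip(conv(in_l, ir_lr), conv(in_r, ir_rr))]
    return out_l, out_r

# Toy IRs: direct paths pass straight through, cross paths at half level.
out_l, out_r = true_stereo([1.0, 0.0], [0.0, 1.0], [1.0], [0.5], [0.5], [1.0])
```

With real BRIRs, each of the four IRs is thousands of samples long and contains the room's reflections as well as the direct path, which is where the presets get their sense of space.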

I have also used tools like SPARTA and IEM Suite as a means to play back ambisonic reverb.
Of course, if such plugins align with your purpose, they are a good choice.

I fully appreciate that crossfeed is only one part of the ideal virtualisation solution, for listening on headphones/IEMs. Just so glad I've finally figured out something that I'm happy with, that I am so pleased to now concentrate on actually enjoying the listening, and do this for a while. Revisiting head device based listening. In my case it will be predominantly on IEMs.

It would be great though to arrive at a solution that may be HeSuvi, if that adds enough of the missing "spacial room cues" which obviously crossfeed alone, cannot meet that requirement.

Crossfeed delivers, enough out of head localisation, so the sound is not coming from inside my head, but from somewhere outside my head, at least enough to remove that "congested" head listening experience, when one does not use crossfeed. Making listening more "comfortable" on IEMs. Will be interesting to discover how much more is out there, along this path.
Yes, thank you for understanding my point without any misunderstanding.
It is ultimately just one of the most simplified solutions.
And as mentioned earlier, due to the interaction with the device's own response, you might feel, by sheer luck, an illusion of extraordinarily realistic imaging that feels special (almost lifelike).

if that adds enough of the missing "spacial room cues" which obviously crossfeed alone, cannot meet that requirement.
This might be a slightly different topic, but many people are either satisfied or dissatisfied with the spatialization (virtualization) provided by Apple or Samsung's TWS devices.
While it applies imperfect personalization to fit the user's ears, why is it still unsatisfactory?
It's because it doesn't apply the numerous reflective HRTFs you hear in reality but instead uses something more akin to simple reverb. (Of course, they try to compensate for this limitation with head tracking, but this is also why it doesn't sound as if the sound is coming from "far away" or at that distance, as it would with actual speakers.)
In other words, people tend to focus solely on the HRTF (HRIR) in the direct sound region.
However, that's not as important as one might think.
The majority of how we perceive space and distance ultimately comes from reflections—even the faintest ones.
Even if you convolve a stereo impulse recorded in a concert hall while wearing IEMs or headphones, it will never truly feel like being in a concert hall. (Of course, the reverb might give a slight illusion, but that's all.)
This is because the HRTFs that interact with your face, ears, and body within the reflections are missing.
This, too, can be considered differently (and I’ve actually tested it). Suppose you’re using Samsung Galaxy Buds to listen to spatial audio on a Samsung Android phone. The experience is, of course, different from a typical IEM or crossfeed experience, yet it still feels quite lacking.
But if you use an app like JamesDSP to convolve early reflections that include HRTFs, it becomes quite listenable.
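As a rough sketch of that early-reflection idea (not JamesDSP's actual processing): mix a handful of delayed, attenuated copies of the signal into itself. The tap delays and gains below are made up; a real renderer would also filter each tap with the HRTF for its direction of arrival, which is the point being made above.

```python
def add_early_reflections(x, fs=48000,
                          taps=((7.0, 0.35), (11.0, 0.25), (17.0, 0.15))):
    """Mix delayed, attenuated copies of the signal into itself.
    The (delay_ms, gain) taps are hypothetical; a real renderer would
    also HRTF-filter each tap for its direction of arrival."""
    y = list(x)
    for ms, g in taps:
        d = int(fs * ms / 1000)           # tap delay in samples
        for i in range(d, len(x)):
            y[i] += g * x[i - d]          # add the attenuated echo
    return y

# An impulse picks up three quieter echoes within the first ~20 ms.
y = add_early_reflections([1.0] + [0.0] * 999)
```

In practice one would convolve with a measured or synthesized early-reflection impulse response instead of discrete taps, but the perceptual role is the same: those first few milliseconds of reflected energy carry most of the sense of space and distance.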

My condolences on your loss.
Thank you for expressing your condolences.
 

There is a lot to unpack here. Wow. You have been most generous in sharing your experience. Thanks.

I'll slowly walk through each of the opportunities you have discussed in this thread, and explore and try them out. It's a lot. I'm relieved to find someone else who has been down a similar path, so I have the confidence to tread where you have trodden.

My regret is that it took me several years to get this far, cos in my experience the info is not easy to find, and I do not recall any reviewers of head-worn listening devices who cover this subject matter. None, in well over 20 years of following headphone reviews, and IEM reviews in the last year, with as much attention as possible. Not a single one I recall discussed the immersive opportunities. About the only ones who mention anything about this are the folks who review Dolby Atmos related things. On the other hand, I am relieved to have gotten this far, but far too much was invested in time: trying things out, giving up, coming back months later, and several cycles of this. All's well that ends well.

The products with headphone amps, from the likes of RME and SPL, which include crossfeed implementations (typically based on Meier IIRC), have been pretty expensive, and thankfully we have these in software for either free or for very little money.

Just so glad I can walk into 2025 with three things that were conclusively achieved in 2024: acquired a good dongle DAC plus one or two good cost-effective IEMs and custom eartips; eventually arrived at an acceptable crossfeed setup; and learnt how to use AutoEQ.app for automated correction filters (I use convolution plugins to implement the impulse files generated by AutoEQ.app), as well as the measurement graphs on Squig.link to guide me in defining any additional manual filters, based of course also on listening. I'm a relative newbie to consumer hi-fi, cos prior to now all my audio was in the professional studio/live sound arena as an audio/mixing engineer. Looking back, it's a long road, sifting through so many opinions, trying to figure out which are authoritative and valid.

On the professional side of audio, I was spoiled by access to measurements that were comparative. I could just look at a comparison of specs of competing devices and, on the basis of nothing but the published specs, take a purchase decision or recommend a purchase, knowing well that I would rarely be disappointed. The worst that could happen was a device that was poorly manufactured, faulty, or dead on arrival, which would be returned. But once I had looked at the specs and the reviews, I could order pretty much anything without listening to it myself. I did the same thing a few days ago: I recommended a Direct Inject box for guitars at a church where I volunteer as the audio engineer, and had no qualms about what I had suggested they buy. It was provided and, as expected, it worked first time, with the end result as expected: splendid, distortion-free sound from the guitar. All just by looking at the specs of various Direct Inject devices, and taking brand reputation into consideration as well.

AudioScienceReview has been a huge help - I bought a few dongle DACs purely on the specs published in reviews by AmirM on ASR. With one exception, which was a compromise in the supply chain (a proliferation of fake Samsung dongles in the UK), the dongle DACs delivered exactly the experience predicted by these reviews/measurements. I also bought an IEM, the 7Hz Zero 2, based on the excellent review on ASR, and it has become a reference - a benchmark for any listening and any further acquisitions of similar gear.

For several years I've been wondering: is there a similar methodology by which one can rank the effectiveness of immersive playback solutions? Something akin to the Harman curves for headphones, and the offshoots of that seminal work, which are pretty decent predictors of what to expect from a listening device.

My point is that in the absence of any empirical data on immersive technologies, we are in the land of alchemy, voodoo, and snake oil, with everyone and their dog (or cat) touting: buy this, it's the greatest thing since sliced bread. And the evidence one has for how effective these things are? Nothing - all anecdotal. I'm imagining some kind of visualisation by which one can show how effective each type of virtual immersive solution/tool, or chain of tools, is, without having to go through the pain of trial and error and extensive time spent listening to each option critically to tease out the differences. Just wishing aloud. So many have made a lot of money, in a world of ignorance, pulling the wool over our eyes. Fortunately I am skeptical, so I have not spent any money on any immersive solutions yet, only evaluated demos, but nothing has yet compelled me to spend - I am not impressed with any commercial binaural solutions. Part of the challenge is that they cannot prove the effectiveness of their solutions; the only option is to try for yourself, and for many of them it's a black box. We have no clue what is inside; all we have is a trial and some marketing blurb.

I've been spoiled in the professional audio world with analysers, so I can "see" exactly what certain tools are doing, using visual analysers for things like EQs, compressors, and limiters. What I had in mind was some kind of analysis tool that, by sending an impulse through a binaural solution, would give me a 3D chart - by frequency, time, level, and stereo spread - to demonstrate the scope of the solution. Just by comparing charts, I could see which settings were doing what, and compare solution options. No more voodoo and black boxes. Take CanOpener, to single out one example - nothing against CanOpener in particular, as the commercial tools are all like that: claims and anecdotes in black boxes. No one except the developer actually knows what's going on in there.

I also have in mind parallels such as the intelligibility metrics used for speech-based solutions, and metrics like RT60 for measuring reverberation in a space. That is, if we could turn the analysis of crossfeed/binaural solutions into a visualisation format with metrics, it would enable them to be compared. And it would not matter if a solution is a black box: the analyser would do its thing and objectively spit out the results, similar to how speakers, headphones, and IEMs are measured. That will be a great day - no more pulling the wool over our eyes, cos we can just look at the specs and stats and decide which solution best meets our requirements in cost and/or effectiveness. Perhaps some sort of waterfall combined with the kind of 360-degree radial charts used for microphones and speakers, so we can see the time domain as well as frequency, virtual distance, and the 360 degrees of where sound is virtually emanating from. Hope I'm not asking for too much. It would drastically reduce the time one spends evaluating immersive tool promises with only a subjective measurement tool - our ears.

Ah, lest I forget, I did try out a free solution that holds a bit of promise: Dolby Atmos Composer Essential, which has binaural tools, so I could achieve stereo-to-binaural. But the CPU utilisation was crippling my laptop, so I will need to delay any further investigation until I've migrated to a workstation, which is at the search/define-specs-before-I-buy stage. And then I discovered that, with the exception of Reaper (my DAW), the other major DAWs like Cubase, Logic, Nuendo, and Studio One have not just support for immersive audio but typically also deliver a binaural output. So if I had been using any of these, that would have been job done - no need to search for anything, it would be bundled in the product. Hopefully something for binaural listening will soon be bundled in Reaper. That would be great.

Super conversation. We speak again soon. And by then, I would have been able to devote time to some more "immersive technology trials".
 
Found time to re-examine the crossfeed solution space. I migrated to a more powerful CPU, so running Fiedler's Atmos Composer Essential no longer maxes out the CPU as it did on my laptop (I suspect my laptop needs a re-install). Anyway, no more CPU issues with Composer Essential, which is a free tool.

I have found a way to objectively present the frequency changes that each crossfeed solution confers on the audio. Hopefully I'll also be able to find a way to describe the time-based behaviour, if any, for each crossfeed option at a later point in time.

The measurements below focus only on presenting examples of the changes to frequency response and/or level.

--------------

Here's the 1st example. The blue line is the mid and the pink line is the sides.

This is using a chain: the IEM Stereo to Ambisonic plugin converts to 1st-order Ambisonics, then Audio Brewers' Ab Decoder Light converts the 1st-order Ambisonics to "binaural".

1741898067207.png


It is a simple implementation which reduces the level of the sides by just a bit over 2 dB in comparison to the mids. That's all it does: no frequency changes, no delays.
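The mid/side trick measured above is easy to reproduce in code. A minimal sketch in plain Python (samples as lists; the ~2 dB side attenuation is taken from the measurement, the function name is mine):

```python
def ms_narrow(left, right, side_att_db=2.0):
    """Narrow the stereo image by attenuating the side (L-R) component
    relative to the mid (L+R) - which is all this first chain appears to do."""
    g = 10 ** (-side_att_db / 20)  # ~2 dB -> linear gain ~0.794
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2
        side = (l - r) / 2
        out_l.append(mid + g * side)
        out_r.append(mid - g * side)
    return out_l, out_r
```

A mono (mid-only) signal passes through untouched, while a hard-panned signal bleeds slightly into the opposite channel - exactly the "level change only" behaviour seen in the plot.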

---------------

Next is the Fiedler Dolby Atmos Composer Essential, which does a decent job of converting stereo to binaural - with a MID placement, which we can interpret as mid-field, as in emulating speakers in a mid-field position.

1741898949753.png


--------------------

Next is the Fiedler Dolby Atmos Composer Essential, also binaural, but on the NEAR setting, i.e. simulating a nearfield speaker position. I found this to be the most intelligible setting for the Fiedler solution.

1741899331857.png



-----------------------
The next one is from the BS2B crossfeed - the version coded by Liqube, using the Liqube preset. The key thing one notices is the attenuation of low frequencies below 2 kHz, by upwards of 9 dB, in the mids, along with attenuation of the high frequencies in the mids.

1741899973328.png


I could put up a few more of these, but it gets a bit boring, once you get the hang of it.

The key issue here is that there is no consensus on exactly how to implement the crossfeed EQ attenuation. Every developer does it their own way, and different presets yield different kinds of attenuation.

There is also habituation - the ear adjusts. So one approach is never clearly better than another, just different.
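For reference, the general recipe most of these tools vary is the same: feed a low-passed, slightly delayed, attenuated copy of each channel into the opposite ear. A minimal sketch in plain Python - the cutoff, delay, and feed level are illustrative defaults of mine, not any particular product's preset:

```python
import math

def one_pole_lowpass(x, fc, fs):
    """First-order IIR low-pass with cutoff fc (Hz) at sample rate fs."""
    a = math.exp(-2 * math.pi * fc / fs)
    y, state = [], 0.0
    for v in x:
        state = (1 - a) * v + a * state
        y.append(state)
    return y

def crossfeed(left, right, fs=48000, fc=700, delay_us=300, feed_db=-4.5):
    """Each ear receives its own channel plus a low-passed, delayed,
    attenuated copy of the opposite channel (BS2B-style topology)."""
    g = 10 ** (feed_db / 20)
    d = max(1, round(fs * delay_us / 1e6))  # interaural delay in samples
    feed_l = [0.0] * d + one_pole_lowpass(left, fc, fs)
    feed_r = [0.0] * d + one_pole_lowpass(right, fc, fs)
    out_l = [s + g * f for s, f in zip(left, feed_r)]  # zip truncates to input length
    out_r = [s + g * f for s, f in zip(right, feed_l)]
    return out_l, out_r
```

Because the fed signal only carries low frequencies, the summed result shifts the low-frequency balance between mid and side - which is the kind of difference the plots above make visible, with each developer choosing their own cutoff, delay, and gain.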

----------------------------------------------------------
One final one, which is based on using simple MID-SIDE EQ.

1741900633300.png
 

Subjectively, the Fiedler Audio solution adds spaciousness compared with some other options, and sounds OK. I feel the distortion/comb filtering on these is excessive, and the result of whatever room simulation/HRTF they have used is somewhat artificial.

I've tried the Realphones from DSoniq again in recent times. It's been at least 4 years since I explored the initial version, and at that time I was not convinced. Now I wish I had invested in it then, cos it's now a pretty expensive luxury item, obviously being milked for all it's worth, as it is the best solution out there at this time. Lots of features: headphone correction for many popular headphones and a few IEMs, many layers of possible EQ filtering, lots of things to toggle on or off, room simulations, and many sliders for varying the strength of corrections, EQs, ambience, HRTFs, etc. Stopping short of being able to use third-party HRTFs, this is the Swiss Army knife of crossfeed tools.

It can be a bit unwieldy, with far too many options, giving you enough rope to do a ton of damage to the audio if you opt for the Pro and Ultimate versions. It has an Easy Mode, where multiple internal parameters are modified at the same time by each of two sliders.

In comparing various tools, a key issue is how one ensures that the level at which one is listening does not change when switching, cos any change in level has its own subjective impact on our opinion. And every effort to compensate for level, if there is a delay in achieving it, removes the seamless comparison our ears benefit from. It also begs the question of how one perceptually measures loudness, as each tool modifies aspects such as frequency response and maybe simulated room acoustics. I had a lot of automated help in comparing at identical perceived loudness, using Melda Production's MAGC plugin, which manages the gain compensation as I switch from one option to another in the audio chain.
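As a crude stand-in for that kind of automatic compensation, levels can be matched on plain RMS. This is only a sketch of the idea (MAGC itself works on perceived loudness, which is more involved than RMS; the function names are mine):

```python
import math

def rms(x):
    """Root-mean-square level of a block of samples."""
    return math.sqrt(sum(v * v for v in x) / len(x)) if x else 0.0

def match_gain(reference, processed):
    """Scale `processed` so its RMS matches `reference`, so an A/B switch
    between the two is not biased by a simple level difference."""
    r, p = rms(reference), rms(processed)
    g = r / p if p > 0 else 1.0
    return [g * v for v in processed]
```

In practice a loudness model (e.g. LUFS-style weighting) would be preferable, since two signals with equal RMS but different spectra can still differ in perceived loudness.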

I'll make the effort to post the measurements from RealPhones before the evaluation period is over. But my immediate subjective opinion is that it does the best job of all the tools I have listened to.

A few key points of what I think makes RealPhones a stand out option.

1. Depending on which option you buy, it has a good number of parameters for varying corrections, emulations, and ambience, and also for turning these off. Pretty much anything you can think of is there: emulating speakers, environments, various spaces, from indoors to cars. Comprehensive.

2. A key observation is that all the basic settings tend to introduce a room-tilt EQ, even with all other options turned off. So clearly this is a potential advantage one needs in listening, which in my opinion explains why some prefer Harman tunings. This tilt makes for an easier listen.

3. I now consider that a certain amount of room ambience is needed to get the best from a headphone device, over and above crossfeed, and Realphones provides this suitably. So crossfeed is good to have, but the delay in some crossfeed options like BS2B is far too short to make the audio anywhere near as realistic as proper room simulation. The typical delay in BS2B from one ear to the other is less than 500 microseconds - less than one millisecond. Adding ambience via the features in RealPhones delivered a more convincing simulation of audio in a room. I have not been able to check whether RealPhones also adds an Interaural Time Difference (ITD). That will be part of my further review: by switching off all ambience, one may isolate just the delay from an ITD implementation.
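For scale, the classic Woodworth spherical-head formula estimates the ITD a real speaker at a given angle would produce. A small sketch (the 8.75 cm head radius is a common textbook average, an assumption here):

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head ITD estimate:
    ITD = (r / c) * (theta + sin(theta)), theta = source azimuth in radians."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# A speaker at the usual +/-30 degrees gives roughly 260 microseconds,
# which is the order of magnitude a crossfeed delay tries to mimic.
```

So the sub-500-microsecond delays in tools like BS2B are plausible as ITDs for typical speaker angles; what they cannot supply on their own is the much longer (multi-millisecond) delay structure of room reflections.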

Thinking back, it's now pretty obvious we need that "space" we are so accustomed to hearing from room reflections, or even just floor reflections. Even when we are not in a room we get all manner of cues from reflections, however small, from nearby surfaces: the ground, trees, cars, etc. Even our own anatomy - torso, hands - produces reflections. So simulating some of that ambience and those reflections enhances our audio perception, and without it, headphone listening, in my opinion and experience, suffers somewhat. I have changed my mind on this: room/ambience/reflections, I find, are essential to listening with head-worn devices, cos that is more like the real world, and without them things sound strange and artificial in comparison.

If one does not have the time to cobble these features together using less expensive tools, RealPhones seems to be the way to go, for those who can justify the cost.
 
This is the frequency response from RealPhones without any headphone correction, simulating listening on headphones. So it has applied a room curve nevertheless. Here the mids and sides are identical - no effort is made to roll off bass in the sides - so the measurement tool displays only one line for both.

1741946836181.png



Next is Realphones with room and speaker emulation turned off and HRTF switched on. So this is similar to using simple EQ filters to implement the crossfeed. The crossfeed does get a lot more complex when one reduces the speaker Angle parameter in RealPhones, so there is likely a bit more than just simple parametric EQ being applied.

1741947741292.png


An example from Realphones with a reduced speaker angle.

1741949466455.png



This next one is Realphones simulating a NORMAL, i.e. nearfield, speaker. So crossfeed, i.e. HRTF, is ON, and my guess is there's some comb filtering from the acoustic simulation. The downward FR slope prevalent in most Realphones presets is present here too. This is akin to the downward tilt prevalent when tuning speakers in a real room, and the tilt continues all the way to the highest frequencies.

1741947041055.png


This next one is from Fiedler (which I posted earlier), reposted here for comparison. It is definitely flatter than the Realphones, and sounds brighter and harsher on listening, making the Realphones above a more comfortable listen.

1741948248317.png



So from a frequency-response point of view, it's a lot easier to understand what exactly these various crossfeed solutions are doing to the audio, and to appreciate their results and the differences between them.

So the only possible outstanding item is how they behave in the time domain. That may take me a while to decipher, cos I am not aware of any tools for analysing this aspect with ease; the effort would need a combination of tools and therefore time. Not sure if or when I'll get round to that. Thankfully it's pretty easy, and consistently so, to objectively review any crossfeed solution and appreciate the similarities and differences between them. I'm pretty pleased with the results of this review, and these are results that others can independently verify using the same or similar analysis tools.

I've been using DDMF Plugin Doctor to conduct this analysis. Massive credit to Paul Third, the YouTuber who was the spark and gave me an excellent clue for this revised investigation into crossfeed tools.
 

Stopping short of being able to use third party HRTF's this [Realphones from DSoniq] is the swiss army knife of crossfeed tools.

This may be, but if you want to cut down a tree, all one hundred and fifty tools of the Swiss Army knife will not help you much; what you need is an axe, even if it is a small one.
Using a personalised HRTF will make all the difference in 99.9% of all cases.
Not only are your ears different than anybody else's, your left ear is probably different from your right ear, too. So I would not count on any off the shelf solution to provide realistic sound for you.

I tried some solutions for crossfeed and virtualisation with generic HRTFs with some success, but it never sounded "right" to me - always a bit "foggy", and instruments and singers were somewhat contourless, broadly drawn out, like a flat poster image of the "real thing".

Crossfeed and room sim are two different things. With crossfeed you only make the image narrower.
Try out this one @OK1: https://www.dear-reality.com/products/dearvr-monitor
It's the best I have heard so far.
I tried the trial, but the AU unit does not work in Audio Hijack. There are absurd noises and distortions (and Hijack is otherwise working perfectly). This does not give a favourable impression of the product at all.
Judging from what I can hear through this dominating layer of noise, it seems to be a useful room simulation. But as it uses a generic HRTF, the results are accordingly. I get better results with measurements from my own ears.
 
This may be, but if you want to cut down a tree, all one hundred and fifty tools of the Swiss Army knife will not help you much; what you need is an axe, even if it is a small one.
Using a personalised HRTF will make all the difference in 99.9% of all cases.
Not only are your ears different than anybody else's, your left ear is probably different from your right ear, too. So I would not count on any off the shelf solution to provide realistic sound for you.

I tried some solutions for crossfeed and virtualisation with generic HRTFs with some success, but it never sounded "right" to me - always a bit "foggy", and instruments and singers were somewhat contourless, broadly drawn out, like a flat poster image of the "real thing".


I tried the trial, but the AU unit does not work in Audio Hijack. There are absurd noises and distortions (and Hijack is otherwise working perfectly). This does not give a favourable impression of the product at all.
Judging from what I can hear through this dominating layer of noise, it seems to be a useful room simulation. But as it uses a generic HRTF, the results are accordingly. I get better results with measurements from my own ears.
When I made the comment about using third-party HRTFs, that was from a reality perspective: like most listeners, there's almost no chance we will ever have our own personalised HRTF like you, unless something changes drastically in the cost and time required to measure HRTFs.

You are an exception. It's going to take a while for the rest of us to catch up with you. So for a lot of others, the next best thing is trying out a few third-party HRTFs from other ears to find one they like, or using one of the synthesized HRTFs not based on measurements of any specific human being.
 
When I made the comment about using third-party HRTFs, that was from a reality perspective: like most listeners, there's almost no chance we will ever have our own personalised HRTF like you, unless something changes drastically in the cost and time required to measure HRTFs.
You make it sound like I am some elite guy from the happy few. Well, I am not.

I am just a guy with a microphone (and an audio interface). And the mic is a cheap one.
There is nothing magical about an HRTF, at least not for audio listening purposes. A "true" HRTF, measuring the sound at the eardrum is a different thing, but I just copied the process the Smyth Realiser employs. That is to measure the sound at the blocked ear canal with a small electret. Once for a speaker and then for headphone(s).
The electret is a PUI capsule (6mm) from Mouser and connected to a Motu M2 via a SimpleP48 https://potardesign.com/simple-p48-for-electret-mic-capsules/
Then you use the IR from the speaker measurement for convolution and from the headphone measurement you create an inverse filter for EQ.
You can do that for direct sound only (right speaker -> right ear and vice versa) and this is already quite amazing. Or you create a "true stereo" solution with the crossed paths (right speaker -> left ear and vice versa) with the appropriate delay.
That is all you need. (Well almost ;)
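The direct-only vs "true stereo" distinction above boils down to whether two or four impulse responses are convolved. A minimal plain-Python sketch of the signal flow (a real implementation would use FFT convolution; the IR names are mine):

```python
def convolve(x, h):
    """Direct-form convolution of signal x with impulse response h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def true_stereo(left, right, ir):
    """Render stereo through four measured paths:
    ir['LL'] left speaker -> left ear, ir['RL'] right speaker -> left ear, etc.
    Direct-only rendering is the special case where 'LR' and 'RL' are zero."""
    ear_l = [a + b for a, b in zip(convolve(left, ir["LL"]), convolve(right, ir["RL"]))]
    ear_r = [a + b for a, b in zip(convolve(left, ir["LR"]), convolve(right, ir["RR"]))]
    return ear_l, ear_r
```

The crossed-path IRs already contain the interaural delay if they were measured in place; otherwise it can be prepended to them as leading zero samples.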

The devil is in the details, of course. It took me quite a while to make all kinds of possible mistakes; I went on exhaustive learning trajectories, and I am sure there is still a lot to improve. But it is much better than the best "generic" solutions I found before.

HRTFs are just too different from person to person for one to hope for a good match by randomly trying.
Here is a collection of eight HRTFs from an EBU presentation https://tech.ebu.ch/publications/pr...al-techniques-for-personalized-binaural-sound
1742500346937.png

The fat black curve is the average of the eight individual ones. Compare any two of these and the differences are more than considerable. And these have to be close.

As an experiment I made a true binaural recording with in-ear microphones outside in the park. The result was crazy realistic: super precise and stable localisation. One could follow the footsteps of someone passing by, step by step. Then I switched the channels out of curiosity and was surprised again - all the localisation was gone! Everything sounded as if it were behind me.
And my ears' HRTFs are close (see below).
This does not happen indoors though. The reflections seem to stabilise the frontal perception. But it gives a hint as to why a foreign HRTF sounds off.

And many of the available HRTF collections are not high quality, because they were created for research, not for listening pleasure. In the end you do not want an HRTF anyway, but rather a room-related transfer function [what Smyth calls a "PRIR" (personal) and others a "BRIR" (binaural)].

Here is a summary of my path with some of the mistakes you might NOT wanna make.
First I measured in my living room from the listening position. The result was a failure: the frontal localisation of sounds worked, but the sound was really bad. The room reflections obviously were the culprit. This made me very sceptical, and I thought about possibilities to improve my room acoustics for a very long time.
It made me think that a very good room-binaural IR from a pro room would be the way to go.

I was thinking along the same lines as you and checked HRTF collections in search of one that provides frontal localisation for me. But nope.
I found the room-binaural IRs from the WDR control rooms and tried those. But you can hear the problems the small room has from bass room modes, and the bigger rooms have too many reflections IMO.

I experimented with the best room-binaural IRs I could find (https://openresearch.surrey.ac.uk/esploro/outputs/dataset/99512517702346
https://salford-repository.worktrib...ampled-binaural-room-impulse-response-dataset), which happen to be the room-binaural IRs that Smyth has included in the Realiser firmware.
This was not bad at all, but as mentioned before it never sounded "right" (spatially).
It took me umpteen rounds of attempts to modify the IR (and headphone equalisation) to improve the sound, but in the end I just got confused.

Tips:
Measure in the middle of the room in the near field (≈1m).
EQ your speakers as flat as possible (calibrated microphone needed).
Measure at an azimuth angle of 10-15°.

This way I arrived - in my acoustically fairly average living room (22sqm), after some experimenting for the best position - at this:
1742499236760.png
1742499248719.png
1742499265398.png

The first diagram is gated (5ms), so it is the anechoic HRTF (both ears). The other two are the 200ms room IR (of one ear).
By going near field, the reflections are attenuated quite a bit, and the first ones are more than 20dB down, but they still have quite an effect and they could be smoother. (If you look at the time slices, some comb filtering is obvious, and listening to the reflections only is quite sobering.)
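Gating an IR to isolate the anechoic part, as in the first diagram, is just truncation plus a short fade-out to avoid a hard spectral edge. A sketch of the idea (the fade length is an illustrative choice of mine, not what any particular tool uses):

```python
import math

def gate_ir(ir, fs, gate_ms=5.0, fade_ms=0.5):
    """Keep only the first gate_ms of an impulse response (the direct sound),
    with a raised-cosine fade-out over the last fade_ms of the gate."""
    n_gate = min(int(fs * gate_ms / 1000), len(ir))
    n_fade = int(fs * fade_ms / 1000)
    out = list(ir[:n_gate])
    for i in range(n_fade):
        k = n_gate - n_fade + i
        if 0 <= k < len(out):
            out[k] *= 0.5 * (1 + math.cos(math.pi * (i + 1) / n_fade))
    return out
```

With a 5ms gate at 48kHz the kept window is 240 samples, long enough for the direct sound and head/pinna response but short enough to exclude the first room reflections at ~1m.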
However, the result is amazing. Using these IRs for convolution brings great and realistic localisation immediately, even if the headphone EQ is way off. I was surprised.

This is the result of the measurement of my Sennheisers with the in-ear microphones.
1742501825292.png

From this I created filters that invert this FR in REW (individually for each ear). The nulls should not be (fully) EQed, of course.
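The "don't fully EQ the nulls" rule can be expressed as a cap on the inverse filter's boost. A sketch operating on a response in dB (the 6 dB cap is an illustrative choice, and real tools like REW apply this per frequency band with smoothing):

```python
def inverse_eq_db(response_db, max_boost_db=6.0):
    """Invert a measured response (in dB relative to the target) into
    correction gains, capping the boost so sharp nulls are not fully
    filled back in - a full inverse of a deep null rings and is very
    sensitive to headphone repositioning."""
    return [min(-g, max_boost_db) for g in response_db]
```

Cuts are left uncapped here; it is the large boosts at nulls that are risky, since the null moves with every reseating of the headphone.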

The combination of convolution and EQ does it for me.
Of course some more EQ is needed, as the flat response from the speaker in the near field sounds too bright, so a "room curve" has to be applied. And some adjustment for speaker position (not the usual stereo triangle) and so on might be needed.
But this is minor stuff compared to the effect of the personal "HRTF".

And I use one more "trick" from the Smyth cookbook.
Obviously my room has problems below 400 Hz, which is no surprise, as room modes become more uneven below that, and the position in the middle of the room does not help. The Realiser has a "direct bass" function that blends to the direct stereo signal below a crossover frequency. In effect it is like a crossover to a perfectly anechoic bass system (like outdoors). Smyth does this at 80Hz or 120Hz. I do it at 300 Hz, as there are no advantageous effects from reflections in this range. This gives a bass reproduction you cannot get in a room.
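The "direct bass" idea can be sketched as a complementary split: low-passed untouched stereo plus high-passed convolved signal, with the high-pass formed by subtraction so the two halves sum back exactly. The crossover at 300 Hz is as described above; the first-order filter is my simplification of whatever the Realiser actually uses:

```python
import math

def one_pole_lp(x, fc, fs):
    """First-order IIR low-pass with cutoff fc (Hz) at sample rate fs."""
    a = math.exp(-2 * math.pi * fc / fs)
    y, state = [], 0.0
    for v in x:
        state = (1 - a) * v + a * state
        y.append(state)
    return y

def direct_bass(direct, virtualized, fs=48000, fc=300):
    """Below fc use the untouched (direct) stereo signal, above fc the
    room-simulated one. The high-pass is x - lowpass(x), so if both
    inputs are identical the output reconstructs the input exactly."""
    low = one_pole_lp(direct, fc, fs)
    high = [v - l for v, l in zip(virtualized, one_pole_lp(virtualized, fc, fs))]
    return [a + b for a, b in zip(low, high)]
```

The subtraction-based high-pass guarantees the two branches are complementary, so the crossover itself adds no magnitude error; the audible change comes only from swapping which signal feeds the bass region.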
 

A very good crossfeed is the SPL Phonitor.

Mikhail Naganov reverse-engineered it and published the results on his blog, Electronic Projects, in "Reconstructing SPL Phonitor Mini Crossfeed with DSP" (2017).

I use it with CamillaDSP; more information here.
 
This (and more) is now a free download. And surprise: this version (not a trial version) even works as an AU unit on my Mac.
Not bad.
Yeah, a pretty interesting development happened in the last 24 hours: many Dear Reality products are now FREE. Not sure how long this will last, but it makes me wonder - why?

But it does give us all the opportunity to evaluate these over a long period.
 
This (and more) is now a free download. And surprise: this version (not a trial version) even works as an AU unit on my Mac.
Not bad.

about HRTF:
I only have my own experience as a sample, but I had to play with the sliders to make it work for me,
and only the large room sounds good to me.

1743517094324.png
 

I took a look at the plugin, interested in how it would work.

In a nutshell, the plugin convolves with an impulse response that is built from a peak, four strong reflections, and some "grass".
AMBIENCE increases the amount of "grass".
FOCUS increases the amount of reflection (increasing the corresponding peaks).
And it changes the FR too, increasing the boost of the ear peak.

The pics are an overlay of
grey/black: Ambience 00 - Focus 30
red: Ambience 00 - Focus 100
green: Ambience 64 - Focus 100 (your preference)

First I investigated the Mix Room A as that was the room I liked most.
dearVR_FR.png
dearVR_IR.png

I added the smoothed (psychoacoustic) FR to see the tonality effect more clearly.
Increasing focus from ”30” to ”100” gives an additional boost at 3.7kHz of about 2-3dB.

The peaks in red and green are right on top of each other, the only difference being the amount of grass.
The very strong (and early) reflections [with focus 100 the first reflection is only 6dB down] will produce considerable comb filtering. Whether this is an audible problem I cannot say, as you cannot switch it off without changing a lot of other things.
These strong and early reflections are certainly not what you usually want in your listening-room IR.
And those peaks do not have an FR similar to the direct sound, but are strongly coloured.

For some odd reason they did not compensate the volume for parameter changes, so a change will always sound different on loudness alone [about 2dB(!) for focus 30->100].
For another odd reason, the bass below 100Hz changes with the choice of FOCUS.
I would say that here is quite some room for improvement.

In respect of the comparison with a personal solution I guess you just have to be (very) lucky that your ears conform to the frequency curve that was chosen here. Then it might work well (within the constraints above) but otherwise…

Now on to the Large Mix Room (your preference)
Same pictures and parameter adjustments as before.
dearVR_Large_FR.png
dearVR_Large_IR.png


Well there is more going on.
The peaks have longer delays (larger room) and the grass is somewhat shaped (FWIW).
Otherwise it is similar, but from the later peaks one gets more comb filtering and a wobbly FR in the lower mids. Not something I am very keen on.
The FR starts to long for some serious EQ to be neutral, and the volume difference is even stronger here.
But well, it's free!

EDIT: I forgot to mention that all these measurements above are for mono signals, so the sound from left and right speaker channels is added (at the left ear). This is the most relevant situation as one wants phantom center sources to sound "right" in the first place. But this thread is about crossfeed and so it is interesting how this is implemented in dearVR.
I checked this for one case only (it will probably be very similar in other cases).
I chose the smaller MixRoom A again (this is not about room effects but about the head and the ears) and parameters Ambience00 Focus100.
Here is the result for the direct path (green, left speaker to left ear) and the crossed path (blue, right speaker to left ear):

Crossfeed.png

That looks quite close to a very simple solution where the crossfeed is nothing but the direct signal attenuated by about 6dB.
The actual FR at the left ear for signals from the right speaker would look quite different: there will be at least significant head shading, probably a lot more than that.
And a 6dB level difference in bass is quite a lot. If bass has significant level differences in the recording, this might sound off. Most bass (in particular when mixed on a console) is close to mono though.
 