
Has anyone tried using this crossfeed setup?

From what I see, this is not possible, or at least not straightforward: JamesDSP features a single convolution stage, but ASH requires two stages in a row, one for the BRIR and one for the HpCF.
The HpCF is a simple EQ filter in .wav format; JamesDSP supports the (additional) FIR convolution via the "ViPER DDC" option.
The HpCF .wav file eventually needs to be renamed to ".vdc" though
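Since convolution is associative, the two ASH stages could in principle be pre-combined offline into a single impulse response that fits JamesDSP's one convolver slot. A minimal numpy sketch, with synthetic one-channel IRs standing in for the real ASH files:

```python
# Sketch: JamesDSP offers one convolution slot, but convolution is
# associative, so ASH's two FIR stages (BRIR, then HpCF) can be merged
# offline into a single impulse response:
#   (x * brir) * hpcf == x * (brir * hpcf)
# Synthetic one-channel IRs stand in for the real ASH files here.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)      # some audio signal
brir = rng.standard_normal(256)    # room/head response (placeholder)
hpcf = rng.standard_normal(64)     # headphone correction (placeholder)

two_stage = np.convolve(np.convolve(x, brir), hpcf)  # ASH's cascade
merged_ir = np.convolve(brir, hpcf)                  # pre-combined IR
one_stage = np.convolve(x, merged_ir)                # single convolver slot

print(np.allclose(two_stage, one_stage))  # True
```

In practice one would do this per ear with the actual stereo files, normalize the result against clipping, and save it as a single .wav for the convolver.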

Also check out the amazing www.autoeq.app tool, which allows you to save your preferred EQ correction in an app-compatible format
 
And with HRIR only there will actually be no "space" simulated at all

I doubt that. I know some say that in an anechoic chamber there is no space, but this is hard to believe, since the main cues, like interaural time difference, are still present
 
I know some say that in an anechoic chamber there is no space, but this is hard to believe, since the main cues, like interaural time difference, are still present
Well, perhaps I used the wrong word. By "space" I meant something more physical, like size and dimensions and characteristics like that. What acoustic "size" does an anechoic chamber have? If it is truly anechoic, it should sound like speakers and listener are suspended in mid-air. So yes, there is "space" but no room (or only the little bit from the recording studio/hall that the engineer mixed into the tracks).
As I wrote, I tried it and got an out-of-head experience and sound directions, of course. But localization is rather fuzzy, probably because the HRIR is from somebody else's ears.
With one's own HRIR it might be different. I experimented with in-ear microphones and got amazing localization, even outdoors in the park. But even with one's own HRIR there will be no listening room.
 
Well, perhaps I used the wrong word. By "space" I meant something more physical, like size and dimensions and characteristics like that. What acoustic "size" does an anechoic chamber have? If it is truly anechoic, it should sound like speakers and listener are suspended in mid-air. So yes, there is "space" but no room (or only the little bit from the recording studio/hall that the engineer mixed into the tracks).
As I wrote, I tried it and got an out-of-head experience and sound directions, of course. But localization is rather fuzzy, probably because the HRIR is from somebody else's ears.
With one's own HRIR it might be different. I experimented with in-ear microphones and got amazing localization, even outdoors in the park. But even with one's own HRIR there will be no listening room.

OK, I thought you meant that speakers in an anechoic chamber would sound like headphones, because I have read that a lot already.
The problem with room simulation is: what would the perfect room be? Certainly it shouldn't color the sound, but if it doesn't, is it really a room?
One could add reverb to the roomless binaural IR; early-reflection algorithms also exist.
But I personally think the best room is no room, especially over headphones, where we don't expect one. Do we really want the room? Or do we want that floating stereo triangle?
Is there really that little ambience in the tracks? Do you miss ambience when listening to music via headphones?
 
But I personally think the best room is no room, especially over headphones, where we don't expect one. Do we really want the room? Or do we want that floating stereo triangle?
I found it rather tricky with "no room".
I see the problem with room coloration and the question about the "right room". An anechoic HRIR looks like the logical, purist answer.
But stereo is a system built on psychoacoustic effects and approximations. Anechoic stereo (speakers as well as headphones+HRIR) produces, among other things, comb filtering with mono signals. And this is not the same in all recordings, as it depends on the recording technique (X-Y versus A-B etc.)
And stereo is mixed in 99% of all cases with listening room reverberation in mind (a form of hopefully benign "spatial distortion").
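The comb filtering from summing both speaker paths at one ear is easy to reproduce numerically. A toy sketch, assuming a round-number interaural delay of 0.27 ms for roughly 30-degree speakers (an illustrative value, not taken from any measurement here):

```python
# Toy model of the comb filter: at the left ear a mono signal arrives via
# the left speaker directly and via the right speaker delayed by roughly
# the interaural time difference tau. Summing both paths puts notches at
# f = (2k+1) / (2*tau). tau = 0.27 ms is an assumed round number for
# ~30-degree speakers, not a measurement.
import numpy as np

fs = 48000
tau = 0.27e-3                       # assumed interaural delay
d = round(tau * fs)                 # 13 samples at 48 kHz

ear = np.zeros(4096)
ear[0] = 1.0                        # ipsilateral (direct) impulse
ear[d] += 1.0                       # contralateral impulse, delayed
mag = np.abs(np.fft.rfft(ear))
freqs = np.fft.rfftfreq(len(ear), 1 / fs)

lo = freqs < 4000                   # search below 4 kHz
first_notch = freqs[lo][np.argmin(mag[lo])]
print(f"first notch near {first_notch:.0f} Hz")   # ~1850 Hz
```

The first notch lands in the same ~1.8 kHz region discussed below, which is at least consistent with an ITD-driven comb.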

When I built an anechoic crossfeed with the Neumann HRIR and Anaglyph, I was not satisfied with the spectral balance. Given this FR (mono) of the HRIR, that does not come as a surprise.
[Attachment: KU100 HRIR-Anaglyph raw MONO_RL-L.jpg]

[These FR are not smoothed; it is the HRIR that has been smoothed.] This is the output of R+L at the left ear.

So I equalized in such a way that a mono signal (L+R) is reproduced with a neutral FR.

[Attachment: KU100 HRIR-Anaglyph EQ MONO_RL-L.jpg]

But when listening, the result had an extreme emphasis in the midrange.
I checked the FR of the left and right (speaker) channels separately (again for the left ear) and found this.
[Attachment: KU100 HRIR-Anaglyph EQ CHANNELS-L.jpg]

What the heck?!?
This is the aforementioned comb filtering of course. The bumps (1.8 kHz) do cancel in a mono signal, but with music (stereo) it sounded unbearable.
It became clear to me why the FR of the HRIR looked so warped.
Now, how would one EQ that? I chose to flatten out the bumps (more or less, of course, and with the same filters in both channels) and try. I compared the spectral balance of the EQed HRIR with the direct signal and tried to match it. Now the mono FR is off, but it sounds OK. The result is instructive: it depends on the recording. Sometimes the spectral balance is very close to "direct", sometimes not. With mono recordings it is not.
So there is no "right HRIR" (or EQ) either, so it seems. I guess stereo is just not (fully) compatible with binaural.
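The bind described above can be shown in a few lines: an EQ that flattens the mono (L+R) comb response is the inverse comb, so a single channel, which on its own reaches the ear flat, gets the inverse comb stamped onto it. The delay and the regularization floor below are illustrative values, not derived from the actual measurements:

```python
# Toy illustration of the EQ dilemma: flattening the mono (L+R) comb
# response means applying the inverse comb, which then appears as ripple
# on a single channel heard alone. d = 13 samples and the 0.1 floor are
# illustrative values only.
import numpy as np

fs, d, n = 48000, 13, 4096
freqs = np.fft.rfftfreq(n, 1 / fs)
theta = 2 * np.pi * freqs * d / fs

mono = np.abs(1 + np.exp(-1j * theta))   # L+R at one ear: comb filter
eq = 1 / np.maximum(mono, 0.1)           # inverse EQ, regularized floor

# The same EQ applied to one channel alone (a flat path) leaves this ripple:
ripple_db = 20 * np.log10(eq.max() / eq.min())
print(f"per-channel ripple after mono-flattening EQ: {ripple_db:.1f} dB")
```

Even with the regularization, the single-channel response swings by tens of dB, which matches the "extreme emphasis" heard on stereo material.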

And then there is the mentioned problem with the "wrong" ears. Anaglyph includes several HRIRs from individuals (measured at IRCAM). They all sound off to me. In REW the responses look quite ragged, with rather big left-right differences. Maybe that is the reason.
The KU100 HRIR from Cologne is quite symmetrical, processed, somewhat smoothed, and to me it sounds much better. But still with somewhat fuzzy localization. I guess that fuzziness might disappear with my own HRIR.

One could add reverb to the roomless binaural IR; early-reflection algorithms also exist.
I like the idea of a synthesized room with ideal characteristics that can be tweaked. But first reflections might not yet do the trick.

Anaglyph has a room module that can add a collection of rooms with some adjustment options. I only did a very short check, and to me it did not seem to give the same spatial effect (realism) as the WDR BRIR. So I did not investigate further; maybe it would be worth it.

Actually, the WDR Controlroom1 does not leave much to be desired for me. It is rather dry (I like dry) but certainly still has enough reverb to reasonably eliminate the stereo problems from above. The difference between the mono FR and the single-channel FR is much smaller. [Maybe some of the directivity discussion is about that?]
It sounds beefier ;-) For me, coloration (in comparison to direct) does not change that much between recordings. And localization is definitely better for me, too.
So I would say: yes, I want the room! To me it is just better.
[Though I do not like the idea of adding "spatial distortion" at all, however benign it may be. I would prefer a purist solution.]
 
Is there really that little ambience in the tracks? Do you miss ambience when listening to music via headphones?
Well, yes and no.
As I wrote above it is always difficult to get something spectrally similar without the room part, at least for me.
But when that is the case and I can compare, the anechoic signal is not bad at all. Much better than the shoebox-in-the-head of direct stereo, for sure. It is very dry though, always sounding somewhat anemic.
And with certain recordings (Mozart operas) it is as if the scene is indeed "suspended in mid air".
To me the anechoic solution sounds best with recordings with a lot of reverb (big hall, live).
I could live with that for sure.

That is as long as I do not switch to the Control Room BRIR. That is a different world for most recordings. Afterwards I know too well what I was missing. And it is more robust, too.
 
Thanks for mentioning this. A good room-derived crossfeed setup for headphones has always interested me.

But I'm not getting acceptable results from this. Most of the rooms are bass monsters, and all of them are "too alive". I suspect that is because the room response depends on SPL, and quite nonlinearly. The proposed method applies proportionally the same effect to all of the audio content, loud and quiet. Although that might not be too bad, what if the measurements were made at 94 dB ("fully" exciting the materials in the room) and your listening volume is 86 dB (you don't expect a "fully" excited room)? In any case, all audio content is of course constantly varying in level across the whole spectrum, while the room response represents the room's "reaction" to a single dB level throughout the spectrum.

I'm going to try to figure out whether I could add some nonlinearity to this setup and see if the results are better.
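One possible (entirely hypothetical) shape for such a nonlinearity: crossfade between the dry signal and the convolved room signal per block, based on short-term level relative to an assumed measurement level. The mapping and thresholds below are invented for illustration; a real version would also smooth the mix across block boundaries to avoid zipper noise:

```python
# Hypothetical level-dependent room: crossfade between dry and convolved
# (wet) signal per block, based on short-term RMS relative to an assumed
# measurement level. ref_dbfs and the 24 dB ramp are invented numbers.
import numpy as np

def level_dependent_room(x, brir_ch, ref_dbfs=-14.0, block=1024):
    wet = np.convolve(x, brir_ch)[: len(x)]
    out = np.empty_like(x)
    for i in range(0, len(x), block):
        seg = x[i : i + block]
        rms_db = 20 * np.log10(np.sqrt(np.mean(seg ** 2)) + 1e-12)
        # quiet blocks get no room, loud blocks the full measured mix
        mix = np.clip((rms_db - (ref_dbfs - 24.0)) / 24.0, 0.0, 1.0)
        out[i : i + block] = (1 - mix) * seg + mix * wet[i : i + block]
    return out

fs = 48000
t = np.arange(fs) / fs
brir_ch = np.array([1.0, 0.5, 0.25])         # toy 3-tap "room" (placeholder)
quiet = 1e-4 * np.sin(2 * np.pi * 440 * t)   # ~ -83 dBFS: stays fully dry
loud = 0.9 * np.sin(2 * np.pi * 440 * t)     # ~ -4 dBFS: gets the full room
```

Whether a simple wet/dry fade captures the real nonlinearity of an excited room is an open question; it is just one cheap thing to try.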
 
I am looking at all this information and would really like to find a good substitute for Darin Fong's OOYH. I have a Mac, and my setup is this: Mac > Topping D90SE > Stax SRM-323S > Stax SR-009. I am technically unsophisticated, which is why OOYH was so good for me. I have tried contacting Darin Fong several times but have not received a response, so I assume that OOYH is an orphan technology. I don't have speakers; the above setup is my only way of listening to music. I downloaded SoundSource at the recommendation of a poster here and am happy with it. My Stax SR-009s are on a Harman curve and I am happy with that. I don't want to invest in a Smyth Realiser at this time, but I would like a simple plug-in like I had with OOYH. Any help would be appreciated.
 
I've been using this excellent set of equalization/convolution files given here: https://github.com/ShanonPearce/ASH-Listening-Set for the past few days and it's been the best headphone experience bar none across the multiple sets that I own.

For my PC setup it's already as simple as it gets, but I'm wondering whether a portable version would somehow be possible, where I think this would really shine. The processing power required isn't much (a fraction of a core on even my old PC). A modern phone could blow it away if there were an app that could do it, but some kind of DAC/amp that does it directly would probably be best.
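A rough back-of-envelope supports the point about processing cost: FFT-based (overlap-add) convolution of even a one-second BRIR needs only tens of MFLOP/s, versus billions of multiply-accumulates for direct FIR. The operation counts below use the common ~5·N·log2(N) FFT estimate and are deliberately coarse:

```python
# Coarse operation counts: direct FIR vs. FFT overlap-add convolution of
# an assumed 1-second BRIR at 48 kHz. The 5*N*log2(N) FFT cost is the
# usual textbook estimate; real numbers vary by implementation.
import math

fs = 48000
taps = fs * 1                        # assumed 1-second BRIR
direct_macs = fs * taps              # one MAC per tap per output sample

fft_len = 2 * taps                   # overlap-add with block size = taps
# forward + inverse FFT per block, plus a complex multiply per bin
per_block = 2 * 5 * fft_len * math.log2(fft_len) + 6 * fft_len
blocks_per_sec = fs / taps           # = 1 block per second here
fft_flops = per_block * blocks_per_sec

print(f"direct: {direct_macs / 1e9:.1f} GMAC/s")   # ~2.3 GMAC/s
print(f"FFT:    {fft_flops / 1e6:.0f} MFLOP/s")    # ~16 MFLOP/s
```

Tens of MFLOP/s is trivial for a modern phone SoC, so the bottleneck for a portable version would be software and latency, not raw compute.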

The one I use is the small broadcast room, which is somehow the most realistic thing I've ever heard through a headphone. This easily beats HeSuVi and the old OOYH plugin.
Can this be used with a Mac?
 
The fact that headphone enthusiasts even exist already shows it, no? If you are an enthusiast, then you don't hear anything there to fix.
But also, if you read about "binaural room simulation" in hi-fi forums, you find that most don't like it.
Even the much lesser crossfeed solutions are niche.
And both of these are nothing new; they have actually only found partial acceptance among audio engineers
I tried room sims for mixing but found they alter the FR too much to be reliable in that context. Listening for pleasure is another thing of course.
 
I have to be honest here and feel I have been excluded from these "binaural room simulation" discussions. Please see my previous posts.
 
I downloaded the trial version of dearVR_MONITOR.pkg and can't find it on my Mac. Where do I find it? The instructions say to insert this into the master bus of my DAW? What does this even mean? Where is the master bus, and where is the DAW? I have GarageBand, which I have never used. Is this the DAW or the master bus? I need straightforward instructions.
 
I downloaded the trial version of dearVR_MONITOR.pkg and can't find it on my Mac. Where do I find it? The instructions say to insert this into the master bus of my DAW? What does this even mean? Where is the master bus, and where is the DAW? I have GarageBand, which I have never used. Is this the DAW or the master bus? I need straightforward instructions.
DearVR Monitor is a plugin, i.e. you will need some sort of host software to run it. I'm not into Mac, but I remember that you can use SoundSource from Rogue Amoeba for that.
Of course, you could also use some digital audio workstation (DAW) for that, but I guess that would be more complicated.
 
DearVR Monitor is a plugin, i.e. you will need some sort of host software to run it. I'm not into Mac, but I remember that you can use SoundSource from Rogue Amoeba for that.
Of course, you could also use some digital audio workstation (DAW) for that, but I guess that would be more complicated.
I have SoundSource. Would you be able to give step-by-step instructions on how to put DearVR Monitor into SoundSource? I downloaded the trial version, but when I looked for it (the DearVR Monitor), I couldn't find it. Any help would be greatly appreciated. I had Darin Fong's OOYH previously and appreciated the simplicity of it, but this has become an orphan technology, so I was looking for an alternative.
 
I downloaded the trial version of dearVR_MONITOR.pkg and can't find it on my Mac. Where do I find it? The instructions say to insert this into the master bus of my DAW? What does this even mean? Where is the master bus, and where is the DAW? I have GarageBand, which I have never used. Is this the DAW or the master bus? I need straightforward instructions.

No clue about Mac, but it runs fine using VST in foobar2000 with this add-on: https://www.foobar2000.org/components/view/foo_dsp_vst3
 
Would you be able to give step-by-step instructions
I'm afraid this is not possible, since I'm not even using a Mac, but a quick Google search reveals that plugins are saved under:
"/Library/Audio/Plug-Ins/Components"
Hence, in SoundSource you need to add DearVR Monitor with "Add Effect" (I guess...); normally the plugin can then be selected from a list.
Maybe someone who is really using SoundSource with plugins can help better.
 