• Welcome to ASR. There are many reviews of audio hardware and expert members to help answer your questions.

Thread for binaural virtualization users.

I'm curious to hear more about how your process works. Are you chaining multiple SPARTA plugins? I've only used Binauraliser myself.

I took the 11 64-channel Eigenmike files that were closest to the Atmos bed and height positions (I used MATLAB to convert them to WAV first), then ran them through the Eigenmike VST encoder plugin from their web page, so they became 64-channel HOA files. I then made 12 tracks in Reaper, routed audio in, and convolved each channel except the LFE with the SPARTA Matrix Convolver, using the 64-channel HOA WAV files I had made earlier. After that, every channel is routed to the master bus, which has SPARTA AmbiBIN in 6th-order binaural mode with my SOFA file loaded, plus headphone EQ on top.

I kept the LFE separate and just used the binaural plugin to place it ahead of me in space, with a low-pass filter at 60 Hz. The speakers in the impulse responses roll off around 50 Hz, so the clean bass channel helps out. I also measured the "speakers" with REW and got a flat response from the LFE and left/right channels.
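In case anyone wants to prototype that per-channel convolution step outside Reaper, here's a rough NumPy sketch of convolving one speaker feed with a left/right impulse-response pair via FFT. The function name and toy data are my own for illustration; this isn't anything from SPARTA's internals:

```python
import numpy as np

def binauralize_channel(feed, brir_left, brir_right):
    """Convolve one speaker feed with its left/right room impulse responses."""
    n = len(feed) + len(brir_left) - 1      # full convolution length
    nfft = 1 << (n - 1).bit_length()        # next power of two for the FFT
    F = np.fft.rfft(feed, nfft)
    left = np.fft.irfft(F * np.fft.rfft(brir_left, nfft), nfft)[:n]
    right = np.fft.irfft(F * np.fft.rfft(brir_right, nfft), nfft)[:n]
    return np.stack([left, right])          # (2, n) binaural output

# toy data: 1 s of noise as the "channel", 0.1 s decaying BRIRs at 48 kHz
rng = np.random.default_rng(0)
feed = rng.standard_normal(48000)
brir = rng.standard_normal((2, 4800)) * np.exp(-np.arange(4800) / 800)
out = binauralize_channel(feed, brir[0], brir[1])
print(out.shape)  # (2, 52799)
```

In the actual chain this happens once per speaker and the twelve binaural pairs are simply summed on the master bus.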

Side bonus is that I can now cook eggs on my M1 when all this convolution is going on :) I've never gotten this machine that hot.

But! I also got a Virtuoso trial extension, and well... it does amazing things now that I use my SOFA file. The Reaper solution is fun, but the placement is much more realistic in Virtuoso. At first I thought the preset rooms in Virtuoso used different files in the background, but after a while I figured out it's the same engine, just with different parameters like size, etc. Definitely using it going forward.
 
@visualizer that's a pretty intense setup! I had wanted to try setting up some per-channel reverb with SPARTA Binauraliser, but I find I like to listen with Virtuoso for some recordings and Binauraliser for others. Both plugins sound better to me when 2-channel tracks are upmixed to 7.1 from JRiver, but I'd also like to experiment with other options such as Penteo.

What kind of tracks are you listening to? Are you routing Atmos tracks from Apple Music? AFAIK this isn't possible on Windows; I currently don't have much in the way of multichannel recordings, but I am keen to find other sources of multichannel tracks and/or ways to play back Atmos tracks with my SOFA file.

Also, did any of you folks see the new plugin "Orbit Spatial" featured on Michael Wagner's YouTube channel? It's an Atmos renderer (Mac only) that allows you to load in custom SOFA files. I really need to get a Mac at some point this year!
 
@variance Yes, I've been listening to Apple Music Atmos mixes and just soloing speakers to compare the tonality and accuracy. Orbit Spatial looks like a cool option for getting an overview of the mix. If only there were a way to get mainstream music ADM files... Though it is absolutely possible to get 12-track MP4 audio from Tidal if you need it (via "Tidal downloader" with Atmos track download turned on).
 
I've found the RWTH IKS Lab Eigenmike em64 impulse response database, with an impressive spherically arranged 36-speaker array....
I took the 11 64-channel Eigenmike files that were closest to ATMOS bed and ...
Let me ask, why this room?
From the paper:
"The results showed that the measurement room exhibits low reverberation times and high clarity, which indicate a suitable environment for simulating acoustic scenes."

Basically, the room adds nothing to the reproduction, which is what makes it suitable for simulation. In your case, bypassing the whole simulation gives the same or better result. Why didn't you pick a room where music is really performed?
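For what it's worth, the paper's "low reverberation times" claim is easy to check yourself if you grab one of the impulse responses. Here's a rough Schroeder backward-integration sketch in Python; it's my own toy implementation, verified below on a synthetic exponential decay rather than on the actual RWTH files:

```python
import numpy as np

def rt60_schroeder(ir, fs):
    """Estimate RT60 from an impulse response via Schroeder backward integration."""
    edc = np.cumsum(ir[::-1] ** 2)[::-1]        # energy decay curve
    edc_db = 10 * np.log10(edc / edc[0])
    # fit a line over the -5 dB to -25 dB range (T20) and extrapolate to -60 dB
    i5 = np.argmax(edc_db <= -5)
    i25 = np.argmax(edc_db <= -25)
    t = np.arange(len(ir)) / fs
    slope, _ = np.polyfit(t[i5:i25], edc_db[i5:i25], 1)
    return -60.0 / slope

# synthetic IR with a known decay: RT60 = 0.3 s (roughly "studio-like")
fs = 48000
t = np.arange(int(fs * 0.5)) / fs
ir = np.exp(-6.9078 * t / 0.3)  # amplitude decay giving 60 dB energy drop at 0.3 s
print(round(rt60_schroeder(ir, fs), 2))  # ≈ 0.3
```

Real rooms need octave-band filtering before the integration to be comparable to published figures; this broadband version just shows the principle.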
 
@variance I already tried mesh2SOFA, wanting to apply DFE to my previously made project. It seems to me it's not possible to skip any steps in the GUI? For example, to go straight to step 6 and only feed it the finished numcalc output?
Just wanted to update you (and anyone else reading this) that I created a new script called `sofa_mastering_tool.py` that can load an existing SOFA file and separately create mastered versions of the input file and generate tilted DFHRTF files. This way, if you have an existing SOFA file and just want either the DFHRTF or a different samplerate, you can use this tool.

EDIT: I forgot to mention that if you want to use the new DFHRTF tool, be sure to either clone the full repo or at least grab the updated `generate_extras` and `generate_sofa_outputs` files, as the previous versions don't work with the new tool.

Some of the publicly available HRTF datasets are already diffuse-field equalized, but for others, such as HUTUBS, you can use this tool to generate their DFHRTFs from their simulated results to compare with your own. I started testing with this, and the results are pretty interesting so far! It's easy to see how different HRTFs are, even starting from 2 kHz.
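For anyone wondering what a DFHRTF boils down to: the usual recipe is an RMS average of the magnitude spectra over all measured directions, per ear. A minimal NumPy sketch with synthetic HRIRs (I'm not claiming this is exactly what the mesh2SOFA tool does internally, just the standard textbook definition):

```python
import numpy as np

def diffuse_field_response(hrirs):
    """RMS-average the magnitude spectra over all directions, per ear.

    hrirs: array of shape (directions, ears, samples)."""
    spectra = np.abs(np.fft.rfft(hrirs, axis=-1))   # (dirs, ears, bins)
    return np.sqrt(np.mean(spectra ** 2, axis=0))   # (ears, bins)

# toy set: 100 random directions, 2 ears, 256-tap HRIRs
rng = np.random.default_rng(1)
hrirs = rng.standard_normal((100, 2, 256))
df = diffuse_field_response(hrirs)
print(df.shape)  # (2, 129)
```

Dividing each individual HRTF by this average is what produces the diffuse-field-equalized set.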

 
I've been experimenting with different audio processing paths:
  • Stereo → JRiver Upmix → sparta_binauraliser or APL Virtuoso
  • Stereo → Ambisonic conversion → COMPASS Binauraliser
So far I like how the ambisonic path sounds the best, but it's a colossal hassle and the settings need to be tweaked carefully to actually work. (It's very easy to cause audio glitches this way.) Let me know if anyone is interested in this and I can post more about the settings I'm using.
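To give an idea of what the ambisonic conversion step involves, here's a minimal sketch of encoding a stereo pair as two plane waves at ±30° into first-order ambisonics (ACN channel order, SN3D normalization). The real chain uses higher orders and plugin encoders; the azimuth and toy data here are just my illustration of the underlying math:

```python
import numpy as np

def encode_stereo_foa(left, right, az_deg=30.0):
    """Encode L/R as two horizontal plane waves at +/-az into
    first-order ambisonics (ACN order: W, Y, Z, X; SN3D)."""
    az = np.radians(az_deg)
    out = np.zeros((4, len(left)))
    for sig, a in ((left, az), (right, -az)):   # +az = left in ambisonics
        out[0] += sig                # W: omni
        out[1] += sig * np.sin(a)    # Y: left-right
        # Z (up-down) stays zero: both sources sit on the horizontal plane
        out[3] += sig * np.cos(a)    # X: front-back
    return out

stereo = np.random.default_rng(2).standard_normal((2, 48000))
bfmt = encode_stereo_foa(stereo[0], stereo[1])
print(bfmt.shape)  # (4, 48000)
```

The resulting 4-channel B-format stream is what a decoder like COMPASS Binauraliser then renders to the two ears.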
 

Attachments

  • Stereo2BinauralAudioChain.png (205 KB)
Let me ask, why this room?
From the paper:
"The results showed that the measurement room exhibits low reverberation times and high clarity, which indicate a suitable environment for simulating acoustic scenes."

Basically, the room adds nothing to the reproduction, which is what makes it suitable for simulation. In your case, bypassing the whole simulation gives the same or better result. Why didn't you pick a room where music is really performed?

I figured that since the RT60 value was close to that of studio spaces, it would work well as a "virtual studio".

I notice you prefer to add more reverberation to your playback, which is just a different approach. Since I produce and mix modern music, the reference mixes I listen to are all mixed in rooms with similar traits. Let me explain further.

If a mix engineer mixes, for example, a pop record, he manipulates the reverberation behind different elements to make them appear closer or further back. A well-reverberated mix will almost create an illusion of elements being "placed" in the sound field. If I then listen back to such a mix and add, let's say, concert hall reverberation, it would drown all the nuanced reverberation balance the mix engineer created in hall reverb - so I would be playing back reverberation inside reverberation, essentially breaking the intended sound image. I think you see where I am going with this. Adding reverb might work for classical music if the recording is reverberated naturally during the recording process and there isn't too much reverb already present. But again, if I listen to a good orchestral Atmos mix in 7.1.4 through Virtuoso, it sounds very immersive to me, as if I were sitting in a concert hall.

Now you might say: why add all this very short reverb to the virtual speaker room at all if there is reverb already added to every instrument in the mix? The thing is, we perceive reverberation differently when we listen to mixed music in a room and when we listen to it through headphones. For example, if I add a certain amount of reverb to a guitar or a vocal when mixing just on headphones without room simulation, it might sound proper to my ears, but if I then go and listen to the same audio in the studio mixing room on speakers, I will hear that there is suddenly too much reverb. That is because the natural room reverb makes us perceive reverberation levels inside the mix differently. But! If I mix a track on speakers inside a room, the balance always sounds correct on headphones. Hence the desire to create "virtual mixing rooms" for headphones with very balanced, minimal reverberation.

That's all, of course, if you mix or produce music. For just listening purposes, I think we can all choose what we prefer and whether we want to hear precisely what the artist and mix engineer intended, or whether we want to make listening more fun.

I also understand there is no way of making a stereo classical piece feel like you are sitting inside the concert hall without adding reverb channels to the sides and back. But I also think that's what the Atmos mix does anyway (or maybe the mix engineer decides to pan sources around you and make a soprano run circles around your head, which I hope nobody does :) )

@variance, many thanks, I will check the new scripts out! Also, a silly question - if you plot personal or other HRTFs in REW, do you run a sweep through a SPARTA plugin with a SOFA file loaded, or do you have a simpler way of doing it?
 
@variance, many thanks, I will check the new scripts out! Also, a silly question - if you plot personal or other HRTFs in REW, do you run a sweep through a SPARTA plugin with a SOFA file loaded, or do you have a simpler way of doing it?
Hey @visualizer. These plots are generated by my mesh2SOFA tools. I recently added an extra tool to the repo called `sofa_mastering_tool.py`. Basically, you just drag a SOFA file onto the window, and then you can either resample the SOFA file or export CSV files of the diffuse-field HRTF with a configurable tilt (set to 0 for a neutral diffuse field). I then load those CSV files into REW and overlay them. I can make a quick video of how to do this if you'd find it helpful!

I'm also thinking of adding the ability to load multiple SOFA files onto this app so that it's easy to batch generate these DFHRTFs.

Note: if exporting the CSVs gives a flat line instead of a curve, the SOFA file has already been diffuse-field equalized. I'm going to investigate whether it's possible to "un-diffuse-field-equalize" these kinds of files. Basically, I'm curious to compare the diffuse-field responses from the various databases out there. To my knowledge nobody has done this, so we really haven't seen how different people's overall hearing is from a headphone point of view - we've all been looking at mannequin-head DFHRTFs only!
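That flat-line check can be automated, by the way. A quick heuristic sketch (the tolerance and toy data are my own choices): RMS-average the magnitude spectra over directions and see whether the result is flat within a tolerance:

```python
import numpy as np

def looks_df_equalized(hrirs, tol_db=1.0):
    """Heuristic: if the RMS-averaged magnitude across directions is nearly
    flat (peak-to-peak within tol_db), the set was likely already
    diffuse-field equalized.  hrirs: (directions, ears, samples)."""
    mag = np.abs(np.fft.rfft(hrirs, axis=-1))       # (dirs, ears, bins)
    df = np.sqrt(np.mean(mag ** 2, axis=0))         # (ears, bins)
    df_db = 20 * np.log10(df / df.mean(axis=-1, keepdims=True))
    return float(np.ptp(df_db)) < tol_db

# pure impulses have perfectly flat spectra, so they read as "equalized"
impulses = np.zeros((50, 2, 128))
impulses[:, :, 0] = 1.0
print(looks_df_equalized(impulses))  # True
```

In practice you'd probably want to restrict the check to the band REW actually plots, since the lowest and highest bins of measured HRTFs are often noisy.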
 