
A thread for binaural virtualization users.

You can include those in the model, but I see no use for it. The shoulders and torso are not fixed relative to your head, so the reflection patterns change whenever you move your head. Different clothing also changes reflection/absorption (as do hair, hats, the chair you are sitting in, etc.). None of this affects your hearing in real life: you don't hear "worse" because you are sitting down or because you aren't wearing your favorite sweater.
more accurately personalised HRTFs are essential for more accurate perception in the vertical plane
Just like in this comment, there is too much focus on static accuracy in research. In my opinion, being satisfying dynamically matters much more: a proper response to head movements helps accurate localization and resolves front-back confusion far better than a super-accurate HRTF does.
You can have the most accurate static HRTF, but if the horizontal directivity switches in head tracking, the whole soundfield flips front to back.
 
I think that is very personal and depends entirely on why you want the HRTF in the first place.

Music, for example, in surround (Atmos, Auro-3D, Sony 360...) is all made to be listened to facing forward.

So if you're interested in listening to the *music*, head tracking becomes a big pain: any movement and everything shifts.

If you want to listen to the *effect* of virtualization, to feel you're in a room, to behave in a way you wouldn't when listening to music... sure, you may want to feel like you're in a specific space.

Gaming is different of course.

Mixing, using a virtualized space to mix... you want it to be fixed. You don't want panning decisions to be determined by whether your head tracking has centred itself properly or not.

Sony's research was about virtualizing immersive audio (Sony 360, Atmos, etc.) and making the perception of audio from below, above and behind more accurate and correctly perceived. And it is better used statically.

With regard to things sounding worse or not... it's subconscious. And what frequencies are we talking about here? Are high frequencies the ones affected by clothing? The research, as I remember it, suggests the size of our shoulders and torso is the important part, not the covering, which points to a lower frequency range affecting our localisation.

We don't determine quality by small things like frequency response... our brains quickly normalise frequency response. It's only over-thinking that has us believe a certain headphone FR is better or worse to a large degree (obviously it becomes a quality issue at some point), because if we listened to it exclusively for long enough, we would get used to the sound and feel it was correct again. Just like our eyes get used to wearing tinted glasses.

Because we have two ears, localisation is all about the *differences* between the two ears... unless you have vastly different clothes on each shoulder, it cancels out. And if you *do* have different coverings on each shoulder, maybe your ability to localise sounds is compromised... it's not like we test it out in any way in real life.
 
I've compared the results of the mesh2HRTF calculations on my head scans to the Apple Personalized Spatial Audio head-scan results.
You can scan your head with an iPhone in 30 seconds to create a personalized profile for Apple Spatial Audio, and you can use the result across all of your Apple devices to binauralize stereo or multichannel content.
Below are the results for the standard 5.1 directions, measured at the left ear.

[Attachments 501202-501207: left-ear response plots for the standard 5.1 directions, mesh2HRTF vs. Apple]
The two very different methods track each other surprisingly well: the basic shapes are the same, and the important HF dips are at virtually the same frequencies.
So mesh2HRTF and the Apple scan produced virtually the same results.
In listening, the mesh2HRTF and Apple solutions produce the same spatial accuracy for me. If I switch to the generic Apple profile, the tonality changes and directions shift, mostly in elevation.

The Apple curves are smoothed. From the unsmoothed curve, it seems the Apple renderer also applies a relatively small room BRIR on top of the HRTF. I attribute most of the raggedness and level differences in the 100 Hz to 1 kHz range to this.

[Attachment 501216: unsmoothed Apple response]
There is also a MOVIE mode when you connect your headphones to an Apple TV, but that messes up everything: bass boost, HF mess and more room interaction.

There are three files in the OS (Reverb_General.ir, Reverb_General_Personalized.ir and BRIR_General_Personalized.ir) which I suppose control these added BRIR curves. I am working on figuring out the format, bypassing them or replacing them with flat responses, and checking the result afterwards.

So, in conclusion, the 30-second Apple iPhone scan produces the same result as a full mesh2HRTF calculation, but it is hampered by the usual Apple lockdown. If only they were more flexible about letting us pick BRIRs. But they don't even let us easily EQ headphones.
I have tried Apple's personalized spatial audio and found it fairly accurate; the tonality changes and directionality are generally what I have come to expect from my own experience trying and testing different products and solutions. But my major issue is the room rendering: there is very little reverb and the distance is quite close, particularly for use in gaming or, of course, media consumption. Obviously these issues could be solved either by editing the HRTF/BRIR renderer directly or by applying a third-party reverb/BRIR.

Applying reverb after recording has its own issues. For example, using Apple's personalized HRTF with a program like ASH Toolset, or plugins such as Sparta, DearVR, or one of the many others, requires a SOFA file, and I see no good and relatively simple way to record one. I tested Apple's solution on video games running on Windows by recording and editing a HeSuVi wav file; the result turned out well, but again, I see no way of converting it into a SOFA file, applying it to a SOFA file, or editing the file after recording the response. Applying a reverb/room simulation directly to the file I believe wouldn't work, since it would be channel independent. Re-recording the response a second time could introduce more artifacts and errors that would be very hard or impossible to completely account for.
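For what it's worth, if you can deconvolve one IR pair per direction out of such a recording, writing a SOFA file itself is scriptable. A minimal sketch with the `sofar` Python package (the direction list and file names are illustrative assumptions):

```python
# Minimal sketch: pack per-direction stereo HRIRs into a SOFA file
# with the sofar package (pip install sofar). Assumes you have already
# extracted one equal-length left/right IR pair per direction (e.g.
# deconvolved from the HeSuVi recording); directions and file names
# here are illustrative.
import numpy as np
import soundfile as sf
import sofar

directions = [30, -30, 0, 110, -110]            # 5.1 azimuths in degrees
irs = []
for az in directions:
    ir, fs = sf.read(f"hrir_az{az}.wav")        # shape (N, 2)
    irs.append(ir.T)                            # -> (2 receivers, N samples)

sofa = sofar.Sofa("SimpleFreeFieldHRIR")
sofa.Data_IR = np.stack(irs)                    # (M directions, 2, N)
sofa.Data_SamplingRate = fs
# SOFA spherical coordinates: [azimuth deg, elevation deg, radius m]
sofa.SourcePosition = np.array([[az, 0.0, 1.0] for az in directions])
sofar.write_sofa("personal_hesuvi.sofa", sofa)
```

That still leaves the harder problem of cleanly extracting the per-direction IRs from the recording in the first place, of course.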

Which leaves the other option: applying the desired room effects/reverb to the audio before recording.

Firstly, I find it interesting that there is a separate "Movie Mode" (or whatever Apple may call it) for the Apple TV. Have you tested with the Apple TV app on the Mac? What exactly triggers this different mode? Only the Apple TV's spatial audio? In addition, how did you manage to record the Apple TV's output? Did you use a coupler or HATS, or did you find a way to record the TV's output, in the same way that you can use an application like Loopback to record the spatial audio output on the Mac? Re-reading your post, I assume it is unfortunately the former.

Secondly, going back to applying reverb before the recording, I think there are a few potential avenues. Assuming you are recording in HeSuVi format (not sure what other format there would even be, although suggestions are welcome) you can obviously just apply a third-party reverb to the recording, either live or pre-processed, then record the output with the reverb. However, this lacks deeper integration with Apple's renderer, and in addition, the existing BRIR, although slight, can still interfere/overlap with your applied room simulation or reverb.

I think a more promising solution would be to use the built-in system Audio Unit (AU) effects, available through the Audio Toolbox API. Unfortunately, that is easier said than done: simply loading the effects directly through Audacity produces undesirable/unusable results and is obviously an improper use of the Audio Units. Of course, a Swift app can call these APIs directly, but then you are talking about coding, a basic UI, and a lot more than simply applying an Audio Unit to the sound.

Probably the best and most convenient way is to use Logic Pro; however, you are then talking about a $200+ minimum purchase for something that may not even expose the necessary options and tools. I do know that it has spatial audio tools deeply integrated, but I haven't tested it. It looks like they have a free trial; I may try it. In addition, the purchase wouldn't be entirely wasted if you have any interest in music production. I'm even tempted to buy it; however, being completely limited to the Mac is a major downside for me.

This is a longer post than I meant to write. However, aside from something like Mesh2HRTF (speaking of which, if you were willing to give more details on how you scanned and edited the model, it would be welcome), other personalized solutions (of which there are very few available on Windows) have been fairly lackluster to me. Embody Immerse is almost laughable: their HRTF simulation is not particularly good and, in my opinion, their room simulation ruins it. Other than that, there is Dolby's personalization, which is now shut down (although I never got a chance to try it either way), and Aural ID (a large upfront purchase, although I believe the free trial is new, or I missed it before)... and that's it as far as I'm aware. Well, you also have the EAC Individualized MATLAB app, but I haven't been able to achieve a good result with it; I think I may be measuring wrong.

Ok, much longer than I meant to write. Thoughts?
 
but my major issue is the room rendering: there is very little reverb and the distance is quite close, particularly for use in gaming or, of course, media consumption
That was my feeling too. I think they voiced it for users watching iDevices at arm's length; that's why the virtual sound is so close.

Firstly, I find it interesting that there is a separate "Movie Mode" (or whatever Apple may call it) for the Apple TV. Have you tested with the Apple TV app on the Mac? What exactly triggers this different mode? Only the Apple TV's spatial audio? In addition, how did you manage to record the Apple TV's output? Did you use a coupler or HATS, or did you find a way to record the TV's output, in the same way that you can use an application like Loopback to record the spatial audio output on the Mac? Re-reading your post, I assume it is unfortunately the former.
Apple TV mode only kicks in if you connect your AirPods to an Apple TV. The Apple TV rendering is a BRIR of a bigger room; the target market is the home cinema environment, I guess.
I measured everything within the computer, using Logic Pro's built-in plugin, which can access the Apple system renderer even when you are not playing through Apple headphones. So you can feed a multichannel directional stimulus into the plugin and record the response.

Applying reverb...
I add the missing sound field to existing recordings, between the source and the binauralizer. I use ambisonic multichannel reverbs, with an IR selected to match the original performance venue. I keep the direct sound from the recording intact and just add the reverb without the direct part. This basically recreates the missing ambience channels from stereo, kind of upmixing it.
Check this paper after Page11: https://www.angelofarina.it/Public/Papers/155-AES19th.PDF
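To make the idea concrete, here is a minimal offline sketch of the same principle (keep the direct sound, add only the reverb tail), using a plain mono venue IR instead of a real ambisonic reverb. The file names and the 5 ms direct/tail split are illustrative assumptions:

```python
# Sketch: leave the stereo direct sound untouched and synthesize
# ambience by convolving with a venue IR whose direct part is zeroed.
# A mono IR and a fixed 5 ms split are simplifications of the real
# ambisonic-reverb chain described above.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

stereo, fs = sf.read("recording.wav")          # (N, 2)
ir, fs_ir = sf.read("venue_ir_mono.wav")
assert fs == fs_ir

tail = ir.copy()
tail[: int(0.005 * fs)] = 0.0                  # remove the direct spike

mid = stereo.mean(axis=1)
amb = fftconvolve(mid, tail)[: len(mid)]
amb *= 0.25 / max(np.max(np.abs(amb)), 1e-9)   # ambience level, to taste

# Direct L/R plus duplicated ambience; a real chain would feed
# discrete ambience channels to the multichannel binauralizer.
out = np.column_stack([stereo, amb, amb])
sf.write("upmixed_4ch.wav", out, fs)
```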

I am not familiar with PC solutions.
One of the cheapest and easiest really great ambisonic reverbs with real spaces is the Waves IR-360 (Mac and PC), which is based on Angelo Farina's work. It is 5.1 only, so no height. It works in any plugin host, Logic Pro too, so you can combine it with the Apple renderer. Or you can use the Sparta binauralizer and your HRTF with an OSC head tracker for a complete solution with any headphone.
The Space Designer in Logic Pro is also a great ambisonic reverb. The built-in B-Format rooms are good, but single point source, so you have to insert one for each input channel and link them. With the Impulse Response Utility you can create your own 3D spaces. You can find LR or LCR impulse responses in B-Format on the net and build your own multipoint B-Format rooms for a single master-channel reverb. These are usually eerily realistic and can be fed to the Sparta or Apple renderer within Logic Pro. They successfully mask the Apple renderer's built-in music-mode BRIR.

Assuming you are recording in HeSuVi format (not sure what other format there would even be, although suggestions are welcome)
Applying a reverb/room simulation directly to the file I believe wouldn't work, since it would be channel independent. Re-recording the response a second time could introduce more artifacts and errors that would be very hard or impossible to completely account for.
If you assemble the chain (stereo playback, then reverb to multichannel, then binaural render from multichannel to headphones), you can capture a couple of 4-channel full-stereo impulse responses with your most-used reverb settings in a fixed head position.
I use those IRs with a convolver on devices where the whole head-tracking chain is not available.
I have also experimented with capturing the multichannel output of the reverb and saving it as a multichannel FLAC (there are batch utilities for that). Foobar2000 on an iPhone can play back multichannel files and feed them directly, in multichannel format, to the Apple renderer with head tracking.
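A static "full stereo" convolver like that is simple to sketch, assuming the four captured IR channels are ordered LL, LR, RL, RR (input channel to output channel; the ordering is an assumption to match to your own capture):

```python
# True-stereo convolution with a captured 4-channel impulse response.
# Channel order LL, LR, RL, RR is an assumption; match your capture.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

x, fs = sf.read("music.wav")                  # stereo input, (N, 2)
irs, fs_ir = sf.read("fullstereo_ir.wav")     # 4-channel IR
assert fs == fs_ir

ll, lr, rl, rr = irs.T
n = len(x)
left = fftconvolve(x[:, 0], ll)[:n] + fftconvolve(x[:, 1], rl)[:n]
right = fftconvolve(x[:, 0], lr)[:n] + fftconvolve(x[:, 1], rr)[:n]

out = np.column_stack([left, right])
out /= max(np.max(np.abs(out)), 1e-9)         # crude headroom safety
sf.write("binauralized.wav", out, fs)
```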

speaking of which, if you were willing to give more details on how you scanned and edited the model
My basic workflow is the following (although @variance is working on a script to make it easier):

1. Scan your head with an iPhone. Do it once, do it right; you don't have to splice in high-res ear scans. It will make no difference, it just makes things complicated.
I use a swim cap for the hair, but no crazy reference grid. I put short earplugs in my ears, exactly in the position where the ear-canal mic will sit for headphone correction. This way I don't have to sculpt my ears, and I have a proper mic-position reference in the model. Start from the back, do a few ups and downs around the ears and your face, and finish at the back. That way the accumulated errors show up at the back, where they are not important. Small holes are OK if smoothing over them is a good approximation.

2. Use Meshmixer to prepare the model. It solves most of the problems that are much more complicated in other apps. Remove the excess, make it solid, remesh at the finest resolution and check for holes. Any hole can ruin the following steps.

3. Position the head model in the coordinate system in Blender according to the tutorial.

4. Run the hrtf_mesh_grading script for the left and right ear at the desired resolution. It reduces the million-vertex model to 20-70k vertices, which the simulation can handle.

5. Import the two simplified models into Blender and set the materials and microphone locations. Export the left and right models separately with the HRTF export script. It creates two simulation work directories, one per ear.

6. Run NumCalc on each (a minimal driver sketch follows after this list). If you have less memory, you can run NumCalc without the wrapper script. I ran my simulations in 8 GB on a Mac.

7. Run finalize_hrtf_simulation to create the sofa files and diagnostic plots.

8. Measure your ear with an in-ear mic and correct your headphone accordingly. Headphone responses measured on your head with an in-ear mic usually have nothing to do with published standard measurements.
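As mentioned at step 6, here is a minimal sequential driver sketch, assuming the usual Mesh2HRTF export layout (each project folder contains NumCalc/source_*/NC.inp) and a NumCalc binary on the PATH; the project folder names are illustrative:

```python
# Rough sequential driver for step 6. Assumes the standard Mesh2HRTF
# export layout (project/NumCalc/source_*/NC.inp) and that the NumCalc
# binary is on the PATH; the project folder names are illustrative.
import subprocess
from pathlib import Path

for project in (Path("head_left"), Path("head_right")):
    for source_dir in sorted((project / "NumCalc").glob("source_*")):
        print(f"Running NumCalc in {source_dir}")
        # NumCalc reads NC.inp from its working directory
        subprocess.run(["NumCalc"], cwd=source_dir, check=True)
```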
 
Hey guys. I've been working some more on my scripts. I'll most likely put them on GitHub once I think they're ready for testing by others.

In the meantime, you'll need a few dependencies for it to work:
  • The Mesh2HRTF mesh grading tool. Windows users can download the precompiled version from the mesh2HRTF-tools project on SourceForge. Non-Windows users will need to compile it themselves, I believe from Mesh2hrtf/Mesh2Input/Meshes/GradingDistanceBased (although I haven't compiled it myself because I'm on Windows).
  • Assuming you have already installed mesh2HRTF, you'll need to install some additional python libraries: `pip install customtkinter pyvista numpy scipy matplotlib netCDF4`
Mesh preparation tips
The scanned mesh will certainly have problems, and even following each step carefully it's possible to end up with defects in the final graded meshes. I agree that Meshmixer is great at getting the mesh into a decent state, but some particular issues need to be fixed elsewhere. Here are some mesh issues that I've found to cause problems with processing down the line, even if the mesh is otherwise OK according to Meshmixer or other 3D-printing-focused apps.

Tunnels/bridges: One issue that is a bit difficult to fix is the tunnels that tend to form behind the ear. The only way I know to fix them is to delete the bridge that forms between the ear and the head in Blender and then fill in the resulting holes. If anyone knows of a better way, please let me know!
[Image: tunnel/bridge behind the ear]


Thin peninsulas/pointy shapes:
Thin areas can form inside the ear folds which, from the inside of the mesh, form pointy regions. Pointy regions are bad for BEM processing and can trip up the mesh grading tool. In this case I chop off the end and round it over:
[Image: thin pointy area inside the ear fold]


Non-isotropic mesh geometry:
A mesh with triangles of varying proportions and sizes is bad for BEM processing. I find it can also cause problems for the mesh grading tool. For this reason, I added a step that isotropically remeshes the head model prior to mesh grading (one way to script something similar is sketched after the examples below).
Bad mesh:
[Image: bad (non-isotropic) mesh]

Good mesh:
[Image: good (isotropic) mesh]
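For those who want to script this step themselves, a clustering-based remesh with pyvista plus pyacvd is one way to get a roughly uniform triangulation. A sketch, with arbitrary target counts (this is one possible approach, not necessarily what the script does internally):

```python
# Sketch: clustering-based isotropic remesh with pyvista + pyacvd
# (pip install pyvista pyacvd). One way to get uniform triangles
# before grading; the target counts here are arbitrary.
import pyvista as pv
import pyacvd

mesh = pv.read("head_solid.ply")   # watertight, all-triangle mesh
clus = pyacvd.Clustering(mesh)
clus.subdivide(2)                  # densify so clusters sample the surface
clus.cluster(150_000)              # target vertex count before grading
remeshed = clus.create_mesh()
remeshed.save("head_isotropic.ply")
```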
 
Hello. I finally got the GUI/front end for Mesh2HRTF working reasonably well and made a quick video introduction to it. I don't know if I can post the link here directly, but it's called "Mesh2SOFA" on GitHub and the video is linked in the readme. Hopefully someone finds it helpful! Let me know if you end up testing it and if you have any questions and/or suggestions.
 
Testing it on a Mac: it gets stuck at the second step, 'Process & grade'. The script cannot find the hrtf_mesh_grading binary because it does not have an .exe extension on a Mac.

The command it tries to execute works otherwise.
 
Thanks for testing, @fcserei ! What is the binary called on Mac? Is it just hrtf_mesh_grading with no extension? I’ll take a look at this as well as how it handles looking for the other executables.
 
Yes, the binary is called hrtf_mesh_grading, and the script tries to run hrtf_mesh_grading.exe. I don't know if it will be a problem later, but Blender is also called Blender or Blender.app.
 
Thanks again, @fcserei . I updated the main file _project_manager_gui.py but am only able to test on Windows. Let me know if you can run the grading tool and open Blender now.

It now has you pick the file rather than the folder, and should hopefully respect the selected file regardless of extension.
 
The tool now works on a Mac. It is a great help, especially in the positioning part. It also makes the Blender work a bit more streamlined.
I've got a 32 GB M1 Pro Mac now. It ran the whole high-res simulation in about 3 hours, with only 2 processes and 20 GB of memory used, compared to the 8 GB Mac, which took 5 days.
 
That's insanely fast! When I ran my mesh (with shoulders) on a 32-core, 256 GB cloud computer, it took just under 2 hours.
 
Hello fellow HRTF warriors,

As many of you have, I've also followed the virtual spatialization options, and I have already made my first SOFA files to play around with. This thread got me excited, so here are my 2 cents :)

Right now, I'm using Spat Revolution for playback (also trying IRCAM Spat5 for some experimental stuff). I used the iPhone front sensor to scan my ears, and it seems to work just fine. The Atmos mixes in Spat Revolution, at least for me, already sound insanely good. I'm still tweaking the reverberation/speaker-source settings, and I haven't landed on a final headphone EQ profile, but the coloration/balance already sounds fairly studio-like.

My next steps are:

  • Trying out APL Virtuoso (if they give me another trial, since I have my SOFA file now). I wonder how it compares to Spat, since APL's reverberation is supposed to be "not just stereo", as I understand it. It also gives options to change the virtual room size, so it should, in theory, be a more advanced solution?

  • I also want to try generating a mesh2HRTF SOFA one more time with a more accurate mesh. I found a local one-man service that has the "eFit scanner", so it might give me a more accurate and deeper scan that I can then merge with my previous head model. I'm still on the fence about whether the 3D scan has to go that deep, though. Since the physical driver sits outside the ear canal with both in-ear and on-ear headphones, should I really simulate the ear canal as much as possible, or just the beginning of it? I will surely use @variance's mesh2SOFA for my simulations this time (also having a 32 GB M1 Pro).

  • Maybe buying in-ear microphones to equalise my planar headphones.

  • Better head tracking. So far, I've used Opentrack and Google's MediaPipe face_mesh for webcam tracking, both with a custom code snippet to send rotation data via OSC to Spat (a minimal sketch of that OSC bridge follows after this list). Once I got the camera head tracking to translate accurately, it improved the realism immensely. The only thing is, I haven't been able to get the lag down; I would like it to be instantaneous. I have yet to try the 3DDFA_V2 webcam tracker from GitHub, and obviously the physical head trackers...

  • I also started experimenting with real impulse responses from a physical 7.1.4 speaker array measured in a studio-like room with an Eigenmike em64 spherical microphone. I was wondering whether it's possible to somehow take the speaker impulses and, in tandem with my personal SOFA, simulate something closer to a Smyth A16, or at least extract the location-specific room reverberation and place it behind the standard Spat5 simulation. Right now, it seems possible without head tracking, but with tracking it gets a bit more complicated, since I would then have to crossfade between the different HOA files (assuming I'm using different convolution files for different head positions). As a hobbyist in spatial audio, the Max9/Spat5 custom patches are really hurting my brain by now. There is definitely a bit of a learning curve, I guess :) If anybody already has their own custom Max9 patches, I would encourage them to share!
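Here is the promised OSC-bridge sketch, using python-osc; the address pattern and port are placeholders for whatever your Spat patch actually listens on:

```python
# Sketch of the head-rotation -> OSC bridge (pip install python-osc).
# The address pattern and port are placeholders; use whatever your
# Spat patch (or other binauralizer) actually expects.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)

def send_rotation(yaw: float, pitch: float, roll: float) -> None:
    """Send one head-rotation update, in degrees."""
    client.send_message("/head/ypr", [yaw, pitch, roll])

# Called once per frame from the webcam tracking loop, e.g.:
send_rotation(12.5, -3.0, 0.8)
```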

Anyways, that's what I'm up to, and hopefully this thread encourages more people to dive into this topic. The goal for me personally is to achieve a "virtual studio" solution for myself that holds up against the real world.

@variance I already tried mesh2SOFA, wanting to DFE my previously made project. It seems to me it's not possible to skip any steps in the GUI? For example, to go straight to step 6 and only feed it the finished NumCalc output?
 
It also gives options to change the virtual room size, so it should, in theory, be a more advanced solution?
Once you are using plugins, the possibilities are much greater than relying on the built-in rooms of binauralization plugins.

should I really simulate the ear canal as much as possible, or just the beginning of it
I use the entrance. It is a solid point both to simulate and to measure. Open-back and ANC headphones and earbuds supposedly lower the ear-canal coupling impedance anyway, reducing the effect of plugging the ear with the headphone, so the canal resonances are unaffected and not duplicated, as they would be if you added the canal to the simulation.

Maybe buying in-ear microphones to equalise my planar headphones.
Yes, definitely. You have to EQ the headphone either for flat or for diffuse field. Online measurements have nothing to do with your actual in-ear measurements. I've modified a cheap iMM-6 mic to take my measurements.

[Image: AutoEQ curve vs. own blocked-ear-canal measurement]

This is one of the better-behaved headphones: AutoEQ (black) vs my blocked-ear-canal measurement (blue).
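To make the EQ step concrete, here is a minimal sketch of turning a blocked-canal measurement into a minimum-phase correction FIR, targeting flat. The smoothing, tap count and 1 kHz normalization are arbitrary choices; a real workflow would use fractional-octave smoothing and a proper target:

```python
# Sketch: derive a minimum-phase correction FIR from a measured
# in-ear headphone impulse response, targeting flat. The smoothing,
# tap count and 1 kHz normalization are arbitrary choices.
import numpy as np
import soundfile as sf
from scipy.signal import firwin2, minimum_phase

meas, fs = sf.read("blocked_canal_ir.wav")
if meas.ndim > 1:
    meas = meas[:, 0]                              # use one channel

H = np.abs(np.fft.rfft(meas, 4096))                # magnitude response
H = np.convolve(H, np.ones(9) / 9, mode="same")    # crude smoothing
H = np.maximum(H, 1e-3 * H.max())                  # cap the max boost

inv = 1.0 / H                                      # flat target / measured
k1k = round(1000 / (fs / 2) * (len(H) - 1))
inv /= inv[k1k]                                    # 0 dB at 1 kHz

freqs = np.linspace(0.0, 1.0, len(H))              # normalized 0..Nyquist
fir = firwin2(1023, freqs, inv)                    # linear-phase prototype
eq = minimum_phase(fir)                            # ~512-tap EQ filter
sf.write("headphone_eq.wav", eq.astype(np.float32), fs)
```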

The only thing is, I haven't been able to get the lag down
Under 100 ms I don't feel the lag, but the usual 250+ ms lag of a regular Bluetooth headset is too much. I usually listen sitting down, with no big or fast moves, just the subtle, unconscious head movements. I've found the Supperware to be the best, most universal and stable head tracker.

I also started experimenting with real impulse responses from a physical 7.1.4 speaker array...
This very much depends on what you are listening to (genre, stereo or multichannel). For me it is mostly stereo, classical or opera, so my solution is to feed a Sparta binauralizer with the original 2 channels plus 5 ambience channels generated with the Waves 360 ambisonic plugin. I tried more channels, but because the music happens in front of me, adding more channels makes no real difference.
With the Eigenmike, pick a speaker layout (7.1.4, T12, etc.) you want to simulate, get the right number of IRs in those directions, convolve the signals with those, and feed them to the Sparta binauraliser configured for that layout. That will take care of head tracking.
The problem is that most Eigenmike IR sets have only one source location and multiple receivers. I prefer ones with 2 or 3 sources (LR or LRC) for 1 central receiver, so you can get proper IRs for the L, R and C channels. If you listen to just one channel binauralized with the direct sound muted, you should still be able to guess where the source is just from the reflections.
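For the convolution stage, a rough sketch, assuming each layout direction's IR has already been reduced to a mono response from the em64 capture (file names and the 12-channel 7.1.4 order are illustrative):

```python
# Sketch: give each speaker feed its measured room response before
# the binauraliser. Assumes one mono IR per layout direction already
# derived from the em64 capture; names and channel order illustrative.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

feed, fs = sf.read("mix_7_1_4.wav")      # (N, 12), order matching layout
n = feed.shape[0]
out = np.zeros_like(feed)

for ch in range(feed.shape[1]):
    ir, fs_ir = sf.read(f"room_ir_ch{ch:02d}.wav")
    assert fs_ir == fs
    out[:, ch] = fftconvolve(feed[:, ch], ir)[:n]

# The "wet" channels then go to the Sparta binauraliser for this layout
sf.write("mix_7_1_4_wet.wav", out, fs)
```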

The goal for me personally is to achieve a "virtual studio" solution for myself

I am definitely not in the "virtual studio" camp. If I make the effort, I want the real performing space, not the studio.
On the net you can find some real binaural IRs, like the ones from the WDR large recording studio and the attached control room where they mix (with Genelec coaxials). Just convolve some music appropriate for the venue and listen on headphones. The sound difference between the auditorium and the control room is night and day (so much for wanting to hear what the producer heard :-)).
 
@variance I already tried mesh2SOFA, wanting to DFE my previously made project. It seems to me it's not possible to skip any steps in the GUI? For example, to go straight to step 6 and only feed it the finished NumCalc output?
Hi @visualizer. Thanks for testing out my project. I designed the app as a step-by-step process to produce consistent output without the risk of errors, so the app essentially checks for specific files in specific folders to unlock each step. I'll think about how best to implement this without breaking the step-by-step intent of the workflow.

That being said, you can run the `generate_extras.py` script against your `.sofa` HRIR file directly like this: `python generate_extras.py --input "path-to-your-HRIR.sofa" --output "path-to-output-folder" --tilt "-0.x"`. You can experiment with the tilt value (I find -0.7 works well). Note that the output folder needs to exist - the script doesn't automatically create it if it's missing.
 
I am definitely not in the "virtual studio" camp. If I make the effort, I want the real performing space, not the studio.
On the net you can find some real binaural IRs, like the ones from the WDR large recording studio and the attached control room where they mix (with Genelec coaxials). Just convolve some music appropriate for the venue and listen on headphones. The sound difference between the auditorium and the control room is night and day (so much for wanting to hear what the producer heard :-)).
This is my goal, too. But so far the best I can get is a monitor-like experience where the sounds feel like they're about 1m away.

The exception to this is the famous Decca Wagner recordings from the 60s. These are stupendously three dimensional and realistic.

Since I'm on Windows, I don't have a way to route Atmos audio through a SOFA plugin, and the best I can get is either multichannel recordings or upmixing 2-channel through JRiver into the binaural plugins in Reaper. I would really like a way to expand the headstage and create a more immersive feeling.
 
This is my goal, too. But so far the best I can get is a monitor-like experience where the sounds feel like they're about 1m away.
Most Decca '60s and '70s opera recordings sound wonderful binauralized. If I listen to opera, it sounds like the stage is 10-20 ft away, and you can feel the big space around you. You can also pick a ground-floor or balcony perspective. If you can give me your personal SOFA file, I can generate a couple of static full-stereo IRs for you to try. No head tracking, but a simple full-stereo convolver can do magic with a stereo source.

About Atmos or multichannel source material, I am a bit disappointed. If I feed it directly to a binauralizer, it still feels too close to my head and dry, much like the WDR studio sound, and rarely satisfying. So it needs trickery similar to what a simple stereo recording needs to sound good over headphones. Then why waste the bandwidth, especially if everything happens in front of you (no flyovers or sources circling around your head)?
 
With the Eigenmike, pick a speaker layout (7.1.4, T12, etc.) you want to simulate, get the right number of IRs in those directions, convolve the signals with those, and feed them to the Sparta binauraliser configured for that layout. That will take care of head tracking.
The problem is that most Eigenmike IR sets have only one source location and multiple receivers. I prefer ones with 2 or 3 sources (LR or LRC) for 1 central receiver, so you can get proper IRs for the L, R and C channels. If you listen to just one channel binauralized with the direct sound muted, you should still be able to guess where the source is just from the reflections.

I've found the RWTH IKS lab's Eigenmike em64 impulse-response database, with an impressive spherically arranged 36-speaker array. It's not exactly standard Atmos heights, but it's quite close. It also has an RT60 of about 0.25 s. I'm testing it now with the SPARTA plugins, converting the sphere-mic impulses to ambisonics and then to binaural with my SOFA file. Localization seems good so far. Timbre depends a lot on SPARTA's plugin settings (different algorithms and filtering), but I'm getting there.

I also know of a good BBC impulse-response library that's recorded with a 22.2 speaker array in a treated room, but the microphone is a KU100 dummy head. Since I can't apply my own HRTF to this, I haven't bothered with it yet. They did use a mechanism to rotate the head in 2-degree intervals and record the speakers at every position, so that's interesting.
 
If you can give me your personal SOFA file, I can generate a couple of static full-stereo IRs for you to try. No head tracking, but a simple full-stereo convolver can do magic with a stereo source.
I would be interested in trying this. What's the best way to send the file? Can I send it via PM?
 
I've found the RWTH IKS lab's Eigenmike em64 impulse-response database, with an impressive spherically arranged 36-speaker array.
I'm curious to hear more about how your process works. Are you chaining multiple SPARTA plugins? I've only used Binauraliser myself.
 