This is amazing! Please do share details of how it works and how to use it.
I have pushed my code to GitHub:
EqualizerAPO Impulcifer and
CableGuardian
To use it, you first need to capture data. I suppose the distance should be the same for every capture point, so I will calculate the distance when the speaker position is below or up.
Impulcifer is modified to support 12 input directions as [0-12].wav (0 is the back, then clockwise); the height difference is not calculated at the moment.
If you use 1 layer, the files need to be put into ...\EqualizerAPO\config\brir\{name}\[0-12].wav.
If you use 5 layers, the files need to be put into ...\EqualizerAPO\config\brir\[0-4]\[0-12].wav (0 is 60 degrees below, 2 is the horizon, 4 is 60 degrees up).
Two new filters were added to EqualizerAPO: BRIRFilter and BRIRMultiFilter, one for a single layer and the other for 5 layers.
The filter uses a lowpass filter to keep the unprocessed bass, as my test speaker does not go down to 100 Hz.
The example config file:
BRIR: {"name":"index","directions":[-30, 30, 0, 0, -140, 150, -90, 90],"bassVolume":0.1,"receiveType":0,"port":3053}
BRIRMulti: {"bassVolume":0.2,"port":3053}
directions is the list of virtual speaker directions, bassVolume controls the raw bass input volume, and port is the UDP port the position data is received on.
CableGuardian (or another sensor) captures the head position, then sends it over UDP to audiodg.exe (APO) or to voicemeeter.exe (the Voicemeeter client is more flexible for debugging).
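To make the UDP step concrete, here is a small sketch of a sender. The payload layout (three little-endian floats: yaw, pitch, roll in degrees) is an assumption for illustration only, and so is the function name sendPose; check the filter source for the real wire format it parses on the configured port. The real sensor side runs on Windows (Winsock), but the idea is the same:

```cpp
// Hypothetical head-pose sender sketch (POSIX sockets for brevity).
// ASSUMPTION: payload = 3 floats (yaw, pitch, roll in degrees); the real
// format expected by BRIRFilter may differ.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>
#include <vector>

// Pack yaw/pitch/roll into a 12-byte datagram body.
std::vector<uint8_t> packPose(float yaw, float pitch, float roll) {
    std::vector<uint8_t> buf(3 * sizeof(float));
    float v[3] = {yaw, pitch, roll};
    std::memcpy(buf.data(), v, sizeof(v));
    return buf;
}

// Fire one datagram at the filter's UDP port (3053 in the example config).
bool sendPose(const char* host, uint16_t port,
              float yaw, float pitch, float roll) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return false;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, host, &addr.sin_addr);
    std::vector<uint8_t> payload = packPose(yaw, pitch, roll);
    ssize_t n = sendto(fd, payload.data(), payload.size(), 0,
                       (sockaddr*)&addr, sizeof(addr));
    close(fd);
    return n == (ssize_t)payload.size();
}
```

The filter would read one such datagram per batch and use the most recent pose.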
The core algorithm :
The APO processes audio in batches (for example, 441 frames).
When a batch starts, the filter gets the position data and recalculates the positions; for example, if the head is turned 30 degrees left, the left channel needs to be placed at what was originally the center, etc.
1 layer only supports yaw, and it's quite easy to calculate (just add/subtract/divide), but multilayer needs vector rotation to support pitch and roll (that's why the capture needs to be at the same distance: the capture points need to lie on a sphere).
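The vector-rotation step for the multilayer case might look like the sketch below. The axis conventions (x = right, y = up, z = front; yaw about y, pitch about x; degrees) are my assumptions for illustration, not necessarily what BRIRMultiLayerCopyFilter.cpp uses. The key property is that rotation preserves the vector's length, which is why the capture points must all sit on one sphere:

```cpp
// Sketch: rotate a virtual speaker direction by the inverse head orientation
// so the sound scene stays fixed in the world. Conventions are ASSUMED:
// x = right, y = up, z = front; angles in degrees.
#include <cmath>

const double kPi = 3.14159265358979323846;

struct Vec3 { double x, y, z; };

static Vec3 rotateYaw(Vec3 v, double deg) {    // rotation about the up axis
    double r = deg * kPi / 180.0, c = std::cos(r), s = std::sin(r);
    return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
}

static Vec3 rotatePitch(Vec3 v, double deg) {  // rotation about the right axis
    double r = deg * kPi / 180.0, c = std::cos(r), s = std::sin(r);
    return { v.x, c * v.y - s * v.z, s * v.y + c * v.z };
}

// A world-fixed speaker as seen from the rotated head: undo yaw, then pitch.
Vec3 headRelative(Vec3 speaker, double headYawDeg, double headPitchDeg) {
    return rotatePitch(rotateYaw(speaker, -headYawDeg), -headPitchDeg);
}
```

For example, a speaker straight ahead ends up on the listener's left after the head yaws 90 degrees to the right; the rotated direction is then matched against the capture-point sphere.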
Then it distributes the sound data to the capture points it needs, as sometimes the position won't fall exactly on one speaker but between 2 (or 4, if multilayer) speakers.
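For the single-layer (yaw-only) case, the "add/subtract/divide" index-and-weight step could be sketched like this, assuming 12 captures at 30-degree spacing with index 0 at the back and increasing clockwise as described above. The linear weighting between the two neighbours is my guess at a reasonable panning law, not necessarily what the filter does:

```cpp
// Sketch: spread a source direction across the two nearest capture points
// in the yaw-only case. ASSUMPTIONS: 12 captures at 30-degree spacing,
// linear crossfade between neighbours.
#include <cmath>

struct Spread {
    int a, b;        // the two neighbouring capture indices
    double wa, wb;   // their gains (sum to 1)
};

Spread spreadYaw(double angleDeg) {
    const int N = 12;
    const double step = 360.0 / N;                       // 30 degrees
    // Normalize the angle into [0, 360).
    double a = std::fmod(std::fmod(angleDeg, 360.0) + 360.0, 360.0);
    int lo = (int)(a / step) % N;
    int hi = (lo + 1) % N;
    double frac = (a - lo * step) / step;                // 0 => exactly on 'lo'
    return { lo, hi, 1.0 - frac, frac };
}
```

A direction of 45 degrees, for example, lands halfway between captures 1 and 2 and gets half the energy from each; the multilayer case does the same idea with up to 4 neighbours on the sphere.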
After that, it batch-convolves all capture points that have audio data, then adds the results into the output channels.
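The per-batch convolution idea can be sketched as below. This is a naive direct-form version that carries the tail of each batch into the next one; the real filter very likely uses FFT-based convolution for speed, so treat this only as an illustration of the bookkeeping:

```cpp
// Sketch: convolve one batch of a capture point's input with its BRIR,
// carrying the convolution tail between batches in 'overlap'
// (overlap.size() must be ir.size() - 1).
#include <cstddef>
#include <vector>

void convolveBatch(const std::vector<float>& in, const std::vector<float>& ir,
                   std::vector<float>& overlap, std::vector<float>& out) {
    std::vector<float> full(in.size() + ir.size() - 1, 0.0f);
    // Direct convolution of this batch.
    for (size_t i = 0; i < in.size(); ++i)
        for (size_t j = 0; j < ir.size(); ++j)
            full[i + j] += in[i] * ir[j];
    // Add the tail left over from the previous batch.
    for (size_t i = 0; i < overlap.size(); ++i)
        full[i] += overlap[i];
    // Emit one batch of output; keep the new tail for the next batch.
    out.assign(full.begin(), full.begin() + in.size());
    overlap.assign(full.begin() + in.size(), full.end());
}
```

Each active capture point runs this with its own left-ear and right-ear impulse responses, and the results are summed into the two output channels.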
The tricky part: when the audio changes position, the change needs to be smooth, or it will create pops when you rotate your head; this is audible when playing pure tones (for example, the Windows system sounds).
What I do is keep the channel allocation from the last batch; in the current batch, if a BRIR is no longer used, its volume is ramped down from the original volume to zero over the batch, and vice versa.
The core processing code is in the EqualizerAPO code (BRIRFilter.cpp and BRIRMultiLayerCopyFilter.cpp); if I haven't explained myself clearly, just have a look at the code.
At the moment the code is not very stable and has only passed a few tests. It works with the Index and I think it's not bad, but it needs good input data to check whether the audio algorithm has problems, and it's hard to get a good capture (it needs 60 captures, the positions need to be precise, the room needs to be big, you need to stand still during capture, the speaker should be small but with good sound quality, and you definitely need a person to help move the speaker).