
Multiple subwoofer optimizer (GPU accelerated)

This one only optimizes multiple subs against each other, calculating the optimal relative time delays and polarity inversions; in other words, it finds the best single combined sub that can be made from the given individual sub responses.
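For intuition, the brute-force idea can be sketched like this. This is a toy Python model, not the tool's actual code: the responses, the delay grid, and the flatness metric are all invented for illustration.

```python
# Toy sketch of brute-force sub alignment: try every relative delay on a grid,
# with both polarities, and keep the combination whose summed magnitude
# response has the least ripple. Responses are idealized point delays.
import numpy as np

freqs = np.linspace(20, 120, 101)            # Hz, sub passband
sub1 = np.exp(-2j * np.pi * freqs * 0.004)   # hypothetical sub, 4 ms arrival
sub2 = np.exp(-2j * np.pi * freqs * 0.007)   # hypothetical sub, 7 ms arrival

best = None
# 401 delay steps from -20 ms to +20 ms in 0.1 ms steps, like the tool's log
for delay_ms in np.arange(-20.0, 20.0 + 0.05, 0.1):
    shift = np.exp(-2j * np.pi * freqs * delay_ms * 1e-3)
    for polarity in (+1, -1):
        total = sub1 + polarity * sub2 * shift
        # ripple of the summed response in dB; smaller = flatter
        flatness = np.std(20 * np.log10(np.abs(total) + 1e-12))
        if best is None or flatness < best[0]:
            best = (flatness, delay_ms, polarity)

print("best delay: %.1f ms, polarity: %+d" % (best[1], best[2]))
# prints: best delay: -3.0 ms, polarity: +1
```

With these made-up responses the search correctly finds that sub2 must be pulled forward by 3 ms (its extra flight time) with no inversion, which collapses the two subs into one coherent source.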
I have 2 new co-located subs and am struggling to get meaningful, repeatable measurements that I am comfortable with for confirming the timing between subs and mains. If I measure the 2 subs and 2 mains full range and input them into your program, would I get valid results for the optimum timing between the subs and mains?
 
I have 2 new co-located subs and am struggling to get meaningful, repeatable measurements that I am comfortable with for confirming the timing between subs and mains. If I measure the 2 subs and 2 mains full range and input them into your program, would I get valid results for the optimum timing between the subs and mains?
You could if you use just one of the speaker responses. You would get the relative delays of the subs between each other and relative to that speaker. If you add the second speaker, it will shift the result relative to the other speaker, and you don't want that. Or maybe use the vector average of the two speakers as a single measurement.
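A vector average means averaging the complex responses (magnitude and phase together), not the dB magnitudes, so the result keeps a usable time reference for aligning the subs. A minimal sketch with invented arrival times:

```python
# Sketch of a "vector average" of two speaker measurements. Averaging the
# complex responses preserves phase, so interference between the two
# arrivals shows up; averaging dB magnitudes would hide it entirely.
# All values are invented for illustration.
import numpy as np

freqs = np.linspace(20, 200, 91)
left  = np.exp(-2j * np.pi * freqs * 0.0050)   # 5.0 ms arrival
right = np.exp(-2j * np.pi * freqs * 0.0052)   # 5.2 ms arrival

avg = (left + right) / 2                       # complex (vector) average

db_vector = 20 * np.log10(np.abs(avg))
# naive scalar average of dB magnitudes: flat 0 dB, no interference dip
db_scalar = (20 * np.log10(np.abs(left)) + 20 * np.log10(np.abs(right))) / 2
```

The vector average sits between the two arrivals in time and dips where their phases diverge, which is exactly the combined target the subs should be aligned against.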
 
You could if you use just one of the speaker responses. You would get the relative delays of the subs between each other and relative to that speaker. If you add the second speaker, it will shift the result relative to the other speaker, and you don't want that. Or maybe use the vector average of the two speakers as a single measurement.
I tried it with 1 sub and 1 main and got an error. See below.

[Screenshot attached: MSO.png]
 
I tried it with 1 sub and 1 main and got an error. See below.

[Screenshot: attachment 456232]
You must be using the first .exe file I uploaded, which was only compatible with RTX cards. I replaced it this morning with something more compatible, which should work with your card.
 
You must be using the first .exe file I uploaded, which was only compatible with RTX cards. I replaced it this morning with something more compatible, which should work with your card.
I downloaded it about 15 minutes ago and it said "uploaded 14 hours ago". I just tried downloading it again, with the same result. The .exe is 325 KB.
 
I downloaded it about 15 minutes ago and it said "uploaded 14 hours ago". I just tried downloading it again, with the same result. The .exe is 325 KB.
I see; then it's not compatible with the GTX 1660, I guess. You can try compiling the .cu code yourself in Visual Studio with the correct compute settings for your card.
 
FYI, I tried it on another computer with a lower-end card, an Nvidia GeForce GT 730, and it did not work on that either.
 
I see; then it's not compatible with the GTX 1660, I guess. You can try compiling the .cu code yourself in Visual Studio with the correct compute settings for your card.
I just spent some time downloading Visual Studio and the CUDA SDK, and as far as I can tell you are going to need an RTX-series card to get this to work, but I could be wrong. Your card, the RTX 3090, has compute capability 8.6. Do you know the minimum compute capability this will run on? Here is the list of cards by compute capability: https://developer.nvidia.com/cuda-gpus
 
I just spent some time downloading Visual Studio and the CUDA SDK, and as far as I can tell you are going to need an RTX-series card to get this to work, but I could be wrong. Your card, the RTX 3090, has compute capability 8.6. Do you know the minimum compute capability this will run on? Here is the list of cards by compute capability: https://developer.nvidia.com/cuda-gpus
I'm not sure, but I believe compute capability 5.2 (Maxwell) is the minimum the binary will run on.
 
Using 401 delay steps from -2e+01ms to 2e+01ms with 0.1ms step.
Using GPU: NVIDIA GeForce MX150
GPUassert: invalid device symbol C:\Users\Ronin\source\repos\Variable number of Subs\kernel.cu 340
Press Enter to exit...
 
Using 401 delay steps from -2e+01ms to 2e+01ms with 0.1ms step.
Using GPU: NVIDIA GeForce MX150
GPUassert: invalid device symbol C:\Users\Ronin\source\repos\Variable number of Subs\kernel.cu 340
Press Enter to exit...
That's a Pascal GPU and should be able to run the code. Maybe install CUDA first and try compiling the .cu code on your machine.
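For anyone attempting that compile: nvcc embeds code for specific compute capabilities via `-gencode` flags, and a binary built only for sm_86 (RTX 3090) typically fails with exactly this "invalid device symbol" error on older cards. A build-line sketch covering the GPUs mentioned in this thread; the output name is made up, and `kernel.cu` is taken from the error message (Unix shell continuations shown; use `^` instead of `\` in cmd.exe):

```shell
# Hypothetical fat-binary build. Each -gencode adds code for one generation:
#   sm_52 Maxwell (the stated minimum), sm_61 Pascal (MX150),
#   sm_75 Turing (GTX 1660), sm_86 Ampere (RTX 3090).
# The GT 730 is Kepler (sm_35) at best and may not be supported by recent toolkits.
nvcc kernel.cu -o subopt \
  -gencode arch=compute_52,code=sm_52 \
  -gencode arch=compute_61,code=sm_61 \
  -gencode arch=compute_75,code=sm_75 \
  -gencode arch=compute_86,code=sm_86 \
  -gencode arch=compute_52,code=compute_52   # PTX fallback, JIT-compiles on future GPUs
```

Building for a single `-arch` is the usual reason a CUDA executable runs on the author's card but asserts on everyone else's.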
 
It's just delay and inversion; how come it still takes several seconds on a GPU? No offense meant, but MSO does way more in a similar time.
 
It's just delay and inversion; how come it still takes several seconds on a GPU? No offense meant, but MSO does way more in a similar time.
None taken.

 
It's just delay and inversion; how come it still takes several seconds on a GPU? No offense meant, but MSO does way more in a similar time.

To be fair, MSO doesn't evaluate responses with polarity inversions that can be changed during optimization. That's due to convergence issues caused by the sudden radical change introduced by flipping an inverter during the process.

Brute-force optimization isn't affected by this.
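The point about inverters can be illustrated: polarity is a discrete ±1 variable, so a continuous optimizer has nothing to descend through when it flips, whereas brute force simply enumerates every polarity pattern alongside the delay grid. A toy count (first sub's polarity held fixed as the reference):

```python
# With N subs there are 2**(N-1) polarity patterns to test (the first sub's
# polarity is fixed as the reference). Brute force walks all of them, so the
# discontinuity of a flip never has to be crossed smoothly.
from itertools import product

n_subs = 3
patterns = list(product((+1, -1), repeat=n_subs - 1))
print(len(patterns))   # prints: 4
```

The combinatorial factor stays small for realistic sub counts (8 patterns for 4 subs), which is why exhaustive search over delays and polarities together remains tractable on a GPU.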
 
So if you have two subs and a set of mains, would it make sense to make two measurement sets (both subs at the same polarity, and one sub with its polarity flipped, assuming identical subs for simplicity), have both analyzed, and hand both over to MSO to get the best of both worlds?
 
To be fair, MSO doesn't evaluate responses with polarity inversions that can be changed during optimization. That's due to convergence issues caused by the sudden radical change introduced by flipping an inverter during the process.

Brute-force optimization isn't affected by this.
Do you think it's possible, and would it make sense, to combine these two tools? Get the delays and inversions from this multi-sub optimizer and let MSO do the rest?
 