
AI/neural network in MSO solver?

sveinse

@andyc56 thank you for making MSO, it's fantastic and helped me solve (pun intended) the sub setup in my home cinema.

I'm floating an idea:

Have you ever considered experimenting with neural networks and reinforcement-learning algorithms in MSO? A property of AI training is that it tries to find the best solution in a high-dimensional space, i.e. one with very many adjustable parameters. A common problem is local minima, which prevent it from finding the absolute best solution.


It struck me that MSO must surely be the same kind of problem: one where you try to find the best solution in a very high-dimensional space. I read this post https://www.audiosciencereview.com/forum/index.php?threads/a-better-vba.63228/post-2318817 and the differential evolution reminds me of how differential calculus is used to train neural networks.

Perhaps this is just wishful thinking, but it would be cool to investigate what neural networks and reinforcement learning could do to improve MSO. I don't mean this in an "oh look how cool AI is, let's use it everywhere" way; I mean using AI in a quantitative way to improve MSO. FTR: I'm no AI engineer, so this isn't really my home turf.
 
@andyc56 thank you for making MSO, it's fantastic and helped me solve (pun intended) the sub setup in my home cinema.

Thanks for the comment. I'm glad you found it useful.

I'm floating an idea:

Have you ever considered experimenting with neural networks and reinforcement-learning algorithms in MSO? A property of AI training is that it tries to find the best solution in a high-dimensional space, i.e. one with very many adjustable parameters. A common problem is local minima, which prevent it from finding the absolute best solution.

I'm familiar with the local minimum problem, which I first encountered when creating SPICE models for BJTs in 2006. It's a long story, but I ended up using a Differential Evolution (DE) algorithm to overcome that back then. I'm not familiar with neural networks or AI though. I'm an old guy (69), and I used to work at a circuit and EM simulator company.

The original MSO version used a DE variant called JADE, which self-adapts the two optimization parameters of DE, one of which is called the "probability of crossover" and the other the "mutation scale factor". I later rewrote it to be a variant of EPSDE, which uses an ensemble of 3 algorithms to calculate the guess vectors. I expanded that to 5, one of which is JADE. The whole story is long and involved. If you want to learn more about DE, the book by its originators is a good source. My review of the book from when I was first starting out with MSO is on that page. In a strange coincidence, one of the originators, Rainer Storn, originally used it to design special-purpose IIR filters.

If you look at a fully stuffed miniDSP 2x4 HD, it has a grand total of 82 biquad slots available. I'm typically using one PEQ (having 3 parameters) for each biquad. That's a total of 82*3 or 246 parameters. So the solution space is 246-dimensional in that case. For the next version, I'll be adopting a new filter type that's a modified Linkwitz transform, having the full 4 parameters for each biquad slot, so that will be a 328-dimensional space.
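The dimension counts above work out as follows (illustrative arithmetic only; the variable names are made up, not MSO's):

```python
# Counting optimizer dimensions for a fully stuffed miniDSP 2x4 HD.
biquad_slots = 82        # total biquad slots available
peq_params = 3           # parameters per PEQ biquad
linkwitz_params = 4      # parameters per modified-Linkwitz-transform biquad

print(biquad_slots * peq_params)       # 246-dimensional solution space
print(biquad_slots * linkwitz_params)  # 328-dimensional for the next version
```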

I'll have a look at that video. I won't be rewriting the optimizer again though. For MSO v3, which I've planned for the end of 2026, I'll just be implementing more user-level optimization methods, since for example the subs+mains configurations don't support SPL maximization at present.

It struck me that MSO must surely be the same kind of problem: one where you try to find the best solution in a very high-dimensional space. I read this post https://www.audiosciencereview.com/forum/index.php?threads/a-better-vba.63228/post-2318817 and the differential evolution reminds me of how differential calculus is used to train neural networks.

The "differential" in "Differential Evolution" doesn't actually refer to differentiation in the calculus sense, but to the way vector differences (really linear combinations) are used to generate the succeeding population of new guess vectors from the existing population (of which there are 100).
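As a concrete illustration of that vector-difference step, here is a minimal Python sketch of one generation of the classic DE/rand/1/bin scheme. This is not MSO's actual implementation (which uses self-adaptive JADE/EPSDE variants), and the toy objective is an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    """Toy stand-in for MSO's error function (sphere: minimum at 0)."""
    return float(np.sum(x**2))

def de_rand_1_bin(pop, fitness, F=0.5, CR=0.9):
    """One generation of classic DE/rand/1/bin on a population `pop` (NP x D).

    Lower fitness is better. Note the "differential" step: a scaled
    vector difference between population members, not a derivative.
    """
    NP, D = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(NP):
        # Pick three distinct members other than i.
        a, b, c = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])   # differential mutation
        cross = rng.random(D) < CR                # binomial crossover mask
        cross[rng.integers(D)] = True             # keep at least 1 mutant gene
        trial = np.where(cross, mutant, pop[i])
        f = objective(trial)
        if f <= fitness[i]:                       # greedy selection
            new_pop[i], new_fit[i] = trial, f
    return new_pop, new_fit
```

Because selection is greedy, no member ever gets worse from one generation to the next, which is why a population-based DE run shows steady, monotone improvement of the best guess.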

Perhaps this is just wishful thinking, but it would be cool to investigate what neural networks and reinforcement learning could do to improve MSO. I don't mean this in an "oh look how cool AI is, let's use it everywhere" way; I mean using AI in a quantitative way to improve MSO. FTR: I'm no AI engineer, so this isn't really my home turf.

I'm no AI engineer either. I get the impression you're of a younger generation than I am. When you get to be my age, you have to pick and choose what you spend time on. Time spent working on MSO is time I can't spend on intensive DSP study, so this sort of thing is way down on my list. At any rate, the optimizer proper is a small portion of the total MSO source code and effort. The optimizer itself is a few thousand lines, while MSO in total is about 75,000 lines of just my own code, plus more third-party code. I can only do it because I'm retired.
 
Thank you for the answer! I really appreciate it!
I'm familiar with the local minimum problem, which I first encountered when creating SPICE models for BJTs in 2006. It's a long story, but I ended up using a Differential Evolution (DE) algorithm to overcome that back then. I'm not familiar with neural networks or AI though. I'm an old guy (69), and I used to work at a circuit and EM simulator company.
Back in the day, FETs were significantly trickier to simulate. They often converged on impossible physics: "No, this port can't handle 1 TA."
The original MSO version used a DE variant called JADE, which self-adapts the two optimization parameters of DE, one of which is called the "probability of crossover" and the other the "mutation scale factor". I later rewrote it to be a variant of EPSDE, which uses an ensemble of 3 algorithms to calculate the guess vectors. I expanded that to 5, one of which is JADE. The whole story is long and involved. If you want to learn more about DE, the book by its originators is a good source. My review of the book from when I was first starting out with MSO is on that page. In a strange coincidence, one of the originators, Rainer Storn, originally used it to design special-purpose IIR filters.

If you look at a fully stuffed miniDSP 2x4 HD, it has a grand total of 82 biquad slots available. I'm typically using one PEQ (having 3 parameters) for each biquad. That's a total of 82*3 or 246 parameters. So the solution space is 246-dimensional in that case. For the next version, I'll be adopting a new filter type that's a modified Linkwitz transform, having the full 4 parameters for each biquad slot, so that will be a 328-dimensional space.
That's really cool, and I know firsthand what it takes to make applications like this. Very impressive. Thank you.

Just from watching the graphs while it solves, I get the impression that MSO is always probing to check that it isn't stuck in a local minimum. First it makes very coarse adjustments that yield large gains, then it progresses with less and less improvement. That's a classic diminishing-returns pattern and resembles a local minimum. But then it suddenly jumps, doing something drastic, and I can only assume it's tuning a completely different parameter to check whether even better solutions can be found there.

It took me a while to understand how to set the constraints right. It takes some effort and a few simulation rounds before a good solution can be found. I think I'm running version 42 right now. It's just a numerical solver, so sometimes it converges on some very drastic adjustments, e.g. refusing to use one of the subs by pulling everything down. Things like that. I've found I need to steer MSO a little with harder constraints to rein it in. When I do, it shines.
I'll have a look at that video. I won't be rewriting the optimizer again though. For MSO v3, which I've planned for the end of 2026, I'll just be implementing more user-level optimization methods, since for example the subs+mains configurations don't support SPL maximization at present.
Looking forward to it. I have some more learning to do on how to get subs+mains properly in place. I currently have DSP filtering only on the subs. I'm not overly fond of the miniDSP's audio quality on the mains, so I don't use it for them. It works okay, but the octave above the crossover frequency is a little messy, so I need to address that.
I'm no AI engineer either. I get the impression you're of a younger generation than I am. When you get to be my age, you have to pick and choose what you spend time on. Time spent working on MSO is time I can't spend on intensive DSP study, so this sort of thing is way down on my list. At any rate, the optimizer proper is a small portion of the total MSO source code and effort. The optimizer itself is a few thousand lines, while MSO in total is about 75,000 lines of just my own code, plus more third-party code. I can only do it because I'm retired.
That's completely fine. MSO is good. I was just trying to suggest possible improvements if the opportunity was there.

I think I'm older than you think I am. I'm 19 years behind you; perhaps that counts as a generation? I work as a technology manager, so I need to keep up with what the new generation is working on and loving, and these days that includes AI. So I spend a lot of time trying to keep up. I've also been a sound engineer (live shows) for some 30+ years, though less of it since Covid.
 
It took me a while to understand how to set the constraints right. It takes some effort and a few simulation rounds before a good solution can be found. I think I'm running version 42 right now. It's just a numerical solver, so sometimes it converges on some very drastic adjustments, e.g. refusing to use one of the subs by pulling everything down. Things like that. I've found I need to steer MSO a little with harder constraints to rein it in. When I do, it shines.

I don't know what version 42 is. If you're referring to 1.0.42, that version is really old. It came out over 5 years ago. Version 2, introduced in May of 2024, was specifically designed to address the very problem you just described, among others. The topic SPL Maximization: The New Feature of MSO v2 discusses this very problem.

Just from watching the graphs while it solves, I get the impression that MSO is always probing to check that it isn't stuck in a local minimum. First it makes very coarse adjustments that yield large gains, then it progresses with less and less improvement. That's a classic diminishing-returns pattern and resembles a local minimum. But then it suddenly jumps, doing something drastic, and I can only assume it's tuning a completely different parameter to check whether even better solutions can be found there.

It's much dumber than that, doing the same thing on every pass. There's no special checking of whether it's stuck on a local minimum or not. What you're observing is a property of population-based global optimizers in general. There is not one current guess, but 100 of them, and it's improving each one as it goes. What's displayed is the guess that currently has the lowest error. Sometimes another guess, that wasn't previously the best one, is improved to the point where it becomes the new best one. Then that one becomes what's used and displayed on the graph. This new best guess might be significantly different than the previous best one in its parameter values. That's when you see the "snapping" effect.
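A tiny Python sketch of the display logic just described (illustrative only, not MSO's code; the error function and population shape are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# The optimizer keeps many guesses at once; only the current best is shown.
def best_member(errors):
    """Index of the guess with the lowest error (the one displayed)."""
    return int(np.argmin(errors))

pop = rng.standard_normal((100, 4))                    # 100 guess vectors
errors = np.array([float(np.sum(x**2)) for x in pop])  # toy error function
leader = best_member(errors)

# Gradually improving a *different* member until it overtakes the leader...
overtaker = (leader + 1) % 100
errors[overtaker] = errors[leader] * 0.5

# ...switches the displayed guess wholesale, even though no single guess
# changed abruptly: that's the "snapping" effect seen on the graph.
new_leader = best_member(errors)
```

The key point: the jump is an artifact of which member happens to be the leader, not of any probing or restart logic in the optimizer itself.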
 
I don't know what version 42 is. If you're referring to 1.0.42, that version is really old. It came out over 5 years ago. Version 2, introduced in May of 2024, was specifically designed to address the very problem you just described, among others. The topic SPL Maximization: The New Feature of MSO v2 discusses this very problem.
Sorry for not being clear. I meant the 42nd simulation run; I call them versions. I try out various adjustments to see how they turn out, and I do listening tests on some of them. I am running (MSO) version 2.2.0 and have enjoyed the new features.
 
I am running (MSO) version 2.2.0 and have enjoyed the new features.

Okay, but it looks like you may not be using the new feature that was specifically designed to fix the problem you described. By "the problem you described", I mean this...

...sometimes it converges on some very drastic adjustments, e.g. refusing to use one of the subs by pulling everything down

If you use the multi-stage optimization, when it is in the stage of minimizing seat-to-seat variation, it also applies the SPL penalty constraint. This constraint penalizes unequal per-sub transfer function magnitudes, so it prevents this problem. You can dial in the compromise you want between seat-to-seat variation and SPL penalty. Allowing a higher SPL penalty will give you better seat-to-seat variation at the expense of allowing sub drives to be more unequal. Allowing a lower SPL penalty gives you better sub level matching at the expense of seat-to-seat variation.

The problem here is not the optimizer itself, but the objective function and associated (global) constraints. The multi-stage optimization alters the objective function and constraints to fix that problem.
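As a toy illustration of that kind of objective-plus-penalty trade-off (this is not MSO's actual objective function; the error measures and the weighting scheme here are assumptions for demonstration):

```python
import numpy as np

def combined_error(seat_responses_db, sub_levels_db, spl_penalty_weight):
    """Toy objective: seat-to-seat variation plus a weighted SPL penalty.

    seat_responses_db: (seats x frequency bins) magnitude responses in dB.
    sub_levels_db: per-sub drive levels in dB.
    A larger spl_penalty_weight pushes the optimizer toward equal sub
    drives at the expense of seat-to-seat variation; a smaller one does
    the reverse.
    """
    # Spread of the magnitude responses across seats, averaged over bins.
    variation = float(np.std(seat_responses_db, axis=0).mean())
    # Unequal per-sub drive levels cost more as the weight increases.
    penalty = float(np.std(sub_levels_db))
    return variation + spl_penalty_weight * penalty
```

With this shape of objective, a solution that "turns off" one sub by pulling its level way down picks up a large penalty term, so the optimizer no longer prefers it even if it slightly reduces seat-to-seat variation.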
 
I should add one more thing. Fixing the problem you were having by using the multi-stage optimization just described requires the presence of at least 4 shared PEQs. But your DDRC-88BM has no common input channel for the shared PEQs as does, say, the 2x4 HD. I was just about to describe in detail how to deal with this for the DDRC-88BM, but I noticed in a previous conversation we had that I already did so. Please refer to that post for how to set it up.
 
I should add one more thing. Fixing the problem you were having by using the multi-stage optimization just described requires the presence of at least 4 shared PEQs. But your DDRC-88BM has no common input channel for the shared PEQs as does, say, the 2x4 HD. I was just about to describe in detail how to deal with this for the DDRC-88BM, but I noticed in a previous conversation we had that I already did so. Please refer to that post for how to set it up.
I'm not using the DDRC-88BM any longer. It gave out, so I'm running my older 8x10 instead, which works fine for the sub array. It has many input PEQs to use. I can tell more about this, but for the sake of keeping this thread on point, I think I'll continue that in that other thread.
 