
AI is already designing microchips that we can't understand

Winemaker

I take inspiration from this article (in Italian):
to imagine a utopia of amplifiers designed by AI that perfectly reproduce the sound event, or systems like Dirac or similar, but with AI built in.
In a context like the one above, will it still make sense to talk about HiFi the way we talk about it now, or will everything be delegated to the various AIs: more and more DACs, amps with digital sections or even entire designs produced by AI, speakers developed by AI, and so on?
I am interested in your opinion on the matter: are we at the dawn of a new era, in the HiFi field too, thanks to, or perhaps because of, AI?
 
I think Dirac is slowly but surely heading in that direction - AFAIK some machine learning is already there with DLBC and ART
I am really eager to see how this is going to evolve
 
One could imagine typing in text how you want the sound transformed and the EQ will do it. "I like to have more bass and for the sound to be less bright." "The bass sounds boomy to me and vocals don't sound right."
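Something like this toy sketch shows the idea (Python; the phrase table, band centers, gains, and Q values are purely my own assumptions, standing in for what a real language model plus tuning data would decide):

```python
# Toy text-to-EQ mapping. The phrases and band choices below are
# illustrative assumptions, not a real product's tuning.
import numpy as np
from scipy.signal import lfilter

FS = 48000  # assumed sample rate

# Hypothetical intent table: phrase fragment -> (center_freq_hz, gain_db, Q)
INTENTS = {
    "more bass":   (80.0,   +4.0, 0.7),
    "less bright": (8000.0, -3.0, 0.7),
    "boomy":       (125.0,  -4.0, 1.4),   # tame the usual boom region
    "vocals":      (2500.0, +2.0, 1.0),   # lift presence a little
}

def peaking_biquad(f0, gain_db, q, fs=FS):
    """RBJ Audio-EQ-Cookbook peaking filter coefficients (b, a)."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

def apply_request(text, audio):
    """Crude keyword matching, standing in for a real language model."""
    for phrase, (f0, gain, q) in INTENTS.items():
        if phrase in text.lower():
            b, a = peaking_biquad(f0, gain, q)
            audio = lfilter(b, a, audio)
    return audio

# "I like to have more bass and for the sound to be less bright."
shaped = apply_request("more bass, and less bright please", np.random.randn(FS))
```

The hard part, of course, is the mapping itself: "boomy" means different frequencies in different rooms, which is where the learning would actually earn its keep.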
 
So in the very near future, will it still make sense to talk about speaker positioning, better or worse cables, amazing hardware... if AI can step in to correct everything and solve all the problems that have plagued enthusiasts of this hobby until now?
I take no responsibility for the translation from Italian to English; it's all Google Translate's fault :facepalm: :).
 
I take inspiration from this article (in Italian):
to imagine a utopia of amplifiers designed by AI that perfectly reproduce the sound event, or systems like Dirac or similar, but with AI built in.
In a context like the one above, will it still make sense to talk about HiFi the way we talk about it now, or will everything be delegated to the various AIs: more and more DACs, amps with digital sections or even entire designs produced by AI, speakers developed by AI, and so on?
I am interested in your opinion on the matter: are we at the dawn of a new era, in the HiFi field too, thanks to, or perhaps because of, AI?
We already have amps and DACs that can reproduce the audio from FLAC files beyond the limits of human hearing, so what problems would the AI be solving? Would there be a blind-testable difference? Speakers/headphones, maybe. At the very least we could use AI to finally understand what subjectivists mean when they talk about "clinical" or "musical" output... Well, perhaps once the "many of the designs the algorithm produced didn't work" problem is solved. It's solvable, right?
 
So in the very near future, will it still make sense to talk about speaker positioning, better or worse cables, amazing hardware... if AI can step in to correct everything and solve all the problems that have plagued enthusiasts of this hobby until now?
Unless you hook up AI to a robot which can position your speakers for you, no, AI isn't going to be able to correct very bad speaker placement in software.
 
It's not a huge leap from something like the REW room simulator to giving your AI the room dimensions and a few photos, letting it take room measurements, and having it suggest changes to speaker positioning as well as DSP settings... sounds interesting.
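For a taste of what such a tool would compute under the hood, here's a toy sketch of the standard rigid-wall room-mode formula that simulators like REW's start from (assuming an ideal rectangular room; real rooms deviate, which is exactly why you'd still measure):

```python
# Axial/tangential/oblique modes of an ideal rectangular room:
# f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)
import itertools
import math

C = 343.0  # speed of sound in m/s at ~20 degrees C

def room_modes(lx, ly, lz, max_order=3, f_max=300.0):
    """List (frequency, (nx, ny, nz)) for all modes up to f_max."""
    modes = []
    for nx, ny, nz in itertools.product(range(max_order + 1), repeat=3):
        if nx == ny == nz == 0:
            continue
        f = (C / 2) * math.sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        if f <= f_max:
            modes.append((f, (nx, ny, nz)))
    return sorted(modes)

# Example 5 m x 4 m x 2.7 m room: clustered modes hint at problem frequencies.
for f, mode in room_modes(5.0, 4.0, 2.7)[:10]:
    print(f"{f:6.1f} Hz  mode {mode}")
```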
 
Unless you hook up AI to a robot which can position your speakers for you, no, AI isn't going to be able to correct very bad speaker placement in software.
I believe we won't be able to intervene much, because we are not electronic and/or computational entities; AI is. This is exactly what confuses me.
 
It's not a huge leap from something like the REW room simulator to giving your AI the room dimensions and a few photos, letting it take room measurements, and having it suggest changes to speaker positioning as well as DSP settings... sounds interesting.
My response was actually meant as a joke, but I now see that might not have come across clearly :)

That being said: while I do think you have a point, the main problem still seems pretty hard to solve. If it can be solved at all, it's going to have to be with modeling, because the calculations involve far too many parameters to ever enter manually: real rooms are non-ideal, sometimes non-rectangular, and have furniture, windows, and walls with different and uneven absorption coefficients, and so on. In other words, to replace the manual 'move speaker one 5 cm left, 4 cm back, and toe in somewhat more - bam! instant better sound' routine, measurements are still going to be needed.

But it's not impossible that modeling could give better results, and faster, by iterating on measurements like this: record a couple of mic positions for the initial speaker placement, move the speakers a bit according to the AI's instructions, record from the same mic positions again, and after a couple of iterations a model might be able to tell you what works best. Repeat after introducing some acoustic panels, for instance.
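A rough sketch of that measure-move-measure loop (everything here is a stand-in: measure_response() fakes a real sweep-and-capture rig, SWEET_SPOT is a made-up optimum, and plain hill-climbing replaces whatever model would actually propose the moves):

```python
# Simulated so the sketch runs standalone; swap measure_response() for an
# actual sweep + mic capture in practice.
import numpy as np

rng = np.random.default_rng(0)
SWEET_SPOT = np.array([0.8, 0.6])  # hypothetical best (x, y) position in metres

def measure_response(position):
    """Fake magnitude response whose ripple grows with distance from the
    (unknown to the optimizer) sweet spot."""
    ripple = np.linalg.norm(position - SWEET_SPOT)
    freqs = np.linspace(20, 300, 200)
    return ripple * np.sin(freqs / 10) + 0.05 * rng.standard_normal(200)

def flatness_score(mag_db):
    """Lower is better: deviation from a flat response."""
    return float(np.std(mag_db))

def optimize_placement(start, step_m=0.05, max_iters=25):
    """Greedy hill-climbing over small x/y moves, keeping whichever position
    measures flattest; a learned model could propose smarter moves instead."""
    pos = np.asarray(start, dtype=float)
    best = flatness_score(measure_response(pos))
    moves = [(step_m, 0), (-step_m, 0), (0, step_m), (0, -step_m)]
    for _ in range(max_iters):
        trials = [(flatness_score(measure_response(pos + m)), pos + m) for m in moves]
        score, candidate = min(trials, key=lambda t: t[0])
        if score >= best:
            break  # no nearby move improves the response: stop
        pos, best = candidate, score
    return pos, best

print(optimize_placement([0.3, 0.3]))  # walks toward the sweet spot
```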
 
I believe that over time AI will be able to model sound waves and mimic our psychoacoustics. Dolby systems already do this to some extent; how much more could a good system implemented with AI do?
 
I'm still waiting for AI to learn how human eye and ear nerve signals work so we can one day replace them.
 
One could imagine typing in text how you want the sound transformed and the EQ will do it. "I like to have more bass and for the sound to be less bright." "The bass sounds boomy to me and vocals don't sound right."
And a special mode for typical audiophiles where no change happens and it's all just placebo, like the mixing engineers who kept an unconnected fader on their consoles for annoying musicians or clients. :D
 
One could imagine typing in text how you want the sound transformed and the EQ will do it. "I like to have more bass and for the sound to be less bright." "The bass sounds boomy to me and vocals don't sound right."
How will it respond when it inevitably gets asked to make things sound "more musical," or to have a greater sense of PRaT? :)

In all seriousness, this does seem like one of the legitimately positive uses for AI. I hate to see it taking over human creativity in some areas and causing people to be too lazy to do basic things like write emails or term papers. There is also still a lot of garbage in/garbage out. But man, can it be a good tool for saving time in cases like this.
 
In a two-speaker system there is only so much you can do to correct the sound, but I wonder if something like a real-time BACCH system (maybe based on AI) could work.
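For context, the core of BACCH-style processing is crosstalk cancellation: invert the 2x2 matrix of speaker-to-ear transfer functions so each ear receives only its intended binaural channel. A minimal sketch of the generic technique (the regularization constant is my assumption; this is not BACCH's actual, proprietary algorithm):

```python
# Generic crosstalk cancellation via a Tikhonov-regularized matrix inverse,
# C = H^H (H H^H + beta*I)^-1, computed per frequency bin.
import numpy as np

def xtc_filters(H, beta=0.005):
    """H: (n_bins, 2, 2) complex speaker-to-ear transfer matrices.
    Returns (n_bins, 2, 2) cancellation filters; beta keeps the inverse
    from blowing up at ill-conditioned bins (an assumed, generic choice)."""
    I = np.eye(2)
    C = np.empty_like(H)
    for k in range(H.shape[0]):
        Hk = H[k]
        C[k] = Hk.conj().T @ np.linalg.inv(Hk @ Hk.conj().T + beta * I)
    return C

# Toy symmetric head model at a single bin: direct gain 1, crosstalk 0.5.
H = np.array([[[1.0, 0.5], [0.5, 1.0]]], dtype=complex)
C = xtc_filters(H)
print(np.round(H[0] @ C[0], 3))  # ~identity: each ear gets only its channel
```

Doing this in real time, with the transfer matrix updated as the listener moves, is where some AI assistance might plausibly help.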
 
One could imagine typing in text how you want the sound transformed and the EQ will do it. "I like to have more bass and for the sound to be less bright." "The bass sounds boomy to me and vocals don't sound right."
Plus a series of A/B tests for continual improvement... until the ROI diminishes (statistically).
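The stopping rule could be as simple as this sketch (a hypothetical helper; assumes forced-choice trials where the listener picks the tuning they prefer, and stops once preference is statistically indistinguishable from a coin flip):

```python
# Stop iterating once the listener's preference for the newer tuning is no
# longer statistically distinguishable from guessing.
from scipy.stats import binomtest

def keep_iterating(prefers_new, trials, alpha=0.05):
    """True while the newer tuning is still reliably preferred."""
    result = binomtest(prefers_new, trials, p=0.5)
    return result.pvalue < alpha and prefers_new > trials / 2

print(keep_iterating(14, 16))  # True: clear preference, keep refining
print(keep_iterating(9, 16))   # False: ROI has diminished, stop
```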
 
In the design process, any time there is something analog, we can't completely and totally simulate the solution. The logic gates, yes. But there is often trial and error involved in the analog part. And if *we* don't have a model, then the AI doesn't either. I don't dispute that chips designed with AI going forward may have an advantage in learning from failure faster (which is, to a large degree, the underlying principle).

Just as an example: while we can pack a lot more logic into 3 nm chips, the problem is in the high-speed interconnects, SerDes tech, etc. I won't go into details. But even with all the advanced, million-buck design and simulation tools we have, it's rare for those chips to come back from the fab without issues on the first try. It's quite typical to need a (very expensive) re-spin. If you don't budget for that at the front end, your project is in dire danger... :)

As for fully automating the process of room correction etc.... sure, I don't see why not. That's a pretty predictable closed-loop optimization process. I don't even think a lot of AI power is needed for that.
 
I take inspiration from this article (in Italian):
to imagine a utopia of amplifiers designed by AI that perfectly reproduce the sound event, or systems like Dirac or similar, but with AI built in.
In a context like the one above, will it still make sense to talk about HiFi the way we talk about it now, or will everything be delegated to the various AIs: more and more DACs, amps with digital sections or even entire designs produced by AI, speakers developed by AI, and so on?
I am interested in your opinion on the matter: are we at the dawn of a new era, in the HiFi field too, thanks to, or perhaps because of, AI?
I think that audio amplifiers are pretty much a solved problem, and if AI came up with one that surpassed the current state of the art, the improvement wouldn't be audible. I see no point in chasing after that beyond the learning experience that might arise from such an endeavor.
 
A multi-speaker setup that can see how many people are in the room and where they are located, and adjust the sound to be as good as possible for all of them.
 