
Audio Stem Splitters and AI - What's the current status?

Confused_by_tech

Member
Forum Donor
Joined
May 5, 2019
Messages
31
Likes
25
I've started to read about audio stem splitters. Basically, they seem to separate the various components (e.g. bass, drums, vocals, etc.) of a music file/song into discrete files ("stems") via AI. I have no real experience with how well this AI transform works, or the degree to which it can separate out various instruments (e.g. guitar, violin, harmonica, etc.). Has anyone had any experience, and can they share it briefly? Thx so much!
 
Stem splitters basically filter specific frequencies to try to isolate instruments in their known frequency ranges. The AI part basically decides the most likely frequency range for each instrument in a specific song.
I really have no idea how much better these will get. They are more or less already good enough for extracting samples for use in music production.
Mileage varies per song, as sometimes every resultant stem is polluted with lots of artifacts.

In the end, stem splitters are a bunch of fancy (and very clever) tricks, because a master track is a lot of waves summed into a completely new wave that can't be deconstructed. As far as I know, it is literally like trying to reverse entropy.
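To make the "filter specific frequencies" idea concrete, here is a minimal numpy sketch of the naive version: keep only the FFT bins inside one band and throw the rest away. The function name and test tones are my own for illustration; real AI splitters do something far more sophisticated, and this toy only works because the two tones don't overlap in frequency.

```python
import numpy as np

def isolate_band(signal, sample_rate, low_hz, high_hz):
    """Crude 'stem' extraction: keep only FFT bins inside one frequency band.

    This is the naive filtering idea, not what modern AI splitters do.
    Real instruments overlap heavily in frequency, which is why pure
    band-filtering leaves the artifacts described above.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs <= high_hz)  # zero out-of-band bins
    return np.fft.irfft(spectrum * mask, n=len(signal))

# Example: a 100 Hz "bass" tone mixed with a 2000 Hz "lead" tone.
sr = 8000
t = np.arange(sr) / sr
mix = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 2000 * t)
bass = isolate_band(mix, sr, 20, 300)   # recovers only the 100 Hz tone
```

On a clean two-tone mix like this the separation is essentially perfect; on a real master, where instruments share bins, a hard mask like this smears energy between stems.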
 
Stem splitters basically filter specific frequencies to try to isolate instruments in their known frequency range. The AI part basically decides the most likely frequency range for each specific song.
I was hoping that AI could pick the timbre out of a mix for specific instruments. The timbre would be a group of frequencies related in some way specific to the instrument.
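That "group of related frequencies" is roughly the harmonic series: a pitched note puts energy at its fundamental f0 and at 2·f0, 3·f0, and so on. A minimal numpy sketch of that idea, with a hypothetical function name and made-up test tones, keeps only the bins near those multiples. (Real separators learn this structure from data rather than hard-coding it.)

```python
import numpy as np

def harmonic_mask(signal, sample_rate, f0, n_harmonics=10, width_hz=5.0):
    """Keep only FFT bins near integer multiples of a fundamental f0.

    Toy illustration of 'timbre as a related group of frequencies':
    a pitched note concentrates its energy at f0, 2*f0, 3*f0, ...
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = np.zeros_like(freqs, dtype=bool)
    for k in range(1, n_harmonics + 1):
        mask |= np.abs(freqs - k * f0) <= width_hz  # keep bins near each harmonic
    return np.fft.irfft(spectrum * mask, n=len(signal))

# A 110 Hz note with one overtone, mixed with an unrelated 500 Hz tone.
sr = 8000
t = np.arange(sr) / sr
note = np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 220 * t)
mix = note + np.sin(2 * np.pi * 500 * t)
recovered = harmonic_mask(mix, sr, f0=110)  # drops the 500 Hz tone
```

The catch, and the reason this stays a toy, is that two instruments playing related pitches share harmonics, so a fixed comb like this can't tell them apart.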
 
I've tried them out and they were working pretty well even a year or two ago. It works like @DJNX said, but I think they've got pretty clever about it. My guess is the AI has some heuristics about what kinds of harmonic information typically goes together and does some reconstruction on top of picking one instrument out from another, but I'm speculating.

FL Studio just shipped a stem splitter in their latest version, and they have a free demo. So you can try it out yourself in the next 30 minutes if you want. :)
 
that can't be deconstructed. As far as I know, it is literally like trying to reverse entropy.
Clearly it can - your brain does it routinely every time you listen to music, easily allowing you to recognise and focus on each instrument or voice as you like.

It is quite a remarkable ability. We haven't yet (to my knowledge) created a machine or software that can replicate this.
 