For video, TVs have had this for a while now; Nvidia and AMD offer it on their GPUs, and even a streaming stick (the Nvidia Shield) can do real-time AI upscaling with pretty impressive results.
So why doesn't anyone offer this for simple music files yet? I believe it would be a godsend for bandwidth issues and could produce exceptional results for files that don't even have lossless, never mind high-res, originals.
And I can't even think of why this would be hard to make. Just pick a few tens of thousands of high-res music files, compress them with lossy formats, and feed the training data and expected results to a neural network, and voilà: an AI model with a 99.999% success rate at restoring lossy music to its original state. I think you could even train this in a few days on a consumer GPU.
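The data-preparation step described above can be sketched roughly as follows. This is a minimal, hypothetical illustration using numpy: real lossy codecs (MP3, AAC) use perceptual models and are far more complex, so a simple FFT low-pass filter stands in here for "compress with a lossy format" just to show the shape of the (lossy input, lossless target) training pairs.

```python
import numpy as np

def make_training_pair(hi_res, keep_frac=0.25):
    """Simulate lossy compression by discarding high-frequency FFT bins.
    Real codecs are perceptual, not a plain low-pass, but the net effect
    -- lost high-frequency detail -- is similar in spirit."""
    spec = np.fft.rfft(hi_res)
    cutoff = int(len(spec) * keep_frac)
    spec[cutoff:] = 0.0                        # throw away the top bins
    lossy = np.fft.irfft(spec, n=len(hi_res))
    return lossy, hi_res                       # (network input, target)

# Build a tiny dataset of (lossy, original) pairs from synthetic "audio".
# A real pipeline would decode actual hi-res tracks here instead.
rng = np.random.default_rng(0)
dataset = [make_training_pair(rng.standard_normal(1024)) for _ in range(100)]
```

A network trained on such pairs learns a statistical guess at the discarded detail, not a recovery of it; the discarded bins are genuinely gone, which is why this is a plausible-reconstruction problem rather than a lossless one.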