
When is AI going to regenerate the lost data on recordings from CD or other digital sources?

Status
Not open for further replies.
This is moving the goalposts, but the discussion is only meaningful if the music is enjoyable to human ears, something people actually want to listen to. I think that constraint would greatly reduce the number of permutations, and we would find there is little left for AI to produce that hadn't already been recorded by people.
If this were true, the same species who rioted during the first performance of "The Rite of Spring" wouldn't have subsequently embraced the work as mainstream and adapted (or outright stolen) elements from it to construct film scores, a most egalitarian form of music.

Audience conditioning is part of the process. In a mere 1000 years, the cats who dug Guido d'Arezzo were dancing to "Blue Suede Shoes." Not only do all artists stand on the shoulders of giants, so do the audience members. Problems occur only when either group forgets this truth. Artists who don't acknowledge and honor their forebears produce only audio pablum or onanistic racket. The audience which stoppers its ears to the past will never appreciate anything more musically challenging than "Ah! vous dirai-je, maman".
 
This is moving the goalposts, but the discussion is only meaningful if the music is enjoyable to human ears, something people actually want to listen to. I think that constraint would greatly reduce the number of permutations, and we would find there is little left for AI to produce that hadn't already been recorded by people.
In all seriousness, there is no shortage of compositions or recordings that people enjoy listening to that many or most other people would consider straight-up noise or random notes.

For example, I've intentionally listened to this album straight through more than once:

And I'm not even that weird.

I think you could limit the number by removing songs that are too similar, but I truly don't think there's any limit to what certain human ears find enjoyable.
 
This is moving the goalposts, but the discussion is only meaningful if the music is enjoyable to human ears, something people actually want to listen to. I think that constraint would greatly reduce the number of permutations, and we would find there is little left for AI to produce that hadn't already been recorded by people.
Given the effectively infinite nature of the number calculated, and the fact that I haven't even factored in many of the much more complex variations that exist in music, I don't think it particularly matters if we divide that number by a few billions of billions of billions of billions of billions of billions to account for "what is listenable".

We could divide it by the number of atoms in the observable universe, then we could divide it by that again, and again, and again. And it is still going to be 10^410 TIMES GREATER than the number of atoms in the observable universe.
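As a back-of-the-envelope sketch of this kind of counting argument (the 3-minute clip length is my own assumption, not the exact calculation made earlier in the thread), working in log10 keeps the numbers manageable:

```python
import math

# Bits in a 3-minute 16-bit/44.1 kHz stereo clip (assumed clip length).
bits = 16 * 44100 * 2 * 180

# log10 of the number of distinct such files, i.e. log10(2**bits).
log10_files = bits * math.log10(2)

# Atoms in the observable universe: roughly 10**80.
log10_atoms = 80

# Dividing by the atom count five times barely dents the exponent.
log10_remaining = log10_files - 5 * log10_atoms
```

Even this short-clip version leaves an exponent in the tens of millions, so repeated division by 10^80 changes essentially nothing.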
 
When it gets the real data that was shaved off, or never. Taking bandwidth into consideration: still never. There have been many attempts over the years, by many companies in the business, at reconstructing the top of the spectrum, but none were really successful. So you either need it done properly (compression that shaves off, with filtering, down to 18~19 bits, effectively 16 bits with dithering) or access to what's been shaved off so it can be glued back together. A further problem is that all lossy codecs change the structure so that it's no longer really recognisable to AI, even if it has access to the original data. That's also the reason such codecs need more than a 20~24 kHz frequency range, LDAC for example. There are hybrid codecs containing a lossy part plus what's been removed from it stored as metadata, and they are not destructive to the source; the problem is that they need bandwidth not much smaller than losslessly compressed formats like FLAC for 16-bit/44.1 kHz audio, in order to keep everything important in the lossy part.
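The "16 bits with dithering" idea mentioned above can be sketched minimally: before rounding a sample down to 16-bit integers, add triangular (TPDF) dither of about one LSB so quantization error becomes noise rather than distortion. The function name and parameters here are my own illustration, not any codec's actual API:

```python
import random

def tpdf_dither_quantize(sample, bits=16):
    """Quantize one float sample in [-1.0, 1.0] to a signed integer,
    adding triangular-PDF dither (sum of two uniforms, +/-1 LSB wide)
    before rounding, as is standard practice when reducing bit depth."""
    full_scale = 2 ** (bits - 1) - 1          # 32767 for 16 bits
    dither = random.random() - random.random()  # triangular PDF in (-1, 1) LSB
    return int(round(sample * full_scale + dither))
```

Repeated calls on the same input land within about one LSB of the undithered value; the randomness decorrelates the rounding error from the signal.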
 
Last edited:
It amazes me how many people will stand up for legacy playback and recording. Perfection should be the goal of music playback, and we are far from that; if we ever reached it, all this theory would be pointless. The technological advances in music playback and recording seem ultra-slow compared to other disciplines. Maybe because of dogma. There are indeed better ways to recreate audio. Who knows, AI might just about be able to crack this one.
 
It amazes me how many people will stand up for legacy playback and recording. Perfection should be the goal of music playback, and we are far from that; if we ever reached it, all this theory would be pointless. The technological advances in music playback and recording seem ultra-slow compared to other disciplines. Maybe because of dogma. There are indeed better ways to recreate audio. Who knows, AI might just about be able to crack this one.
Might be because all you need was done 25+ years ago, with the first good 24-bit DACs in the consumer space with SINAD over 100 dB, and even 40 years ago with the establishment of the CD standard (16-bit integer with dithering), in which you will find more than 90% of the material available today. Now they try to sell you less, or nothing really better, in nice packaging with a bunch of lies.
AI is only as smart as whoever made it: good only for finding anomalies, not resolving them. For that it still needs human cognitive, scientific work to properly explain them, with all the variations they contain; then it can process them.
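To put the "SINAD over 100 dB" figure in context, the standard ideal-quantizer relation SNR ≈ 6.02·N + 1.76 dB can be inverted to give an effective number of bits; a small sketch (the function name is my own):

```python
def enob_from_sinad(sinad_db):
    """Effective number of bits implied by a SINAD figure,
    using the ideal-quantizer formula SNR = 6.02*N + 1.76 dB."""
    return (sinad_db - 1.76) / 6.02
```

A 100 dB SINAD corresponds to roughly 16.3 effective bits, i.e. already beyond what 16-bit CD delivery can carry.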
 
The limits are not in the typical audio formats or current media; see this excellent primer on digital audio.

But rather in the actual channel count and how it's recorded, produced, and played back. But full-blown Atmos remains impractical, as do other solutions that would need a lot of channels and, in practice, a dedicated room. You won't get much further with typical 2-channel; you'd need a well-treated room with some kind of immersive system.

 
It amazes me how many people will stand up for legacy playback and recording. Perfection should be the goal of music playback, and we are far from that; if we ever reached it, all this theory would be pointless. The technological advances in music playback and recording seem ultra-slow compared to other disciplines. Maybe because of dogma. There are indeed better ways to recreate audio. Who knows, AI might just about be able to crack this one.
I am new to this thread but would point out to you that most of the "improvements" wished for by enthusiasts are illusory IME.

Electronics have been capable of being indistinguishable to human ears for decades (though of course there are still items being made which are not that good).
I tested myself for the levels of distortion, noise, and frequency response I can hear, and CD is way better than my ears.
Even LP records with their very basic technology are probably good enough most of the time with most recordings.

Noise lower than 16 bit (i.e. CD) is inaudible to me. Yes, if you wanted to make a recorder capable of capturing all sound from loud explosions to wind rustling through grass without adjusting the recording level, 16 bit is not enough, but for music, even dynamic classical music, it is enough.

I can hear to 14kHz, so CD is plenty good enough; even stereo FM radio is fine. With few outliers, only young children hear to 20kHz, and they are uninterested in HiFi.

IME, and I have been experimenting for over 50 years now, the only parts of music reproduction which are not already superior to human hearing are the speakers and the listening environment, i.e. the room, where the speakers are in it, and where the listening position is.
All other aspects are hundreds of times less important than these.
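The CD figures this post relies on follow from two standard formulas: quantization SNR ≈ 6.02·N + 1.76 dB for an ideal N-bit converter, and the Nyquist limit of half the sample rate. A quick sketch (helper names are my own):

```python
def quantization_snr_db(bits=16):
    """Theoretical SNR of an ideal N-bit quantizer, in dB."""
    return 6.02 * bits + 1.76

def nyquist_hz(sample_rate=44100):
    """Highest frequency a given sample rate can represent, in Hz."""
    return sample_rate / 2
```

That gives about 98 dB of dynamic range and a 22.05 kHz bandwidth for CD, comfortably above the 14 kHz hearing limit mentioned above.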
 
It amazes me how many people will stand up for legacy playback and recording. Perfection should be the goal of music playback, and we are far from that; if we ever reached it, all this theory would be pointless. The technological advances in music playback and recording seem ultra-slow compared to other disciplines. Maybe because of dogma.
The reason is more likely diminishing returns. There's not much if anything to gain from a higher bitrate or sampling frequency alone.

And your idea to recreate the full sound field from the recording area at your home would require an immensely higher cost and effort on both sides: In the studio and in your listening space. Dozens of additional microphones in the studio, lots of additional processing, a nightmare for any mastering engineer to handle. And the same for your home: Dozens of small point source speakers, very complex room EQ and setup, acoustic treatment and furniture placement dictated by room EQ.

If you would be happy with a good representation only at your main listening position, a 5.1 or 7.1 setup would probably be good enough today. See for yourself how many music recordings exist for that format. There is a very limited market for that sort of product.

There are indeed better ways to recreate audio. Who knows, AI might just about be able to crack this one.
AI can be powerful, but it's not a magic wand. The fewer people understand how it works, the more magic they expect it to perform...
 
Dogma. Saying something is good enough is not pushing the technology forward. Two one-dimensional pistonic-action speakers are not good enough to play a perfect reproduction of a concert. You can hear that clearly when you attend a live concert; it is nowhere near typical hifi. I have a £70,000 setup and I enjoy it; I also have Atmos with 13 speakers, which I enjoy as well. However, that does not mean there are no better ways of recording and recreating music.
 
Maybe because of dogma.
Yes, your own! Insisting on focusing on things that are not very relevant, while we've known for many decades what the most important factors in sound reproduction are: how things are mixed and mastered, loudspeakers, and their room interactions. All else is many orders of magnitude less important once those components are halfway well designed, as 99% of equipment is nowadays.
 
Maybe AI could help write a more plausible upmixer. I used Meridian Trifield or Ambisonics in the past to upmix to 5.1 for more immersion; it worked fine and really enhanced the presentation.

But my current 2.2 system sounds better in plain stereo, because the KEF LS60 are so much better speakers than the DSP5200s I owned.

So speakers and room still trump all, IMO.
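The basic idea behind the passive-matrix family of upmixers mentioned above can be sketched very crudely: derive a center channel from the stereo sum (mono content) and an ambience channel from the difference (out-of-phase content). This is a toy illustration only, nothing like Trifield's frequency-dependent matrixing; the function name is my own:

```python
def passive_upmix(left, right):
    """Toy passive-matrix upmix of a stereo pair (lists of samples):
    center = (L + R) / 2 captures in-phase (mono) content,
    ambience = (L - R) / 2 captures out-of-phase (diffuse) content."""
    center = [(l + r) * 0.5 for l, r in zip(left, right)]
    ambience = [(l - r) * 0.5 for l, r in zip(left, right)]
    return center, ambience
```

Identical left and right samples end up entirely in the center channel, while anti-phase material ends up entirely in the ambience channel.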
 
Got some feedback that this thread has reached a point of diminishing returns on the original hypothesis.

Upon review, I concur, and so it is locked.
 