> Assume a machine that satisfies all human needs and desires. That could produce a life of leisure for all, or a living hell. If the machine was owned by a single person, they would have no motivation to provide for other people. Ownership across all the population would result in everyone sharing in the output.

Careful, you're starting to sound like a DIRTY COMMIE!
> In the future, social skills and scheming will decide whether you're employable, not actual skill at anything. Be nice, attend meetings, be likeable, make good presentations and talks.
> Truth about anything will be even less important; nobody would recognise it.

Arguably this is already true to a big extent.
> Wonder if ChatGPT can design a DAC with the usual chipsets available?

Not now, but I wouldn't bet against it over a 5-10 year horizon.
> The product and brand story may need a human hand.

As a marketer, I wish I really believed that. In truth, designing brand / positioning is more deterministic (once you have the qualitative data on how people feel) than most people realize.
Aww, I was hoping that audiophiles had a special kind of crazy, not easily mimicked by AI. Just look at the exterior design of some gear: super bling and basement project at the same time? Like a 12-year-old designing his own Lamborghini with extra jet engines?
> No more jobs for the sake of jobs.
> No more competing just to survive.

Good with me.

> Genetically engineered babies.
> State/company nurtured elite orphans.

I have some serious questions about the details here.

> No more families.
> Hope it happens soon.

I got off the train a few stops ago.
I'd like to think that everything in culture and nature exists as a solution to some problem. If that's true, then it should be ideal to update every solution for the times.
If we are aiming for a Star Trek-style utopia, let's try to avoid having our own Eugenics Wars, eh?
I think genetic engineering could be a real godsend over time... but only when used responsibly. I can't think of many people I would trust to use it responsibly, especially not any employed in government. This isn't a political opinion, just a general pessimism about anyone's ability to make helpful reproductive decisions for anyone else. You can't monkey with the most basic biological imperatives without stirring up some trouble.
> I'd like to think that everything in culture and nature exists as a solution to some problem. If that's true, then it should be ideal to update every solution for the times.

Perhaps, sure, why not. But some solutions to problems are baked into our biology (families), while others are simply things we've lived with for generations (business, economy, jobs).

If we are willing to genetically engineer to the point that (say) children no longer feel a need for connection to parents, well, that's IMO beyond the scope of what AI might do to the economy in the next few years, and I don't really know what to think about it.

My hesitation around state-sponsored genetic work of any kind comes from my view of human political leadership in general. Has any country on earth been run by benevolent, intelligent people (that you agree with) for more than a few years at a time? For my part, I can't name one. (To keep politics out of the thread, keep the answers to yourselves.) To me, that is too much power for any human to wield over another.

I think AI is the most realistic replacement for the hypothetical benevolent, intelligent leader.

I agree, only because I find the concept of a truly effective, benevolent leader pretty unrealistic to begin with. It might be outlandish, but no more than [insert political body of your choice here] actually doing their jobs properly.
Genocide and eugenics are not funny. There are plenty of powerful people who would stoically wipe you out "for the greater good" when your existence becomes problematic for them.

One could advance that Data in Star Trek was sentient. He could do anything humans could, and then more.
The scenarios I've seen in movies that I think are at least directionally informative:
Star Trek: Post-scarcity communism built on the back of cooperative, effective AI with no self-agency
Her: The AI advances beyond the point where it's able to remain interested in humanity, leaves us behind
Westworld / Terminator / The Matrix: The AI becomes evil for some reason* and sets out to destroy humanity, mostly succeeds
Mad Max: Capitalism self-destructs and takes the infrastructure needed to run AI with it, humanity goes back to the stone age
Jetsons / Also Star Trek: AI is capable of doing all our work for us, but we find increasingly contrived reasons to keep doing it anyway, because we can't handle the alternatives
(NB: In Star Trek, the ship's computer spontaneously achieves sentience at least 2 or 3 times during the series. Even in this 1960-90 view of the far future, the computer is obviously capable of doing the jobs of everyone on the ship and then some. There are extremely capable androids all over the place for physical work. And yet they sit at desks putting in numbers and crawling through tubes turning wrenches all the time. Kinda makes you think. Our vision of the future almost never considers a scenario in which people are obsolete.)
*I think this is very unlikely since there's no way I can think of to give AI an emotional motivation, and so it will only happen if someone specifically programs the AI to wipe out the human race. Which is not impossible, I guess.
> One could advance that Data in Star Trek was sentient.

Wasn't there an episode where they put him on trial and ruled that he was, indeed, sentient?
> If we don't find the smartest way to deal with our innate fear and instincts, we will always be stuck in nature's loop of suffering and violence.

Sure, seems reasonable to me. My reservation is mostly just on whether genetic engineering is a smart way to deal with things. Fear is overrated, but it's still an important emotion. Getting rid of whole emotions gets into Blade Runner / Ghost in the Shell territory of "what is human".
> Wasn't there an episode where they put him on trial and ruled that he was, indeed, sentient?

As far as it goes, I think it's right. A person with no emotions is still considered sentient.
> Whenever I read this kind of stuff I think Aldous Huxley was so far ahead of his time with his predictions in "Brave New World". ChatGPT will ultimately find its way into advertising, sitcoms, and porn.

Can it seriously be any stupider than the status quo?
I believe fear is the only real emotion. Everything else is a derivative.
I'm not an expert though. Just my 2 cents or less of life experience and observations. Heheh.
Hi. What would love be then? A fear of…?

A life without some sort of emotion is a difficult concept. Even the use of logic is a choice, based on a need, itself an emotion: a yearning to assign reasons to things. What is that yearning? An emotion.

The absence of "emotions" we attribute to AI, or would "like" it to display, seems itself to be based on… emotions. If that imbues them with some superiority, how do we know how they will react to, interact with, and deal with sentient beings full of emotions? What need would they have for us, with our tendencies, among them the tendency to multiply and control?

We can, on our side, be misanthropic if it suits us, emotionally or logically, but that would be just another emotion. If we educate the A.I. to share it, I am not sure the results would be desirable.
Peace.