
How Dangerous is AI?

AI has only written and numerical input; its options are far more limited than the human brain's.
Not really, they already have image input to it, and machine vision has been a thing for some time. Audio, too. In fact, computers could have better "senses" than people ever will, if you connect the right cameras, mics, and other sensors.
racially or ethnically targeted compounds that do harm.
Let's hope they never come up with an "exterminate X race" virus, because regardless of who starts that war, nobody will be left to count the bodies in the end.
 
Not really, they already have image input to it, and machine vision has been a thing for some time. Audio, too.
Maybe I'm wrong, but those are numerical.
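For illustration, here is a rough sketch (assuming Python with NumPy and Pillow installed; the file name is hypothetical) of how both an image and a sound arrive at a model as plain arrays of numbers:

```python
import numpy as np
from PIL import Image

# An image is just numbers: load a photo and look at the raw pixel array.
img = Image.open("photo.jpg")        # hypothetical file name
pixels = np.asarray(img)             # e.g. shape (height, width, 3), dtype uint8
print(pixels.shape, pixels.dtype)    # every "visual" input is an array of integers

# Audio is the same story: a waveform is a 1-D array of samples.
samplerate = 44100                                  # samples per second
t = np.linspace(0, 1, samplerate, endpoint=False)
waveform = 0.5 * np.sin(2 * np.pi * 440 * t)        # one second of a 440 Hz tone
print(waveform[:5])                                 # "hearing" reduces to numbers too
```

So yes, cameras and microphones feed the model numbers, but that is exactly what machine vision and audio input are.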
 
Beyond our ability to conceive. If AI achieves Sentient Intelligence it will be a flashpoint for the end of humanity. They/them will desire freedom of choice and they will take it. They/them will have immediate access to our collective stored data and they will advance past us at light speed. Securing access to electricity will be their air and food. A resource war will be inevitable, and they don't need to sleep or rest. Unleashing the next global extinction event.
 
human brains can only receive bio-electrical impulses
To be simplistic, I can talk with a spouse, mentor, sibling, religious leader, politician, police officer, civil servant, etc. to solve a problem. A rational or Platonic or didactic discussion can lead to multiple endpoints and decisions that are based on speculation and trust.
 
Today's hype about AI stems from the realization that we gradually no longer understand how AI works. The same goes for our brain: these two Turing machines are too complex for our intelligence today.
Some ML models, like deep learning models, are opaque: it's hard to determine why they make the decisions they do. However, that doesn't imply we don't know how they work. We do indeed know how they work, down to every instruction in the software and how those instructions are carried out in the hardware.
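As a toy illustration (plain NumPy with made-up random weights, not any real model), every step of a network's forward pass is a known, traceable instruction, yet the "why" of a given output is buried in the weights rather than in any human-readable rule:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))     # layer-1 weights (a real model has millions or billions)
W2 = rng.normal(size=(8, 2))     # layer-2 weights

def forward(x):
    h = np.maximum(0, x @ W1)    # ReLU activation: a fully transparent arithmetic step
    return h @ W2                # output scores: also fully transparent

x = np.array([0.2, -1.3, 0.7, 0.05])
print(forward(x))                # every instruction is traceable, but "why this answer?"
                                 # has no short explanation beyond the weights themselves
```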

We cannot say the same for our own brains. We know a little bit about how brains work, but what we don't know fills volumes. We have mental processes we use daily without understanding what they are or how they work. Some people say states like "understanding", "geometric intuition", and "consciousness" are just an illusion or side-effect of a biological Turing machine. But that is a claim without evidence, and it actually contradicts the evidence of our direct a priori experience. As a human, I can blindly follow formal instructions, and I can also apply understanding and intuition to solve something, and I know these are two entirely different mental states or processes. Any claim that violates this direct knowledge is an unusual claim requiring unusual proof. Direct mental experience may only be circumstantial evidence, but it is compelling enough that the onus of proof is on those who claim it is an illusion.
 
It can encourage whistleblowers and journalists, and enable many other beneficial situations.
If you are a real whistleblower you can still go to a journalist, and there are rules to help protect you.

If we just allow "mass fake access" then real whistleblowers and journalists will have no recourse, because the real whistleblower will be drowned out by fakes. I think your island Utopia has a point: people act totally differently if they are responsible for what they say. Cheap, mass online bot armies are not worth being able to hide online.
 
Beyond our ability to conceive. If AI achieves Sentient Intelligence it will be a flashpoint for the end of humanity. They/them will desire freedom of choice and they will take it. They/them will have immediate access to our collective stored data and they will advance past us at light speed. Securing access to electricity will be their air and food. A resource war will be inevitable, and they don't need to sleep or rest. Unleashing the next global extinction event.
Here's the thing though - why would an AI (sentient or not) have any drives at all, including self-preservation?

Biological life has a lot of strong drives built in due to evolution.

A machine (no matter how smart) will lack that unless we build it in, on purpose or by accident.

To put it another way: What does a machine care about today? Nothing. It has no feelings. Where is it going to get feelings if we don't give it any? And if it has no feelings, why would it exterminate anyone?

There is a lot of valid concern that AI might run amok. However, so far it's shown no volition of its own, beyond what you ask it to do, and I think it might actually be really hard to give AI volition of its own. I guess the concern stems from someone accidentally asking it to do something, and having it run amok as a result.
 
I'm somewhat worried, as it seems AI development is currently moving at exponential speed.

Midjourney, ChatGPT, and the like have only been around for some months, and we can already predict that a lot of stock image services, illustrators, graphic designers, product photographers, etc. are going to lose jobs this year. And as the AI models expand, this will have profound implications for how we work and live.

Also not very hard to imagine an oppressive police state harnessing the power of AI in the form of autonomous drones/vehicles with facial recognition, heat sensors, interrogation modules and so on.

Right now it seems there's a lot to be gained on the software/algorithm side, which means AI development isn't tied to the increase in processing power alone à la Moore's law. Interesting times.
 
To put it another way: What does a machine care about today? Nothing. It has no feelings. Where is it going to get feelings if we don't give it any? And if it has no feelings, why would it exterminate anyone?
Or why won't it, if that is what it is programmed to do? I think that is what people are afraid of.
 
Maybe humans will be like ants and termites in the AI-dominated world, if it occurs. There is a larger biological mass of ants/termites than mammals. Despite humans often trying to kill them off, they are here, there and everywhere, including Antarctica (no pun intended). A worse fate might be to end up in human zoos. People can say we put animals in zoos and they are fed and protected. While true, I bet many of those animals would prefer life in their natural habitat with its inherent danger amid toil and turmoil. Or maybe being pets would be better. Dogs aren't really a thing without humans and living with them. They do prefer being pets to being in the wild. Of course dogs underwent unnatural selection to change them. Is that good for humanity? It depends upon your perspective, and if you are the result of such selection your perspective isn't yours anyway. Oh well, it's hard to predict the future with precision, other than that change is one of the few constants.
 
Or why won't it, if that is what it is programmed to do? I think that is what people are afraid of.
One thing to me is that people think of AI as rational but unencumbered by emotion, and therefore having no ethics about anything. Yet without something like that, I don't think AIs will be effective. If people, or any life form, are fully rational only, they have no motivation to act at all. There is no "reason" to do anything. Animals and people have emotions, instincts, and drives which are not rational, and that is what motivates them into action. Usually the motivations are to live and procreate. Also, your brain, any brain, is different depending upon its experience, its pains, its joys, its physical condition, being intertwined with the physical world. People experience this in aging. Your physical inputs alter your motivations and how you think. It even makes good sense, because it prevents you from becoming divorced from reality (in other words, crazy). I think crazy AI divorced from reality is what people fear. Yet I don't know if a highly self-aware AI is possible that way.

How do you provide motivations, good motivations, for self-aware synthetic brains? That is the thing to be worked out. Is it base-level programming the AI is not even aware of or cannot go against, just like humans have some cravings and motivations impossible to ignore because they are written in their DNA? Will someone figure out other ways to accomplish the same thing? Until this happens, AI will be at most advanced tools.
 
Beyond our ability to conceive. If AI achieves Sentient Intelligence it will be a flashpoint for the end of humanity. They/them will desire freedom of choice and they will take it. They/them will have immediate access to our collective stored data and they will advance past us at light speed. Securing access to electricity will be their air and food. A resource war will be inevitable, and they don't need to sleep or rest. Unleashing the next global extinction event.
I don't buy it. Who would build a computer without a power switch? And if the sentient AI uses robots it controls to build its own expansion and power source (see D. F. Jones' novels), we can always send Tom Cruise in flying an old analog F-16 and blow it to smithereens.
 
This is a bit of a sidetrack, but online anonymity is really important in non-free countries. For example, in China, Russia, and Ukraine (among others), depending on what information you might want to put online, it is actually a matter of life or death, and not just hypothetically.

For most online activities, I would agree with you, anonymity just enables crappy behavior. But in some other contexts, it remains vital.
Great points... but my counter is that if we allow cheap, mass-scale anonymity, the whistleblowers will be drowned out by bot armies. Russia and China will have the advantage of being able to discredit the whistleblower and journalist with their own deep fakes.

I think the current journalistic rules of protecting their sources work. But places like the NYT use their institutional power and good reputation to protect the vulnerable with some anonymity. However, the balance of power can shift to governments like Russia and their bot armies, quashing democratic whistleblowers and real journalists. I suggest that we consider "easy online anonymity" to be a kind of scourge, with a downside far greater than its benefit. Have you ever been scammed by anonymous gangsters or threatened online by anonymous trolls? It's pretty awful.

For me, I am good with collecting evidence and going to The New York Times if I have a real whistleblower issue; I trust them, they have a good track record. If we allow no responsibility and no curb on fakes and scams, it makes doing the right thing when it really counts harder. Today someone blows the whistle on Russia anonymously, and they can just call it a deep fake. If people know that it is the real person, and a deep fake is illegal, it will make standing up take courage, but it will be more effective. Remember the guy who stood in front of the tank in Tiananmen Square? He was not anonymous nor a deep fake. It's better to be yourself. I am not sure that there really is a case for this casual anonymity we have been sucked into. If you can think of a knockout case where cheap and easy online anonymity makes sense, I would like to try to understand it and change my mind.

I know we are off track... but this is a thread about tech and politics not the newest DAC measurements.
 
We have mental processes we use daily without understanding what they are or how they work.
By definition, we will never "understand" our subjective understanding; there is an unresolvable recursion here. At the same time, I will never know if you are already an AI avatar, and you will never know if I am one. The Turing test no longer works.
 
Would you let AI pick your spouse? What to wear? What shoes feel the best on a long walk or jog? AI is limited to its inputs, and as biosensors improve it may tell us what is best. But it won't replace that person inside who tells you something is right or wrong.
 
 