
How Dangerous is AI?


Ron Texas

https://www.foxnews.com/tech/ai-exp...esnt-enough-literally-everyone-earth-will-die

Elon Musk and 1,000 other luminaries signed a letter calling for a 6-month pause in AI development. An AI expert now says that's not enough and that the annihilation of humanity is the likely result if care is not taken. I'm a bit shocked. Is this guy just looking for his 15 minutes of fame, or is he right? Anyone got a scientific perspective on this?
 
I'm interested, too. I smell synthetic drama to sell the concept.
 
The risks of AI are great, not in the sense that computers get smarter than us, but in the sense that our human stupidity becomes mirrored, automated, and powerful on a previously unimaginable scale. I am not sure that the likes of Elon Musk assess the danger in quite the same way.
 
Musk must be reeling from that recall Tesla just did for their Full Self-Driving vehicles. 363,000 of them. And he's worried about LLM software?


I've read that letter. What a steaming pile of BS.
 
I have done a very small amount of machine learning, and I have chatted with professors of the subject here in Silicon Valley after class. My professor did not seem worried. AI like ChatGPT is a statistical model: it makes predictions based on a vast amount of data, using sophisticated neural nets that at base are like a million interconnected linear regression nodes. When you chat with it you are sifting through vast amounts of numbers, and only numbers, that then convert to sentences or images. The transformer net was a breakthrough; there will be more.
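To make the "million linear regression nodes" picture concrete, here is a toy sketch in Python. It is my own illustration, not anything from OpenAI: each node computes a weighted sum of its inputs plus a bias, a nonlinearity sits between layers, and a softmax at the end turns the raw numbers into the probabilities a prediction is sampled from.

```python
import numpy as np

# Toy illustration (not actual GPT code): every "node" is just a
# weighted sum of its inputs plus a bias, i.e. a linear regression,
# with a nonlinearity applied so the layers don't collapse into one.

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # Each row of `weights` is one linear-regression-like node.
    return np.maximum(0.0, weights @ x + bias)  # ReLU nonlinearity

x = rng.normal(size=8)                      # made-up input features
w1, b1 = rng.normal(size=(16, 8)), np.zeros(16)
w2, b2 = rng.normal(size=(4, 16)), np.zeros(4)

hidden = layer(x, w1, b1)
logits = w2 @ hidden + b2                   # final linear layer

# Softmax turns the numbers into a probability distribution,
# the statistical "prediction" described above.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs)
```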

If we stopped AI research, what would happen? Probably the Chinese would throw a party and then work even harder to get ahead. In my humble opinion, a well-regulated system is important. For example, people want privacy, so we need regulation and systems in place to help protect people and stand together; this is basic. Likewise, an AI today can copy an artist's style and put them out of business. How do we deal with this? It's complicated. When I talk to my artist friends, their bosses are asking them for AI content, and their current issue is that they cannot easily modify the art to bring out their own ideas. AI does not steer well toward your vision when you are in the creative process, but it could make trashy knockoffs basically free.

Supporting artists who are worried about copyright might be one way to get involved, but I don't think panic is the answer. Personally, I don't think we are in danger of an AI Skynet today. Transforming economies will continue to evolve and will probably accelerate with more math behind them. AI music will probably make a splash at some point. Do you care? Will AI music be fun? Could you ask it to adjust the dynamic range, add more bass, and do some remastering of the vocals with some Japanese rap and a riff on the Ventures?
 
I have a theory about changing the opinion of a large group of people. Suppose you have population X, who are neutral about something they have never heard of. If you make some alarmist claims about something that will eventually affect their lives, they will take notice, even if those claims never eventuate. The more people you have making these claims, the better. Eventually the opinion of the population will shift towards your argument.
 
If we stopped AI research what would happen? Probably the Chinese would throw a party and then work even harder to get ahead.
In general, yes. Except that the Chinese have a major issue: access to SOTA silicon. That will slow them down quite a bit; they will catch up eventually, but not in six months.

The problem with these AI tools starts if you give them agency. ChatGPT can create a script or some code for you, but it can't execute it. I also question its ability to solve the various issues that might occur due to errors it made; it will take a lot more iterations for it to make that happen. And even if it has agency, it needs some kind of "assignment". One can give an AI an assignment, but if it can create those by itself, it starts to get scary. And since it can already make stories, it's not far-fetched that it could come up with its own assignments. You don't want an AI with delusions of grandeur and the ability to create and execute its own code. None of this is sentience, mind you! It's still all statistical models, just vastly more advanced than what we already have right now.
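For what it's worth, here is a deliberately hypothetical sketch in Python of what "agency" would mean in code terms: a loop that executes model-generated code and feeds the errors back in. The `ask_model` function is a placeholder I made up, not any real API, and the safeguards mentioned below are exactly what this sketch lacks.

```python
# Hypothetical sketch of an "agent" loop; ask_model is a placeholder,
# not a real API. The point is only to show where agency comes from:
# generated code gets executed, and failures are fed back for another try.
import subprocess
import tempfile

def ask_model(prompt: str) -> str:
    """Placeholder for a call to some LLM that returns Python source."""
    raise NotImplementedError("wire up an actual model here")

def agent_loop(assignment: str, max_iterations: int = 5) -> bool:
    prompt = assignment
    for _ in range(max_iterations):
        code = ask_model(prompt)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
        result = subprocess.run(["python", f.name],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return True   # the generated code ran without errors
        # Feed the failure back: the iteration step questioned above.
        prompt = f"{assignment}\nYour last attempt failed with:\n{result.stderr}"
    return False
```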

When will this become dangerous? No idea. Safeguards will need to be in place before that happens.

Do I think we need a 6-month stop? No, not really.
 
I wouldn't say Al is dangerous, per se. I've hung out with him and the boys on the corner many times. Even borrowed some dough from him to buy some product, if you catch my drift. He's a pretty cool guy. But I will say this: if you drag your feet paying him back, he's gonna send a couple of goombas to investigate your kneecaps.
 
Yesterday, an issue came up on a two-story building I have under construction. The city inspector asked a question about the fire rating of the elevator shaft. We resolved the issue, but I asked ChatGPT the same question this morning.



Me: Per the International Building Code, does an elevator shaft in a two story building need to be constructed as a 1-hour rated fire barrier?

ChatGPT: The International Building Code (IBC) sets forth requirements for the design and construction of buildings, including requirements related to fire protection. According to the IBC, elevator hoistways and elevator machine rooms must be constructed as fire barriers that have a fire-resistance rating of at least one hour if they are located in buildings that exceed two stories in height.

Therefore, if your two-story building has an elevator hoistway or machine room, it would need to be constructed as a one-hour rated fire barrier in accordance with the IBC. However, if your building is not equipped with an elevator, then the code would not require you to construct any fire barriers related to elevators.

It is important to note that building codes may vary depending on the specific jurisdiction, so it is always a good idea to check with your local building department for the specific requirements in your area.



It literally has no understanding of what it's saying. That being said, the International Building Code is notoriously obtuse and poorly written, and it is tweaked every three years. I can understand how ChatGPT (a natural-language processor more so than a conventional AI) would have trouble making heads or tails of such a question.
 
I am fairly certain that our dear Planet Earth will kill us off before AI is capable of doing so.

I usually draw a parallel between AGI and fusion power: no matter how far we develop fusion power, it is perpetually 50 years away from being a viable power source. None of our current developments in LLMs are bringing us closer to actual AGI. However, our latest creations are becoming ever better at fooling us into believing they are intelligent.

I am worried, though, that we will soon be flooded with an influx of fake news on a level never seen before, and I am uncertain whether our current controls can handle it. The confidence of the GPT-based LLMs is quite something to behold.
 

The (ab)use of AI can have catastrophic results, including mass deaths and destruction. It's not the AI achieving consciousness and then killing us that I'm worried about, it's more about the creation of autonomous, armored AI to kill/wage war on others.
 
It's not the AI achieving consciousness and then killing us that I'm worried about, it's more about the creation of autonomous, armored AI to kill/wage war on others
Autonomous weapons are old news, unfortunately. They don't really require AI, even though that term is often conflated with neural nets today.

The Switchblade drones from the US and the automatic machine-gun turrets from Daewoo (South Korea) spring to mind. Both can kill without human input. We passed that threshold some time ago, unfortunately.
 
Don't let Musk's name taint your perception of this; several other credible people have signed it. Whether it is a threat beyond the incautious misuse that comes with any new tool, I don't know.
 
Anyone got a scientific perspective on this?
I have been following these speculative arguments for years, and there is something to it, but it's not science per se.

It's just a chain of reasoning:

  1. It is believed that it should be possible and even practical to create a computer system that is more intelligent than humans.
  2. There is a lot to be gained by creating such systems, and lots of people are trying to do it, so there's reason to believe it will happen eventually.
  3. We have never encountered an entity that's smarter than humans before, so the outcomes are completely unpredictable.
  4. Given how much of our world is connected to the internet and computers, smarter = more powerful.
  5. The history of civilization has many dire warnings for less-powerful species or groups coming into contact with more powerful ones.
It boils down to this. A truly intelligent computer is anticipated to be effectively all-powerful. It will be able to do whatever it wants to. We currently have no way to be sure it won't want to wipe us off the face of the earth. So that makes some people concerned. Why not all people, though? :oops:

Now, you might doubt whether a "truly intelligent computer" is even conceptually valid, let alone possible. But smarter people than you or me are convinced it's not only possible but inevitable. And not just fools like Musk; I mean (some) computer scientists going back decades.

Let's not get confused by the concept of "consciousness", either; it can't even be scientifically defined yet. What we're really talking about is a computer that looks and acts intelligent. Whether it has any "subjective experience" or "inner life" of its own is probably irrelevant.
 