
How Dangerous is AI?


NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,656
Likes
5,819
Location
US East
Here is a list. The most notable to most people would be Steve Wozniak and the two guys who once led Google's DeepMind project. Also Max Tegmark, who runs the MIT center for AI activities.
Scroll down below the letter.

I see Bengio, but neither LeCun nor Hinton (the three "fathers" of artificial neural networks).
 

mhardy6647

Grand Contributor
Joined
Dec 12, 2019
Messages
11,213
Likes
24,172
Most interesting to me is a professor at the University of the Sunshine Coast in Australia. Never heard of him or the uni. Sounds like a cool place to be a professor, however. I picture a campus on the coast where you go and surf all day after classes. With the ubiquitous beautiful Australian ladies, of course. :cool:
Looks like anyone can sign :)
I did see that one colleague from the university where I occasionally pontificate re: matters glycobiological (and of whom I am actually aware :)) is on the list.
 

JaMaSt

Senior Member
Forum Donor
Joined
Sep 14, 2021
Messages
369
Likes
693
Location
Vancouver, WA
Cause and effect still make sense in our physical world, until you get to the quantum level, which poses more challenges.
Nice link to Hardy at the Perimeter Institute, where a lot of good work is being done. Regarding superposition, you might like this video with Tim Maudlin and Carlo Rovelli (Carlo worked with Lee Smolin of the Perimeter Institute on Loop Quantum Gravity).

 

PierreV

Major Contributor
Forum Donor
Joined
Nov 6, 2018
Messages
1,437
Likes
4,686
I've been playing with ChatGPT 4 a bit. In roughly 70% of cases, its output is either better than, or equivalent to but much faster than, my best output. In around 20 to 25% of cases, it is definitely worse. Sometimes obviously worse, sometimes perniciously worse, in the sense that its output is flawed, but not obviously so. In roughly 10% of cases, it hallucinates or goes outrageously wrong. From a productivity point of view, that is not optimal. Yes, I can see huge gains in the 70% of excellent results. Yes, I can iterate or intervene to solve a lot of the 20-25% "worse" cases, but the non-obviously-wrong answers can be tricky to investigate. At this point, the lottery aspect is the most annoying in practice, but also the most reassuring for us humans.
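To see why that split is "not optimal" even though 70% of results are excellent: the rare, hard-to-spot failures impose a verification tax on everything. A minimal sketch of the expected-time arithmetic; the category rates are the ones from the post above, but every minute figure is an invented assumption, purely for illustration:

```python
# Toy expected-time model for the 70 / 20-25 / 10 split described above.
# The rates come from the post; every minute figure is an invented
# assumption, purely to show how hard-to-spot failures tax everything.

rates = {
    "good": 0.70,          # better or equivalent, but much faster
    "worse": 0.20,         # flawed output that needs iteration
    "hallucinated": 0.10,  # outrageously wrong, redo from scratch
}
minutes = {                # hypothetical costs per task outcome
    "good": 10,            # light check, then accept
    "worse": 45,           # find the flaw, iterate, verify
    "hallucinated": 70,    # spot it, discard, do it yourself
}
unaided = 60               # hypothetical: doing the task yourself

expected = sum(rates[k] * minutes[k] for k in rates)
print(f"expected minutes with assistant: {expected:.0f} (vs {unaided} unaided)")

# The "pernicious" cases are the catch: if non-obvious flaws force a
# careful audit of every answer, a flat cost lands on all outcomes.
audit = 15
print(f"with a {audit}-minute audit of everything: {expected + audit:.0f}")
```

Under these made-up numbers the assistant still wins on average; the point is that the average hides how much of the budget the pernicious 20% consumes.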

As far as future progress is concerned, there is a serious concern - the topic of several recent academic papers - which is that we will run out of training data.

The most worrying thing in the long term (whatever that means if progress continues to accelerate), as far as I am concerned, is that we don't understand why the process works so well, and we are starting to notice unexpected emergent capabilities (see the widely publicized 154-page or so "tech" report released by OpenAI).

Even if we don't exactly know how it happened, our current consciousness, intelligence, and abilities ultimately emerged out of a soup of "organic" (by definition) molecules and an evolutionary process. We are plenty scary ourselves already. And, even if one dismisses the idea of our spontaneous emergence, for example for religious reasons, the problem remains identical: if divine intervention somehow bootstrapped us out of the soup, what are we doing now if not bootstrapping something potentially as radical?

On the "plus" side for us humans, we can count on the tech industry to over-hype intensively. When and if the shit really hits the fan, I think (hope?) I will be already dead.
 

Reddoc

Member
Joined
Feb 22, 2023
Messages
50
Likes
57
The (ab)use of AI can have catastrophic results, including mass deaths and destruction. It's not the AI achieving consciousness and then killing us that I'm worried about, it's more about the creation of autonomous, armored AI to kill/wage war on others.
I agree 100%: stupid, ignorant, selfish, and shortsighted use of technology with such extraordinary power might end our way of life really soon. Not Terminators, just remorseless chaos, e.g. banking and financial systems, flight guidance, war based on invented data that is indistinguishable from truth.
Musk and the others are right, but
The genie is out of the bottle. You can't stop it. Someone (or some nations) will do (or keep doing) it.
NTK is righter.
 

Galliardist

Major Contributor
Joined
Jun 26, 2021
Messages
2,558
Likes
3,273
Location
Sydney. NSW, Australia
It looks like ASR is already starting to be trolled with "ChatGPT says" posts. One danger that is already with us is that people are treating bad AI answers as authoritative.

Those bad answers, though, are down to us and the crap we feed these things. Our best hope is to teach AI better than we teach ourselves: but we, after all, remain only human.
 

Galliardist

Major Contributor
Joined
Jun 26, 2021
Messages
2,558
Likes
3,273
Location
Sydney. NSW, Australia
Fox News would report on zombies, werewolves, vampires, perpetual motion, ancient astronauts, and the Bermuda Triangle if they thought it would be good for their ratings. :p
But that's what the History Channel is for, surely?
 

JeffS7444

Major Contributor
Forum Donor
Joined
Jul 21, 2019
Messages
2,347
Likes
3,507
Before the start of the millennium, some people were in a genuine panic, believing that computer glitches would bring the worldwide power grid down, and that once down, it would not be possible to restart it. But I had faith in human greed, and figured that American corporations were not about to lose money just because some long-ago computer programmers (including one W. H. Gates III, who wrote the software for the Tandy 102 portable computer) had saved a few bytes by abbreviating years to two digits. And needless to say, come January 1, 2000, bills still needed to be paid, and life went on.
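For anyone who never met the bug itself: the whole Y2K problem was that years stored as two digits subtract and compare incorrectly once the century rolls over. A minimal illustration in Python (not the code of any actual affected system):

```python
# The Y2K failure mode in miniature: years stored as two digits.
# Illustrative only; not the code of any actual affected system.

def age(birth_yy: int, current_yy: int) -> int:
    """Naive two-digit year arithmetic, as many old systems did it."""
    return current_yy - birth_yy

print(age(65, 99))  # born '65, billed in '99 -> 34, correct
print(age(65, 0))   # same record on 2000-01-01 -> -65, nonsense

# Modern parsers guess the century with a pivot instead: Python's %y
# maps 00-68 to 2000-2068 and 69-99 to 1969-1999 (POSIX convention).
from datetime import datetime
print(datetime.strptime("65", "%y").year)  # 1965
print(datetime.strptime("30", "%y").year)  # 2030
```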

By the same token, I have complete confidence that AI shall not bring about the end of humanity, because that would be bad for business! Speaking of business, it’s not at all clear to me that there’s a sustainable business here, versus a bunch of startups putting on their best dog-and-pony show in hopes of a buyout before exhausting their cash reserves.
 

JktHifi

Senior Member
Joined
Mar 9, 2023
Messages
387
Likes
195
AI will be dangerous if it's implemented in robots that can build other robots, specifically in military units.
 

Pilot

Member
Joined
Feb 25, 2020
Messages
23
Likes
25
Location
Greece
I think the "danger" is that AI will change who we are, literally. Not that it will take over but, rather, that we will absorb it.

A few thousand years ago, our species split off from biological, ecosystem-based evolution. A better-suited mutation today will not promote gene propagation.
A sufficiently beneficial "technological mutation", however, is bound to propagate.

When AI becomes irresistibly beneficial, we will try to integrate it with our biology (setting ethical questions aside the moment someone does it first). There will be no more "asking" that nearly-perfect ChatGPT anything; it will be there in your head, your field of vision, your auditory perception, constantly assisting, guiding, translating, augmenting. Just like in science fiction movies.

I think progress in AI will force us to understand the mechanism of our consciousness - in order to defend against the damage we might inflict upon it.
 

JSmith

Master Contributor
Joined
Feb 8, 2021
Messages
5,153
Likes
13,211
Location
Algol Perseus
When AI becomes irresistibly beneficial, we will try to integrate it with our biology (setting ethical questions aside the moment someone does it first). There will be no more "asking" that nearly-perfect ChatGPT anything; it will be there in your head, your field of vision, your auditory perception, constantly assisting, guiding, translating, augmenting. Just like in science fiction movies.
Agree... however the issue then is who controls the information? ;)


 

Pilot

Member
Joined
Feb 25, 2020
Messages
23
Likes
25
Location
Greece
Agree... however the issue then is who controls the information? ;)
I don't think it will be possible to control it in a "totalitarian" sense. The Internet probably already works the same way: it cannot be controlled, but can it be influenced? Without question.
 

pierre

Addicted to Fun and Learning
Forum Donor
Joined
Jul 1, 2017
Messages
962
Likes
3,046
Location
Switzerland
AI is dangerous: here is proof. You cannot spot the difference between automatically generated AI marketing text and an article written by a "reviewer" in an audiophile magazine.

More seriously, AI drives and will drive a lot of changes. Jobs that are easy to replicate with an AI will be replicated, and so on, similar to almost every advance in technology.
Does that make it dangerous? The fad around LLMs is likely to pass. They will be replaced by better models, etc.

An interesting video:

 

bluefuzz

Major Contributor
Joined
Jan 17, 2020
Messages
1,043
Likes
1,772
I recently read "Mad" Max Tegmark's "Life 3.0: Being Human in the Age of Artificial Intelligence," which is a highly speculative and occasionally downright bonkers exploration of the potential consequences of AI surpassing human intelligence and becoming the dominant life form on Earth.

While I agree with Tegmark that truly intelligent AI is both possible and likely inevitable, it will take centuries, if not millennia, to reach human levels of general intelligence, sentience or cognition. In the near term, however, we will certainly see significant amounts of disinformation and social upheaval caused by the use and misuse of AI, typically driven by that time-honoured motive: greed. A six-month moratorium is unlikely to make much difference, if any.

Hopefully, all this will prompt some reassessment of humanity's relationship to technology, specifically in terms of who benefits from it, and perhaps spark a reconsideration of the nature of work, employment, privacy, and human dignity ...
 

jsilvela

Senior Member
Joined
Dec 1, 2022
Messages
440
Likes
401
Location
Spain
I took an AI course in grad school. What is considered "AI" is a very arbitrary boundary.
Many modern search algorithms originated in solving AI problems.
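A* search is probably the best-known example: it came out of 1960s AI robotics work (Hart, Nilsson and Raphael, for SRI's Shakey robot project) and is now routine pathfinding far outside anything labelled AI. A compact grid version, just to make the point concrete:

```python
# A* shortest path on a grid: invented for 1960s AI robotics (Hart,
# Nilsson & Raphael, SRI's Shakey project), now just "search".
import heapq

def astar(grid, start, goal):
    """grid: list of strings, '#' = wall. Returns step count or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start)]            # (f = g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        _, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        if g > best_g.get(cell, float("inf")):   # stale queue entry
            continue
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                if g + 1 < best_g.get(nxt, float("inf")):
                    best_g[nxt] = g + 1
                    heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None

maze = ["....#...",
        ".##.#.#.",
        ".#..#.#.",
        ".#.##.#.",
        "........"]
print(astar(maze, (0, 0), (0, 7)))  # shortest number of steps, or None
```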

But we (humanity) are too stuck on human intelligence as the measuring stick.
The famous (for a computer scientist) Edsger Dijkstra has a great line:
The question of whether computers can think is like the question of whether submarines can swim.

On the current climate, I have to say:
  • Neural networks, LLMs, ChatGPT are *not* intelligent.
  • There was already a huge hype cycle on AI in the 80s, which led to a burst bubble and the "AI winter" of the following two decades. We may see something similar.
I read an essay from some well-regarded AI scientist, Yann LeCun or someone like that.
They made the point that many animals are able to pick up new skills or solve new problems in 5-10 tries.
Current AI technology needs tens of thousands of "data points".
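That gap shows up even on toy problems: a tiny network learning XOR from the same four examples typically needs thousands of weight updates before it gets all four right. A minimal numpy sketch (architecture, learning rate, and seed are arbitrary illustrative choices):

```python
# Sample inefficiency in miniature: count the gradient updates a tiny
# network needs to learn XOR from four examples shown over and over.
# Architecture, learning rate, and seed are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)  # 2 inputs -> 4 hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)  # 4 hidden -> 1 output
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(1, 100_001):
    h = sig(X @ W1 + b1)                  # forward pass
    out = sig(h @ W2 + b2)
    if np.abs(out - y).max() < 0.1:       # all four answers right
        break
    d_out = (out - y) * out * (1 - out)   # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(f"gradient updates needed for a 4-example rule: {step}")
```

An animal that needed that many exposures to learn a four-case rule would not last long.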

Also, AI is an unknown. We have not yet isolated a "unit of intelligence".
So, "AI is dangerous" is way way way more fuzzy than saying "nuclear fission is dangerous".
 

jsilvela

Senior Member
Joined
Dec 1, 2022
Messages
440
Likes
401
Location
Spain
Here was the thread I meant, by Yann LeCun:

World models, intuitive physics, planning, problem solving, discrete search for solutions, continuous optimization of control parameters... Dogs manage to do all this with 2 billion neurons. Why debate human-level AI when we can't approach dog-level intelligence yet?

We, humans, give way too much importance to language and symbols as the substrate of intelligence. But primates, dogs, cats, crows, parrots, octopuses, and many other animals don't have human-like languages, yet exhibit intelligent behavior beyond that of our best AI systems.

What they do have is an ability to learn powerful "world models" that allow them to predict the consequences of their actions and to search for and plan actions to achieve a goal. The ability to learn such world models is what's missing from AI systems today.
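The division of labour in that last paragraph is easy to show in code: once an agent has a model that predicts the next state from (state, action), planning reduces to searching action sequences against the model's predictions. In the deliberately tiny sketch below the dynamics are hand-written; in LeCun's framing, learning that function is exactly the unsolved part:

```python
# Planning with a world model, in miniature.  The model predicts the
# consequence of each action; the planner searches action sequences
# against those predictions.  The dynamics are hand-written here; in
# the quote above, *learning* this function is the missing piece.
import itertools

def model(state, action):
    """Toy 1-D world: state is a position, actions nudge it."""
    return state + {"left": -1, "stay": 0, "right": +1}[action]

def plan(start, goal, horizon=5):
    """Return the first action sequence whose simulated rollout
    (inside the model, not the real world) reaches the goal."""
    for seq in itertools.product(("left", "stay", "right"), repeat=horizon):
        state = start
        for action in seq:
            state = model(state, action)  # predict, don't act
        if state == goal:
            return seq
    return None

print(plan(start=0, goal=3))  # a 5-step sequence predicted to reach 3
```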
 

Galliardist

Major Contributor
Joined
Jun 26, 2021
Messages
2,558
Likes
3,273
Location
Sydney. NSW, Australia
It's people that are dangerous.

AI is just a name that is misapplied to what we have at the moment. And people will decide what it does and is allowed to do.

By the end of the year, I predict, it will be just another political battlefield because of that. So we'd better have our idle discussion here now, before it joins the list of banned topics.
 

Triliza

Senior Member
Forum Donor
Joined
Jun 23, 2021
Messages
477
Likes
558
Location
Europe
BTW the takes of the sort "If we don't do it, the Chinese will".
I love the US, which I consider my second home.
But, ay, those kinds of takes ...
True. Historically speaking (based on the little I have read), given the size and power they wielded at times, they can even be considered among the least expansionist and aggressive, relative to what some other civilizations and countries around the globe were doing.
 