
How Dangerous is AI?


FrantzM

Major Contributor
Forum Donor
Joined
Mar 12, 2016
Messages
4,337
Likes
7,729
I have been following these speculative arguments for years, and there is something to it, but it's not science per se.

It's just a chain of reasoning:

  1. It is believed that it should be possible and even practical to create a computer system that is more intelligent than humans.
  2. There is a lot to be gained by creating such systems, and lots of people are trying to do it, so there's reason to believe it will happen eventually.
  3. We have never encountered an entity that's smarter than humans before, so the outcomes are completely unpredictable.
  4. Given how much of our world is connected to the internet and computers, smarter = more powerful.
  5. The history of civilization has many dire warnings for less-powerful species or groups coming into contact with more powerful ones.
It boils down to this. A truly intelligent computer is anticipated to be effectively all-powerful. It will be able to do whatever it wants to. We currently have no way to be sure it won't want to wipe us off the face of the earth. So that makes some people concerned. Why not all people, though? :oops:

Now, you might doubt whether "truly intelligent computer" is even conceptually valid, let alone possible. But smarter people than you or me are convinced it's not only possible, but inevitable. And not just fools like Musk; I mean (some) computer scientists going back decades.

Let's not get confused by the concept of "consciousness", either. This is a concept that can't even be scientifically defined yet. What we're really talking about is a computer that looks and acts intelligent; whether it has any "subjective experience" or "inner life" of its own is probably irrelevant.
I share your view. You said it better than I could. Emphasis is mine.
To add to this:
Do we take a few seconds to realize how much our lives are controlled by computers? Interconnected computers, communicating endlessly?
Someone wrote that these AIs can write code, but seems to find solace (my take) in the fact that they can't run it. The same is true for us humans: we write code but don't, and cannot, run it ourselves. We feed the code to machines to execute, to "run" it. Eventually, AI will feed the code to other machines to run it. Or, for better or worse depending on your perspective, humans will teach some AI how to feed code to other machines without human intervention and to validate the results. Far-fetched? I don't think so. It may already be happening; see the sketch below.
And we go happily on building machines with the potential to be more "intelligent" and powerful than our feeble minds can fathom... I find this scary.
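
Here's a minimal sketch, in Python, of the kind of loop I mean. Everything in it is hypothetical: generate_code merely stands in for any code-writing model, and the write/run/validate cycle happens machine-to-machine with no human in the middle.

```python
import subprocess
import sys
import tempfile

def generate_code(task: str) -> str:
    # Hypothetical stand-in for a code-writing model; a real system
    # would call an AI here. We return a canned script instead.
    return "print(sum(range(1, 101)))"

def run_generated(source: str, timeout: int = 5) -> str:
    # The machine, not the human, executes the generated code.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=timeout
    )
    return result.stdout.strip()

def validate(output: str, expected: str) -> bool:
    # Machine-side check of the result: no human intervention.
    return output == expected

# Write, run, validate: the full cycle without a person in the loop.
code = generate_code("sum the integers 1 to 100")
print("accepted" if validate(run_generated(code), "5050") else "rejected")
```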

Peace.
 

Propheticus

Senior Member
Joined
Nov 18, 2020
Messages
431
Likes
644
Location
Vleuten, Netherlands
Those bad answers, though, are down to us and the crap we feed these things.
This, in my opinion, is what it boils down to. The old adage "crap in = crap out" still holds for these large language models.

If a misconception is very common and widespread, it gets a bigger 'weight' in the resulting model (if not manually corrected). We're not all experts on all topics, yet the internet and written media are full of people sharing 'facts' and advice that are simply wrong. Not even with ill intent: they often genuinely believe it and/or simply don't know any better. It takes some knowledge/intellect to clearly see the limitations of your own knowledge. (Aaaand I now realise I'm essentially describing the Dunning-Kruger effect.)

Why would we trust a generalist model that values all input equally and gives the most weight to whatever is most prevalent? Quantity (number of repetitions) ≠ quality.
We've built a sophisticated parrot that echoes the ~average body of knowledge found online. The average can be correct (for common knowledge), but I'd rather get good answers for more specialised topics as well.
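
A toy illustration of that "quantity ≠ quality" point; the corpus and counts below are invented. If a model simply echoes the most repeated claim in its training text, a widespread misconception beats a rarer correct statement:

```python
from collections import Counter

# Invented 'training data': the misconception is repeated more often
# than the correct statement, as happens on the open internet.
corpus = (
    ["vinyl is higher fidelity than CD"] * 7    # popular misconception
    + ["CD is higher fidelity than vinyl"] * 3  # correct, less repeated
)

def most_weighted_answer(statements):
    # Frequency is the only 'weight': the most repeated claim wins.
    return Counter(statements).most_common(1)[0][0]

print(most_weighted_answer(corpus))
# -> vinyl is higher fidelity than CD  (quantity beat quality)
```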
 

mhardy6647

Grand Contributor
Joined
Dec 12, 2019
Messages
11,213
Likes
24,172
But that's what the History Channel is for, surely?
Right!

Oh.
I am serious -- and...

 

Galliardist

Major Contributor
Joined
Jun 26, 2021
Messages
2,558
Likes
3,273
Location
Sydney. NSW, Australia
FrantzM said:
I share your view. You said it better than I could. Emphasis is mine.
To add to this:
Do we take a few seconds to realize how much our lives are controlled by computers? Interconnected computers, communicating endlessly?
Someone wrote that these AIs can write code, but seems to find solace (my take) in the fact that they can't run it. The same is true for us humans: we write code but don't, and cannot, run it ourselves. We feed the code to machines to execute, to "run" it. Eventually, AI will feed the code to other machines to run it. Or, for better or worse depending on your perspective, humans will teach some AI how to feed code to other machines without human intervention and to validate the results. Far-fetched? I don't think so. It may already be happening.
And we go happily on building machines with the potential to be more "intelligent" and powerful than our feeble minds can fathom... I find this scary.

Peace.
Automated share dealing has already done economic damage, selling through other computerised systems that determine share prices. So it has already happened, in a primitive way.

It's not the intelligent computer itself we have to worry about, but all the dumb computers it can and will influence and control. We've already had botnets, and they seem to have been largely put out of business, so direct communication or old-fashioned hacking is what an AI system would have to do, and it would do it very quickly. It's speed, rather than cleverness, that would do the damage at today's level of technology.
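
A toy simulation of that cascade; every number below is invented. Automated sellers with stop-loss rules react to each other's sales, so a small dip snowballs at machine speed:

```python
# Each automated trader dumps its holding when the price falls below
# its stop-loss, and every forced sale pushes the price down further.
price = 100.0
stop_losses = [99.0, 97.5, 96.0, 94.0, 91.0]  # one per trading system
sell_impact = 0.03  # each forced sale knocks ~3% off the price

price *= 0.98  # a small initial dip of 2%
triggered = set()
changed = True
while changed:  # keep sweeping until no more systems fire
    changed = False
    for level in stop_losses:
        if level not in triggered and price < level:
            triggered.add(level)
            price *= 1 - sell_impact  # the sale itself moves the market
            changed = True

print(f"after a 2% dip: {len(triggered)} forced sell-offs, price {price:.2f}")
```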
 

FrantzM

Major Contributor
Forum Donor
Joined
Mar 12, 2016
Messages
4,337
Likes
7,729
Galliardist said:
Automated share dealing has already done economic damage, selling through other computerised systems that determine share prices. So it has already happened, in a primitive way.

It's not the intelligent computer itself we have to worry about, but all the dumb computers it can and will influence and control. We've already had botnets, and they seem to have been largely put out of business, so direct communication or old-fashioned hacking is what an AI system would have to do, and it would do it very quickly. It's speed, rather than cleverness, that would do the damage at today's level of technology.
Can I multi-like?


Peace.
 

FrantzM

Major Contributor
Forum Donor
Joined
Mar 12, 2016
Messages
4,337
Likes
7,729
Propheticus said:
This, in my opinion, is what it boils down to. The old adage "crap in = crap out" still holds for these large language models.

If a misconception is very common and widespread, it gets a bigger 'weight' in the resulting model (if not manually corrected). We're not all experts on all topics, yet the internet and written media are full of people sharing 'facts' and advice that are simply wrong. Not even with ill intent: they often genuinely believe it and/or simply don't know any better. It takes some knowledge/intellect to clearly see the limitations of your own knowledge. (Aaaand I now realise I'm essentially describing the Dunning-Kruger effect.)

Why would we trust a generalist model that values all input equally and gives the most weight to whatever is most prevalent? Quantity (number of repetitions) ≠ quality.
We've built a sophisticated parrot that echoes the ~average body of knowledge found online. The average can be correct (for common knowledge), but I'd rather get good answers for more specialised topics as well.
Children also parrot... then they learn and grow up... the results can be Mother Teresa or Pol Pot... or...


Peace.
 


blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,179
Likes
1,494
Location
USA
BTW the takes of the sort "If we don't do it, the Chinese will".
I love the US, which I consider my second home.
But, ay, those kinds of takes ...
It is illogical to be worried enough about the potential nefarious uses of LLMs to warrant an ill-conceived and unenforceable "pause", yet dismiss worries about what adversaries might do with LLMs while we pause, discuss philosophy, and get mired in introspection.
 
OP
Ron Texas

Master Contributor
Joined
Jun 10, 2018
Messages
6,075
Likes
8,908
Long before there is a real danger of AIs wiping out humanity, there will be massive disruption of the job market. I saw one prediction that lawyers were particularly vulnerable, but I believe the author did not really understand what lawyers do. Paralegals, however, I do believe are at risk.

Above there is some discussion of AIs giving nonsensical or wrong output. For this reason, someone has to be checking their output constantly.

Is it possible to give an AI a moral compass or true empathy? I don't know, but the world is full of sociopaths already.
 

fpitas

Master Contributor
Forum Donor
Joined
Jul 7, 2022
Messages
9,885
Likes
14,191
Location
Northern Virginia, USA
Ron Texas said:
Is it possible to give an AI a moral compass or true empathy? I don't know, but the world is full of sociopaths already.
Yes. Frankly, I'd trust almost any AI over some people I've known.
 

Galliardist

Major Contributor
Joined
Jun 26, 2021
Messages
2,558
Likes
3,273
Location
Sydney. NSW, Australia
Propheticus said:
This, in my opinion, is what it boils down to. The old adage "crap in = crap out" still holds for these large language models.

If a misconception is very common and widespread, it gets a bigger 'weight' in the resulting model (if not manually corrected). We're not all experts on all topics, yet the internet and written media are full of people sharing 'facts' and advice that are simply wrong. Not even with ill intent: they often genuinely believe it and/or simply don't know any better. It takes some knowledge/intellect to clearly see the limitations of your own knowledge. (Aaaand I now realise I'm essentially describing the Dunning-Kruger effect.)

Why would we trust a generalist model that values all input equally and gives the most weight to whatever is most prevalent? Quantity (number of repetitions) ≠ quality.
We've built a sophisticated parrot that echoes the ~average body of knowledge found online. The average can be correct (for common knowledge), but I'd rather get good answers for more specialised topics as well.
Well, humans are very bad at trust.
 

computer-audiophile

Major Contributor
Joined
Dec 12, 2022
Messages
2,565
Likes
2,879
Location
Germany
Italy's data protection authority blocks ChatGPT
The AI application will be banned from processing user data from Italy with immediate effect, the authority announced on Friday. An investigation has also been launched.
 

mhardy6647

Grand Contributor
Joined
Dec 12, 2019
Messages
11,213
Likes
24,172
We were warned by 2001. Luckily, Star Trek taught us how to disable AIs by asking them to calculate pi to all decimal places.
and by invoking the Epimenides paradox. Indeed, this episode of Star Trek TOS (as it's become known) is where, at a tender age, I learned about it! :)


Skip up to ca. 4:06 if it's just too much camp to bear. :cool:
 