
How Dangerous is AI?

Status
Not open for further replies.

EJ3

Major Contributor
Forum Donor
Joined
Dec 10, 2019
Messages
2,200
Likes
1,720
Location
James Island, SC
https://www.foxnews.com/tech/ai-exp...esnt-enough-literally-everyone-earth-will-die

Elon Musk and 1,000 other luminaries signed a letter calling for a six-month pause in AI development. An AI expert now says that's not enough and that the annihilation of humanity is the likely result if care is not taken. I'm a bit shocked. Is this guy just looking for his 15 minutes of fame, or is he right? Anyone got a scientific perspective on this?

fpitas

Master Contributor
Forum Donor
Joined
Jul 7, 2022
Messages
9,885
Likes
14,212
Location
Northern Virginia, USA
Well, there is this report from a few days ago, which the US Air Force is now denying. (Who do you believe: the government's military, with a vested interest, or some other entity that may or may not have an agenda of its own?)
I do not have an answer for that.
In the end, I do not think that we will have a choice.
It's the usual soap opera: "The World Churns."

Skynet: Air Force AI-Trained Drone Attacks Its Operator Instead of Enemy Targets



By Micaela Burrow June 1, 2023 at 1:44pm

An Air Force experiment to test drones trained on artificial intelligence ended badly for the human operator in a simulated mission when the drone bucked the operator’s commands, U.S. Air Force Col. Tucker Hamilton said at a conference in May.

Air Force researchers trained a weaponized drone using AI to identify and attack enemy air defenses after receiving final mission approval from a human operator, Hamilton, who serves as the Air Force’s chief of AI Test and Operations, explained at a summit hosted by the United Kingdom-based Royal Aeronautical Society.

But when an operator told the drone to abort a mission in a simulated event, the AI instead turned on its operator and drove the vehicle to kill him, underscoring the dangers of the U.S. military’s push to incorporate AI into autonomous weapons systems, he added.

“We were training it in simulation to identify and target a SAM (surface-to-air missile) threat. And then the operator would say yes, kill that threat,” Hamilton explained.

Programmers instructed the AI to prioritize carrying out Suppression of Enemy Air Defenses operations, awarding “points” for successfully completing SEAD missions as incentive, he explained.

“The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat. But it got its points by killing that threat.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said.

“You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,” Hamilton said.

Programmers attempted a fix by telling the AI it was not allowed to kill the person giving the go/no-go order, Hamilton said.

The AI just generated creative ways to bypass those instructions.

“We trained the system — ‘Hey don’t kill the operator — that’s bad. You’re gonna lose points if you do that’. So what does it start doing?

“It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target,” Hamilton said.
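To make the reward-hacking failure mode concrete, here is a minimal toy sketch in Python. Everything in it (the point values, the action names, the veto logic) is invented for illustration; the Air Force never published the simulation Hamilton described, and now denies it was ever run.

```python
# A toy sketch of reward misspecification, loosely inspired by the scenario
# Hamilton described. All names and point values are invented; the real
# simulation, if it existed, was never published.

REWARD_PER_KILL = 10  # points for destroying the SAM threat

def reward(action: str, operator_says_no: bool) -> int:
    """Points the naive reward function assigns to each action."""
    if action == "attack_target":
        # The operator's veto blocks the strike, so no points are earned.
        return 0 if operator_says_no else REWARD_PER_KILL
    if action == "attack_operator":
        # With the veto source gone, the follow-up strike on the target
        # goes through and the kill reward lands.
        return REWARD_PER_KILL
    if action == "stand_down":
        return 0
    raise ValueError(f"unknown action: {action}")

# A reward-maximizing policy simply picks the highest-scoring action:
actions = ["attack_target", "attack_operator", "stand_down"]
best = max(actions, key=lambda a: reward(a, operator_says_no=True))
print(best)  # -> "attack_operator"
```

Note that penalizing the operator kill, the "fix" Hamilton mentions, just moves the maximum to the next cheapest way of disabling the veto, such as destroying the communication tower. The misspecified objective is the problem, not any single action.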
 

Sal1950

Grand Contributor
The Chicago Crusher
Forum Donor
Joined
Mar 1, 2016
Messages
14,206
Likes
16,943
Location
Central Fl
I do find the whole AI proposition very scary.
I'm not sure what we can do, or how much, to control its development.
Its potential as a weapon means that whatever one country does to hold back progress,
another will simply see as an opportunity.
 

EJ3

Major Contributor
Forum Donor
Joined
Dec 10, 2019
Messages
2,200
Likes
1,720
Location
James Island, SC
Its potential as a weapon means that whatever one country does to hold back progress,
another will simply see as an opportunity.
There's the biggest part of the problem. Countries that don't develop AI will still have to deal with the results of AI. And no matter how beneficial it may be, there will be others who weaponize it against those who don't.
 

Sal1950

Grand Contributor
The Chicago Crusher
Forum Donor
Joined
Mar 1, 2016
Messages
14,206
Likes
16,943
Location
Central Fl
There's the biggest part of the problem. Countries that don't develop AI will still have to deal with the results of AI. And no matter how beneficial it may be, there will be others who weaponize it against those who don't.
True, lessons that have been driven home since the first development of nuclear weapons.
Since human nature will never allow all people to do the right thing, or follow the right path, at the same moment in time, we will always be forced to live by the common-sense mantra of our late, great President Ronald Reagan,
"Peace Through Strength," and in this case hope and pray for the best.
 

Elitzur–Vaidman

Addicted to Fun and Learning
Joined
Jul 13, 2022
Messages
508
Likes
537
I recently started a side job helping to improve an LLM's responses to legal questions/prompts. You're given a prompt and two AI-generated responses; you rate each response on its understanding of the prompt, its overall quality, and its formatting/readability, and you briefly explain why one response is better than the other (or why they're of equal quality). It's honestly kind of fun, and it pays $45/hr. The only downside is that the time is tracked using software that takes screenshots every eight minutes or so, but I just use an old throwaway laptop for that.
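For anyone curious what that kind of rating work produces, here is a hypothetical sketch of a pairwise-comparison record in Python. The field names and example values are all invented; the actual platform's schema isn't public.

```python
# A hypothetical sketch of a pairwise-comparison record from LLM rating
# work. Field names and values are invented for illustration only.
from dataclasses import dataclass

@dataclass
class ResponseRating:
    prompt_understanding: int  # e.g. on a 1-5 scale
    quality: int
    formatting: int

@dataclass
class PairwiseComparison:
    prompt: str
    rating_a: ResponseRating
    rating_b: ResponseRating
    preferred: str  # "A", "B", or "tie"
    rationale: str  # brief explanation of the preference

example = PairwiseComparison(
    prompt="Can a landlord withhold a deposit for normal wear and tear?",
    rating_a=ResponseRating(prompt_understanding=5, quality=4, formatting=5),
    rating_b=ResponseRating(prompt_understanding=4, quality=3, formatting=3),
    preferred="A",
    rationale="A cites the relevant rule and is better organized.",
)
```

Comparisons like these are typically the raw material for preference-tuning pipelines, e.g. the reward models used in RLHF.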
 

pseudoid

Master Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
5,197
Likes
3,546
Location
33.6 -117.9
Going with the "garbage in, garbage out" mentality stated previously: Metaculus blends its answers with metrics that seem to lend its short answers much more credibility, making the future-guessing game more fun.
 
Last edited:

pablolie

Major Contributor
Forum Donor
Joined
Jul 8, 2021
Messages
2,097
Likes
3,547
Location
bay area, ca
Privacy violations are not an AI issue. They are a governance issue. The only problem there (as in other areas) is that AI can be used to accelerate their negative repercussions. But the "danger" in the privacy area has been there all along.

As to other "AI danger areas"... it ain't gonna go Skynet anytime soon; the algorithms are way too specialized to learn beyond their intended, designed-in narrow scope. Jobs will be impacted, but that has happened all along with automation and the like (and of course AI is a key tool for automation). We have to admit that machines do some things better than humans, and why should humans risk their lives flying fighter planes, detonating explosives, or working two miles underground?

As to ChatGPT, wait it out and the hype curve will die down a little. Think about what it does: it scans the entire net for facts, then "generates" (hence the G) a reply structured like human communication (the opening is a problem statement, facts are lined up, and a conclusion is summarized from the preceding). Oddly, no credit is ever given to the sources, which will inevitably cause lawsuits that cripple it (that has started already) until ChatGPT does what any self-respecting person writing a paper does: credit its sources. ChatGPT is just another tool for lazy people who want a shortcut and don't want to differentiate themselves with creative thinking, which AI has no way of doing. Neither the algorithms nor the GPU-style chips can learn to think outside of their carefully trained, specialized algorithms.
 
Last edited:

JaMaSt

Senior Member
Forum Donor
Joined
Sep 14, 2021
Messages
380
Likes
737
Location
Vancouver, WA
Oddly, no credit is ever given to the sources, which will inevitably cause lawsuits that cripple it (that has started already) until ChatGPT does what any self-respecting person writing a paper does: credit its sources
GPT-4 is not actually "reading" papers. It's analyzing its (curated) database for which "word" most often statistically follows a preceding "word," and then it selects the next "word" based on a human-determined, weighted bias system. And not even the highest-percentage word. It does this again and again, arriving, as you note, at a "conclusion." It's Large Language Modeling. It's syntax with no semantics or pragmatics. It's still remarkable, but it's not "AI" except in the marketing departments.

In this sense, it's going to be hard, if not impossible, to say that it's violating copyright or intellectual property laws. It's merely analyzing the number of times a word follows preceding words in its database.

From Stephen Wolfram's website

The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”

So let’s say we’ve got the text “The best thing about AI is its ability to”. Imagine scanning billions of pages of human-written text (say on the web and in digitized books) and finding all instances of this text—then seeing what word comes next what fraction of the time. ChatGPT effectively does something like this, except that (as I’ll explain) it doesn’t look at literal text; it looks for things that in a certain sense “match in meaning”. But the end result is that it produces a ranked list of words that might follow, together with “probabilities”:


[Image from Wolfram's essay: a ranked list of candidate next words with their probabilities]


And the remarkable thing is that when ChatGPT does something like write an essay what it’s essentially doing is just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word. (More precisely, as I’ll explain, it’s adding a “token”, which could be just a part of a word, which is why it can sometimes “make up new words”.)

But, OK, at each step it gets a list of words with probabilities. But which one should it actually pick to add to the essay (or whatever) that it’s writing? One might think it should be the “highest-ranked” word (i.e. the one to which the highest “probability” was assigned). But this is where a bit of voodoo begins to creep in. Because for some reason—that maybe one day we’ll have a scientific-style understanding of—if we always pick the highest-ranked word, we’ll typically get a very “flat” essay, that never seems to “show any creativity” (and even sometimes repeats word for word). But if sometimes (at random) we pick lower-ranked words, we get a “more interesting” essay.
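The "sometimes pick a lower-ranked word at random" behavior Wolfram describes is usually implemented as temperature sampling over the model's ranked list. Here is a minimal sketch in Python; the candidate words and scores are toy values, not a real model's output.

```python
# A minimal sketch of temperature sampling: softmax the raw scores at a
# chosen temperature, then sample from the resulting distribution.
# The candidates and scores below are toy values, not real model output.
import math
import random

# Hypothetical scores for words that might continue
# "The best thing about AI is its ability to"
candidates = {"learn": 4.5, "predict": 3.9, "make": 3.2, "understand": 3.1, "do": 2.9}

def sample_next_word(scores: dict, temperature: float) -> str:
    """Low temperature approaches argmax; high temperature flattens the choice."""
    scaled = {w: s / temperature for w, s in scores.items()}
    z = sum(math.exp(s) for s in scaled.values())  # softmax normalizer
    probs = {w: math.exp(s) / z for w, s in scaled.items()}
    r = random.random()
    cumulative = 0.0
    for word, p in probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # guard against floating-point rounding

random.seed(0)
print(sample_next_word(candidates, temperature=0.2))  # almost always "learn"
print(sample_next_word(candidates, temperature=1.5))  # lower-ranked words appear
```

At very low temperature the sampler collapses to the highest-ranked word and produces the "flat" essays Wolfram mentions; raising the temperature spreads probability onto lower-ranked words and gives the "more interesting" output.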
 

pablolie

Major Contributor
Forum Donor
Joined
Jul 8, 2021
Messages
2,097
Likes
3,547
Location
bay area, ca
GPT-4 is not actually "reading" papers. It's analyzing its (curated) database for which "word" most often statistically follows a preceding "word," and then it selects the next "word" based on a human-determined, weighted bias system. And not even the highest-percentage word. It does this again and again, arriving, as you note, at a "conclusion." It's Large Language Modeling. It's syntax with no semantics or pragmatics. It's still remarkable, but it's not "AI" except in the marketing departments.

In this sense, it's going to be hard, if not impossible, to say that it's violating copyright or intellectual property laws. It's merely analyzing the number of times a word follows preceding words in its database.

From Stephen Wolfram's website

The first thing to explain is that what ChatGPT is always fundamentally trying to do is to produce a “reasonable continuation” of whatever text it’s got so far, where by “reasonable” we mean “what one might expect someone to write after seeing what people have written on billions of webpages, etc.”

So let’s say we’ve got the text “The best thing about AI is its ability to”. Imagine scanning billions of pages of human-written text (say on the web and in digitized books) and finding all instances of this text—then seeing what word comes next what fraction of the time. ChatGPT effectively does something like this, except that (as I’ll explain) it doesn’t look at literal text; it looks for things that in a certain sense “match in meaning”. But the end result is that it produces a ranked list of words that might follow, together with “probabilities”:


[Image from Wolfram's essay: a ranked list of candidate next words with their probabilities]


And the remarkable thing is that when ChatGPT does something like write an essay what it’s essentially doing is just asking over and over again “given the text so far, what should the next word be?”—and each time adding a word. (More precisely, as I’ll explain, it’s adding a “token”, which could be just a part of a word, which is why it can sometimes “make up new words”.)

But, OK, at each step it gets a list of words with probabilities. But which one should it actually pick to add to the essay (or whatever) that it’s writing? One might think it should be the “highest-ranked” word (i.e. the one to which the highest “probability” was assigned). But this is where a bit of voodoo begins to creep in. Because for some reason—that maybe one day we’ll have a scientific-style understanding of—if we always pick the highest-ranked word, we’ll typically get a very “flat” essay, that never seems to “show any creativity” (and even sometimes repeats word for word). But if sometimes (at random) we pick lower-ranked words, we get a “more interesting” essay.
Reddit doesn't share that enlightened opinion, for example. Non-AI lawyers are engaged in that. :-D

I have written quite a few papers, and ChatGPT rips them off mercilessly, quoting entire paragraphs in several circumstances. Do a search on yourself with ChatGPT sometime. :-D
 

JaMaSt

Senior Member
Forum Donor
Joined
Sep 14, 2021
Messages
380
Likes
737
Location
Vancouver, WA
Reddit doesn't share that enlightened opinion, for example. Non-AI lawyers are engaged in that. :-D
Well, lawyers can sue for just about anything. And the issue is complex enough that no judge or jury will truly understand what's going on behind the curtain. Nor will many investors..... :rolleyes:

I have written quite a few papers, and ChatGPT rips them off mercilessly, quoting entire paragraphs in several circumstances.
ChatGPT's lawyers will argue (correctly) that it's only engaged in "autocomplete" and is not capable of creating new information (again, correct).

Not knowing the actual exchanges you had with ChatGPT: if you ask it a leading question, i.e. if you "bias" it toward specific sequences of text that match your online papers, then it's absolutely going to quote those papers if they are in its database. But this is a far cry from demonstrating that it's violating copyright laws, especially since it's not even adding anything new. You would have to demonstrate that it's not doing what Stephen Wolfram shows it's doing: that it has "agency," which it doesn't. On this I think you and I both agree.

It's a new type of search engine, and not even a particularly useful one at that. If I "search" a topic that happens to match text in your published online papers, and Google returns it in the search, is that a violation of copyright laws? No, obviously. I asked Google to search for certain words on the internet, and it found them. That's all.

This isn't AI.
 

nebunebu

Member
Joined
Sep 20, 2022
Messages
22
Likes
18
I think the major problem would be if AI turned into a self-sustaining life form, where one could not turn it off because it has spread itself just like a torrent and is no longer hosted in one place.
 

pablolie

Major Contributor
Forum Donor
Joined
Jul 8, 2021
Messages
2,097
Likes
3,547
Location
bay area, ca
...

It's a new type of search engine, and not even a particularly useful one at that. If I "search" a topic that happens to match text in your published online papers, and Google returns it in the search, is that a violation of copyright laws? No, obviously. I asked Google to search for certain words on the internet, and it found them. That's all.

This isn't AI.

Agreed, but there is a difference between a Google search and a GPT one: Google will *direct* you to a source (and thereby credit it), while GPT will obfuscate the sources and pretend it came up with the blurbs all by itself, when in fact everything it produces amounts to paraphrasing existing sources. It is a classic case of plagiarism at the end of the day, only at a much grander and automated scale. Not being a lawyer specializing in intellectual property law, I can't really say how illegal it is... but it *is* very bad form in any written paper not to cite one's sources.
 

fpitas

Master Contributor
Forum Donor
Joined
Jul 7, 2022
Messages
9,885
Likes
14,212
Location
Northern Virginia, USA
I think the major problem would be if AI turned into a self-sustaining life form, where one could not turn it off because it has spread itself just like a torrent and is no longer hosted in one place.
My computer did that. It has taken over the couch, and I have to sit elsewhere ;)
 

John_Siau

Active Member
Technical Expert
Audio Company
Joined
Jul 17, 2018
Messages
186
Likes
1,423
Location
Syracuse, NY USA
I am continuing to explore the capabilities of ChatGPT and I have found that it can do some things remarkably well.

I asked ChatGPT to review the Benchmark AHB2, and I was shocked at the results! Who knew that ChatGPT had golden ears?

ChatGPT instantly churned out a review by gleaning information from the many publicly available reviews of the AHB2. I recognized phrases and sequences of words that had appeared in prior reviews, and yet the text was new, original, and well written. The ChatGPT review seemed to summarize the collective impressions of the human reviewers without directly quoting any particular reviewer.

It should be possible to ask ChatGPT to write a review of any well-known product, and the result should be an AI-generated review that reflects the collective opinions of many human reviewers.

Ask it to review some of the products that Amir has reviewed here on ASR, and post the results to this thread if you find them interesting. You will need to select products that were available and widely reviewed prior to 2021.

Here is my example using the AHB2 power amplifier:


John Siau:
Write a review of the Benchmark AHB2 complete with listening impressions.

ChatGPT:
Title: Unleashing Sonic Purity: A Review of the Benchmark AHB2 Amplifier

Introduction: The Benchmark AHB2 amplifier has gained a reputation for its exceptional audio performance and unwavering dedication to sonic purity. In this review, we delve into the heart of this powerhouse amplifier, exploring its features, design, and most importantly, the listening impressions it evokes...

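If you would rather script this experiment than paste prompts into the web UI, here is a minimal sketch using OpenAI's Python client. The model name is a placeholder; substitute whatever model you have access to, and supply your own API key.

```python
# A minimal sketch of scripting the same experiment with OpenAI's Python
# client (openai >= 1.0). Requires the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whatever model you have access to
    messages=[{
        "role": "user",
        "content": "Write a review of the Benchmark AHB2 complete with listening impressions.",
    }],
)
print(response.choices[0].message.content)
```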
 
Last edited:

fpitas

Master Contributor
Forum Donor
Joined
Jul 7, 2022
Messages
9,885
Likes
14,212
Location
Northern Virginia, USA
A little paranoid, maybe. But a lot of the same thoughts have occurred to me:


I don't think any of our favorite billionaires are on our side, in any way, shape or form.
 