
How Dangerous is AI?

Status
Not open for further replies.

fpitas

Master Contributor
Forum Donor
Joined
Jul 7, 2022
Messages
9,885
Likes
14,201
Location
Northern Virginia, USA
That wacky ChatGPT is hallucinating again:

 

GaryH

Major Contributor
Joined
May 12, 2021
Messages
1,350
Likes
1,850
AI 'hallucinating' is just a tech-bro euphemism for fabricating bullshit.
 

pablolie

Major Contributor
Forum Donor
Joined
Jul 8, 2021
Messages
2,086
Likes
3,510
Location
bay area, ca
It's a generative-AI term; it means the model had insufficient context or training data. But it shows there's no magic cognitive anything... it's the training you put into the algorithms.
 

GaryH

Major Contributor
I'm sure they chose an anthropomorphic term for a reason. 'Broken' would have done just fine.
 

wgb113

Active Member
Joined
Feb 25, 2019
Messages
145
Likes
151
Location
Pennsylvania
I’m a bit of a simpleton. If its creators are having an ”oh shit, what did we do” moment, we should probably listen.

Not worried about AI on its own…more worried about who/what it’s being fed.
 

fpitas

Master Contributor
Forum Donor
I’m a bit of a simpleton. If its creators are having an ”oh shit, what did we do” moment, we should probably listen.
Its creators are trying to generate trendy alarmist buzz, and they're succeeding in that they're collecting vast fortunes to fund more research. Odd how that works, considering how "scared" they are.
 

wgb113

Active Member
Its creators are trying to generate trendy alarmist buzz, and they're succeeding in that they're collecting vast fortunes to fund more research. Odd how that works, considering how "scared" they are.
Given that all of our ideas seem to come from the world of sci-fi entertainment, we’ve probably already written how this all ends.
 

Bow_Wazoo

Addicted to Fun and Learning
Joined
Jan 3, 2021
Messages
774
Likes
676
[attached image: FB_IMG_1686908572421.jpg]
 

Trell

Major Contributor
Joined
May 13, 2021
Messages
2,752
Likes
3,285
Here is one case where a lawyer used ChatGPT and the judge was not amused.

While this is a serious :facepalm: for a lawyer, it also shows one danger of this technology: people just unquestioningly accept the output. This kind of technology will also be used, if it isn't already, by governments for administrative work, and we can all see how that could turn out badly.


>>>A lawyer is in trouble after admitting he used ChatGPT to help write court filings that cited six nonexistent cases invented by the artificial intelligence tool.

Lawyer Steven Schwartz of the firm Levidow, Levidow, & Oberman "greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity," Schwartz wrote in an affidavit on May 24 regarding the bogus citations previously submitted in US District Court for the Southern District of New York.

Schwartz wrote that "the use of generative artificial intelligence has evolved within law firms" and that he "consulted the artificial intelligence website ChatGPT in order to supplement the legal research performed."

The "citations and opinions in question were provided by ChatGPT which also provided its legal source and assured the reliability of its content," he wrote. Schwartz admitted that he "relied on the legal opinions provided to him by a source that has revealed itself to be unreliable," and stated that it is his fault for not confirming the sources provided by ChatGPT.


Schwartz didn't previously consider the possibility that an artificial intelligence tool like ChatGPT could provide false information, even though AI chatbot mistakes have been extensively reported by non-artificial intelligence such as the human journalists employed by reputable news organizations. The lawyer's affidavit said he had "never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false."...<<<
 

Steven Holt

Senior Member
Forum Donor
Joined
May 8, 2022
Messages
437
Likes
564
After reading through this thread, I'm getting a strong urge to watch COLOSSUS: THE FORBIN PROJECT again.
 

ferrellms

Active Member
Joined
Mar 24, 2019
Messages
299
Likes
258
https://www.foxnews.com/tech/ai-exp...esnt-enough-literally-everyone-earth-will-die

Elon Musk and 1000 other luminaries signed a letter calling for a 6 month pause in AI development. An AI expert now says that's not enough and the annihilation of humanity is the likely result if care is not taken. I'm a bit shocked. Is this guy just looking for his 15 minutes of fame or is he right? Anyone got a scientific perspective on this?
Let an AI answer for itself (GPT-4 with web plugins) -

The potential dangers of artificial intelligence (AI) are a topic of ongoing debate among scientists, ethicists, and policymakers. Here are some of the main concerns:
  1. Job Displacement: AI and automation could replace many jobs, leading to significant social and economic disruption. While new jobs may be created by the AI revolution, there's concern that these jobs may require skills that displaced workers don't have.
  2. Privacy: AI systems can process vast amounts of personal data, leading to potential privacy violations. For example, facial recognition technology, if misused, can lead to mass surveillance.
  3. Security: AI could be used maliciously, such as in autonomous weapons or through AI-powered cyberattacks.
  4. Bias: AI systems can perpetuate and amplify existing biases in society if they're trained on biased data. This can lead to unfair outcomes in areas like hiring, lending, and law enforcement.
  5. Lack of Transparency: Many AI systems, especially those based on deep learning, are often described as "black boxes" because their operations are not fully understood, even by their creators. This lack of transparency can make it difficult to ascertain why an AI made a particular decision.
  6. Superintelligence Risk: Some theorists, like philosopher Nick Bostrom, have raised concerns about the possibility of AI systems becoming so advanced that they surpass human intelligence (a scenario often referred to as the "singularity"). In this scenario, it's feared that such a "superintelligence" might act in ways that are harmful to humanity, either inadvertently or intentionally.
It's important to note that many of these risks are not inherent to AI itself, but rather to how it's used and governed. As such, many researchers and policymakers are working on ways to mitigate these risks, such as through better AI transparency and explainability, robust and fair machine learning techniques, privacy-preserving AI methods, and regulations to ensure responsible AI use.
 

fpitas

Master Contributor
Forum Donor
ChatGPT does it again:


I'm starting to be a fan. It's weeding out incompetence.
 

pseudoid

Master Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
5,162
Likes
3,501
Location
33.6 -117.9
ChatGPT does it again:
I'm starting to be a fan. It's weeding out incompetence.
So, humans are not allowed to lie, but the real criminal (= the chatbot) gets away with it! ;)
The Truth About Lying Statistics
  1. According to a study by the University of Massachusetts, 60% of people can't go 10 minutes without lying.
  2. Every week, Americans tell 11 lies.
  3. Children start lying at the age of 2, and by the age of 4, they can lie convincingly.
Ain't no big thang!
 