
How Dangerous is AI?

Status
Not open for further replies.

Peterinvan

Senior Member
Joined
Dec 10, 2021
Messages
306
Likes
237
Location
Canada
To me AI will be damaging unless we can be sure it is offering up factual information.
At this stage, it can easily be used to manipulate masses of people with fake propaganda. I hope FactCheck.org can evolve to verify GPT responses.
 

pseudoid

Master Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
5,197
Likes
3,546
Location
33.6 -117.9
Please read this from "The Conversation":
There are so many articles popping up everywhere, even with the search filtered down to the last 30 days, using the words 'sentience (sentiency)' and 'AI'.
There is a joke about the daughter who asks her dad why he speaks so quietly around the house.
"Because there is artificial intelligence everywhere that is listening to what we say," the dad replies.
The daughter laughs, the father laughs. And Alexa laughs.
19 researchers say AI is NOT sentient -- not YET... anyway!
I hope I get notified... I like fresh popcorn.:facepalm:
 

pablolie

Major Contributor
Forum Donor
Joined
Jul 8, 2021
Messages
2,100
Likes
3,552
Location
bay area, ca
To me AI will be damaging unless we can be sure it is offering up factual information.
At this stage, it can easily be used to manipulate masses of people with fake propaganda. I hope FactCheck.org can evolve to verify GPT responses.

True. That is because we expect it to be intelligent by itself, but it is entirely trained by humans, with all their biases and preferences. The usefulness of AI is entirely dictated by the goal: market analysts like Gartner report that over 75% of AI projects fail to deliver results. Basically, unless the focus of the application is razor sharp, AI is pretty useless.

Big data is a discipline that has been around for a while. But the ultimately manual interpretation of it in data science (supposedly the hottest career track five years ago), with tools like R, was not that successful, hence AI-trained models are now the new hope for skimming and interpreting big data.

For those with data worries (which I entirely share), the real concern is Big Data itself, not necessarily AI (which is just a big-data consumer).

For example, as much as I like the notion of AI-aided healthcare apps giving my doctor better ways to treat me preventively going forward, the thought that the same system might simultaneously send a note to my employer saying I have a health condition that may render me unproductive in a few months is profoundly disturbing.
 

Shiva

Active Member
Joined
Apr 14, 2021
Messages
116
Likes
97
I use the AI Bard quite often. Its information is usually good, but it is also often wrong.
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,357
Likes
6,881
Location
San Francisco
I don't think AI needs sentience (whatever that means) to be super-intelligent (or a threat) any more than an airplane needs feathers to fly.

Concerning ourselves with the philosophical status of AI isn't that interesting yet, because we don't even have a solid way to prove any given HUMAN is sentient, let alone a machine.

What's a lot more interesting is what's going to happen to us when AI reaches certain levels of effectiveness and in whose hands the AI is at that time. As ever, the most dangerous animal is man.
 

pablolie

Major Contributor
Forum Donor
Joined
Jul 8, 2021
Messages
2,100
Likes
3,552
Location
bay area, ca
I don't think AI needs sentience (whatever that means) to be super-intelligent (or a threat) any more than an airplane needs feathers to fly.

Concerning ourselves with the philosophical status of AI isn't that interesting yet, because we don't even have a solid way to prove any given HUMAN is sentient, let alone a machine.

What's a lot more interesting is what's going to happen to us when AI reaches certain levels of effectiveness and in whose hands the AI is at that time. As ever, the most dangerous animal is man.
If you look at all the disciplines in AI and consider their potential going forward if you somehow combine them (and innovation through reuse has been a movement for many years), you could get something that sees some things, interprets some things, and can suggest (or, if you trust it, execute) actions based on that data. But each and every step is trained by humans. Even what we call deep neural networks (DNNs) for deep learning aren't anywhere remotely close to human thinking - they are utterly specialized and can't learn outside their configured training. They can only consume more data, which in turn they may need to be trained on.

What we call AI is primarily a bunch of specialized algorithms for processing a variety of big data. Algorithms. No one is afraid of the Shannon-Nyquist theorem because it "learned" that frequencies up to 22 kHz can be perfectly preserved, and I think of a lot of current AI the exact same way: a specialized model for processing big data. The AI label has been applied to lame stuff for over 50 years, and this latest iteration will also inevitably ride the hype curve from peak to disillusionment... and then to specialized usefulness.
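As an aside, the Shannon-Nyquist point is easy to demonstrate numerically: a tone sampled above twice its frequency can be reconstructed between the sample points by sinc interpolation. A minimal sketch (the window is finite, so the error is tiny but not exactly zero):

```python
import numpy as np

fs = 44_100   # CD sample rate in Hz; Nyquist limit is fs/2 = 22_050 Hz
f = 1_000     # test tone, well below the Nyquist limit
n = np.arange(256)
samples = np.sin(2 * np.pi * f * n / fs)

def reconstruct(t, samples, fs):
    """Whittaker-Shannon (sinc) interpolation at time t (seconds)."""
    k = np.arange(len(samples))
    return float(np.sum(samples * np.sinc(fs * t - k)))

# Evaluate halfway between two sampling instants, far from the window
# edges, and compare against the true continuous sine at that time.
t = 100.5 / fs
true_val = np.sin(2 * np.pi * f * t)
est = reconstruct(t, samples, fs)
print(abs(est - true_val))  # small, but nonzero: the window is finite
```

With an infinite sample window the reconstruction would be exact; the residual here comes purely from truncating the sinc sum.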
 

EJ3

Major Contributor
Forum Donor
Joined
Dec 10, 2019
Messages
2,202
Likes
1,720
Location
James Island, SC
If you look at all the disciplines in AI and consider their potential going forward if you somehow combine them (and innovation through reuse has been a movement for many years), you could get something that sees some things, interprets some things, and can suggest (or if you trust it, execute) actions based on that data. But each and every step is trained by humans. Even what we call deep neural networks (DNN) for deep learning isn't anywhere remotely close to human thinking - it is utterly specialized and can't learn out of the configured training. It just can consume more data which in turn it may need to be trained on.

What we call AI is primarily a bunch of specialized algorithms to process a variety of big data. Algorithms. No one is afraid of Shannon Nyquist because it "learned" to present frequencies between 20-22kHz to be perfectly preserved, and I think of a lot of current AI stuff the exact same way. It is about a specialized model to process big data. The AI nomenclature has been used for over 50 years for lame stuff, and this latest iteration will also inevitably ride the hype curve from top to disillusionment... and then to specialized usefulness.
Here is an interview with AI humanoid robots (at least that is what it appears to be to me):
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,357
Likes
6,881
Location
San Francisco
If you look at all the disciplines in AI and consider their potential going forward if you somehow combine them (and innovation through reuse has been a movement for many years), you could get something that sees some things, interprets some things, and can suggest (or if you trust it, execute) actions based on that data. But each and every step is trained by humans. Even what we call deep neural networks (DNN) for deep learning isn't anywhere remotely close to human thinking - it is utterly specialized and can't learn out of the configured training. It just can consume more data which in turn it may need to be trained on.

What we call AI is primarily a bunch of specialized algorithms to process a variety of big data. Algorithms. No one is afraid of Shannon Nyquist because it "learned" to present frequencies between 20-22kHz to be perfectly preserved, and I think of a lot of current AI stuff the exact same way. It is about a specialized model to process big data. The AI nomenclature has been used for over 50 years for lame stuff, and this latest iteration will also inevitably ride the hype curve from top to disillusionment... and then to specialized usefulness.
I guess this is all true or at least directionally so. But it doesn't really matter. An airplane flies faster, farther and carries vastly more weight than any bird.

A plane doesn't "know how" to fly. It can't land on a branch or catch a bug. It can't walk on the ground. It can't even see, really. It can't do many things that birds do. But as a flying machine, it is vastly superior in many important ways.

A thinking machine may be just like this. Humans don't have infinitely variable abilities, either. A series of machines that can only do one thing well will eventually do everything humans can do, better.

Will it actually think or feel the way we do? Doubtful. Will that have any practical importance? Doubtful.
 

AdrianusG

Senior Member
Joined
Feb 28, 2023
Messages
392
Likes
296
I don't think AI needs sentience (whatever that means) to be super-intelligent (or a threat) any more than an airplane needs feathers to fly.

Concerning ourselves with the philosophical status of AI isn't that interesting yet, because we don't even have a solid way to prove any given HUMAN is sentient, let alone a machine.

What's a lot more interesting is what's going to happen to us when AI reaches certain levels of effectiveness and in whose hands the AI is at that time. As ever, the most dangerous animal is man.
Exactly. Who's going to profit the most: us, the consumers, or the people and companies investing billions into these systems?
They have not exactly been beacons of ethics so far.
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,357
Likes
6,881
Location
San Francisco
Exactly, who's gonna profit the most, us the consumers, or the people/companies that invest billions into these systems.
they have shown already that they are not exactly beacons in Ethics so far.
The basic concept of capitalism is that the investors/owners profit from and control the business. I don't see any publicly owned AI systems out there. So yeah, that's the expected outcome of all this.

The problem is that if AI turns out to be as powerful as people say it is, it will obviate all human labor within a single generation. If owners no longer need workers, the economy will collapse, because it isn't designed to handle that scenario. It's predicated on a labor market in which people can earn money and afford to buy the stuff businesses produce; if that goes away, then it all goes away.

This is why Sam Altman is pushing Worldcoin as the basis for a future universal basic income: AFAIK that's currently the best idea anyone has for replacing the labor market while preserving the current capitalist status quo, at least in part.

I for one would prefer a more robust solution, but at least some people are aware we're potentially headed for a serious problem.
 

pseudoid

Master Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
5,197
Likes
3,546
Location
33.6 -117.9
I woke up this morning to the news that California is considering a single button to DELETE your data everywhere.
POOF! All gone!
It's nice waking up to what sounds like good news... then... reality hits you because it is going to be another BOTCHed half-baked idea that is another bridge to nowhere...:mad:
 

pseudoid

Master Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
5,197
Likes
3,546
Location
33.6 -117.9
*Sheila has 3 brothers. *Each brother has 2 sisters. *How many sisters does Sheila have?
^^This prompt was given to dozens of LLMs (large language models)^^
None of the LLMs offered the correct answer, although ChatGPT-4.0 came closest (yet was still incorrect). o_O

Why would I ever tell you the correct answer? :evilgrin:
[One reason is that I don't want any of the LLMs to learn the answer from me.]
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,357
Likes
6,881
Location
San Francisco
I woke up this morning to the news that California is thinking of a single button "DELETE" of your data everywhere.
POOF! All gone!
It's nice waking up to what sounds like good news... then... reality hits you because it is going to be another BOTCHed half-baked idea that is another bridge to nowhere...:mad:
The EU has similar rules in the form of GDPR. Let me tell you, as a marketer: it's not that we don't want to comply with these rules, it's that compliance is genuinely difficult. Even at a small-to-medium-sized company it's hard to know where every bit of data about someone is stored, let alone how to delete all of it.

I tend to agree that it's a nice idea in theory but tricky in practice.

Maybe it could work if CA set up something like the "Do Not Call" registry. A big list of online IDs that need to have their data purged. Access the list and purge accordingly. Shouldn't be that hard to deal with if it goes that way. What's tricky is dealing with individual requests on individual platforms.
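A registry like that could be consumed roughly as follows; this is a hypothetical sketch (the hashing scheme, field names, and addresses are all invented for illustration, since no such registry exists yet):

```python
import hashlib

def hash_id(email: str) -> str:
    # A real registry would likely publish salted hashes rather than raw
    # addresses; a plain SHA-256 of the lower-cased email keeps it simple.
    return hashlib.sha256(email.lower().encode()).hexdigest()

def purge(records: dict, registry_hashes: set) -> dict:
    """Keep only records whose hashed ID is NOT on the deletion registry."""
    return {uid: data for uid, data in records.items()
            if hash_id(uid) not in registry_hashes}

records = {"alice@example.com": {"plays": 42},
           "bob@example.com": {"plays": 7}}
registry = {hash_id("alice@example.com")}   # Alice asked to be deleted
remaining = purge(records, registry)
print(sorted(remaining))  # ['bob@example.com']
```

Publishing hashes instead of raw IDs would let platforms check membership without the registry itself becoming a leaked contact list.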
 

pseudoid

Master Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
5,197
Likes
3,546
Location
33.6 -117.9
I woke up this morning to the news that California is thinking of a single button "DELETE" of your data everywhere.
I apologize, I forgot to provide any links for this brand new CA law that is called the "Delete Act".
Previously:
The California Privacy Rights Act (CPRA) is a CA law (passed via ballot referendum) that protects individuals' data-privacy rights, in effect since January 2023.
The CPRA builds on earlier legislation, the 2018 California Consumer Privacy Act (CCPA), expanding employers' obligations around collecting, storing, using, and sharing their employees' personal data.
They promise they will finish the bridge-to-nowhere... whenever.
 

pablolie

Major Contributor
Forum Donor
Joined
Jul 8, 2021
Messages
2,100
Likes
3,552
Location
bay area, ca
... Even at a small-to-medium size company it's hard to know where every bit of data about someone is and how to delete it.

I tend to agree that it's a nice idea in theory but tricky in practice.

Well, if it was worth investing in gathering data, it is doubly important to invest in protecting it. :)

The (huge) potential penalties in Europe and California will force companies to make strict data protection happen.
 

pseudoid

Master Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
5,197
Likes
3,546
Location
33.6 -117.9
Europe and California will force companies to make strict data protection happen.
... but hadn't those 'protections' left the train station long ago?
People got tired of hearing about yet another data compromise. :confused:

"Annual number of data compromises and individuals impacted in the United States from 2005 to 2022"
Here are the stats:

I am guessing that statista.com (et al.) have already parsed the breached databases to produce the above trends; I suspect the "algorithms" had a hand in it, and 'they' may not want to release that data, for their own subsistence and longevity! :mad:
 

pablolie

Major Contributor
Forum Donor
Joined
Jul 8, 2021
Messages
2,100
Likes
3,552
Location
bay area, ca
... but haven't those 'protections' leave the train station a long ago?
People got tired of hearing about just another data compromise.:confused:

"Annual number of data compromises and individuals impacted in the United States from 2005 to 2022"
Here are the stats:
I am guessing that if statista.com (et al.) have already parsed the breach'd databases to show the above trends; I am certain that the "algorithms" have had a hand in it and 'they' may not want to release that data for their own subsistence and longevity! :mad:
Indeed, but I'd point out that "data compromises" are mostly about security breaches in the data-storage domain. And while I have little doubt we will have to learn some lessons about AI security, AI hasn't been the driver behind that data-breach statistic. In theory you could use AI to anonymize data right away, completely separating statistical trends from any personal information (which is the only thing GDPR and the like worry about). Technologies like AI or blockchain could also help establish whether personal data was somehow shared with third parties. It could be something as innocuous as the email addresses we share with this very website being passed to some audio-vendor sponsor (note I am not at all claiming this happens, it's just a theoretical example), but it gets really serious when it involves health information and similar data.
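The separate-trends-from-identity idea can be sketched in a few lines; this is a rule-based stand-in rather than actual AI, and the field names are invented for illustration:

```python
from statistics import mean

# Direct identifiers that must never reach the analytics side.
PII_FIELDS = {"name", "email", "ip_address"}

def split_record(record: dict):
    """Split a record into (trend_data, pii); only trend_data is analyzed."""
    trend = {k: v for k, v in record.items() if k not in PII_FIELDS}
    pii = {k: v for k, v in record.items() if k in PII_FIELDS}
    return trend, pii

raw = [
    {"name": "Jane", "email": "jane@example.com", "listening_hours": 12},
    {"name": "Joe", "email": "joe@example.com", "listening_hours": 8},
]
trends = [split_record(r)[0] for r in raw]
avg = mean(r["listening_hours"] for r in trends)
print(avg)  # 10
```

The statistical trend (average listening hours) survives intact even though no personal identifier ever reaches the aggregation step.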

In a nutshell: I separate data privacy issues and AI issues. Data privacy is a big issue whether AI exists or not. And companies will have to be vigilant, because data privacy violations can and will come with big financial penalties - and *that* is something companies care about... :)
 

pseudoid

Master Contributor
Forum Donor
Joined
Mar 23, 2021
Messages
5,197
Likes
3,546
Location
33.6 -117.9
ChatGPT has learned to talk:eek:
OpenAI, the San Francisco artificial intelligence start-up, released a version of its popular chatbot on Monday that can interact with people using spoken words. As with Amazon’s Alexa, Apple’s Siri, and other digital assistants, users can talk to ChatGPT and it will talk back.
Maybe soon, ChatGPT will be able to perform arias in your living rooms:rolleyes:
 

pablolie

Major Contributor
Forum Donor
Joined
Jul 8, 2021
Messages
2,100
Likes
3,552
Location
bay area, ca
ChatGPT has learned to talk:eek:

Maybe soon, ChatGPT will be able to perform arias in your living rooms:rolleyes:
Unlikely - the model isn't great for music. Below is something I am actively working on to establish AI guidance in a consultative way. For music, however, check out something like https://magenta.tensorflow.org/ .. :) It is in no way an exhaustive list, just highlights. It's the wild, wild west out there. But many of the use cases on the right side of the chart require prohibitive investment. Even OpenAI cannot afford to train their new GPT-5 model. The most complex AI models require extremely deep pockets (and use cases that actually make $$$).

[Attachment: GenAI models overview chart]
 
Last edited:

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,357
Likes
6,881
Location
San Francisco
To me AI will be damaging unless we can be sure it is offering up factual information.
I wonder if this is likely to happen anytime soon. Let's rule out omniscience or whatever, but just say that the AI has to do as well as a smart, unbiased human in establishing whether a bit of information is true or not.

This is not so trivial. If a given factoid is not well established, you have to search many sources of information, judge the credibility of each one, and draw on existing knowledge, sometimes across many fields, before finally reaching a conclusion as simple as "yes, no, or maybe".
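In its crudest possible form, the last step of that process looks like a credibility-weighted vote; a toy sketch with invented scores and thresholds:

```python
def judge(claims):
    """claims: list of (verdict, credibility), verdict in {+1, -1}.

    Combine the sources' verdicts, weighted by how credible each source
    is judged to be, into a single yes / no / maybe answer. The 0.5
    thresholds are arbitrary, chosen only for illustration.
    """
    score = sum(v * c for v, c in claims)
    weight = sum(c for _, c in claims)
    ratio = score / weight if weight else 0.0
    if ratio > 0.5:
        return "yes"
    if ratio < -0.5:
        return "no"
    return "maybe"

sources = [(+1, 0.9), (+1, 0.8), (-1, 0.2)]  # two strong yeses, one weak no
print(judge(sources))  # yes
```

Of course the hard parts, finding the sources and assigning the credibility scores, are exactly what this toy model takes as given.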

AFAIK the current GPT-style generative tools do not do this kind of work. I am not sure if the tool exists or is close to existing.

Also, there are plenty of people with vested interests in making sure the AIs DON'T offer up factual information. For example, reporting certain facts in certain countries gets you thrown in prison, others have a lot riding on spreading misinformation, and I'm not just referring to Shunyata here.

And then there is, of course, the depressingly large set of facts that are up for political or religious debate, for whatever reason.

To give an extreme example, some people maintain it's a fact that slavery in the USA was good for the slaves. Most others maintain the opposite. Few will be pedantic enough to point out that "good" and "bad" are subjective judgments and so aren't "facts" per se. Almost nobody would be satisfied with the latter response anyway. What should an AI do about this, let alone the people who have to answer for its output?

I think we can make AIs avoid spouting total fantasies or outright lies at some point. However, the status of unconfirmed and controversial facts will probably always be up to people.
 