
How Dangerous is AI?


JaMaSt

Senior Member
Forum Donor
Joined
Sep 14, 2021
Messages
383
Likes
743
Location
Vancouver, WA
As I understand it, he was forced out of a previous AI company. My guess is his ego has taken over again. He'll probably lose another fortune, but who knows.
I'm fairly certain the company you are referring to is OpenAI - the maker of ChatGPT. He and his old business partner, Peter Thiel, were initial investors, with others, in OpenAI.

https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai
https://www.businessinsider.com/elo...ad-openai-left-after-founders-objected-2023-3
 

JaMaSt

Senior Member
Forum Donor
Joined
Sep 14, 2021
Messages
383
Likes
743
Location
Vancouver, WA
Elon Musk recently warned about AI and is now founding a company himself called X.AI. Do we need to understand this?
It's about money. I linked to an article about Elon's leaving OpenAI (the company that makes ChatGPT). Elon has no better idea of how to make AI than OpenAI does. (ChatGPT is not AI, except in the marketing department.) Excerpts from two of Elon's tweets, taken from the first article:

On Feb. 17, he tweeted “OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.”

On March 15, he tweeted, “I’m still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn’t everyone do it?”


Companies like OpenAI don't actually have to make money. In fact, if they do, they just end up paying higher taxes (yes, big companies do pay taxes). Silicon Valley is full of companies like this. They just need to drive up their market cap (the value of the company as perceived by investors); then the executives start paying themselves inordinate salaries and bonuses and leveraging the value of the company. That's what Elon means by "If this is legal....".

I don't want to say the whole thing is yet another AI investment scam, but it has all the hallmarks.

When Sam Altman (with OpenAI) starts making statements about how close they are to having full-blown, robust AI, and how he's "worried" about the implications this will have on Society, he's just driving investors to a fever pitch and driving up the market cap value of the company. Same with Elon wanting to shut down research on AI (while building his own company) due to the "dangers" of AI. It's all nonsense. They don't play the same game you and I do.

Elon would love to have a $30B company in his back pocket - not actually making money by delivering a profitable, working product, mind you - but just for the on-hand cash that investors bring to the table when buying stock in the "Next Big Thing".
 

JaMaSt

Senior Member
Forum Donor
Joined
Sep 14, 2021
Messages
383
Likes
743
Location
Vancouver, WA
Investors


[animated GIF]
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,196
Likes
1,551
Location
USA
I'm fairly certain the company you are referring to is OpenAI - the maker of ChatGPT. He and his old business partner, Peter Thiel, were initial investors, with others, in OpenAI.

https://www.semafor.com/article/03/24/2023/the-secret-history-of-elon-musk-sam-altman-and-openai
https://www.businessinsider.com/elo...ad-openai-left-after-founders-objected-2023-3
Nope. It’s called X.AI.

 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,519
Likes
7,166
Location
San Francisco
The mechanism by which it operates is entirely different from how humans think, even if the results are indistinguishable.
I don't know that human cognition is reducible to individual operations yet (i.e. Neuron A sends information X to Neuron B, and so on) so I don't know if we can really say how well the current GPT tools do or don't resemble the mechanisms of human cognition.

I think if we DID know exactly how the human brain worked, it would be a lot easier to build AI software... :D
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,519
Likes
7,166
Location
San Francisco
How are we supposed to take him seriously?
I think the "taking elon seriously" ship sailed when he promised full autonomous driving like years ago and failed to deliver on every single milestone on time. Or maybe it was when he sold promotional flamethrowers for his blatantly unworkable tunneling company. Not sure.
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,519
Likes
7,166
Location
San Francisco
It's about money. I linked to an article about Elon's leaving OpenAI (the company that makes ChatGPT). Elon has no better idea of how to make AI than OpenAI does. (ChatGPT is not AI, except in the marketing department.) Excerpts from two of Elon's tweets, taken from the first article:

On Feb. 17, he tweeted “OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.”

On March 15, he tweeted, “I’m still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn’t everyone do it?”


Companies like OpenAI don't actually have to make money. In fact, if they do, they just end up paying higher taxes (yes, big companies do pay taxes). Silicon Valley is full of companies like this. They just need to drive up their market cap (the value of the company as perceived by investors); then the executives start paying themselves inordinate salaries and bonuses and leveraging the value of the company. That's what Elon means by "If this is legal....".

I don't want to say the whole thing is yet another AI investment scam, but it has all the hallmarks.

When Sam Altman (with OpenAI) starts making statements about how close they are to having full-blown, robust AI, and how he's "worried" about the implications this will have on Society, he's just driving investors to a fever pitch and driving up the market cap value of the company. Same with Elon wanting to shut down research on AI (while building his own company) due to the "dangers" of AI. It's all nonsense. They don't play the same game you and I do.

Elon would love to have a $30B company in his back pocket - not actually making money by delivering a profitable, working product, mind you - but just for the on-hand cash that investors bring to the table when buying stock in the "Next Big Thing".
You make some good points. It's hard to know how to weigh the statements of people with vested interests in driving hype. However, when they're saying they're angry and afraid about the pace of development toward AGI, that seems like a pretty aggressive hype strategy to me...


Lots of people are skeptical about the likelihood or even possibility of creating AGI. However, the risks are extremely large, and the possibility is not obviously zero, so I am not inclined to scoff at this kind of rhetoric.

If SAI ever takes shape, it might as well be a god. Check out r/singularity for a bunch of people who approach the topic with genuine religious fervor... sick and fascinating, but let's hope they're way off base about the whole thing...
 

JaMaSt

Senior Member
Forum Donor
Joined
Sep 14, 2021
Messages
383
Likes
743
Location
Vancouver, WA
If SAI ever takes shape, it might as well be a god.
I'm replying to this in the spirit of an open exchange of ideas, and not to "prove a point". So I hope you take it as such. :)

Humans are autonomically distracted by both external and internal stimuli. Humans can't really focus on anything for more than 12 seconds at a time. We can't subitize more than 3 or 4 things. We can't keep in our working memory more than 7 things at once.

We, obviously, are not "Gods".

AI presumably would not be subject to such limitations. And it's from the AI's supposed lack of these "limitations" that many fear AI, AGI and SAI.

Are our limitations "bugs" or "features" of our intelligence?

OpenAI approaches the "problem" of intelligence (both human and AI) from the school of Computationalism. There are other schools of thought (broadly falling under the umbrella of Embodied Cognition) which approach the issue of intelligence from a different direction. To me, these offer a greater insight into the issues of cognition.

Computationalism died with Gödel's work in the 1930s. It just doesn't know it's dead yet.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,920
Likes
37,984
I'm replying to this in the spirit of an open exchange of ideas, and not to "prove a point". So I hope you take it as such. :)

Humans are autonomically distracted by both external and internal stimuli. Humans can't really focus on anything for more than 12 seconds at a time. We can't subitize more than 3 or 4 things. We can't keep in our working memory more than 7 things at once.

We, obviously, are not "Gods".

AI presumably would not be subject to such limitations. And it's from the AI's supposed lack of these "limitations" that many fear AI, AGI and SAI.

Are our limitations "bugs" or "features" of our intelligence?

OpenAI approaches the "problem" of intelligence (both human and AI) from the school of Computationalism. There are other schools of thought (broadly falling under the umbrella of Embodied Cognition) which approach the issue of intelligence from a different direction. To me, these offer a greater insight into the issues of cognition.

Computationalism died with Gödel's work in the 1930s. It just doesn't know it's dead yet.
Gödel's work is certainly bothersome to consider in regard to computationalism. Even if we treat the brain as a computer, its parallel processing ability is tremendous, and current computing's is not. Of course we don't know enough about the brain to say whether speed and memory size can be traded off against the advantages of parallel processing.
 

JaMaSt

Senior Member
Forum Donor
Joined
Sep 14, 2021
Messages
383
Likes
743
Location
Vancouver, WA
the brain's parallel processing ability is tremendous,
This is from Jeff Hawkins' 2004 book "On Intelligence". He's in the Embodied Cognition camp of researchers working on cognition and AI. As an aside, I just got his latest book today and am about a third of the way through.

p. 66
There is a largely ignored problem with this brain-as-computer analogy. Neurons are quite slow compared to the transistors in a computer. A neuron collects inputs from its synapses, and combines these inputs together to decide when to output a spike to other neurons. A typical neuron can do this and reset itself in about five milliseconds, or around two hundred times per second. This may seem fast, but a modern [2004] silicon-based computer can do one billion operations in a second.... So how is it possible that a brain could be faster and more powerful than our fastest digital computers? "No problem," say the brain-as-a-computer people. "The brain is a parallel computer....."

I always felt this argument was a fallacy, and a simple thought experiment shows why. It is called the "one hundred-step rule." A human can perform significant tasks in much less time than a second. For example, I could show you a photograph and ask you to determine if there is a cat in the image. Your job would be to push a button if there is a cat, but not if you see a bear or a warthog or a turnip. This task is difficult or impossible for a computer to perform today
[Edit: It's still impossible 19 years later without training on labeled data], yet a human can do it reliably in half a second or less. But neurons are slow, so in that half a second, the information entering your brain can only traverse a chain one hundred neurons long. That is, the brain "computes" solutions to problems like this in one hundred steps or fewer, regardless of how many total neurons might be involved. From the time light enters your eye to the time you press the button, a chain no longer than one hundred neurons could be involved. A digital computer attempting to solve the same problem would take billions of steps. One hundred computer instructions are barely enough to move a single character on the computer display, let alone do something interesting.
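
To put the arithmetic behind the hundred-step rule in code form (my own back-of-the-envelope sketch, not from the book), using the ~5 ms neuron timing and the 2004-era instruction rate Hawkins cites:

Code:
# Back-of-the-envelope arithmetic for the "one hundred-step rule" (illustrative only).
neuron_step_s = 0.005        # ~5 ms for a neuron to collect inputs, fire, and reset
task_time_s = 0.5            # time a human needs to spot the cat in the photo
cpu_ops_per_s = 1e9          # Hawkins' 2004 figure: ~1 billion instructions per second

serial_neural_steps = task_time_s / neuron_step_s   # -> 100 sequential neural steps
serial_cpu_steps = task_time_s * cpu_ops_per_s      # -> 500 million instructions

print(f"Sequential neural steps available: {serial_neural_steps:.0f}")
print(f"CPU instructions in the same half second: {serial_cpu_steps:.0e}")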


As a note, the 15W lightbulb in your refrigerator draws about as much power as your brain....
 

jsilvela

Senior Member
Joined
Dec 1, 2022
Messages
441
Likes
401
Location
Spain
The comparison of 100 neurons to 100 machine instructions is an apples-to-oranges comparison, though: a fairer comparison might be 100 function compositions. You can do a lot with that.
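
To illustrate (a toy sketch of my own, nothing to do with real neurons): if each of the 100 "functions" is just a small linear map plus a nonlinearity, the composition is structurally a 100-layer network, which is already a lot of expressive machinery:

Code:
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n):
    """One 'function' in the chain: a random linear map followed by a nonlinearity."""
    w = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
    return lambda x: np.tanh(w @ x)

# Compose 100 such functions -- structurally, a 100-layer network.
layers = [make_layer(64) for _ in range(100)]

def composed(x):
    for f in layers:
        x = f(x)
    return x

y = composed(rng.normal(size=64))
print(y.shape)   # (64,) -- one input pushed through 100 composed functions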

How has Gödel's work put an end to computationalism?
I mean, Gödel definitely proved the limits of math as a formal system.
But I have always believed that brains are affected by Gödel's and Turing's results just as much as silicon chips are.
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,519
Likes
7,166
Location
San Francisco
This task is difficult or impossible for a computer to perform today [Edit: It's still impossible 19 years later without training on labeled data]
What a bogus objection: a human can't identify a cat either until you teach them what a cat is. Talk about moving the goalposts.
the information entering your brain can only traverse a chain one hundred neurons long.
I might be totally off-base here, but can't one neuron communicate with many other neurons at once? If so, it's not a chain of 100 but a tree of activated neurons up to 100 levels deep, no? Which could be... you know, several neurons.
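
For what it's worth, here's a crude back-of-the-envelope version of that idea (the fan-out and neuron-count figures below are rough textbook numbers I'm assuming, not anything from Hawkins):

Code:
# Crude fan-out estimate. Assumed figures: each neuron synapses onto ~1,000 others,
# and the brain has roughly 8.6e10 neurons (both are rough textbook numbers).
fan_out = 1_000
brain_neurons = 8.6e10

reachable = 1
steps = 0
while reachable < brain_neurons and steps < 100:
    reachable *= fan_out
    steps += 1

print(f"With fan-out {fan_out}, the activation 'tree' spans the whole brain in ~{steps} steps")
# -> about 4 steps; a 100-step-deep cascade is anything but a thin chain of 100 neurons.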
 

fpitas

Master Contributor
Forum Donor
Joined
Jul 7, 2022
Messages
9,885
Likes
14,241
Location
Northern Virginia, USA
I think the "taking elon seriously" ship sailed when he promised full autonomous driving like years ago and failed to deliver on every single milestone on time. Or maybe it was when he sold promotional flamethrowers for his blatantly unworkable tunneling company. Not sure.
Or, ask Twitter or Tesla investors these days. Hopefully he won't "help" the SpaceX engineers :facepalm:
 

MRC01

Major Contributor
Joined
Feb 5, 2019
Messages
3,503
Likes
4,145
Location
Pacific Northwest
What a bogus objection: a human can't identify a cat either until you teach them what a cat is. Talk about moving the goalposts. ...
I don't think it's bogus; it highlights an important difference between AI and Humans.

Suppose you have an image recognition algorithm that can already identify some animals accurately. But not cats. You want to teach it how to identify cats. To do this, you need to show it millions of different pictures of cats.

If you have a human child who knows a bunch of different animals on sight but has never seen a cat, how many cats do you need to show her? One, maybe two.

One reason why is that the algorithm relies on pattern recognition and statistics, nothing more. The human applies additional cognitive modes: geometric and spatial intuition, for example, among others. She knows the picture you showed her is a 2-D photo, and she can use spatial intuition to imagine what the cat would look like from any other angle. She knows it is an animal and sees the shape of the body, the number of legs, etc., so she can imagine different postures the cat might assume. All of this obviates the need to show her thousands upon thousands of additional examples of cats from different angles, in different positions, and so on.

In short, the human understands what cats are, what animals are, etc. and applies that understanding.
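
To sketch what "pattern recognition and statistics, nothing more" looks like in practice (everything below is synthetic and purely illustrative; the classifier only ever sees feature vectors paired with labels):

Code:
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for image features: the model only ever sees (features, label) pairs.
n_examples, n_features = 10_000, 256
X = rng.normal(size=(n_examples, n_features))
y = rng.integers(0, 2, size=n_examples)   # 1 = "cat", 0 = "not cat" (synthetic labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Everything the model "knows" about cats is the statistical pattern in those labels;
# there is no spatial imagination or prior concept of "animal" to lean on.
print(clf.predict(rng.normal(size=(1, n_features))))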
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,519
Likes
7,166
Location
San Francisco
I don't think it's bogus; it highlights an important difference between AI and Humans.

Suppose you have an image recognition algorithm that can already identify some animals accurately. But not cats. You want to teach it how to identify cats. To do this, you need to show it millions of different pictures of cats.

If you have a human child who knows a bunch of different animals on sight but has never seen a cat, how many cats do you need to show her? One, maybe two.

One reason why is that the algorithm relies on pattern recognition and statistics, nothing more. The human applies additional cognitive modes: geometric and spatial intuition, for example, among others. She knows the picture you showed her is a 2-D photo, and she can use spatial intuition to imagine what the cat would look like from any other angle. She knows it is an animal and sees the shape of the body, the number of legs, etc., so she can imagine different postures the cat might assume. All of this obviates the need to show her thousands upon thousands of additional examples of cats from different angles, in different positions, and so on.

In short, the human understands what cats are, what animals are, etc. and applies that understanding.
OK sure, but let's not act as if inferential logic and generalization are out of reach for computers. They don't exist in consumer products yet, but even ChatGPT seems to be close to that level of "cognition".
 

kemmler3D

Major Contributor
Forum Donor
Joined
Aug 25, 2022
Messages
3,519
Likes
7,166
Location
San Francisco
I liked your post because you put quotes around the word cognition ;)
If I can't even robustly prove that fellow humans are self-aware and conscious, I'm sure not going to bother arguing whether a machine is alive, thinking, etc. ;)

I think the best we can ever do is show that a machine's internal processes resemble those of a (believed to be) conscious human. I think we're decades away from that, and honestly we might even get SAI before we get a machine that we can confidently describe as "alive" or "conscious". I think they will show all the signs of consciousness for a long time before we can come up with any proof that they are.

Since we can't yet describe the mechanism of consciousness in people or animals, we also can't disprove that a machine is conscious, which makes the whole thing doubly sticky and not worth arguing about. :D

See also: Star Trek TNG S2E9, The Measure of a Man, in which nobody can prove or disprove that a thinking machine is conscious, so they just agree to treat him as if he is.
 