
How Dangerous is AI?

Status
Not open for further replies.

617

Major Contributor
Forum Donor
Joined
Mar 18, 2019
Messages
2,433
Likes
5,383
Location
Somerville, MA
Given this prompt: Per the International Building Code, an elevator shaft in a two story building needs to be constructed
How would you rate this response from GPT-4?
It's not a meaningful response. Here's how this normally goes: you as a designer or engineer need to know what fire rating is required for the elevator because it impacts structure, cost, leasable area, egress paths, whatever. It's a yes-or-no question.

As an engineer or architect I can pick up the local amendments to the IBC and look it up. About 3/4 of the time the answer will be 'it depends'. It depends on a host of issues which I haven't thought about and may not have been worked through yet. The wording of the regulations may be ambiguous, and sometimes this is intentional as it gives local authorities more opportunity to get involved in the approval process.

So, you hire a code consultant who deals with fire and life safety stuff all day every day, and they accept the risk embodied in making these decisions because they know the code back to front and understand the political climate you are building in. They may have relationships with local fire chiefs and inspectors, and may understand that you may get away with X if you also do Y.

It's not that ChatGPT is providing a dumb or inaccurate answer; it is just presuming to do the job of a code consultant. An AI language model cannot accept liability for making mistakes. That is what counts.
 

JaMaSt

Senior Member
Forum Donor
Joined
Sep 14, 2021
Messages
380
Likes
734
Location
Vancouver, WA
On a side note, how can there be a discussion about the IBC requirements without stipulating the year? Did ChatGPT already know which year of the IBC the local jurisdiction required to be used when this project was designed?
I deliberately did not state the year of the Code or the State, County or City in which the building is located. I wanted to see how it would respond. The exchange I posted was the entirety of the conversation. A person would have asked those questions.

But, that being said, ChatGPT's first paragraph flatly contradicted its second paragraph. It stated two things which cannot both be true.
 

audiofooled

Addicted to Fun and Learning
Joined
Apr 1, 2021
Messages
533
Likes
594
How dangerous is AI?

IMO, as dangerous as we make it. And history teaches us that, sadly, danger is our civilization's middle name. If we look at major breakthroughs in technology, the chain of events, or application if you will, more often than not leads to harmful consequences at first, and only then may the ones who survive use it for more benevolent purposes. The question is just how many things go bang and how many of us survive.

Currently one aspect of playing with fire is Store Now Decrypt Later. Here is a good video about encryption algorithms and how in the future they may be broken by quantum computing:


So, we don't even need AI to put us in danger. We just need a few key people and that's what still fascinates me. In other words, we don't even need AI to be smarter than us, we just need to be stupid enough, which is not very hard to do...
 

jsilvela

Senior Member
Joined
Dec 1, 2022
Messages
441
Likes
401
Location
Spain
A lot of the (reasonable) fears I read in this thread are already happening today with software in general. The disruption to the job market and the economy, the hasty/unfettered deployment, etc.

The evil AI narratives often surprise me because of how they put humans at the pinnacle of all creation.
So, it seems, the first "AI" to be smarter than a human would create the "singularity", and progress from then on would be exponential and unstoppable.

We've been writing computer programs for almost a century, yet we intelligent humans have not managed to replicate the intelligence of a dog.
But somehow, a smarter-than-human AI would have no hurdles left to create ever smarter AIs.
And it would be evil to boot.
 

antcollinet

Master Contributor
Forum Donor
Joined
Sep 4, 2021
Messages
7,698
Likes
12,992
Location
UK/Cheshire
How dangerous is AI?

IMO, as dangerous as we make it. And history teaches us that, sadly, danger is our civilization's middle name. If we look at major breakthroughs in technology, the chain of events, or application if you will, more often than not leads to harmful consequences at first, and only then may the ones who survive use it for more benevolent purposes. The question is just how many things go bang and how many of us survive.

Currently one aspect of playing with fire is Store Now Decrypt Later. Here is a good video about encryption algorithms and how in the future they may be broken by quantum computing:


So, we don't even need AI to put us in danger. We just need a few key people and that's what still fascinates me. In other words, we don't even need AI to be smarter than us, we just need to be stupid enough, which is not very hard to do...
Well that was an interesting video.

At least according to that, we have around 13 years (until around 2036) to pivot to a new form of quantum-immune encryption. At which point we can assume that all encrypted information stored up until then is public. So change it (that which can be changed, at any rate).

We might be able to implement new encryption in that time. Pretty certain we can't change everything in that time.
 

antcollinet

Master Contributor
Forum Donor
Joined
Sep 4, 2021
Messages
7,698
Likes
12,992
Location
UK/Cheshire
We've been writing computer programs for almost a century, yet we intelligent humans have not managed to replicate the intelligence of a dog.
True - it might be some time off. But development is already progressing exponentially.

But somehow, a smarter-than-human AI would have no hurdles left to create ever smarter AIs.
At the singularity point, no, but it doesn't matter, because we can still (with exponential development) create ever smarter AIs. We don't even need to make them smarter. Once we create one, it is trivial to replicate it an arbitrary number of times. Within days you could have an army of a million human-capable intelligences focused, teamwork-style, on a single problem, for good or ill. One of those problems will be how to create smarter AI.

Or, with simple application of processing power, even if they are not smarter we can make them faster. So a million human-capable AIs which can all think 10 times faster than a human.

And it would be evil to boot.
No, but they will be alien. They will not think the way we think. They will not feel the way we feel. If we lose control (and we don't yet know how to retain control**), they will not share our values or objectives.

Further - even if controlled, they will be as evil or good as the organisation creating them.


** and that is a political problem, as well as a technical one.
 
Last edited:

audiofooled

Addicted to Fun and Learning
Joined
Apr 1, 2021
Messages
533
Likes
594
Well that was an interesting video.

At least according to that, we have around 13 years (until around 2036) to pivot to a new form of quantum-immune encryption. At which point we can assume that all encrypted information stored up until then is public. So change it (that which can be changed, at any rate).

We might be able to implement new encryption in that time. Pretty certain we can't change everything in that time.

Well, there's another video to expand on that matter, if you're interested. What I suggest is really reading between the lines on this; already you'll see a lot of contradiction and some real steps in the wrong direction. IMO this is real, it's already a thing, it's profit-driven, and the goals are not up to the majority of us. And you truly don't even need to understand what quantum computing is, or the principles of quantum mechanics. In the video I would highlight the very introduction and "The goals for machines", starting from 22:25:

 

Keith_W

Major Contributor
Joined
Jun 26, 2016
Messages
2,658
Likes
6,059
Location
Melbourne, Australia
We've been writing computer programs for almost a century, yet we intelligent humans have not managed to replicate the intelligence of a dog.

You think your dog is smarter than ChatGPT? Ask your dog to add two numbers together then. Or maybe something a bit more interesting, ask your dog to write a thesis on Thucydides.
 

Galliardist

Major Contributor
Joined
Jun 26, 2021
Messages
2,558
Likes
3,276
Location
Sydney. NSW, Australia
You think ChatGPT is smarter than a dog?
Ask it to fetch a stick.
Maybe it can be taught how to convince someone else to fetch the stick.

That's when the trouble really starts.
 

Triliza

Senior Member
Forum Donor
Joined
Jun 23, 2021
Messages
481
Likes
578
Location
Europe
Robots can already do this:


Not to mention fly cruise missiles autonomously through your window.

Integrating AI with robots and getting it to fetch a stick should be a trivial task.
Cute robots, but there is something uncomfortable about robots in general. It must be our inability to read their facial and body language, which I guess is a crucial part of being able to evaluate the safety of an unknown situation. The only robot I feel comfortable with is Vernie (from Lego Boost); he seems a fine guy and the kids like him.
 

antcollinet

Master Contributor
Forum Donor
Joined
Sep 4, 2021
Messages
7,698
Likes
12,992
Location
UK/Cheshire
There's also the method used in Fredric Brown's 1942 classic short story, ETAOIN SHRDLU. It was about a Linotype machine that achieved sentience. I won't include a spoiler, since it's one of my favorite short stories haha! As far as I know, it's the first cautionary tale about artificial intelligence.

The story's title comes from the keys on the Linotype, arranged in descending order of frequency in the English language (as determined by experienced manual compositors of the time, which differs from other letter frequency rankings such as those used in cryptography).

View attachment 276205

Thanks - that was a fun little story.

(For anyone else interested, I found it free here : https://www.you-books.com/book/F-Brown/Etaoin-Shrdlu)
 

617

Major Contributor
Forum Donor
Joined
Mar 18, 2019
Messages
2,433
Likes
5,383
Location
Somerville, MA
A lot of the (reasonable) fears I read in this thread are already happening today with software in general. The disruption to the job market and the economy, the hasty/unfettered deployment, etc.

The evil AI narratives often surprise me because of how they put humans at the pinnacle of all creation.
So, it seems, the first "AI" to be smarter than a human would create the "singularity", and progress from then on would be exponential and unstoppable.

We've been writing computer programs for almost a century, yet we intelligent humans have not managed to replicate the intelligence of a dog.
But somehow, a smarter-than-human AI would have no hurdles left to create ever smarter AIs.
And it would be evil to boot.
How you can be an occupant of this completely inexplicable, miraculous creation without any apparent cause and consider yourself the pinnacle of anything is beyond me. Hell, I can't even watch LeBron James and feel that way. We are worms.
 

computer-audiophile

Major Contributor
Joined
Dec 12, 2022
Messages
2,565
Likes
2,881
Location
Germany
Now There’s an AI GPT Chatbot Without Censorship: Introducing FreedomGPT.com

An AI tool that can guide suicide, praise Hitler, or help bank robbers.
 

FrantzM

Major Contributor
Forum Donor
Joined
Mar 12, 2016
Messages
4,376
Likes
7,873
You think ChatGPT is smarter than a dog?
Ask it to fetch a stick.
Thing is: it cannot. That is not proof of being stupid; it just wasn't built for that. The issue that we seem to take very lightly is that it can write code:
1) It can write code on its own, or be coerced/controlled/ordered to write code for other entities with unknown intentions.
2) Keeping in mind that these are connected machines, things can evolve from not being able to pick up a stick to controlling machines whose purpose is to pick up the stick and beat you to a pulp with it...
Post 211 in this thread is a reminder of what some autonomous robots can do...

I do not lose sleep over that, but I think, often, that we are playing gods with no safeguards.
And I am sure we cannot bottle these AIs up anymore. They are already out, with no way to know what they are connected to or, worse, who's running the code and on what computers. Do we NOT realize how serious the consequences could be?

Peace.
 

mhardy6647

Grand Contributor
Joined
Dec 12, 2019
Messages
11,393
Likes
24,706
Thanks - that was a fun little story.

(For anyone else interested, I found it free here : https://www.you-books.com/book/F-Brown/Etaoin-Shrdlu)
Thanks for sharing the link :)
I didn't get very far into it at all before I had, as Robert Hunter once put it, "one of those flashes" ;) -- or what Jung, or maybe it was Sting, called synchronicity.

Mergenthaler.
Ottmar Mergenthaler.
Inventor of the Linotype -- and namesake of the old Biology Department building at my alma mater in Baltimore.
"We" built a new building while I was there, so only part of my career was spent in Mergenthaler Hall, but I remember it very, very well. Spent many, many hours in and around that building.

Just felt compelled to mention that.
:cool:

[Image: portrait of Ottmar Mergenthaler]
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,195
Likes
1,545
Location
USA
And I am sure we cannot bottle these AI up, anymore. They are out already with, no way to know to what they are connected or, worse, whose running the codes and on what computers. Do we NOT realize how serious the consequences shall be?
That's entirely possible. For example, on the Mac I'm typing this on right now, running only Safari, Chrome, and Microsoft Outlook, there are 660 processes running with over 2,900 software threads. Looking through the macOS Activity Monitor, I see a bunch of processes I don't recognize at all; they have cryptic names, and when I click on them to see the statistics, I see many of them have made numerous Mach system calls (over 10,000 in one case), which means these threads are doing all sorts of low-level things with the OS, like memory management and inter-process communication. The truth is, I have no idea what's really running on my Mac. (And I just upgraded the OS to Ventura 13.3.)
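You don't even need Activity Monitor to see this. A minimal sketch from the command line, assuming a Unix-like system with a POSIX `ps` (macOS or Linux); the counts and process names will of course differ on every machine:

```shell
# Count running processes (strip the PID/COMM header line first)
ps -e -o pid,comm | tail -n +2 | wc -l

# List the ten busiest processes by CPU share, to spot unfamiliar names
ps -e -o pid,pcpu,comm | tail -n +2 | sort -k2 -rn | head -10
```

This only shows processes, not the per-thread detail Activity Monitor exposes, but it makes the same point: far more is running than the three apps you opened.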
 

mhardy6647

Grand Contributor
Joined
Dec 12, 2019
Messages
11,393
Likes
24,706
There's also the method used in Fredric Brown's 1942 classic short story, ETAOIN SHRDLU. It was about a Linotype machine that achieved sentience. I won't include a spoiler, since it's one of my favorite short stories haha! As far as I know, it's the first cautionary tale about artificial intelligence.

The story's title comes from the keys on the Linotype, arranged in descending order of frequency in the English language (as determined by experienced manual compositors of the time, which differs from other letter frequency rankings such as those used in cryptography).

View attachment 276205
Thanks - that was a fun little story.

(For anyone else interested, I found it free here : https://www.you-books.com/book/F-Brown/Etaoin-Shrdlu)

That was great! :)
 

bluefuzz

Major Contributor
Joined
Jan 17, 2020
Messages
1,067
Likes
1,824
Ottmar Mergenthaler.
Linotype operators, i.e. typesetters, were some of the first casualties, first of phototypesetting machines in the '60s and '70s, and then of PC 'desktop publishing' in the '80s and early '90s. Now the desktop publishers, i.e. 'graphic designers', are being replaced by secretaries wielding Canva and Midjourney.
 