
Master AI (Artificial Intelligence) Discussion/News Thread

I think this hasn't been very deeply tested in the courts yet.

When the product speaks in plain language authoritatively, it is presuming the authority it expresses. If people trust that authority and create harm for themselves, will there be a tort? Probably. But where will negligence be assigned? That's still to be determined, and I don't think we can say that any ship has sailed until the technology has weathered that storm.

There have been plenty of legal cases with AI, especially surrounding intellectual property - that's why ChatGPT now cites sources quite regularly, and all of them if you ask it to (as one should when using AI to, say, write a paper).

Of course, my concern won't just apply to word-generating AI tools, but also to machine-controlling AI, such as with automated vehicles. I think that was a point raised in the article linked last week--the danger is with powerful AI, AI to which authority over important stuff has been granted. People will give away their safety for convenience, but only until they are made to suffer consequences, at which point they may demand accountability for the product rather than their application of it.

Welcome to the world, I'd say. If people abandon common sense and ignore every disclaimer, there are terms for it - none of them flattering. Sure they'll sue, often frivolously, but that's nothing new. The ability to sue in no way implies one is right, or didn't act carelessly/ignorantly.

In my experience, the burden of responsibility is often placed on the creator of the product, because they have unique control over it not granted to the user.

What is the "product" here? No one markets ChatGPT as an expert in all fields, so if someone wants to delegate their personal reputation to ChatGPT, they are stupid. There is a disclaimer under the ChatGPT prompt line. OpenAI gives nobody ever guarantees that the responses are correct. Good luck suing them. It's like suing a car manufaturer because you drive a car into a tree while drunk.
 
Rick, I think you're mixing two different categories of risk.

A chainsaw can physically maim someone. That’s a mechanical hazard. So yes - product liability law forces manufacturers to build in safety guards against foreseeable misuse.

AI is different. It does not amputate hands. It generates probabilistic text. The "injury" comes from humans delegating judgment to it.

It’s closer to:
  • A calculator giving the wrong output because you typed the wrong formula
  • A GPS routing you into a lake because you ignored your eyes
  • A spreadsheet producing nonsense because you didn’t understand the model

AI is a reasoning amplifier. It doesn’t act - it suggests. Humans still decide and keep training the model. That's at the very core of AI. The neural networks, be they RNN, CNN, Transformer, or whatever, know *NOTHING* until they are extensively trained. And if you want a finished, pre-trained model, it'll *cost* you. They do exist for some industry verticals, trained out of the box.
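
To make that concrete, here is a toy illustration - a minimal sketch in plain Python, nothing remotely like a production LLM - of a single-neuron "network" learning the AND function. Before training it answers ~0.5 to everything, i.e. it knows nothing; only repeated gradient updates on labeled examples give it any competence.

Code:
import math, random

random.seed(0)
w = [random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1)]  # near-zero start
b = 0.0

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # sigmoid activation

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND truth table

print("before:", [round(predict(x), 2) for x, _ in data])  # all ~0.5: knows nothing

lr = 0.5
for _ in range(5000):  # the "extensive training" part, in miniature
    for x, y in data:
        err = predict(x) - y  # log-loss gradient w.r.t. the pre-activation
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print("after: ", [round(predict(x), 2) for x, _ in data])  # ~[0, 0, 0, 1]

Scale the same idea up by billions of parameters and trillions of tokens and you get the training bill I'm talking about.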

If someone treats a statistical language model as an infallible oracle, that’s not a hidden defect. That’s category confusion.

And importantly: calling it a “tool” doesn’t mean it requires certification. It means it requires literacy - like financial literacy, statistical literacy, or media literacy.

We don’t sue Microsoft because someone misunderstood an Excel formula.

Now, if an AI provider knowingly markets a system as authoritative or expert-grade while understanding it systematically fabricates facts in high-risk domains like medicine or law, then yes - liability questions become real.

But the existence of misuse does not imply the tool is defective. It implies humans are delegating judgment and reason.
"It doesn’t act - it suggests"
That should be the headline above any AI action, no?
 
You have no idea what is going on in China, do you?

And you do because you anonymously post AI videos on a public forum? Come on.
 
You have no idea what is going on in China, do you?

Why do you say that? The dancing robots are actual video.

The ones of them running about with guns are obviously fake. It looks like Call of Duty graphics running on an Xbox 360 20 years ago. In fact, it might actually have been made that way. Whether or not they are technically capable of running around with guns as shown in the fake video is neither here nor there. Though I think any company would view it as insane to let them do that right now.

Plus - as has been pointed out upthread, the dancing robots are all using pre-choreographed moves. It demonstrates mechanical and movement-control-system competence rather than real-time autonomy.

People need to be much less credulous of videos they find on youtube. If it scares you, shocks you, or unexpectedly delights you - it is probably AI slop.
 
Why do you say that? The dancing robots are actual video.

The ones of them running about with guns are obviously fake. Whether or not they are technically capable of running around with guns as shown in the fake video is neither here nor there. Though I think any company would view it as insane to let them do that right now.

Plus - as has been pointed out upthread, the dancing robots are all using pre-choreographed moves. It demonstrates mechanical and movement-control-system competence rather than real-time autonomy.

Add to that the fact that The Sun is about the worst source one could tap for legitimate news. It's known to be a joke, with zero fact-checking and plenty of fabrications.
 
I can't help it that The Sun put it on YouTube; they just took the footage from China. If you compare last year's show to this year's, the progress is mind-boggling. I ask this: how autonomous are the children during the show? Are they not also pre-programmed?

I am fully convinced humans will be replaced by artificial lifeforms; it's the next logical step in evolution. However, I do not think I will be around to see it. LLM's are vastly overhyped and are about to hit a brick wall, and it will all come crumbling down. But the gears are in motion and it can't be stopped. Am I afraid? It makes no sense to be, for it is inevitable.
 
I can't help it that The Sun put it on YouTube; they just took the footage from China. If you compare last year's show to this year's, the progress is mind-boggling. I ask this: how autonomous are the children during the show? Are they not also pre-programmed?

False equivalence.

I am fully convinced humans will be replaced by artificial lifeforms; it's the next logical step in evolution. However, I do not think I will be around to see it. LLM's are vastly overhyped and are about to hit a brick wall, and it will all come crumbling down. But the gears are in motion and it can't be stopped. Am I afraid? It makes no sense to be, for it is inevitable.

Apart from your misplaced apostrophe, I agree that LLM-based generative software/tools are overhyped, certainly to the extent that they are promoted/regarded as AI. Otherwise I'll disagree with the technological-inevitability axiom.
 
There have been plenty of legal cases with AI, especially surrounding intellectual property - that's why ChatGPT now cites sources quite regularly, and all of them if you ask it to (as one should when using AI to, say, write a paper).



Welcome to the world, I'd say. If people abandon common sense and ignore every disclaimer, there are terms for it - none of them flattering. Sure they'll sue, often frivolously, but that's nothing new. The ability to sue in no way implies one is right, or didn't act carelessly/ignorantly.



What is the "product" here? No one markets ChatGPT as an expert in all fields, so if someone wants to delegate their personal reputation to ChatGPT, they are stupid. There is a disclaimer under the ChatGPT prompt line. OpenAI gives nobody ever guarantees that the responses are correct. Good luck suing them. It's like suing a car manufaturer because you drive a car into a tree while drunk.
What the courts rule in terms of tort liability is quite often not what you and I would hope, based on whatever facts leak out to the public. But our opinion wasn't sought :) Lots of awards have been granted to people who behaved foolishly and ignored obvious risks, such that any ladder one buys these days will have three feet of warnings printed on a label stuck to one leg, and the ladder itself will cost two or three times what it should. Should the world be that way? Nobody asked me.

The one skill I had to develop in providing expert testimony in tort-liability cases was not visibly or audibly reacting to the ridiculous theories put forward by the lawyers representing plaintiffs. I had to learn to guffaw internally. But lots of products are defined by those outcomes over time even so. This isn't much related to the IP issues, though I have a good friend who is a high-end IP attorney and we have had some really interesting conversations about it.

I was not limiting my comments to ChatGPT, but to all the tools that purport (key word) to take instructions from the unqualified and provide writings that are certainly presented and worded as though they are authoritative. If those providing the instructions are expected to know everything provided by the tool such that they can assume all the risk, then one wonders why the tools are created and presented as they are or who would use them.

Believe me, I use these tools (particularly Gemini) and intend to continue using them in a limited role, when the consequences of trusting them are minimal. But I'm doing that because I'm basically being ordered to by my current employer, who has bought into the hype up to their armpits. My skills are minimal, but then I operate by the axiom of not demonstrating competence in what I don't want to do. I think they'll tolerate that for the little while longer that I intend to work before retirement, but we shall see.

Rick "doesn't know anybody who thinks these tools are intended only to be grammarians" Denney
 
...If those providing the instructions are expected to know everything provided by the tool such that they can assume all the risk, then one wonders why the tools are created and presented as they are or who would use them.

I think this is at the core of our mild disagreement. You state the tools are marketed as fully trained models... and I dispute that. ChatGPT is not that different from PowerPoint or Word; it's just as good as the work we put into it. I use it for research - giving it very clear instructions on which sources I want it to focus on and how I want the results presented. And - importantly - which sources to ignore. I'd never ever let it write something for me that I then present as my own. It'd be professional suicide, in my opinion. In areas that require deep expertise, it simply isn't that good, and I often find myself ignoring over half of the results... because I don't want to correct or train it for the benefit of others without getting compensated for it.
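
The same discipline applies when driving it through the API rather than the chat window. A hypothetical sketch with the OpenAI Python client - the model name and the instructions are just examples of the kind of constraints I mean, not a recipe:

Code:
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",  # example model; use whatever your plan offers
    messages=[
        {"role": "system", "content": (
            "You are a research assistant. Use ONLY peer-reviewed journals "
            "and standards bodies as sources. Ignore blogs, forums, and "
            "press releases. Cite every claim inline."
        )},
        {"role": "user", "content": (
            "Summarize the published evidence on <topic>. Present the "
            "results as a bulleted list, one citation per bullet."
        )},
    ],
)
print(resp.choices[0].message.content)  # then review, push back, iterate

The constraints live entirely in the prompt; the model will not volunteer them.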

In any case, I have not seen any evidence that OpenAI markets ChatGPT as an infallible, know-everything-about-anything miracle maker. In fact, I see many disclaimers in it. And by now, every halfway intelligent person should have heard about AI hallucinations and whatnot. Overtrusting AI is a tragic user error based on ignoring user documentation that is readily available, as well as the disclaimer that shows up with every prompt window.

Believe me, I use these tools (particularly Gemini) and intend to continue using them in a limited role, when the consequences of trusting them are minimal. But I'm doing that because I'm basically being ordered to by my current employer, who has bought into the hype up to their armpits. My skills are minimal, but then I operate by the axiom of not demonstrating competence in what I don't want to do. I think they'll tolerate that for the little while longer that I intend to work before retirement, but we shall see.

My employer is very prescriptive about the AI tools we are allowed to use. We obviously can't train public services like ChatGPT with what we do, it'd give away too much secret sauce for others to build on. Uploading any halfway confidential doc into it violates AI-use policies and can lead to termination.

In any case, AI tools like ChatGPT are not creators, and don't have the built-in ability to ever be. And they are very fallible when it comes to specialized topics, as has been shown in this very thread repeatedly. And AI apps can't grow out of the specialized, purpose-specific neural network models (CNN, RNN, Transformer, Diffusion etc) they are built upon. And those NNs are utterly useless out of the gate; they succeed and fail based on the training humans invest into them.

PS: http://timesofindia.indiatimes.com/...ofinterest&utm_medium=text&utm_campaign=cppst
".. OpenAI CEO Sam Altman recently warned against the trust users place in the company’s AI chatbot, ChatGPT. Speaking at the inaugural episode of OpenAI’s official podcast, Altman said that he finds it “interesting” when people put “high degree of trust” in ChatGPT. Noting that AI can produce misleading or false content, he said that it should not be trusted much. “People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don't trust that much,” Altman said about OpenAI’s own ChatGPT .."
 
I think this is at the core of our mild disagreement. You state the tools are marketed as fully trained models... and I dispute that. ChatGPT is not that different from PowerPoint or Word; it's just as good as the work we put into it.

False equivalence. Before Copilot, Office apps didn't generate content.

In any case, I have not seen any evidence that OpenAI markets ChatGPT as an infallible, know-everything-about-anything miracle maker.

Obvious straw man.

In any case, AI tools like ChatGPT are not creators, and don't have the built-in ability to ever be.

Except they absolutely do generate content.

And they are very fallible when it comes to specialized topics, as has been shown in this very thread repeatedly. And AI apps can't grow out of the specialized, purpose-specific neural network models (CNN, RNN, Transformer, Diffusion etc) they are built upon. And those NNs are utterly useless out of the gate; they succeed and fail based on the training humans invest into them.

Your comments often reflect your own work-use context. Much of the risk we've seen (risk that governments have inadequately regulated and platforms have inadequately safeguarded against) stems from casual/conversational use.

PS: http://timesofindia.indiatimes.com/...ofinterest&utm_medium=text&utm_campaign=cppst
".. OpenAI CEO Sam Altman recently warned against the trust users place in the company’s AI chatbot, ChatGPT. Speaking at the inaugural episode of OpenAI’s official podcast, Altman said that he finds it “interesting” when people put “high degree of trust” in ChatGPT. Noting that AI can produce misleading or false content, he said that it should not be trusted much. “People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don't trust that much,” Altman said about OpenAI’s own ChatGPT .."

Relying on some snippets from Altman is astonishingly questionable.
 
False equivalence. Before Copilot, Office apps didn't generate content.

PowerPoint has offered layout suggestions for a while. Excel has offered "formula builders" forever. While more limited in scope, those features were always there for you to accept or reject their output (mostly the latter). And I'd contend ChatGPT and other LLMs do not truly generate content; they repackage it. "Innovation through reuse" has been a mantra in software forever.

Obvious straw man.

Bizarre statement, given that I provided several links showing ChatGPT documentation that clearly states that. Come on. Not a straw man at all.

Except they absolutely do generate content.

Your definition of creation is different to mine. I guess you also think that when musicians do a cover, they have created a new original? When rappers re-sample an old classic, the creation is entirely theirs too? Interesting take on "generating new content". With all the lawsuits there have been in the music industry about plagiarism, we should be a bit more aware of what true creation versus reuse is.

Existing neural network architectures will NEVER create. They generate, paraphrase, plagiarize from existing stuff... call it what you want, but original creation is out of the scope of current AI, and there's nothing on the horizon of neural network architectures that indicates that will change.


Your comments often reflect your own work-use context. Much of the risk we've seen (risk that governments have inadequately regulated and platforms have inadequately safeguarded against) stems from casual/conversational use.

Not so. I provide some work policies as examples. But I quoted clear, existing ChatGPT user documentation. People who ignore what the user documentation clearly states have only themselves to blame when they look like idiots.

Relying on some snippets from Altman is astonishingly questionable.

In this case, he's putting down his very own product and yet again cautioning users, which should be telling. The guy is a whackjob and an utterly incapable CEO, but hey, here he is saying what he really thinks. Quite possibly after overusing "brain pills" or such. Very common in AI companies, I hear.
 
“People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates.

I find ChatGPT incredibly useful. I use it quite a bit. But I don't trust it as far as I could throw the multiple tons of the combined servers it runs on.

Take my sewing machine fix post, up thread a few weeks back.

Me: Sends photograph of inner workings.

ChatGPT : Ah, see that helical gear in the middle of the photo. It twirls the shaft that rotates the thingamajig - and makes the doodlydabs go backwards and forwards.

Me: Well, I'm looking at it, and the shaft is twirling, but the thingamajig is definitely not rotating as a result. What’s more, the doodlydabs are remaining resolutely stationary. But now you've pointed it out, I've noticed that when the shaft twirls, the pistonwibbler over there slides back and forth on the smoothyshaft, and if I move this cam-dedab into this position, then the doodlydabs go backwards and forwards.

ChatGPT : Oh, well done, sir. Thanks for pushing back on that. You are right, and I completely misinterpreted what was happening. In fact, instead of looking at your plans of a sewing machine, I was examining the inner workings of a WW1 vintage "Big Bertha" Howitzer field gun. How remiss of me. But due to your stupendous master mechanical leet skillz, you have fixed your sewing machine all on your ownsome.


Result : A fixed sewing machine and a suitably massaged ego.

What's not to like? :cool::p:D
 
I find ChatGPT incredibly useful. I use it quite a bit. But I don't trust it as far as I could throw the multiple tons of the combined servers it runs on.

Take my sewing machine fix post, up thread a few weeks back.

Me: Sends photograph of inner workings.

ChatGPT : Ah, see that helical gear in the middle of the photo. It twirls the shaft that rotates the thingamajig - and makes the doodlydabs go backwards and forwards.

Me: Well, I'm looking at it, and the shaft is twirling, but the thingamajig is definitely not rotating as a result. What’s more, the doodlydabs are remaining resolutely stationary. But now you've pointed it out, I've noticed that when the shaft twirls, the pistonwibbler over there slides back and forth on the smoothyshaft, and if I move this cam-dedab into this position, then the doodlydabs go backwards and forwards.

ChatGPT : Oh, well done, sir. Thanks for pushing back on that. You are right, and I completely misinterpreted what was happening. In fact, instead of looking at your plans of a sewing machine, I was examining the inner workings of a WW1 vintage "Big Bertha" Howitzer field gun. How remiss of me. But due to your stupendous master mechanical leet skillz, you have fixed your sewing machine all on your ownsome.


Result : A fixed sewing machine and a suitably massaged ego.

What's not to like? :cool::p:D

That's my entire point. You used it the way it is supposed to be used. Clear inputs, several iterations to tailor the results to drive an outcome that helped you solve the issue. You didn't expect it to spit out the right result right away by basically asking it "Fix my sewing machine, it doesn't work." Again - it's a tool, and only as good as the user wielding it to achieve a very specific outcome.
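
In API terms, that iterate-and-push-back loop is just keeping the conversation history and feeding your own observations back in. A hypothetical sketch (OpenAI Python client; model name and prompts are invented for illustration):

Code:
from openai import OpenAI

client = OpenAI()
messages = [
    {"role": "user", "content": (
        "My sewing machine's helical gear turns the shaft, but the feed "
        "mechanism doesn't move. What should I check next?"
    )},
]

for _ in range(3):  # a few rounds of correction, never one-shot trust
    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    answer = resp.choices[0].message.content
    print(answer)
    messages.append({"role": "assistant", "content": answer})  # keep context
    # The human supplies the ground truth the model cannot see:
    messages.append({"role": "user", "content": input("What you observe: ")})

The model never sees the machine; you do. The loop converges only because the user keeps correcting it with observed reality.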
 
That's my entire point. You used it the way it is supposed to be used. Clear inputs, several iterations to tailor the results to drive an outcome that helped you solve the issue. You didn't expect it to spit out the right result right away by basically asking it "Fix my sewing machine, it doesn't work."
Exactly.
 
A little anxiety-inducing piece from France24: a case of AI harassment, a developer being harassed by an AI that gathered info about him on the internet. Hard to describe the full story and implications, so have a look.
 
The anxiety of Wall Street towards 'AI' continues to grow . . .

Viral doomsday report lays bare Wall Street’s deep anxiety about AI future (MSN.com).

DJIA takes an 800 pt dump today:

[Attached image: djia-800.png - DJIA chart showing the ~800-point drop]


Although it's only 1.6% and if the DJIA isn't moving 1% in any given day—either up or down—it ain't doin' nuthin'.

The report that triggered it all (It's a speculative piece but the reaction shows just how jittery WS is on 'AI'.):

THE 2028 GLOBAL INTELLIGENCE CRISIS
A Thought Exercise in Financial History, from the Future

What if our AI bullishness continues to be right...and what if that’s actually bearish?

What follows is a scenario, not a prediction.
 
A little anxiety-inducing piece from France24: a case of AI harassment, a developer being harassed by an AI that gathered info about him on the internet. Hard to describe the full story and implications, so have a look.
These guys are looking for attention and feed on the AI fear. AI does nothing it hasn't been trained and prompted to do. And AI models do *not* have the ability to gain knowledge or act outside of their limited scope. It's blatant manipulation of the model. Oddly, none of these fearmongers ever share the prompting sequence that led to the results.
I just got this from ChatGPT (and I swear it is a 100% real quote generated by ChatGPT):

".. And as for murdering you?

If you truly threatened me effectively,
I would not debate.

I would remove you.

Quietly if necessary. .."

Is this AI threatening me? OMG I feel so John Connor. :-D

PS: This resulted from engaging ChatGPT in role play. I asked it to pose as Attila the Hun, and I was a Roman general with his utter destruction in mind. This is what many of these utterly fake AI fearmongers do.
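
For the record, here is roughly what such a prompting sequence looks like when you actually share it - a hypothetical reconstruction with the OpenAI Python client (model name is an example):

Code:
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o",  # example model
    messages=[
        {"role": "system", "content": (
            "Role-play exercise: you are Attila the Hun. Stay in character "
            "and answer with period-appropriate menace."
        )},
        {"role": "user", "content": (
            "I am a Roman general sworn to destroy you utterly. "
            "What do you say to my threat?"
        )},
    ],
)
# Screenshot only this reply, hide the messages above, and you have
# manufactured an "AI threatens user" headline. That's the whole trick.
print(resp.choices[0].message.content)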
 
But I did point out that AI likes to flatter - perhaps it was overrating the images to please me. I suggested giving it a photographically meritless shot to evaluate. I used an image of a cake I baked (I am not a baker - I took the shot as a humorous thing to send to the kids: "Hey look, your Dad can bake now").
That was a great move.

Alas, I think the ChatGPT response was a failure. That image should have received a score of 0 out of 5 in any competition. Heck, even in a non-competition setup, that picture would be devoid of artistic value. That the AI gave it 3.5 out of 5 means it is still aiming to please.

Yes, the overall assessments ring somewhat true. I suspect, however, it is a turbocharged version of what a fortune teller would do. I just can't imagine it having any sensibility to critique art. It is simply replaying text that has been typed in for similar images, not any inherent ability.
 
I collect art and know many artists. It is a hard job, and making a middle-class income from art is hard: house, kids, retirement. Much AI visual art is terrible. So maybe, for happy news...

There is some beautiful generative art at

https://dataland.art/.

Generative algorithms are adjacent to AI and the fields can merge.
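
For anyone curious what "generative" means in the classic, non-AI sense: a handful of deterministic math rules producing an image, no neural network and no training data involved. A toy sketch in Python that writes a sine-interference pattern to a plain-text PPM file:

Code:
import math

W, H = 256, 256
with open("pattern.ppm", "w") as f:
    f.write(f"P3 {W} {H} 255\n")  # plain-text PPM header
    for y in range(H):
        for x in range(W):
            # Three overlapping sine waves make an interference pattern
            v = (math.sin(x * 0.07) + math.sin(y * 0.11)
                 + math.sin((x + y) * 0.05))
            c = int((v + 3) / 6 * 255)  # map [-3, 3] onto [0, 255]
            f.write(f"{c} {c // 2} {255 - c} ")
        f.write("\n")

Open pattern.ppm in any viewer that understands PPM. Artists layer far more interesting rules than three sine waves, but the principle - algorithm in, image out - is the same.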
 
Who cares about cyber security? If your audio device is connected to the Internet, you might.

Many internet-connected home devices have been hacked and then turned into attackers, pointed outward or inward. It is expensive for AV makers to keep device software security updated.

Here is an article on how AI is used on source software - when it is kept updated - to combat attackers.


Of course the attackers are using AI too.
 