I think this hasn't been very deeply tested in the courts yet.
When a product speaks authoritatively in plain language, it presumes the authority it projects. If people trust that authority and harm themselves as a result, will there be a tort? Probably. But where will negligence be assigned? That's still to be determined, and I don't think we can say any ship has sailed until the technology has weathered that storm.
There have been plenty of legal cases involving AI, especially surrounding intellectual property - that's why ChatGPT now cites sources quite regularly, and all of them if you ask it to (as one should when using AI to, say, write a paper).
Of course, my concern won't just apply to word-generating AI tools, but also to machine-controlling AI, such as automated vehicles. I think that was a point raised in the article linked last week: the danger is with powerful AI, AI that has been granted authority over important things. People will give away their safety for convenience, but only until they are made to suffer consequences, at which point they may hold the product accountable rather than their own application of it.
Welcome to the world, I'd say. If people abandon common sense and ignore every disclaimer, there are terms for that - none of them flattering. Sure, they'll sue, often frivolously, but that's nothing new. The ability to sue in no way implies that one is right, or that one didn't act carelessly or ignorantly.
In my experience, the burden of responsibility is often placed on the creator of the product, because they have a degree of control over it that the user lacks.
What is the "product" here? No one markets ChatGPT as an expert in all fields, so if someone wants to delegate their personal reputation to ChatGPT, they are stupid. There is a disclaimer right under the ChatGPT prompt line. OpenAI never guarantees to anyone that the responses are correct. Good luck suing them. It's like suing a car manufacturer because you drove your car into a tree while drunk.