
Do RCA cables have a sound? ChatGPT says they do.

Willfully lying is a wrongdoing. Thus, there is no such thing as the right to willfully lie.

That doesn't mean censorship is not a wrongdoing. Censorship ends up stealing vital information from people, and stealing vital information from people is a wrongdoing.

What we need to do is expose lies and fight them by amplifying truths and educating people better. Fight those who speak lies in the open light. Censorship ends up keeping most people in the dark, and that's a bad thing.
Now you’re throwing out the baby along with the bath water.
 
"MOST people are educated badly". No they are not. Lazy maybe!
Prison experiments reveal that most people would gladly inflict pain on prisoners if they are given orders from perceived authority figures. The experiment results would be similar today. Most people still follow evil orders from authority figures. That's evil.

They are not inherently evil. They are just programmed badly.

This is now social science. We can always conduct experiments if people care enough.
 
ChatGPT makes stuff up, and its ‘database’ is the Internet.
At the moment, it is not yet digging very deep into the net, not yet penetrating password-protected forums, archives etc. However, it is conceivable that it could absorb most of humanity's knowledge soon and learn.
 
Now you’re throwing out the baby along with the bath water.
Just in case, note that spamming and verbal abuse are different matters. They are not really expressions of ideas, but they do harm people. So banning abusers and spammers is not censorship; I'm not saying we should not ban bad people.
 
Even a broken clock is right twice a day :)
Actually - and perhaps more impressively - a stopped analog clock is exactly right twice a day. I don't think that is true of any other human-devised device meant to track the passage of time.
Digital clocks, AFAIK, cannot do this. ;)
We could ask ChatGPT to comment.

 

Another reason why analog is better than digital!
 
What we need is better education, not censorship. Most people are educated badly and thus evil.

Education fixes the problems at the root. Censorship just makes sure that "real" knowledge is highly concentrated at the top of the power hierarchy. We want knowledge to be available to everyone, good people and evil people alike. Widespread knowledge creates a level playing field; anything other than knowledge for all creates a cruel power pyramid.

Education is the slow path, but it is the ultimate solution and the only long-term one. What doesn't work (censorship) is infinitely slower than what works, even if what doesn't work seems to change things fast.

So, my recommendation is to amplify the voice of truth through effort and care so that the voice of lies is overwhelmed. We need to become amplifiers of truth.
Become a human amplifier of truth. A free market of ideas is the best environment. Instead of censoring lies, expose them. Let lies fight with truths in the open light. Everything should be laid out in the open; we will collectively lose in the dark.

If we turn to censorship, it is going to be abused by evil people 100% of the time, given enough time. In the current conditions, most people are badly informed, and the kinds of censorship they advocate are misguided as well.
"Most people are educated badly and thus evil".

Firstly, I refuse to believe that most people are evil.

Secondly, education inevitably involves learning a system of values. When someone states that another person is uneducated, in practice it usually means that the other person has not acquired the same set of values. When you teach a system of values, that inevitably involves a degree of self-censorship, which is baked in before you get to the point of being able to discuss what to regulate as an "educated member of society".

The fact is that a society is, in part, a regulated market of ideas. What we are discussing here is regulating a knowledge tool that has lying embedded in it. Failing to regulate it amounts to granting the right to willfully lie. You also said:
Willfully lying is a wrongdoing. Thus, there is no such thing as the right to willfully lie.
No ideas are banned by the regulation of ChatGPT; it is not censorship but the prevention of a distortion of the market for ideas.

I can't go further in arguing the points you raise because of the ban on politics here, which we are probably already breaking with this discussion.
 
Actually - and perhaps more impressively - a stopped analog clock is exactly right twice a day. I don't think that is true of any other human-devised device meant to track the passage of time.
Digital clocks, AFAIK, cannot do this. ;)
We could ask ChatGPT to comment.

Daylight saving ended in NSW today, which prompts me to suggest that not all stopped analogue clocks are right exactly twice a day in all circumstances. A clock stopped at five to three here in Sydney will be right three times today!
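For anyone who wants to check the arithmetic rather than ask ChatGPT: below is a minimal sketch in standard-library Python (3.9+ for zoneinfo) that counts how many real instants in a local calendar day show a given 12-hour analog face reading. The function name and the example dates are my own illustrative choices, not anything from the thread. On the fall-back day the local day is 25 hours long, so a clock stopped at five to three is right three times; on an ordinary day, twice.

```python
# A quick sanity check of the stopped-clock claim: count how many real instants
# in one local calendar day show a given 12-hour analog face reading.
from datetime import date, datetime, timedelta, timezone
from zoneinfo import ZoneInfo

def face_matches(face_hour, face_minute, local_day, tz):
    """Count the instants during local_day whose wall-clock time in tz
    reads face_hour:face_minute on a 12-hour analog face."""
    start = datetime(local_day.year, local_day.month, local_day.day, tzinfo=tz)
    end = start + timedelta(days=1)          # next local midnight
    t = start.astimezone(timezone.utc)       # walk the day in UTC so the
    stop = end.astimezone(timezone.utc)      # repeated fall-back hour counts twice
    count = 0
    while t < stop:
        local = t.astimezone(tz)
        if local.hour % 12 == face_hour % 12 and local.minute == face_minute:
            count += 1
        t += timedelta(minutes=1)
    return count

sydney = ZoneInfo("Australia/Sydney")
# 2 April 2023: daylight saving ended in NSW, so the local day is 25 hours long.
print(face_matches(2, 55, date(2023, 4, 2), sydney))  # -> 3 ("five to three")
print(face_matches(2, 55, date(2023, 4, 5), sydney))  # -> 2 on an ordinary day
```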
 
At the moment, it is not yet digging very deep into the net, not yet penetrating password-protected forums, archives etc. However, it is conceivable that it could absorb most of humanity's knowledge soon and learn.
ChatGPT is not connected to the internet. It can't penetrate password-protected forums. ChatGPT is not Artificial Intelligence. It's a Large Language Model program.

https://help.openai.com/en/articles/6783457-chatgpt-general-faq


Can I trust that the AI is telling me the truth?
  • ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content.

    We'd recommend checking whether responses from the model are accurate or not. If you find an answer is incorrect, please provide that feedback by using the "Thumbs Down" button.
 
The thing is, the ChatGPT that we can access publicly has those limitations. That does not mean that other instances of the GPT engine, or similar, are not connected to the Internet and being used by people to do things we don't know about.

As far as I'm aware, the version of GPT-4 in Bing and Google's Bard are dealing in up-to-date information and therefore are reading the live Internet, and we probably don't know what parts of the Internet they are reading. We will know more only as they become available outside beta testing.

Also remember that there are overrides on how ChatGPT, and presumably the newer engines, are allowed to reply to queries. So, who controls those overrides?
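As a concrete illustration of what such an override looks like in practice, here is a minimal sketch using the pre-1.0 `openai.ChatCompletion` interface of the OpenAI Python library that was current when this thread was written. The model name and the system-message text are placeholders I chose, not anything OpenAI or anyone in the thread actually uses. The point is simply that the deployer writes the "system" message, which shapes every reply, while the end user only ever supplies the "user" part.

```python
# Sketch of a deployer-controlled "override": the system message is written by
# whoever runs the bot, not by the person asking the question.
# Uses the pre-1.0 OpenAI Python library (pip install "openai<1.0").
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # the deployer's credentials

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[
        # The override: applied to every conversation before the user types anything.
        {"role": "system",
         "content": "You are a cautious assistant. Do not claim that audio "
                    "cables have an inherent sound; recommend measurements."},
        # The only part the end user controls.
        {"role": "user", "content": "Do RCA cables have a sound?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```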
 
:facepalm:
 
Depends on the definition of garbage, and on the ability of the system to recognise that garbage. The point of the trial systems being made available now is in part to gather data on the garbage coming out.
 
No ideas are banned by the regulation of ChatGPT; it is not censorship but the prevention of a distortion of the market for ideas.
I'm pretty sure that ChatGPT can easily be used as an excuse to censor legitimate ideas. Anything controversial gets exploited as an excuse to censor legitimate ideas, and that's what I'm worried about. People don't know what is legitimate because they have learned mostly garbage their entire lives.

History has shown us that there will always be those who try to use current events as an excuse to exploit people and fool them.

Garbage in/Garbage out.

For example, it's very easy to exploit someone who doesn't know arithmetic. A shopkeeper can give the person the wrong change, and the person will not know the difference.

The same goes for other areas of life. If people only know garbage in all areas of life, it's easy to exploit them in all of them, just as easy as exploiting someone who doesn't know basic arithmetic.

As long as people don't have real knowledge, it's easy for smart but evil people to use anything controversial as an excuse to exploit people.

People end up being evil because they don't know what is moral and what is evil. If they knew they were doing evil, they probably wouldn't do it. Most people aren't intentionally evil. Knowing what is moral or evil is an important piece of knowledge because it determines the amount of harm inflicted on the world population.

If people don't know what is evil, it's easy for tyrants and manipulators to make them do evil things on the tyrants' and manipulators' behalf, as easy as fooling someone who doesn't know basic arithmetic at a shop. You wouldn't believe how many people think it's okay to hurt someone for personal benefit; you can just pay them to do your bidding.

I once met a Christian who thought it was okay to hurt people and send missiles at them for public utility, not for self-defense. To people like that, war is just another way to stimulate the economy: make people wage wars with each other, sell weapons to them, and profit from their misery. That's how screwed up people are.
 
ChatGPT is not connected to the internet. It can't penetrate password-protected forums. ChatGPT is not Artificial Intelligence. It's a Large Language Model program.
That doesn't contradict what I said. It is the current state of affairs, as far as we know. I don't expect it to stay that way for long.
 
Not sure why you facepalmed his post.

The Italian privacy regulator on Friday temporarily banned ChatGPT for alleged privacy violations, saying the company lacks a legal basis justifying "the mass collection and storage of personal data ... to 'train' the algorithms" of ChatGPT.
 