
Do RCA cables have a sound? ChatGPT says they do.

alpha_logic
Doesn't matter if I agree or not, this is what ChatGPT had to say on the issue:

[Attached screenshot: Screenshot_20230330_213910.png]
 
Try asking if it affects the sound, rather than how it affects the sound, and get back to us.

You could also ask why RCA cables don't affect the sound.

And while you're at it, ask it how we know that the Earth is flat.
 
Oh, I got back to you:) Either ChatGPT is wrong, or you are.

[Attached screenshot: Screenshot_20230330_234638.png]
 
Well, we're already going from them definitely sounding different, to "unlikely to be significant in most cases".

My point isn't that I or ChatGPT is "wrong", but that the answer it gives depends heavily on how the question is asked and on what it has "learnt". Because it also tries to give a balanced answer, the way you read that answer can be just as important. See how I have ignored that the second answer still ends with "listen carefully to the differences in sound"?

It's algorithmically playing the read-the-customer game as well: in most cases your next question will have it home in on what you want to hear, just as a medium will make educated guesses until you say "yes, that's my old aunt" or whoever.
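To make that concrete, here is a minimal sketch of how you might test the framing effect yourself (an assumption, not something anyone in this thread actually ran), sending the same underlying question to the same model with two different framings via the openai Python client. The model name and settings are arbitrary examples, and you need your own API key.

```python
# Minimal sketch: same question, two framings. Assumes the openai package
# (>=1.0) is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",   # example model choice, not prescriptive
        messages=[{"role": "user", "content": prompt}],
        temperature=0,           # reduce run-to-run variation
    )
    return resp.choices[0].message.content

# The "how" framing presupposes an effect; the "do" framing does not.
print(ask("How do RCA cables affect the sound of my system?"))
print(ask("Do RCA cables audibly affect the sound of a properly working system?"))
```

You would expect the first prompt to produce a list of supposed effects and the second to produce a far more hedged answer, which is the point being made above.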
 
Yes, I understand how transformer networks work. ChatGPT will not 'home in' on 'what I want to hear'; it will simply respond according to the data it was trained on. So, taking your example, my wanting to hear that 'the Earth is flat' will not force or 'steer' GPT into giving me an answer other than 'the Earth is round'. So yes - 'unlikely to be significant in most cases' means significant in some cases, which is pretty much saying not much at all. I do find it interesting though that GPT follows the 'consensus' opinion, while the 'cables don't have a sound' data - which surely it must have been trained on - seems to be weighted less.
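For what it's worth, here is a toy illustration of "respond according to the data it was trained on" - not a real language model, and the numbers are simply made up: the reply is sampled from probabilities shaped by the training text, so whichever phrasing dominates that text tends to dominate the answer.

```python
# Toy sketch only: a real model samples tokens, not whole sentences, and the
# probabilities below are invented for illustration.
import random

# Hypothetical learned continuations for "Do RCA cables change the sound?"
answer_probs = {
    "Yes, cables can subtly alter the sound.": 0.55,
    "No, any competently made cable is audibly transparent.": 0.30,
    "It depends on length, shielding and the gear involved.": 0.15,
}

def sample_answer() -> str:
    answers = list(answer_probs)
    weights = list(answer_probs.values())
    return random.choices(answers, weights=weights, k=1)[0]

print(sample_answer())
```

If the subjectivist phrasing outweighs the measurement-based phrasing in the training text, that is what the sampler will most often return, regardless of what you want to hear.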
 
I do find it interesting though that GPT follows the 'consensus' opinion, while the 'cables don't have a sound' data - which surely it must have been trained on - seems to be weighted less.
How do you know that? Do you have access to the training dataset? I'd assume the exact opposite - that it was trained on the subjective mumbo jumbo which dominates in terms of volume, and that it doesn't read or interpret the AP plots which substantiate the scientific POV.
 
....it will simply respond according to the data it was trained on....

Exactly - garbage in, garbage out.


People - please stop trying to use ChatGPT as an encyclopedia - it isn't.
 
People should never believe a stochastic parrot. Unfortunately they do ... have a look at how the simply programmed ELIZA (by J. Weizenbaum) impressed people. Although ChatGPT is more elaborate, in essence it is nothing really different.
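For anyone who has not seen how little ELIZA actually did, here is a minimal ELIZA-style sketch in Python; the rules are invented for this thread, not Weizenbaum's originals, but the trick is the same: a few templates that reflect the user's own words back at them.

```python
# Minimal ELIZA-style responder: regex rules plus canned templates.
# (The real ELIZA also swapped pronouns such as "my" -> "your".)
import random
import re

RULES = [
    (r"i think (.*)", ["Why do you think {0}?", "What makes you believe {0}?"]),
    (r"my (.*) sounds (.*)", ["How long has your {0} sounded {1}?"]),
    (r"(.*)\?", ["What do you think?", "Why do you ask?"]),
]

def respond(user_text: str) -> str:
    text = user_text.lower().strip()
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Tell me more."

print(respond("I think my new RCA cables sound better"))
# -> e.g. "Why do you think my new rca cables sound better?"
```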
 
How do you know that? Do you have access to the training dataset? I'd assume the exact opposite - that it was trained on the subjective mumbo jumbo which dominates in terms of volume, and that it doesn't read or interpret the AP plots which substantiate the scientific POV.
[Attached screenshot: Screenshot_20230331_015226.png]
 
What's the fascination with asking ChatGPT and similar AIs dumb or even intelligent questions?
Why not ask Hans Bee or Paul McClown these questions? You can get similar 'truth mixed with crap' answers.
 
Then we are down to the capability of the curators. Do you think these curators are experts in all topics, so they can ensure that only factual content gets put into the training set? Over how many billions of data points? Especially with the variety of sources, including web pages - which, as we know, in large part have little to no validation.

And quoting GPT's description of its own training to show how accurate it is, is slightly biblical thinking:

"How do we know the Bible is true?"
"Because the Bible tells us so"
 
Well, there you go. It is settled science.
I feel so much better now.

;)
 