JSmith
CoPilot also got it right...lug-nut torque spec
... but always take these AI assistants' "advice" with a massive grain of salt. We know what trusted sources are; they often do not differentiate.
JSmith
CoPilot also got it right...lug-nut torque spec
I think it's because this kind of information is seldom available as plain text on the web. It's akin to trying to find the value of capacitor C426 in a Yamaha A-S2200 amplifier - details like these are typically buried in PDFs. Similarly, when I asked ChatGPT for the tire pressure of our cars, its response was based on information sourced from online user discussions.

Tire pressure is something they often get wrong. I remember trying to look up the number for both my tractor and my quad, and in both cases they were wrong. I think the reason is that there are so many models of each that they just grab one and run with it. In both cases I had to hunt down the manual to get the right value. I wonder if this question is asked often enough that they need to put special logic in there to not try to generalize.
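Once you have the PDF, even a few lines of code beat asking a chatbot. A minimal sketch of that kind of local lookup - the pypdf library and the file name are just my assumptions, not anything from a real manual:

```python
# Sketch: search the downloaded owner's manual for a spec instead of asking a chatbot.
# Assumes the pypdf library and a local file "owners_manual.pdf" (both hypothetical).
from pypdf import PdfReader

reader = PdfReader("owners_manual.pdf")
for page_number, page in enumerate(reader.pages, start=1):
    text = page.extract_text() or ""  # extract_text can return None on image-only pages
    if "tire pressure" in text.lower():
        print(f"--- page {page_number} ---")
        print(text)
```

Note that extract_text often mangles the tables these specs live in, which is probably part of why the web-scraping models never saw them as clean text in the first place.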
This isn't a rhetorical question per se - but it's also well off-topic, so I am not really trying to start a side-bar discussion on mechanics!
But I do wish to put my question on the table. What's the mechanism to correct erroneous information?
E.g., I googled the lug-nut torque spec for my car a few weeks back. Google's top-line AI answer was wrong.
View attachment 417744
From the manual (which - ahem - I had downloaded to the same computer... but I thought it'd be easier to just ask Google. Silly me).
View attachment 417745

Weirdly enough, I had the same discussion with a relative yesterday. There's no quick answer, but some models are trained on feedback from users. A friend does full-time analytic "tool making": he builds rapid code tools around all the major AI APIs and RAG. In other words, he's not typing into a chatbot. But he believes in bluntly informing each model when it makes a mistake, which it always does at some point or with some prompts.
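I don't know what his tooling actually looks like, but the "blunt correction" step is roughly this shape against a chat API - the model name, torque numbers, and wording are all made up for illustration:

```python
# Sketch: "bluntly informing the model when it makes a mistake" by feeding the
# correction back as another turn in the same conversation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "user", "content": "What is the lug-nut torque spec for a 2019 Example-Car?"},
]
first = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print("First answer:", first.choices[0].message.content)

# The blunt correction: keep it in the running context and ask again.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user",
                 "content": "That is wrong. The owner's manual says 103 ft-lb. Please restate."})
second = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print("Corrected answer:", second.choices[0].message.content)
```

Worth noting: a correction like this only lives in that conversation's context. It doesn't retrain the model, which is part of why there's no quick answer to the "mechanism to correct erroneous information" question.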
I went to see how ChatGPT handles power for amplifiers. I was very surprised and pleased to find this answer:
How much power does the AIYIMA A07 amplifier have?
ChatGPT said:
The AIYIMA A07 is a compact stereo amplifier utilizing the Texas Instruments TPA3255 Class D amplifier chip. Its power output varies based on the connected power supply and speaker impedance:
- With a 32V/5A Power Supply:
  - Into 4Ω Speakers: Approximately 77 watts per channel.
  - Into 8Ω Speakers: Approximately 38 watts per channel.
- With a 48V/5A Power Supply:
  - Into 4Ω Speakers: Approximately 117 watts per channel.
  - Into 8Ω Speakers: Approximately 60 watts per channel.

It's important to note that while the amplifier is theoretically capable of delivering up to 300 watts per channel into 4Ω speakers, achieving this requires a robust power supply, such as a 48V/12A unit, which is not typically included.

(Sources cited in the answer: Audio Science Review, Aiyima)
The actual power output is influenced by the power supply's voltage and current capacity, as well as the impedance of the connected speakers.
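Those numbers pass a rough sanity check, by the way: a Class D amp can't deliver more continuous power than its supply provides. A back-of-envelope sketch, where the ~90% efficiency figure is my assumption:

```python
# Rough sanity check: continuous per-channel output ceiling set by the power supply.
# Assumes ~90% Class D efficiency (my guess) and both channels driven.
def supply_limited_w_per_ch(volts, amps, efficiency=0.9, channels=2):
    return volts * amps * efficiency / channels

for volts, amps in [(32, 5), (48, 5)]:
    w = supply_limited_w_per_ch(volts, amps)
    print(f"{volts}V/{amps}A supply: ~{w:.0f} W per channel ceiling")
# 32V/5A -> ~72 W/ch, near the ~77 W quoted into 4 ohms
# 48V/5A -> ~108 W/ch, near the ~117 W quoted into 4 ohms
```

The quoted 4Ω figures sit right at those ceilings (slightly above, since short bursts and better-than-90% efficiency are possible), which supports the closing point: the supply, not the chip, is the limit.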
The links don't work though: https://www.audiosciencereview.com/forum/index.php?threads/aiyima-a07-tpa3255-review-amplifier.18984/=&utm_source=chatgpt.com
It keeps sticking that suffix on there. I'll have to see if I can fix that.
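Something like this cleans it up - plain string surgery, since XenForo's "index.php?threads/..." URLs confuse generic query-string parsers:

```python
# Sketch: chop off the "=&utm_source=chatgpt.com" tail that ChatGPT sticks on links.
def strip_tracking(url: str) -> str:
    for marker in ("=&utm_source=", "&utm_source=", "?utm_source="):
        cut = url.find(marker)
        if cut != -1:
            return url[:cut]
    return url

broken = ("https://www.audiosciencereview.com/forum/index.php?threads/"
          "aiyima-a07-tpa3255-review-amplifier.18984/=&utm_source=chatgpt.com")
print(strip_tracking(broken))
# -> https://www.audiosciencereview.com/forum/index.php?threads/aiyima-a07-tpa3255-review-amplifier.18984/
```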
... unaffected by the woke-mind virus. ...
There is absolutely no need to talk like that. And it is totally irrelevant.
There is absolutely no need to talk like that. And it is totally irrelevant.
I find it very relevant. How could it not be? If you want to use a different description, you'll still be talking about the same issue. When certain things are deemed off-limits or unacceptable, it will affect the results from an AI. No different from the way bias affects people's answers to questions.
But I do wish to put my question on the table. What's the mechanism to correct erroneous information? E.g., I googled the lug-nut torque spec for my car a few weeks back. Google's top-line AI answer was wrong.
Perplexity.com also supplied the correct answer. One thing I like about it for searches is that it gives sources. One hopes one doesn't need to check sources, but with technical info you do. Quoting specs obviously requires little intelligence, and yet on almost anything I know very well, most of the AIs I've tried (none of them paid versions) mess up details. Which leaves me very untrusting of them when asking about things I know little or nothing about.
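If you check sources often enough, it's tempting to automate the first pass - roughly like this, where the URL and spec string are placeholders, not real citations:

```python
# Sketch: verify that a figure an AI attributes to a source actually appears there.
import requests

def claim_appears(url: str, claim: str) -> bool:
    page = requests.get(url, timeout=10)
    page.raise_for_status()
    return claim.lower() in page.text.lower()

url = "https://example.com/cited-source"   # placeholder for the AI-cited link
print(claim_appears(url, "103 ft-lb"))      # placeholder for the quoted spec
```

It only proves the string occurs on the page, not that the page is right - but it catches the cases where the model invented the citation outright.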
Woke = virus ? Really ???
Yes, really. At least among people I know, it seems like the best, most succinct description. I try very hard to give the benefit of the doubt. I'll drop this discussion as I don't want to make it political or problematic.
Mod notice
Let's drop the whole woke/anti-woke discussion, please. It never goes anywhere good.
Hmm. When I copied and pasted the question, the answer was the same, but it didn't include any sources.
I was surprised to see the links at all. Maybe they are trying this out, seeing as it is still buggy.
...when I ask ChatGPT anything related to that subject, unlike Grok, it always tries to shove some climate change agenda down my throat...
May I suggest you're thinking of it as if it can think... it can't. Providing related information and topics to explore is pretty standard and doesn't mean it's pushing any kind of specific agenda; it may just seem that way to you. Climate change is one of those things that's backed up by a large amount of factual research data.
So far, the only AI chat I find reliable and unbiased is Elon Musk's Grok; at least, it's the only AI platform still unaffected by the woke-mind virus. Here is Grok's answer regarding Microsoft's decision to hire Mustafa:
This is why AI exclusively trained on your work email is effective. There are very few hallucinations.
AI probably wouldn't have much liked my work email stream...