
ChatGPT vs Microsoft CoPilot: Poor Execution or Bias?

lug-nut torque spec
Copilot also got it right...

[attached screenshot: Copilot's answer to the lug-nut torque question]


... but always take these AI assistants' advice with a massive grain of salt. We know what trusted sources are; they often do not differentiate.


JSmith
 
Tire pressure is something they often get wrong. I remember trying to look up the number for both my tractor and my quad, and in both cases they were wrong. I think the reason is that there are so many models of each that they just grab one and run with it. In both cases I had to hunt down the manual to get the right value. I wonder if this question is asked often enough that they need to put special logic in there to not generalize.
 
I think it's because this kind of information is seldom available as plain text on the web. It's akin to trying to find the value of capacitor C426 in a Yamaha A-S2200 amplifier: details like these are typically buried in PDFs. Similarly, when I asked ChatGPT for the tire pressure of our cars, its response was based on information sourced from online user discussions.
It's nice that it disclosed the source though.
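
For what it's worth, the first hurdle is just getting the text out of those PDFs so it becomes searchable at all. A minimal sketch in Python (the file name and search term are hypothetical; uses the pypdf library, pip install pypdf):

Code:
# Minimal sketch: extract text from an owner's manual PDF so specs
# like tire pressure become plain, searchable text.
# "manual.pdf" is a hypothetical local file.
from pypdf import PdfReader

reader = PdfReader("manual.pdf")
for num, page in enumerate(reader.pages, start=1):
    text = page.extract_text() or ""
    if "tire pressure" in text.lower():
        print(f"page {num}:")
        print(text)

Until that kind of extraction is done at scale, a chatbot has mostly forum chatter to go on.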
 
I went to see how ChatGPT handles power for amplifiers. I was very surprised and pleased to find this answer:

How much power does the AIYIMA A07 amplifier have?

ChatGPT said:


The AIYIMA A07 is a compact stereo amplifier utilizing the Texas Instruments TPA3255 Class D amplifier chip. Its power output varies based on the connected power supply and speaker impedance: ...

It's important to note that while the amplifier is theoretically capable of delivering up to 300 watts per channel into 4Ω speakers, achieving this requires a robust power supply, such as a 48V/12A unit, which is not typically included.
(Source: Aiyima)

The actual power output is influenced by the power supply's voltage and current capacity, as well as the impedance of the connected speakers.


:)
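
As a quick sanity check, the 300 W claim is about what the usual idealization for a bridge-tied-load Class D stage predicts (a back-of-envelope estimate, not a measured figure):

P_max ≈ V_supply² / (2 × R_load) = (48 V)² / (2 × 4 Ω) = 288 W per channel

And the suggested 48 V / 12 A supply can deliver at most 48 V × 12 A = 576 W in total, i.e. roughly 288 W per channel with both channels driven. So "up to 300 watts" is a theoretical ceiling before any losses, not a continuous rating.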

The links don't work though: https://www.audiosciencereview.com/forum/index.php?threads/aiyima-a07-tpa3255-review-amplifier.18984/=&utm_source=chatgpt.com

It is sticking that suffix there. Have to see if I can fix that.
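
In the meantime, if anyone wants to clean those links themselves, a minimal sketch in Python that targets the specific "=&utm_source=chatgpt.com" pattern seen above (a more general tool would parse the query string properly):

Code:
import re

def strip_tracking_suffix(url: str) -> str:
    # Remove the "=&utm_source=..." style suffix ChatGPT appends,
    # along with any other trailing utm_* parameters.
    return re.sub(r"=?&utm_[a-z]+=[^&]*", "", url)

url = ("https://www.audiosciencereview.com/forum/index.php"
       "?threads/aiyima-a07-tpa3255-review-amplifier.18984/=&utm_source=chatgpt.com")
print(strip_tracking_suffix(url))
# -> https://www.audiosciencereview.com/forum/index.php?threads/aiyima-a07-tpa3255-review-amplifier.18984/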
 
Hmmm. Looks like it is grabbing the power ratings out of the graphs! But it doesn't get them all right:

 
This is a case where Copilot is flagrantly at fault. And it gets worse.

Q: When was the Lyngdorf TDAI-3400 released?


ChatGPT: The Lyngdorf TDAI-3400 was officially launched at the Integrated Systems Europe (ISE) exhibition in February 2018 and began shipping in early April 2018. ...


Copilot: The Lyngdorf TDAI-3400 was released in October 2020.

Q: If the Lyngdorf TDAI-3400 was released in October 2020, then how do you explain that Amir on audiosciencereview.com had already reviewed it in February 2019?

Copilot: It seems there might be a misunderstanding. The review by Amir on Audio Science Review (ASR) was actually for the Lyngdorf TDAI-1120, which was released earlier, not the TDAI-3400. The TDAI-3400 was announced later and released in October 2020.

And then it adds this link to the review on ASR.
 
This isn't a rhetorical question per se, but it's also well off-topic, so I am not really trying to start a sidebar discussion on mechanics!
But I do wish to put my question on the table. What's the mechanism to correct erroneous information?

E.g., I googled the lug-nut torque spec for my car a few weeks back. Google's top-line AI answer was wrong. :rolleyes:


[attachment 417744: screenshot of Google's incorrect AI answer]

From the manual (which - ahem - I had downloaded to the same computer... but I thought it'd be easier to just ask google. Silly me).

[attachment 417745: the torque spec page from the manual]
Weirdly enough, I had the same discussion with a relative yesterday. There's no quick answer, but some models are trained on feedback from users. A friend who does full-time analytics "tool making" builds rapid coding tools around all the major AI APIs and RAG; in other words, he's not just typing into a chatbot. He believes in bluntly informing each model when it makes a mistake, which it always does at some point or with some prompts.

I was demonstrating this effect to my family. Ask a simple question, and even if it gets it right, question its accuracy and you will get a different answer, which will be either closer to the truth or further from it.

If an AI is trained on a closed, restricted set (e.g. the transactions of a science faculty), it's actually very useful. This is why AI exclusively trained on your work email is effective. There are very few hallucinations.
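
To make that concrete, here is a toy sketch of the retrieval-augmented setup my friend builds (all names and contents are hypothetical, and a real tool would use an embedding model and a vector store instead of this keyword-overlap scorer). The point is that the model is told to answer only from a closed document set, which is what keeps hallucinations down:

Code:
# Toy retrieval-augmented generation (RAG) sketch over a closed corpus.
# File names and contents are hypothetical example data.
corpus = {
    "manual_p7.txt": "Lug nut torque specification: 103 lb-ft (140 Nm).",
    "manual_p42.txt": "Recommended tire pressure: 32 psi front, 30 psi rear.",
}

def retrieve(question: str, k: int = 1) -> list:
    # Rank passages by how many words they share with the question.
    q_words = set(question.lower().split())
    return sorted(corpus.values(),
                  key=lambda text: len(q_words & set(text.lower().split())),
                  reverse=True)[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    # Restricting the model to the retrieved context is what
    # discourages it from inventing an answer.
    return ("Answer ONLY from the context below. If the answer is not "
            f"in the context, say so.\n\nContext:\n{context}\n\n"
            f"Question: {question}")

# The resulting prompt is then sent to whichever chat model the tool wraps.
print(build_prompt("What is the lug nut torque for this car?"))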
 
I went to see how ChatGPT handles power for amplifiers. ... It is sticking that suffix there. Have to see if I can fix that.

Hmm. When I copied and pasted the question, the answer was the same, but it didn't include any sources.
 
So far, the only AI chat I find reliable and unbiased is Elon Musk's Grok; at least, it's the only AI platform still unaffected by the woke-mind virus. Here is Grok's answer regarding Microsoft's decision to hire Mustafa Suleyman:

The decision by Microsoft to hire Mustafa Suleyman as CEO of Microsoft AI has sparked a variety of opinions across the industry, reflecting both the opportunities and challenges associated with this move:

Arguments in Favor of Hiring Suleyman: ...
From a broad perspective, the hiring of Mustafa Suleyman seems to align with Microsoft's aggressive push into AI, aiming to cement its position as a leader in the field. However, the success of this move will largely depend on how well Suleyman's leadership style integrates with Microsoft's culture, how he navigates the ethical challenges of AI development and the broader impact on market competition. The public discourse around this hire shows a generally positive outlook but with an acknowledgment of potential risks that need careful management.
 
... unaffected by the woke-mind virus. ...
There is absolutely no need to talk like that. And it is totally irrelevant.
 
I find it very relevant. How could it not be? If you want to use a different description, you'll still be talking about the same issue. When certain things are deemed off-limits or unacceptable, it will affect the results from an AI. No different from the way bias affects people's answers to questions.
 
Woke = virus ? Really ???
 
What's the mechanism to correct erroneous information? E.g., I googled the lug-nut torque spec for my car a few weeks back. Google's top-line AI answer was wrong.
Perplexity.com also supplied the correct answer. One thing I like about it for searches is that it gives sources. One hopes one doesn't need to check sources, but with technical info you do. Obviously quoting specs requires little intelligence, and yet on almost anything I know very well, most of the AIs I've tried (none of them paid versions) mess up the details. Which leaves me very untrusting of them for questions about things I know little or nothing about.
 

Mod notice

Let's drop the whole woke/anti-woke discussion, please. It never goes anywhere good.
 
Hmm. When I copied and pasted the question, the answer was the same, but it didn't include any sources.
I was surprised to see the links. Maybe they are still trialing the feature, seeing that it is buggy.
 
Woke = virus ? Really ???
There is absolutely no need to talk like that. And it is totally irrelevant.

I think Elon himself came up with that term.


As a researcher, when I start working on a new project or article, I usually brainstorm using ChatGPT 4o. I use Perplexity for the literature survey and Claude primarily for rephrasing long paragraphs, because it can be pretty creative in expressing even simple ideas.

Recently, I've started using Grok 2 primarily for generating creative sketches and drawings, but also to double-check whatever ChatGPT says, because in my field (power electronics and renewable energy), when I ask ChatGPT anything related to that subject, unlike Grok, it always tries to shove some climate change agenda down my throat...
 
when I ask ChatGPT anything related to that subject, unlike Grok, it always tries to shove some climate change agenda down my throat...
May I suggest you're thinking of it as if it can think... it can't. Providing related information and topics to explore is pretty standard and doesn't mean it's pushing any kind of specific agenda; it may just seem that way to you. Climate change is one of those things that's backed up by large amounts of factual research data.


JSmith
 
So far, the only AI chat I find reliable and unbiased is Elon Musk's Grok ...

Thanks,
 