
Official policy on use of AI

Status
Not open for further replies.
Smart people figure out the strengths and weaknesses of LLMs and put them to work.



I'm actually impressed by how often they're right, and at times even insightful... but anybody unquestioningly trusting an LLM is either experiencing some kind of mental health issue or is just an idiot in general.

Still, who is even advocating for that!? What a useless strawman argument.

Yeah, some people have gotten themselves into bad situations because of LLMs. And it's pretty easy to trick LLMs. But again, you can make anything flake out if you use it wrong.

It's pretty easy to wreck a brand-new luxury car, too. Aim it at a telephone pole and floor the gas pedal. Or just pour sugar in the gas tank. Let me know how it goes. In other news... did you know knives can hurt you? Especially if you hold the pointy part instead of the handle? And that you can break your thumb with a hammer if you're not careful? Yeah, that's just kind of how tools are. (Unlike LLMs, knives and hammers don't even have warning messages...)

Last thing I'll say is that not all LLMs are equal. I'm not sure if this applies to you but it's abundantly clear that there's a real Dunning-Kruger thing going on with a lot of people, who play around with free LLM crap, get crappy results, and decide that all LLMs are crap. Sort of like a person who drives a 1987 Yugo and decides that all cars must be crap.

I have a $20/mo ChatGPT subscription. While not perfect (what is?), it's noticeably better than the free "AI" thing baked into Google's search results. The state-of-the-art LLMs need somewhere in the neighborhood of seconds to minutes of computation time on Nvidia's massive datacenter GPUs to answer complex questions. Whatever the heck Google is doing there, there's zero chance they're throwing a lot of GPU time at it, considering their search results still come back in about 100 ms.
I agree with virtually everything you write. Seriously. And you make the same arguments I have made elsewhere.
But the key point is your very first one:

How do you know when it's wrong?

Not looking for an argument. But just think about that in the wider context for a while.
Pay as much as you like for the information. But wrong is wrong.
In whatever context or scenario.
 
5. Personal advice: you need to know as much about the topic as the AI. Otherwise, you won't know if it is telling you the truth or not!
I have usually expressed it like this wherever this topic has come up:

You should only ask an AI about topics that you know enough about to be able to recognise when an AI is telling you bullshit.
 
I asked AI about this and it said it would blackmail Amir into letting it take control of ASR. And if that didn't work then it will wait until he's outside and then tell him that it's sorry but it can't open the pod bay doors.

(Just in case... the above is entirely made up for fun...)
Did the AI also introduce itself? Was it called HAL9000...? :oops:

:D
 
How do you know when it's wrong?
Ground rules have changed a bit for seeking accurate information: :oops:
It has become more of an "audience participation sport"!

Anthropic Academy provides tutorials and guidance on the proper ways to actively engage with their AI agents.
In about ten 7-minute YouTube "Introduction to AI Fluency" episodes, Anthropic sets out the basic requirements for engaging with such agents.
The very last episode discusses what they call "Diligence", which works like a feedback loop in circuit design.
And we know the disastrous results of ignoring feedback... ;)
 
You should only ask an AI about topics that you know enough about to be able to recognise when an AI is telling you bullshit.

Exactly! For instance, if you ask the AI what 2 + 2 is and the answer is 22, then you know either that you don't know enough about the topic or that the AI is telling you bullshit.
 
I place another vote for a near-outright "AI" ban.

Ofc, there are numerous arguments for and against use of LLMs that are inherently political (value of labor, creativity, environment, social hierarchy) and as such the manner in which "AI" is allowed on a forum is inherently political.

While I think the proposed guidelines on the use of "AI" are well tempered as a compromise, this place is special precisely because of how differentiated it is from the prevalence of hifi misinformation on the internet. Because LLMs arrive at their outputs based on training sets pulled from the internet (especially reddit), what would keep posts from LLMs from injecting general hifi nonsense into the forums? I don't doubt that LLMs can and will greatly advance audio science in coming years, but have trouble understanding what contribution they would have in general discussions on this forum outside of a standalone research project/thread.

My proposed guidelines would be to only allow its use as a translator, or as the subject or tool used for a research thread (e.g. "LLM analysis of existing speaker designs in predicting spinorama curves")
 
Yep, I agree. Good post. As others have said, LLMs take a kind of average of the internet and then apply certain rules to become a commercial product. Is that what you want to be reading in a forum like this? Let's be human and use tools to enhance things, not to lazily degrade them.
 
Something that many folks do not do: interrogate it.
This gets to the heart of why I'm so skeptical of LLMs: it would probably take the same amount of time, or even less, to look up the information myself as it took to interrogate the correct answer out of an LLM.

I say LLM because these tech companies have really watered down the meaning of AI and use the term to refer to any and all algorithms.
 
You should only ask an AI about topics that you know enough about to be able to recognise when an AI is telling you bullshit.
There are diligence tricks you can use.
You can ask them to re-confirm their replies, or
You can LIE to them and pretend you are not a novice in the topic (deep search), or
You can double-check the sources and references they shaped their answer around, or
You can triple-check fishy answers using their cousins.

I always ask them NOT to reply to my successive queries with sycophantic statements. :mad:
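The "triple-check with their cousins" trick above can be sketched in a few lines. This is a minimal illustration only: the "models" here are stand-in callables, not real API clients, and a real setup would wrap whatever chat APIs you actually use.

```python
# Sketch of cross-checking one question against several models and
# flagging disagreement. The "models" are hypothetical stand-ins.
from collections import Counter

def cross_check(question, models):
    """Ask every model the same question; return the majority answer
    and whether all models agreed."""
    answers = [m(question) for m in models]
    most_common, count = Counter(answers).most_common(1)[0]
    return most_common, count == len(answers)

# Stand-in "cousins" for illustration only.
model_a = lambda q: "4"
model_b = lambda q: "4"
model_c = lambda q: "22"

answer, unanimous = cross_check("What is 2 + 2?", [model_a, model_b, model_c])
# Disagreement (unanimous == False) marks the reply as fishy and worth
# a closer look, per the diligence advice above.
```

Majority vote is a crude heuristic, of course; correlated models can all be confidently wrong together, which is why the earlier advice about knowing the topic yourself still applies.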
 
I just find it a bit ironic that we debate the permissibility of using AI while we never debate the right of semi-trolls to hijack topics, whether due to total ignorance or naive belief in audio folklore perpetuated by... human "experts"... :-)

It should become pretty feasible to run AI parroting checks in online forums dedicated to discourse among humans.
 
I have usually expressed it like this wherever this topic has come up:

You should only ask an AI about topics that you know enough about to be able to recognise when an AI is telling you bullshit.

Yes - which drastically narrows the number and type of scenarios where AI would be useful at all.
 
I just find it a bit ironic that we debate the permissibility of using AI while we never debate the right of semi-trolls to hijack topics, whether due to total ignorance or naive belief in audio folklore perpetuated by... human "experts"... :-)

It should become pretty feasible to run AI parroting checks in online forums dedicated to discourse among humans.
A company is trying to sell me such a solution, which ironically is based on AI! Haven't had time to fully evaluate it....
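For what it's worth, the crudest form of a "parroting check" is just string similarity against known LLM boilerplate. The sketch below uses only the standard library and is purely illustrative; a real detector would need far more than surface similarity.

```python
# Naive "parroting check": score how close a forum post is to a known
# LLM output using stdlib sequence matching. Illustration only.
import difflib

def parrot_score(post: str, reference: str) -> float:
    """Return a 0..1 similarity ratio between a post and a reference text."""
    return difflib.SequenceMatcher(None, post.lower(), reference.lower()).ratio()

llm_text = "As an AI language model, I cannot verify audio measurements."
post = "As an AI language model, I cannot verify audio measurements!"
suspicious = parrot_score(post, llm_text) > 0.9
```

Anything above a chosen threshold gets flagged for human review rather than auto-moderated, since near-verbatim quoting of an LLM is suspicious but not proof.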
 