
Official policy on use of AI

Status
Not open for further replies.

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
48,818
Likes
286,001
Location
Seattle Area
We have had recent issues arise regarding the use of AI in member posts. I personally think this technology can be useful to us. And at any rate, I hate having to police members left and right on what they post. That said, here are my thoughts:

1. If you are going to use AI, make it absolutely clear up front before just copying and pasting AI content.

2. Show the prompt you used and the engine (ChatGPT, etc.). Both of these can be useful to understand how the answer came about.

3. Please, please be mindful that you are dealing with a combination of AI technology and the company behind it, which wants you as a customer for life to monetize. The latter will try hard to give answers compatible with what you are asking so you don't go away unhappy. In other words, AI on controversial topics is likely to give you the answer you want to hear rather than an unbiased statement.

4. AI cannot be your sole post and certainly not the thread starter. You can use it as part of your post but not the entirety of it. Such threads will likely get reported and deleted by us.

5. Personal advice: you need to know as much about the topic as the AI. Otherwise, you won't know if it is telling you the truth or not!

6. The Master AI thread is excluded from such rules.

7. I am in arguments on another forum and, surprisingly, AI has been elevated to an authoritative source! Often it is the only counter-answer given. We won't be going there here.

I think that is it for now.
 
Last edited by a moderator:
"...AI on controversial topics is likely to give you the answer you want to hear rather than an unbiased statement."
"If the app is free, you are the product"
 
How about a word limit?

AI-generated posts are often dumped into threads verbatim, and they are long! I generally skip them rather than read what could well be plausible nonsense (containing a real point, perhaps).

The site guidance for posting videos is to include a short summary, a "TLDR". Why not require that for AI-generated text too (no issue if the summary is also AI-generated :) )?
 
"...is likely to give you the answer you want to hear rather than an unbiased statement."

Absolutely true. A few months ago I had a conversation with someone who thought he was being stalked by the Post Office and Fed Ex. He showed me his ChatGPT responses that confirmed his every suspicion, telling him that he was bright, insightful and brave. Just what a delusional person needs -- let's strengthen the mental illness!
 
I think this is a well designed policy. I especially support:

5. Personal advice: you need to know as much about the topic as the AI. Otherwise, you won't know if it is telling you the truth or not!

I use AI to summarize paywalled articles, long legal and policy documents, and videos. I often find AI does not understand the prompt, and I have to rephrase it.
 
This is a massive problem in higher education: people confusing a machine's programming, designed to monetize you as a loyal customer, for validation of a viewpoint. And attempts to "turn off" this functionality so you just get straight answers to queries are apparently disregarded by the very next conversation.
 
AI absolutely steers its responses to "tell you what you want to hear"; I've seen it give different responses to the same query on separate occasions, and I frequently see it correct itself after being told outright that it was wrong, as if nothing happened. Sketchy use as a whole, frankly.
 
I think this is OK for now, because we are still mostly able to tell AI content from the "real" thing, when linking a YouTube video etc.
But how long will we be able to tell AI from "real"? Or know whether a user is real or an "AI agent"?
 
Last edited:
Good post, I’m getting royally pissed off with the amount of a.i. word dumps on the few forums I’m on……and the sad sacks who attempt to appear knowledgeable when they post a ream of inane paragraphs in response to a question/query etc.

a.i can get in the ******* sea……….. :D
 
Solid policy and something a lot of people outside ASR could stand to read as well...

Personally I'm not interested in "what ChatGPT thinks" about a given subject, because let's be very clear on this - it doesn't think at all!
 
Well said. AI is rotting people's brains.
 
People don't seem to recognize that AI just aggregates and synthesizes whatever is out there on the internet. It doesn't understand what it's spitting out - it's just synthesizing words based on predictions of the most likely word to follow.

It can be very useful for topics where there's a lot of high-quality information out there that drowns out the noise. I would never trust it for audio, though, because there's so much misinformation out there. The AI is synthesizing every stupid What Hi-Fi article and every press release by a company hawking $5,000 speaker cables, along with science-based information. You're not going to get a reliable answer on audio topics.

Garbage In, Garbage Out.
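The "most likely word to follow" point can be made concrete with a toy bigram model. This is a deliberate oversimplification (real LLMs use neural networks over tokens, not word-pair counts, and the training corpus below is made up for illustration), but it shows how the output is only ever a statistical echo of the training text:

```python
from collections import Counter, defaultdict

# Hypothetical training text: marketing copy outnumbers measurement-based text.
corpus = (
    "expensive cables sound better . "
    "expensive cables sound better . "
    "measurements show cables sound identical ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word -- no understanding involved."""
    return following[word].most_common(1)[0][0]

print(predict_next("cables"))  # "sound" - it follows "cables" in every sentence
print(predict_next("sound"))   # "better" - the marketing copy simply outvotes the measurements
```

Because the marketing sentences outnumber the measurement sentence two to one, the model "confidently" continues "cables sound" with "better". Garbage in, garbage out, exactly as described above.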
 
"...AI just aggregates and synthesizes whatever is out there on the internet... Garbage In, Garbage Out."
Also, AI hallucinations abound. I once asked ChatGPT to create a chart comparing multiple integrated amplifiers in a company's product line and how their feature sets, pricing, dimensions, and names evolved over time. I wanted this information so I could run searches on HifiShark, compare what was on the market, and make potential purchasing decisions.

The first two products ChatGPT listed were genuine; the next three I had never heard of. I had to ask ChatGPT to double-check its work and make sure the last three were not hallucinations. ChatGPT informed me that indeed the last three did not actually exist and were simply "extrapolations" that it had constructed.

Be careful, especially when relying on AI to summarize facts. A lot of what comes up will not be facts at all.
 
My experience with Copilot AI is that it frequently makes mistakes and always, always tries to suck up to you. But it makes really cool avatars.
 
I personally think this technology can be useful to us.
Maybe so, but I am still waiting to encounter an AI quote that is actually useful to me.
And I subscribe to @voodooless' idea of having AI results marked as (special?) quotes and of having a policy to keep these passages "short".
So far I use AI as an "assistant" to improve the linguistic quality of my posts as a non-native speaker. Not by posting translations, but by using its output as suggested amendments.
 