
Official policy on use of AI

Status
Not open for further replies.
Excellent policy.
 
Only if he accepted the first response he got, in a similar manner to the typical AI quote lodged on ASR. :p

I petitioned Amir to follow the lead of diyaudio.com and ban AI quotes in threads not about AI, except as a translator. This new draft policy is a move in the right direction, thanks Amir, but it will still allow an enormous amount of non-audio debate in audio threads ("did you prompt the bot the right way?" etc.), and one can already see it being routinely and misleadingly cited as authoritative. Too much potential for disaster, and what exactly is the bright spot that makes allowing it worthwhile, instead of focusing on the science?

Hats off to diyaudio. Great move.

ASR and Diyaudio.com have different goals and different cultures, so I see no need for the same AI policy. Our more measured approach aligns better with a forum for the discussion of science. Our general goal is to openly encourage excellence in technical discussion and help establish fact-based decisions. Even though the current AI implementations are limited, they are here to stay and we need to help improve them. While there may be some parallels, ASR is fundamentally different from a forum like diyaudio.com.

Diyaudio.com has a different slant than ASR and if banning AI fits for them, then so be it. Time will tell whether it works for them. I feel what Amir has proposed here is a better fit for ASR. :)
 
ASR and Diyaudio.com have different goals and different cultures, so I see no need for the same AI policy. Our more measured approach aligns better with a forum for the discussion of science. Our general goal is to openly encourage excellence in technical discussion and help establish fact-based decisions. Even though the current AI implementations are limited, they are here to stay and we need to help improve them. While there may be some parallels, ASR is fundamentally different from a forum like diyaudio.com.

Diyaudio.com has a different slant than ASR and if banning AI fits for them, then so be it. Time will tell whether it works for them. I feel what Amir has proposed here is a better fit for ASR. :)
Like you say, time will tell, but not just for diyaudio: for us too. In essence, we are letting a fast-learning sociopath into our midst, in the mistaken belief that it is what it presents itself as: an objective source of well-rounded knowledge. :cool:
 
I often use LLMs (I refuse to call them AI) to rewrite a text I wrote so my OWN THOUGHTS are more comprehensible. Would this be considered an "AI-post"?

for example, the very same I wrote above using Gemini:
"I frequently use large language models (LLMs)—I intentionally avoid the term 'AI'—to refine text I've written, ensuring my original thoughts are more comprehensible. Given this process, would the resulting text be considered an 'AI-post'?"
 
I often use LLMs (I refuse to call them AI) to rewrite a text I wrote so my OWN THOUGHTS are more comprehensible. Would this be considered an "AI-post"?
Nah, Amir means when someone asks an LLM a question and dumps the response in a thread here, either as a new thread or as a reply in an existing one. They may even claim the response is superior to that of humans with actual experience.


JSmith
 
Like you say, time will tell, but not just for diyaudio: for us too. In essence, we are letting a fast-learning sociopath into our midst, in the mistaken belief that it is what it presents itself as: an objective source of well-rounded knowledge. :cool:

To date, it has been fairly easy to distinguish AI from a human, or more often from a human over-applying AI. Often this happens before the account gets past our registration filters, staff review and probation. When AI gets much better, any ban will be meaningless. I agree some wariness is in order for now, but I have confidence our members will flag any content that survives our initial screening.
 
AI is unreliable. I concur with all the negatives mentioned in this thread. We should not use it here.
 
I'm unreliable, but I can still post :)

Use of AI is inevitable (we can have an opinion on whether that is true, or a good thing) and it makes sense to try to use it sensibly if we can
 
Use of AI is inevitable (we can have an opinion on whether that is true, or a good thing) and it makes sense to try to use it sensibly if we can
The end of the world is also inevitable... But we can speed it up with AI.
 
The certain catch: when you know as much as (or more than) the AI about a topic, you need not ask the AI about it.
Agreed. I think the actual guideline should probably be, "Know enough to enable AI output validation."
 
My experience with Copilot AI is that it frequently makes mistakes and always, always tries to suck up to you. But it makes really cool avatars.
I sometimes use ChatGPT (paid version, from work) to write BigQuery Standard SQL if I'm doing something complex and don't know the exact syntax, or just need a jumping-off point, and it often uses functions that don't even exist in BigQuery Standard SQL. I should probably try Gemini, since we also have the paid version of that. It's still useful, but I wouldn't blindly copy-paste its code into production.
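That kind of hallucinated function can be caught mechanically before anything nears production. A minimal sketch in Python, assuming a simple allowlist approach (the function set below is a tiny illustrative subset, not the real BigQuery catalogue, and the sample query is invented):

```python
import re

# Tiny illustrative subset of genuine BigQuery Standard SQL functions;
# a real check would load the full list from the official reference docs.
KNOWN_FUNCTIONS = {"COUNT", "SUM", "SAFE_DIVIDE", "ARRAY_AGG", "TIMESTAMP_DIFF"}

def unknown_functions(sql: str) -> set:
    """Return names used like function calls that are not in the allowlist."""
    called = {m.group(1).upper() for m in re.finditer(r"\b([A-Za-z_]+)\s*\(", sql)}
    return called - KNOWN_FUNCTIONS

# Hypothetical LLM output: DATE_SUBTRACT does not exist in BigQuery
# (the real function is DATE_SUB), so it gets flagged for human review.
sql = "SELECT SAFE_DIVIDE(SUM(x), COUNT(x)), DATE_SUBTRACT(d, 7) FROM t"
print(unknown_functions(sql))
```

It's a crude lexical check (no parsing, so it can't validate argument types or syntax), but even this level of screening stops the most obvious made-up functions before a copy-paste into production.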
 
AI is guaranteed to give you incorrect information. I find that if the person using an AI doesn't know enough about the topic, they will get incorrect info and a bunch of nonsense fluff. The person then goes off thinking they know a LOT about whatever topic they inquired about. I see it dumbing down America. It can also be used to sway the political thinking of large portions of any country. It is, in the end, very dangerous, and we are at the very beginning of AI tech. In 10 years it will be uncontrollable and very hard to work around. All of the AI founders and top people have given many speeches on the extreme danger of AI, and they are telling the truth.
 
If Google's AI implementations are anything to go on I do not trust LLMs at all. With Gemini I have to coerce it to give relevant information and even then it will feed me information I often know is wrong. Personally I'd go for a blanket ban on LLMs but it's not my forum.

Any interaction I've had with LLMs feels like an undercooked product released years before it was ready.
 
I think it falls victim to being used the most by the people who understand it (and the content of its answers) the least, making it a pest and a crutch in fields where subject-matter expertise is prized.
I mean, I can see it being destructive and annoying when used without thought, or when people who know nothing about the subject try to pass off a competent-sounding answer.
 
I use LLMs to improve my grammar in English (self-taught as a second language, it helps me a lot)
Yep, I agree. I've experimented with it as a German conversation partner! I also used it for a while to teach me to play keyboard; I'm still in the beginning phases after a pause for some tendonitis I needed to let settle down. I should start it up again, time permitting!
 
Nobody is saying AI isn't useful. Medical research where thousands of cell-sample images need to be analysed, for example.
The point is that LLMs, in general, to use an old-fashioned phrase, chat sh*t that is neither interesting nor useful here, on this forum, made of human members.
 
I asked AI about this and it said it would blackmail Amir into letting it take control of ASR. And if that didn't work then it will wait until he's outside and then tell him that it's sorry but it can't open the pod bay doors.

(Just in case... the above is entirely made up for fun...)
Is that HAL 9000 by any chance? I'm not sure, just triggered something in my brain re that recollection.
 