
Official policy on use of AI

Except that it can't do maths and will spew nonsense at you instead.
Be sure to check the working. With many LLMs, even if it can set up the right calculation and arrive at the right answer it will likely tell you an incorrect value up front because it's making it up then retrospectively trying to justify it.
Gemini:
[attached screenshot of Gemini's working]


I had a different sort of nonsense when trying DeepSeek on a puzzle, which tried a few incorrectly applied methods, got it wrong, tried to verify its answer, hit a dead end, then concluded "I know the answer because it's been published, therefore it's this. QED." I suppose at least it was the correct number?

I think in a later generation, basic maths will be a solved problem, but for now it's the thing I've had the least success with.
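If you do want to check the working, the simplest option is to redo the arithmetic yourself outside the chat. A trivial, made-up example of the kind of check I mean, in Python (the 89 dB "claimed" figure is invented for illustration; the formula is just the usual power sum for two uncorrelated sources):

Code:
import math

# Hypothetical LLM claim: two uncorrelated 85 dB SPL sources sum to 89 dB
claimed = 89.0
l1, l2 = 85.0, 85.0

# Redo the calculation: power (incoherent) summation of two SPL levels
recomputed = 10 * math.log10(10 ** (l1 / 10) + 10 ** (l2 / 10))

print(f"claimed: {claimed:.2f} dB, recomputed: {recomputed:.2f} dB")
print("matches" if abs(claimed - recomputed) < 0.1 else "does not match - check the working")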
 
Be sure to check the working. With many LLMs, even if it can set up the right calculation and arrive at the right answer it will likely tell you an incorrect value up front because it's making it up then retrospectively trying to justify it.
What I said.
 
I'm not sure what the ASR mods have seen, but as a mod of an audio-related subreddit.... we've had like ONE person trying to pass off AI slop as human.
We get a report every week or so about someone posting AI in a bothersome manner. The usage is increasing and so are the reports. Hence this thread.
 
7. I am in arguments on another forum and surprisingly, AI has been elevated to an authoritative source! Often it is the only counter-answer given. We won't be going there here.
People have been quoting AI as if it's an authoritative source for a while now, and it still surprises me how much trust people put in it.

LLMs are the drunk guy at the end of the bar - on the surface, what they say sounds like it makes sense, but the moment you dig into any of it you realise that they have no understanding of anything they're talking about. This is literally true: LLMs are just the evolution of autocomplete. Instead of predicting the next word based on the sentence you're writing and basic word probabilities, they're predicting the next three paragraphs, a word at a time, based on the question you wrote, an unseen prompt their unholy creator wrote, and the entire contents of the Internet, good and bad. They have no understanding of anything: not of concepts, not of sentences, not of what individual words mean, nor really of what a word even is.
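To make the "evolution of autocomplete" point concrete, here is a deliberately crude sketch of the idea - a toy next-word predictor that just samples the next word based on how often it followed the previous one in some text. This is not how any real LLM is implemented (they use neural networks over tokens, not word-frequency tables); it only illustrates the "predict the next word, append it, repeat" loop with no model of meaning. The training text is made up.

Code:
import random
from collections import defaultdict

# Made-up "training" text; any text works
training_text = "the amp measures well the amp sounds fine the dac measures well too"
words = training_text.split()

# Record which word was observed following which
following = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    following[prev].append(nxt)

def autocomplete(start, length=8):
    """Repeatedly sample a next word from the observed followers of the last word."""
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # frequency-weighted, since duplicates stay in the list
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the amp measures well the dac measures well too"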
 
Even ChatGPT says not to trust it as an authoritative source:

[attached screenshot of ChatGPT's response]


But people quote AI selectively. It's a problem not unique to AI, considering the comic below is many years old:

[attached comic]
 
I am genuinely shocked how wrong AI can often be, while NEVER giving the impression it may be mistaken. Declaring rubbish as fact... basically it's the new fallible god for the gullible!
Just wait until such AI pursues a political career ;) (in some cases one could suspect it already has)
 
Well, I can guarantee that you share most genes with Einstein :)
And after two or so years of all media calling LLMs AI, it's too late anyway.
 
Well, I can guarantee that you share most genes with Einstein :)
And after two or so years of all media calling LLMs AI, it's too late anyway.
I do accept that once LLMs get really good they will be indistinguishable from AI to human plebs.
But they still will not be AI. Ever.
 
Maybe. But it doesn't change their "tragicomic" careers...
"Mislabeling" is not new, BTW; Orwell described it quite well...
 
I accept that once LLMs get really good they will be indistinguishable from AI to pleb humans. But they will NEVER be AI...
And yet again I'm dragging the thread away from its purpose.
 
Apologies for the double post... blame the AI, it made me do it!
 
AI is programmed by people, so if the people programming it are GREEDY FILTHY PIGS, then it's no surprise if it acts in a twisted, immoral way?

Sure, it's all very complicated when you wander from unity?

Google's AI search engine is excellent at gathering all the information online, including the countless rubbish the bullies have smeared all over our forums, blogs, news pages and so on... so it appears not to be intelligent in the sense the word might imply to most of us?

Ask Google AI a few questions and experience for yourselves how many times you are given answers that cancel each other out.

I still use AI as a search engine, but only as an assistant; sometimes my assistant is a pure imbecile, other times not completely useless and a bit helpful, with a hint of imbecile.

Rewording the questions greatly helps.
 
AI is going to be fine! It's not like we are all going to become a legion of super egos, high on a supply of endless compliments from an unquestioning lackey, but so cognitively sedentary that we are unable to remember how to put one foot in front of the other without paying Sam Altman to remind us :p
 
There are a lot of humans who write nonsense even though they have "intelligence." So that is not a test of whether current engines are "AI."

Reminds me of talking to our head of research at Microsoft some 20 years ago about voice recognition. At the time, it was being ridiculed as never becoming good enough. His answer was that it would be as good as a human within a few years, and that it didn't need to be better than that, so perfection was not the goal. And right he was.
 
Almost anything only ever needs a certain degree of proficiency to achieve mass adoption, because all it needs to do is be more convenient (and of course affordable) than doing without.
 
And as half the population are of below-average intelligence, that sets a pretty low bar for what might be considered intelligent. ;)
 