Smart people figure out the strengths and weaknesses of LLMs and put them to work.
I am genuinely shocked by how wrong AI can often be, yet it never gives the impression it may be mistaken. It declares rubbish as fact... basically it's the new fallible god for the gullible!
I'm actually impressed by how often they're right, and at times even insightful... but anybody unquestioningly trusting an LLM is either experiencing some kind of mental health issue or is just an idiot in general.
Still, who is even advocating for that!? What a useless strawman argument.
Yeah, some people have gotten themselves into bad situations because of LLMs. And it's pretty easy to trick LLMs. But then again, you can make anything flake out if you use it wrong.
It's pretty easy to wreck a brand-new luxury car, too. Aim it at a telephone pole and floor the gas pedal. Or just pour sugar in the gas tank. Let me know how it goes. In other news... did you know knives can hurt you? Especially if you hold the pointy part instead of the handle? And that you can break your thumb with a hammer if you're not careful? Yeah, that's just kind of how tools are. (Unlike LLMs, knives and hammers don't even have warning messages...)
Last thing I'll say is that not all LLMs are equal. I'm not sure if this applies to you, but it's abundantly clear that there's a real Dunning-Kruger thing going on with a lot of people who play around with free LLM crap, get crappy results, and decide that all LLMs are crap. Sort of like a person who drives a 1987 Yugo and decides that all cars must be crap.
I have a $20/mo ChatGPT subscription. While not perfect (what is?), it's noticeably better than the free "AI" thing baked into Google's search results. The state-of-the-art LLMs need somewhere in the neighborhood of seconds to minutes of computation time on Nvidia's massive datacenter GPUs to answer complex questions. Whatever the heck Google's doing there, there's zero chance they're throwing a lot of GPU time at it, considering their search results still come back in like 100 ms.