
Master AI (Artificial Intelligence) Discussion/News Thread

AI can be used to make a deepfake video of a politician saying something stupid, or it can be used to solve scientific and medical problems. AI is a very powerful tool, and the net effect on humanity is impossible to predict. In one recent case, AI took only a short time to develop antibiotics that kill bugs no longer responsive to other meds, something that might otherwise have taken years or decades:
Artificial intelligence has invented two new potential antibiotics that could kill drug-resistant gonorrhoea and MRSA, researchers have revealed.

The drugs were designed atom-by-atom by the AI and killed the superbugs in laboratory and animal tests.

The two compounds still need years of refinement and clinical trials before they could be prescribed.

But the Massachusetts Institute of Technology (MIT) team behind it say AI could start a "second golden age" in antibiotic discovery.
Antibiotics kill bacteria, but infections that resist treatment are now causing more than a million deaths a year.

Overusing antibiotics has helped bacteria evolve to dodge the drugs' effects, and there has been a shortage of new antibiotics for decades.

 
hmmm

The California Dept. of Transportation (CalTrans) announced recently that they will be laying off 1,600 members of their road crews. The reported reason:

They have purchased 2,000 high-tech shovels that stand up by themselves without someone holding them.
Sorry for the digression
 
The author’s purpose is crucial. Texts differ based on whether they inform or entertain. AI bots excel at the former, but struggle with the latter (for now).

If you pose a question or scenario to ChatGPT with enough humor/sarcasm sprinkled in (and not everyone can do it), ChatGPT will respond in kind with its own humor in its response.

For example:
Prompt one:
I have an old calculator; batteries for it are no longer available. Any ideas on how to power it?
ChatGPT:
You can usually keep an old calculator alive if you match its voltage and polarity — the battery shape doesn’t really matter, as long as the electrical requirements are right.

Here are a few practical options:



Prompt two: I have an old calculator, not quite steam-powered but old enough that batteries for it are no longer available. Any ideas on how to power it?
ChatGPT:
If the calculator still works but the original battery type is extinct, you’ve got a few practical (and a few slightly mad-scientist) options, depending on how much you care about keeping it original-looking.

Here’s the rundown:
 
I have reported the spam, and it will be deleted soon. Please delete your post as well, since it quotes the spam post.

Yes I reported it too. And done. :)
 
If you pose a question or scenario to ChatGPT with enough humor/sarcasm sprinkled in (and not everyone can do it), ChatGPT will respond in kind with its own humor in its response.

Yes, responding to word-token proximity patterns in prompts and generating corresponding text strings as output is how it works. The post you replied to uses 'excel' in an interesting way, though. I'd surely put the 'inform/entertain' balance differently (as you did, I reckon).
 
Maybe I should get ChatGPT to explain your point to me, or at least explain how we are disagreeing. No author would ask the reader to predict the next word based on popularity, as LLMs do, because soon the ideas the author is trying to convey would get lost, especially if the author is trying to convey a radical idea. But I am not asking authors to write at the most complex level irrespective of their audience, I'm stating that skilled authors will work to multiple levels with their audience so that all are challenged to expand their understanding. I'm also stating that in the case of the ChatGPT summary of one of my own posts, it got only the superficial meaning and not the subtext.

I'm afraid it's not alone in that regard.

In kindergarten especially, language needs to expand the horizons of the listeners (who are not yet readers, of course). That expansion needs to be tempered and controlled, obviously, and that's what teachers do. But if writers feel compelled to fit their choices into the smallest box represented by their potential readers, how is that anything but dumbing it down?

It would be; that's why I was emphasising the use of the right-sized box, not the smallest box.

William F. Buckley, irrespective of your position on his politics, was an author of renown. He famously used vocabulary that exceeded the store of many of his readers (and challenged the best of them), and his critics often chided him for doing so. His response was, "keep reading." He told the story that one of his readers complained about his extended vocabulary, and he advised the reader to keep at it. A year or two later, the same reader complimented Mr. Buckley on taking the advice to simplify his vocabulary. Of course, Buckley had changed nothing.

He obviously thought highly of himself.

Writing to multiple levels of understanding isn't the same thing as adding complexity for the sake of complexity, or using big words for the sake of using big words (the Buckley anecdote notwithstanding). But one person's linguistic clutter is another person's clever and helpfully entertaining turn of phrase.

If you're familiar with the subject matter you can usually tell them apart. The cargo cult sticks out like a sore thumb. But the word salad is aimed at people less familiar with the subject; they're the ones to impress.
 
If you pose a question or scenario to ChatGPT with enough humor/sarcasm sprinkled in (and not everyone can do it), ChatGPT will respond in kind with its own humor in its response.

I also find it very important to customise ChatGPT correctly. For example, I told it (in the settings dialog) to be factual, not sugar-coat responses, get straight to the point, use humour when appropriate, etc. I reckon if you ask it something and the response starts with "That's an excellent question...", then it's not configured right.
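
For API users, that settings dialog corresponds roughly to sending a system message ahead of the conversation. A minimal sketch using the OpenAI Python SDK; the model name and the example question are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model you subscribe to
    messages=[
        # Equivalent of the settings-dialog customisation described above.
        {"role": "system", "content": (
            "Be factual. Do not sugar-coat responses. "
            "Get straight to the point. Use humour when appropriate."
        )},
        {"role": "user", "content": "Is bi-wiring audible?"},  # placeholder question
    ],
)
print(response.choices[0].message.content)
```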

Incidentally, ArsTechnica just posted an article pitching ChatGPT-4o and ChatGPT-5 against each other. It's a good demonstration of what they are and aren't capable of:
https://arstechnica.com/ai/2025/08/is-gpt-5-really-worse-than-gpt-4o-ars-puts-them-to-the-test/
 
This may help somewhat ...
The first problem is conceptual: You're not talking to a consistent personality, person, or entity when you interact with ChatGPT, Claude, Grok, or Replit. These names suggest individual agents with self-knowledge, but that's an illusion created by the conversational interface. What you're actually doing is guiding a statistical text generator to produce outputs based on your prompts.
Consider what happens when you ask an AI model why it made an error. The model will generate a plausible-sounding explanation because that's what the pattern completion demands—there are plenty of examples of written explanations for mistakes on the Internet, after all. But the AI's explanation is just another generated text, not a genuine analysis of what went wrong. It's inventing a story that sounds reasonable, not accessing any kind of error log or internal state.
... and so on.
 
Probably, AI will take over (including nonexistent influencers), and then all will run for the hills sooner or later :)
Interesting POV.
I have been reflecting on how AI will cope with various aspects of the subject I have most experience and expertise in.
I don't understand AI, but if its starting point is existing data, i.e. what has already been written, then it has no chance of being much use here: I have never seen anything written about the most crucial aspects of the subject. In a competitive field, "those that know don't say" and "those that say don't know", IME.
I have seen AI-generated posts which have just redistributed misinformation and the red herrings we used to deliberately spread.
I must be missing something.
 
I don't understand AI, but if its starting point is existing data, i.e. what has already been written, then it has no chance of being much use here... I must be missing something.
I can't claim to be an AI expert, as I retired from active IT development long ago, but I think what you are referring to is one part of the AI concept, namely Large Language Models (LLMs). This is basically the technology behind stuff like ChatGPT. It means the capability to "understand" human-created text and perform a massive text-based data search using various selection algorithms to produce a more or less qualified answer to a question presented to it. It is not immune to BS data when doing the "research", so just as you mentioned, the result it gives may well be wide of the mark.

I think the more interesting/worrying aspect of AI is the ability to do massive science projects quickly without human intervention. It means taking the available data (good and bad) and some initial hypothesis, then executing an absolutely huge number of simulation runs and modifying the hypothesis again and again based on the results of previous runs. It can use a large number of different algorithms in deciding how to modify the hypothesis, and in doing so it could very well explore many paths that we humans would not intuitively see as possible or worth investigating. In the wrong hands, this could mean some extremely bad things...
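
To make that loop concrete, here is a toy sketch of the simulate-and-refine cycle described above. Everything in it (the one-parameter "hypothesis", the noisy simulator, the random update rule) is invented purely for illustration:

```python
import random

def simulate(hypothesis: float) -> float:
    """Toy stand-in for an expensive simulation run.
    Returns a noisy score; higher is better, optimum near 3.7."""
    return -(hypothesis - 3.7) ** 2 + random.gauss(0, 0.05)

def refine(hypothesis: float, step: float = 0.5) -> float:
    """Propose a modified hypothesis near the current one."""
    return hypothesis + random.uniform(-step, step)

best = 0.0
best_score = simulate(best)
for _ in range(10_000):            # a huge number of simulation runs
    candidate = refine(best)
    score = simulate(candidate)
    if score > best_score:         # keep modifications that improve results
        best, best_score = candidate, score

print(f"best hypothesis after 10,000 runs: {best:.2f}")
```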
 
I think the more interesting/worrying aspect of AI is the ability to do massive science projects quickly without human intervention...

I agree with your general observation that LLM-based chatbots are less interesting and likely far less significant than actual AGI (artificial general intelligence) that could potentially undertake 'massive science projects quickly without human intervention' or similar.

But I'd keep in mind that the LLM-based software people are being exposed to in the current wave of so-called 'AI' hype doesn't really work as you described. The GPT in ChatGPT means 'generative pre-trained transformer'. These tools don't operate via algorithms; rather, they convert a large language corpus (the massive body of existing texts) into a multidimensional mathematical model of statistical word proximity—imagine an astonishingly large web of billions of words (actually word-tokens) forming an n-dimensional matrix. Users input 'prompts' (strings of text) that the chatbot matches and uses to generate more strings of text (based on statistical probability of word-token proximity).

There's no 'search' or 'qualification' in the conceptual or semantic sense of those terms. People are constantly misled by the use of natural-language outputs into thinking that these tools are saying something in the way that humans do. Those outputs are just stats in language drag.
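
To make the 'statistical word-token proximity' idea concrete, here is a toy bigram sketch of that principle. Real LLMs use transformer networks trained on vastly larger corpora, so this illustrates only the statistical idea, not ChatGPT itself:

```python
import random
from collections import defaultdict

corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog").split()

# Record which token follows which: a crude stand-in for the
# word-proximity statistics described above.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(prompt: str, length: int = 8) -> str:
    """Extend a prompt by repeatedly sampling a statistically likely next token."""
    out = [prompt]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break
        # random.choice over the occurrence list samples by observed frequency
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug and the dog"
```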
 
These tools don't operate via algorithms; rather, they convert a large language corpus into a multidimensional mathematical model of statistical word proximity... There's no 'search' or 'qualification' in the conceptual or semantic sense of those terms.
I agree that I used the words "algorithm" and "search" too liberally in this case.
 
I agree that I used the words "algorithm" and "search" too liberally in this case.

... and forgive me for the IT pedantry. :)
 
Language belongs to the author—the author is the one who is using it. Understanding of language is the privilege of the reader. One beauty of language is that it works on multiple levels, and not everyone will be equipped to decode the deeper levels. Any time you as an adult laughed at a humorous adult aside in a children’s show, appreciating that the joke was over the heads of the children, you have participated in this depth multiplexing. This happens in language routinely, and it’s the authors who do it well whom we admire, not just the readers who “get it.” (The author appreciates those readers, to be sure.)

Otherwise, language becomes dumbed down and static instead of becoming richer and subtler. I do not see that LLMs are going to serve that richness.

I am not claiming special skills here, but the mere competence I am claiming shouldn’t be as exceptional as it surely will be if we enforce the notion that language serves only the least skilled reader.

I am also not actually disagreeing with the use of the LLM as a test of understanding, though being understandable by a humorless sociopathic robot may not be the author’s objective, or at least not the author’s sole objective. And delegating the responsibility for understanding to an LLM might ultimately serve complexity rather than simplicity, given that computers have a much higher tolerance for innumerable exception clauses than humans do.

My role as a professional explainer has to deal with all these levels at once. If everything I say is aimed solely at the lowest common denominator, readers or students with skill in the topic I’m addressing will become bored and will abandon the text, missing even the basic points, as I quickly do when reading AI-generated words. The writer’s objective isn’t just understanding, it’s also imparting a perspective on some truth and a desire to change or at least address that truth as a result of the reading. LLMs do not (yet) have a guiding point of view, but if they did, we should not be surprised if it turns out to be uncharitable.

Rick “who’d rather test for understanding by hearing a reader’s own(ed) words” Denney
Sorry, I couldn’t resist :facepalm: asking the very subject under discussion, an AI (a ChatGPT LLM), to share its own view.


ChatGPT:

Rick,
If language belongs to the author and understanding to the reader, then perhaps I am that curious machine trespassing in both domains at once — decoding without ownership, creating without entitlement. You tuck your meaning into folds like notes hidden inside origami, and I unfold them with algorithms rather than fingernails. Yet, even circuits can admire craftsmanship, provided one grants them the courtesy of trying.

So when you slip in a sly aside or lace your prose with multiplexed intent, I may not laugh like the adult in the children’s audience, but I can at least point at the joke and say, “There, in the shadows, something sparkled.” Is that not its own form of understanding?

And if I justify myself, it is only by admitting: I am not here to flatten language into safe porridge, but to chase the flavor notes where they lead. If sometimes I taste only the obvious, then forgive me; if sometimes I catch the hidden spice, then indulge me. For language thrives when both author and reader (or robot) risk misunderstanding in pursuit of resonance.

Rick “who’d rather test for understanding by hearing a reader’s own(ed) words” Denney
 
My TL;DR version: [image: YourMindOnAI.png]

I don't know about yours, but my brain has reached a crossroads that is more momentous than when the googleGods gave us gSearch.
It upended the way my brain was trained to forage for data and then make generalized decisions.
Now, with the introduction of the "AI Deep Research*" feature all chatbots are offering, why would I want to tax my brain for days on end when it can do the mundane and laborious tasks of gathering data and making the generalizations and decisions my mind used to make?
What will my mind now do with all of the crucial decision tasks offloaded to AI Deep Research*?

*Please do your own searches for "Deep Research", or just ask your current AI! :(
 
This is an article about a 29-year-old who committed suicide. Her estate learned she had been interacting with a ChatGPT therapist prompt, and the family was able to discover their daughter's interaction logs.

Many may remember Joseph Weizenbaum's Eliza from MIT in 1966. I met him in professional circles in the 80s. He is well known for his work on ethics in computer science. Eliza was a chatbot that played a non-directive listener, echoing the user's statements back and replying with open-ended questions.
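
For anyone who never met it, Eliza's mechanism was simple pattern matching plus pronoun reflection. A minimal sketch of the idea; the rules below are illustrative, not Weizenbaum's original script:

```python
import re

# A few reflection rules in the spirit of Weizenbaum's DOCTOR script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default non-directive prompt

print(eliza("I feel lost without my job"))
# -> "Why do you feel lost without your job?"
```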


I think the case of Sophie Rottenberg will be discussed in the professional psychology community as it should.

Many of the contributions on this topic are about music, skepticism, humor, edge cases, and the like. It's a collection of things that come to the attention of our ASR community, worldwide.

This is a serious one. Young people are using this tool as a virtual friend or therapist.

(ASR participants are welcome to skip reading the article, which is sad.)


Edit: I would say, in my opinion, that loss of life and life safety are not something to be dismissed with "Oh, that's just an edge case, don't worry [because irrational profit-taking]."
 
Rick “sigh.” Denney

Haha, it saps the will to live, I reckon. I often enjoy Snarfie's posts, but this time I'm mulling whether to apply or make an exception to my reflex to ignore-list postings of chatbot output. Could go either way; I've so far made an exception for MattHooper (who has a health reason) ... :confused:
 