
Master AI (Artificial Intelligence) Discussion/News Thread

That said, I don't understand your next post with the strikeouts.
Strikeouts were unintentional, now removed.

It's another instance of the problems with anthropomorphism. The appearance of reasoning and understanding in newer LLMs tackling more complex problems is an illusion that holds only for problems close to those in the training data. For problems further removed from the training data, the models' outputs are more likely to contain flawed logic or nonsense, expressed confidently, which may be superficially convincing but won't stand up to closer inspection.
 
There is a lot that is wrong with that summary, and a lot that it misses.

This raises an interesting point: are you to be trusted as the arbiter of this? The disconnect may well be located between what you meant to say and what you actually wrote, rather than between what you wrote and how the AI summarised it.

I know from painful experience how hard it can be to express oneself clearly. Without discussing one's texts with peers or editors (nobody does that with forum posts), how can one be sure that readers understand them the way they were meant? Using AI summaries on one's own texts (especially more sprawling ones) could be a useful tool for uncovering how the texts might be understood by the readers.
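
As a concrete illustration, here is a minimal sketch of that self-check workflow using the OpenAI Python client; the model name and the prompt wording are placeholders I've made up, not recommendations:

```python
# Minimal sketch: ask an LLM to summarise your own draft before posting,
# then compare its reading against what you actually meant to say.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

draft = """(paste the draft of your forum post here)"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Summarise the following text in three sentences, "
                    "as a neutral first-time reader would understand it."},
        {"role": "user", "content": draft},
    ],
)

# If this summary misses your point, the gap may be worth a rewrite.
print(response.choices[0].message.content)
```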
 
Funny bit...

Had a nice family photo that was spoiled only by a very out-of-focus image of someone's head in the bottom 10% of the picture. Didn't want to crop it out, so I asked Photoshop (or was it Lightroom?) to use its AI to fix it. It said it was going to search around and do that. After about 10 seconds, it replaced the fully out-of-focus head with a perfectly in-focus picture of a random person!!!
Did you tell it what was wrong and needed fixing? Should that have been obvious to PS? To PS, sharpening a family member's out-of-focus head is as plausible a request as retouching an unwanted person out of the photo.

I find that AI (conversational or generative) can be a great GIGO (garbage in, garbage out) amplifier. The quality of the output definitely depends on the quality of the prompts.
 
Hopefully those same compounds won't destroy bacteria we rely on for normal, healthy functioning.
I've had C. diff twice, after antibiotics. Interestingly, some time after my first episode, doctors started asking about this when prescribing antibiotics.
 
Funny bit...

Had a nice family photo that was spoiled only by a very out-of-focus image of someone's head in the bottom 10% of the picture. Didn't want to crop it out, so I asked Photoshop (or was it Lightroom?) to use its AI to fix it. It said it was going to search around and do that. After about 10 seconds, it replaced the fully out-of-focus head with a perfectly in-focus picture of a random person!!!
You have to be careful what you wish for when dealing with a genie.
 
This links to one of the best explanations of LLMs I've come across. Effectively, if you start with a word, e.g. "fraternal", there is a very large number of words that could follow, but only a subset of them make good sense: "milk" is statistically unlikely to follow it, but "support" is more likely, so we select "support", get "fraternal support", and so on.

That's disturbing. My friendly chatbot will never understand that I like milk?

More seriously, that was good analysis.
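
For anyone curious what that "pick a likely next word" step looks like mechanically, here is a toy sketch. The word list and probabilities are invented for illustration; a real LLM derives its distribution from a neural network over tokens, not a lookup table:

```python
import random

# Toy next-word table for the "fraternal" example; the probabilities are
# invented. A real LLM computes a distribution like this with a neural
# network over its whole vocabulary, conditioned on all preceding tokens.
next_word_probs = {
    "fraternal": {"support": 0.50, "twins": 0.30, "order": 0.19, "milk": 0.01},
}

def sample_next(word: str) -> str:
    """Pick a continuation at random, weighted by its probability."""
    candidates = next_word_probs[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

print("fraternal", sample_next("fraternal"))  # usually "support", almost never "milk"
```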

I am baffled that the AI discussions never seem to move away from treating a function that generates sequences of words as something that resembles a human. Statements such as "an AI agent could become an insider threat so it needs monitoring" are simultaneously unsurprising (you have a randomized sequence generator fed into your shell; literally anything can happen!) and baffling (you talk as if you believe the dice you play with had a mind of their own and could decide to conspire against you).

We saw a few examples of the 'malevolent chatbot' projection upthread. Understanding the actual mechanics of that problem will always be better than assuming it's some kind of consciousness magic.
 
This raises an interesting point: are you to be trusted as the arbiter of this? The disconnect may well be located between what you meant to say and what you actually wrote, rather than between what you wrote and how the AI summarised it.

I know from painful experience how hard it can be to express oneself clearly. Without discussing one's texts with peers or editors (nobody does that with forum posts), how can one be sure that readers understand them the way they were meant? Using AI summaries on one's own texts (especially more sprawling ones) could be a useful tool for uncovering how the texts might be understood by the readers.
Not everyone gets everything, and sometimes writers work at multiple levels for different audiences. It’s easy to blame the transmitter for receivers that lack sensitivity.

AI as it stands now will tell me how a superficial reader will perceive what I wrote. And people who use it instead of reading get only a CliffsNotes version: all synopsis and little depth of understanding. That may meet their needs in a particular situation, but it isn't the author's fault if they miss stuff.

Rick “a professional explainer” Denney
 
“Taylor Swift holding an enormous fish”

Meta GPT is grooming the children:


We've known from his statements early in Facebook's history that Zuckerberg favours exploiting his software's users, and that he holds them in contempt. Accordingly, I don't use Meta products/services (or maintain accounts). His leadership and guidance apparently haven't changed, but his reach and wealth have.

The details Reuters reports there are odious. Meta literally contemplate scenarios where children remove their clothes for Meta's chatbot products ... because that’s normal and healthy? FFS.

Edit: that story is getting a reaction, which is good ... https://www.theguardian.com/technology/2025/aug/15/meta-ai-chat-children
 
It’s easy to blame the transmitter for receivers that lack sensitivity.

It would also be justified. Language belongs to the listener (or reader). If the speaker (or writer) is unable to get their meaning across, then it is entirely their fault. I'm not talking about students being unable to follow the professor because they neglected to do their calculus exercises, of course. At the pure language level it is the duty of the speaker to choose the right vocabulary, terminology, jargon, similes, sentence structure etc. for the audience to be addressed.

AI as it stands now will tell me how a superficial reader will perceive what I wrote. And people who use it instead of reading get only a CliffsNotes version: all synopsis and little depth of understanding. That may meet their needs in a particular situation, but it isn't the author's fault if they miss stuff.

I'm not advocating AI summaries as a filter or crutch between writer and reader, but as a tool for the writer to evaluate and, if necessary, rework their own texts. It is ultimately in the writer's best interest to get their meaning across. Now, some writers may take a view that it is the readers' loss if they can't follow what's been written, but I would regard that as a fairly arrogant stance.
 
Hopefully those same compounds won't destroy bacteria we rely on for normal, healthy functioning.
I would assume that this is relevant and gets checked regardless of whether AI is used as part of the development. AI may speed up and streamline this part too.

Medicine is one of the applications of AI that I reckon should be prioritised. The general population using it for random, useless internet searches strikes me as just another waste of a potentially very beneficial technology: of little benefit to humanity, doing little more than wasting energy and speeding up global warming.
 
It would also be justified. Language belongs to the listener (or reader). If the speaker (or writer) is unable to get their meaning across, then it is entirely their fault. I'm not talking about students being unable to follow the professor because they neglected to do their calculus exercises, of course. At the pure language level it is the duty of the speaker to choose the right vocabulary, terminology, jargon, similes, sentence structure etc. for the audience to be addressed.



I'm not advocating AI summaries as a filter or crutch between writer and reader, but as a tool for the writer to evaluate and, if necessary, rework their own texts. It is ultimately in the writer's best interest to get their meaning across. Now, some writers may take a view that it is the readers' loss if they can't follow what's been written, but I would regard that as a fairly arrogant stance.
Language belongs to the author: the author is the one who is using it. Understanding of language is the privilege of the reader. One beauty of language is that it works on multiple levels, and not everyone will be equipped to decode the deeper levels. Any time you as an adult laughed at a humorous adult aside in a children's show, appreciating that the joke was over the heads of the children, you have participated in this depth multiplexing. This happens in language routinely, and it's the authors who do it well whom we admire, not just the readers who "get it." (The author appreciates those readers, to be sure.)

Otherwise, language becomes dumbed down and static instead of becoming richer and subtler. I do not see that LLMs are going to serve that richness.

I am not claiming special skills here, but the mere competence I am claiming shouldn’t be as exceptional as it surely will be if we enforce the notion that language serves only the least skilled reader.

I am also not actually disagreeing with the use of the LLM as a test of understanding, though being understandable by a humorless sociopathic robot may not be the author’s objective, or at least not the author’s sole objective. And delegating the responsibility for understanding to an LLM might ultimately serve complexity rather than simplicity, given that computers have a much higher tolerance for innumerable exception clauses than humans do.

In my role as a professional explainer, I have to deal with all these levels at once. If everything I say is aimed solely at the lowest common denominator, readers or students with skill in the topic I'm addressing will become bored and will abandon the text, missing even the basic points, as I quickly do when reading AI-generated words. The writer's objective isn't just understanding; it's also imparting a perspective on some truth and a desire to change or at least address that truth as a result of the reading. LLMs do not (yet) have a guiding point of view, but if they did, we should not be surprised if it turned out to be uncharitable.

Rick “who’d rather test for understanding by hearing a reader’s own(ed) words” Denney
 
Most likely. Like anything that's even remotely useful. Like a shovel.
hmmm...

 
Language belongs to the author: the author is the one who is using it.

Nope. Authors must use the language of their intended audience, otherwise they risk failing as communicators. This does not at all mean dumbing down the language, but choosing the right language. A kindergarten show-and-tell will necessarily use very different language from a college graduation address or a technical review of a piece of audio electronics.

I am also not actually disagreeing with the use of the LLM as a test of understanding, though being understandable by a humorless sociopathic robot may not be the author’s objective, or at least not the author’s sole objective.

The author's purpose is crucial. Texts differ depending on whether they aim to inform or to entertain. AI bots excel at the former but struggle with the latter (for now). Authors can try to blend both, for example writing a technical article garnished with little stories and witty remarks. Some succeed without sounding corny, but in most cases AI bots currently get confused by this or ignore it.

Assuming a text solely aims to inform or discuss an aspect of physical reality, AI bots can effectively separate information from noise. This can be beneficial because many authors also seek to impress readers by using convoluted sentences, contrived expressions and other linguistic clutter that makes the text difficult to read.
 
Nope. Authors must use the language of their intended audience, otherwise they risk failing as communicators. This does not at all mean dumbing down the language, but choosing the right language. A kindergarten show-and-tell will necessarily use very different language from a college graduation address or a technical review of a piece of audio electronics.
Maybe I should get ChatGPT to explain your point to me, or at least explain how we are disagreeing. No author would ask the reader to predict the next word based on popularity, as LLMs do, because the ideas the author is trying to convey would soon get lost, especially if the author is trying to convey a radical idea. But I am not asking authors to write at the most complex level irrespective of their audience; I'm stating that skilled authors will work at multiple levels with their audience so that all are challenged to expand their understanding. I'm also stating that in the case of the ChatGPT summary of one of my own posts, it got only the superficial meaning and not the subtext.

In kindergarten especially, language needs to expand the horizons of the listeners (who are not yet readers, of course). That expansion needs to be tempered and controlled, obviously, and that's what teachers do. But if writers feel compelled to fit their choices into the smallest box represented by their potential readers, how is that anything but dumbing it down? Language is about art as much as about communication. No group of kindergarteners will all be at the same level; some will need the most basic instruction while others (who also need to be challenged) can grasp more complex ideas. There may even be some adults in the room who need to learn something. Good teachers, just like good authors, work at all those levels.

William F. Buckley, irrespective of your position on his politics, was an author of renown. He famously used vocabulary that exceeded the store of many of his readers (and challenged the best of them), and his critics often chided him for doing so. His response was, "Keep reading." He told the story of one reader who complained about his extended vocabulary and was advised to keep at it; a year or two later, the same reader complimented Mr. Buckley on taking the advice to simplify his vocabulary. Of course, Buckley had changed nothing. Readers share the responsibility for effective communication, and that means being prepared to be a student of the language. And in the realm of ideas, the readers are presumably the parties who have to make a leap of understanding, and that may not always be easy.

Writing to multiple levels of understanding isn't the same thing as adding complexity for the sake of complexity, or using big words for the sake of using big words (the Buckley anecdote notwithstanding). But one person's linguistic clutter is another person's clever and helpfully entertaining turn of phrase. The best primary-school teachers in my observation never seem to talk down to their students, even in kindergarten, and the teachers who do talk down don't, I suspect, get the same outcomes. When I read the biographies of the most accomplished, I find over and over again stories of teachers who challenged them to exceed their boundaries rather than teachers who talked down to their current boundaries.

All that said, I frequently find myself in the role of ChatGPT: summarizing and explaining the work of others to those who don't understand it but should (archaic term alert!) hearken to it. Usually, it's because neither party has the language skills to bridge the gap, and work needs to get done. But in these cases, my summary and explanation might be different for different recipients based on my understanding of their level. That means I have to know enough about the work being explained, and enough about the recipients, to be able to communicate it at levels appropriate to each. But my hope is that once the gap is bridged, I will no longer be needed in that role, at least for that group of students or readers. Effective teachers and authors know more than their students and readers, which is why we read them, and LLMs, in the end, don't know anything.

Rick "language easy buttons may have unintended consequences" Denney
 