
Master AI (Artificial Intelligence) Discussion/News Thread

Yeah I really do prefer my computers to be computer-y. The uncanny valley is a disturbing place. Full verisimilitude will be worse of course.

I'm sure you can instruct an AI chatbot to be more robotic in its responses. It's all in the prompt, they say.
 
I'm sure you can instruct an AI chatbot to be more robotic in its responses. It's all in the prompt, they say.

That's certainly been @pablolie's argument.

I think LLM-based chatbots or more precisely natural language input is an obviously interesting/useful advance. But generative output requires significant work before it produces reliable results so I'm not diving in yet. I did read an article this week that I can't put my finger on just now that mentioned frustration that prompting for less 'human' more concise/pared-back output style didn't seem to hold for long. I'll see if I can find it in order to be more specific.
 
Data point (and nothing more): Yesterday, I asked when Montres Ebel, S.A. acquired a portion of TAG-Heuer. The AI response was that my premise was wrong--it had never owned any part of TAG-Heuer, and that both were acquired by LVMH in 1999. At least the latter part is true. But it was very clearly stated in the (French) L'Impartial newspaper (published in La Chaux-de-Fonds, where Ebel was based) in April of 1988 that Ebel, S.A. had acquired the portion of TAG-Heuer that Piaget had not sold to TAG, when Piaget was acquired by Cartier du Monde, which didn't want the Heuer holding. And later (1992) in the same publication--or perhaps it was L'Express in Neuchâtel--it was reported that Ebel still owned a 20% stake in TAG-Heuer when Ebel was itself selling off some non-watch assets. I found all of that online, and have myself written about it on several publicly available forums, so I don't know why the AI search engine couldn't find it.

Again I see that AI knows less than experts, and that's assuming I could be considered an expert (which, given that I have absolutely zero inside information from these companies, I couldn't possibly claim). The problem is the confidence it displayed when telling me I was wrong. Sure, we can say to check the results, but isn't the point of AI search engines and LLMs to be able to synthesize information we don't know?

Rick "well-expressed falsehoods are dangerous" Denney
 
Yes, I don't know if the presentation errors in the text are from the original output or are copy-paste artifacts: missing space after the initial period, an incorrectly formed em dash, returns after every sentence, then an extra line break to separate the conclusion. Nobody wants to see that sort of thing presented systematically.

But all that pales before the gratuitous language flourishes attempting to convey casual human conversation. I'd need anti-nausea pills. This quote from Dave Winer's Scripting News is about that stuff on an unfortunate Alexa product, but it could equally be deployed as a cardinal rule:
The text I posted was a copy of a copy, traversing a couple of websites. The original was grammatically correct. I did a bit of editing to remove clutter, and probably fumbled it. At the time I did the query, I had no intention of sharing it.

I originally went to traditional sites to find examples of tracks to help with identification, but had no luck.

I’ve been asking simple questions in AI browsers for about six months and am pleased with the results. The Brave browser provides links to its sources. The example I posted was Grok. Like Marvin from Hitchhiker's, it has a people personality.
 
The text I posted was a copy of a copy, traversing a couple of websites. The original was grammatically correct. I did a bit of editing to remove clutter, and probably fumbled it.

One can always trust a human to mess up a perfectly formatted and precisely formulated piece of machine-generated slop ;)
 
Good points. I've re-used exactly zero output from the chatbots, and entered exactly one prompt into a DDG search box, so I'm not fluent—cut+paste from PDF in ancient times delivered similar horrors, so I get the idea. But unless @Petrushka was aiming for a warts-and-all effect, surely you'd clean up the worst of the slop before presenting it to others?

As for dashes em and en, my hybrid education (high school in both US and Oz) sometimes leaves me betwixt and between. I'm almost horrified that I prefer the US habit.



Yeah I really do prefer my computers to be computer-y. The uncanny valley is a disturbing place. Full verisimilitude will be worse of course.
I wasn’t concerned with grammar or formatting. I was genuinely interested if it could make the identification.

I wasn’t planning to post it here, so I didn’t preserve the original text. I’m still expecting copy and paste to be text only, but some websites attempt to preserve original formatting, and that can introduce errors. For example, one character was translated to an emoji.

That was why I edited it. An attempt to restore the original text.

I find it interesting that no one here is seriously concerned about the content of the identification, or the reasoning behind it.

There are lots of coyotes in my neighborhood, but no stray dogs or cats, other than bobcats. In six years I’ve only seen one fox.
 
To me it looks like tracks from a squirrel.
I was thinking the same. But the real takeaway is that whatever AI model was used, clearly wasn't trained well as a tracker.
And partially trained models are indeed a disaster ("tracks may provide evidence of chupacabra existence!"), and our expectation that LLMs are sufficiently trained to reliably answer every prompt is the real problem... sometimes AI is just as much of a charlatan as humans can be, unsurprisingly.
We are all collectively training AI.
 
The text I posted was a copy of a copy, traversing a couple of websites. The original was grammatically correct. I did a bit of editing to remove clutter, and probably fumbled it. At the time I did the query, I had no intention of sharing it.

I originally went to traditional sites to find examples of tracks to help with identification, but had no luck.

I’ve been asking simple questions in AI browsers for about six months and am pleased with the results. The Brave browser provides links to its sources. The example I posted was Grok. Like Marvin from Hitchhiker's, it has a people personality.
I wasn’t concerned with grammar or formatting. I was genuinely interested if it could make the identification.

I wasn’t planning to post it here, so I didn’t preserve the original text. I’m still expecting copy and paste to be text only, but some websites attempt to preserve original formatting, and that can introduce errors. For example, one character was translated to an emoji.

That was why I edited it. An attempt to restore the original text.

I find it interesting that no one here is seriously concerned about the content of the identification, or the reasoning behind it.

There are lots of coyotes in my neighborhood, but no stray dogs or cats, other than bobcats. In six years I’ve only seen one fox.

Thank you for the detail; it's interesting to know which interface/mode and which model were used to produce a result. I should add that I didn't know originally if you were intending to show us the horror of chatbot personality/expression or the delight of fox-print identification, or perhaps both (per your original 'better or worse'). ... :)
 
As far as coding is concerned, I don't use AI to code the overall logic or the algorithms I use. I may occasionally discuss suggestions of alternative options, optimizations, etc., but I always ask it to discuss and not generate any code (in that respect, all the models I use handle the "do not" case properly; that was an issue with previous models).

One area where it helps is user interface generation. Typically user interfaces are my least concern and, by the time I wrap up a coding session, I am usually too mentally exhausted to bother implementing a user-friendly interface. The progress in recent models is that I can feed them a code base, instruct them not to touch the core, and have them generate a GUI.

This typically works very well: the interface is clean and it exposes the tunable parameters in a logical way. However, the models totally lack common sense...

I was recently working on a program that consists of very memory- and computationally-intensive tasks. The tunable parameters, legends, and other textual information were accessible in the left panel; the results of the computations appeared in a preview pane on the right. I of course wanted to test different parameters, update the results (a large image), and add comments when satisfied.

As soon as I started playing with the sliders, I noticed my computer went to 100% CPU/GPU for a very long period of time. There was no way to move the slider without triggering multiple new computations... I described the problem to the model.

Model:
Yes! The slider is triggering the update on every tiny movement. We can make it only update when you release the slider. Here's how:
Solution: Use mouse button release event instead of continuous updates
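The release-only update the model describes is a generic pattern. Here is a minimal GUI-free sketch in Python; the `Slider` class, `drag`/`release` methods, and `on_commit` callback are hypothetical stand-ins for the real widget events and the heavy image recomputation, not the model's actual code:

```python
# A slider-like control that remembers intermediate values during a drag
# and runs the expensive callback only once, when the drag ends.
class Slider:
    def __init__(self, on_commit):
        self.on_commit = on_commit   # stands in for the heavy recomputation
        self.value = 0
        self.calls = 0               # how many times the heavy work ran

    def drag(self, value):
        # Fires on every tiny mouse movement: just record the value.
        self.value = value

    def release(self):
        # Fires once on mouse-button release: now do the heavy work.
        self.calls += 1
        self.on_commit(self.value)

results = []
s = Slider(results.append)
for v in range(50):      # 50 intermediate positions during one drag
    s.drag(v)
s.release()              # one computation instead of 50
print(results)           # → [49]
```

In a real toolkit this corresponds to binding the computation to the release event (e.g. Qt's `sliderReleased` signal) rather than the continuous value-changed callback.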


So far, so good. But some sliders that were supposed to be inactive (their main section was disabled) were still active.

Me:
Again from a user interface responsiveness point of view, it seems that if I play with the sliders when the option is disabled, the computation still happens in the background. Could those two sliders have an inactive state when the option they give parameter to is disabled?

Model:
Excellent observation! Let's disable the sliders when their parent options are disabled. We need to...
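Gating the heavy callback on the parent option's state is a one-line guard. A hedged sketch (the `Option` class, `make_handler`, and `compute` names are illustrative, not what the model generated):

```python
class Option:
    """A checkbox-like section that can enable or disable its child controls."""
    def __init__(self, enabled=True):
        self.enabled = enabled

def make_handler(option, compute):
    # Wrap the expensive computation so a slider belonging to a disabled
    # section silently ignores input instead of recomputing in the background.
    def handler(value):
        if not option.enabled:
            return
        compute(value)
    return handler

calls = []
opt = Option(enabled=False)
handler = make_handler(opt, calls.append)
handler(42)          # section disabled: nothing happens
opt.enabled = True
handler(42)          # section enabled: the computation runs once
```

Most toolkits also let you grey out the widget itself (e.g. setting it disabled) so the user gets visual feedback on top of the guard.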

But that wasn't the end. If you assumed the model would have fixed all the controls, you would be wrong.

Me:
Oh, I just noticed that the problem I had with the sliders is general - changing the text leads to image regeneration for each deleted letter and each typed letter. Can we make all those options trigger only when the change is final? For the text I assume it would be when the mouse leaves the box?

Model:
Yes! Let's make all the text inputs only update when you're done editing.
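For text boxes, a common alternative to "update when the mouse leaves the box" is a debounce timer that fires only after typing pauses. A minimal standard-library sketch; the `Debouncer` class is illustrative, not the model's output:

```python
import threading
import time

# Each keystroke restarts a short timer; the expensive regeneration
# only fires once typing has paused for `delay` seconds.
class Debouncer:
    def __init__(self, delay, action):
        self.delay = delay
        self.action = action
        self._timer = None

    def trigger(self, value):
        if self._timer is not None:
            self._timer.cancel()      # new keystroke: restart the clock
        self._timer = threading.Timer(self.delay, self.action, args=(value,))
        self._timer.start()

results = []
d = Debouncer(0.05, results.append)
for ch in "fox":
    d.trigger(ch)         # three rapid "keystrokes"
time.sleep(0.2)           # wait out the debounce window
print(results)            # → ['x']  (only the final value survives)
```

The same wrapper works for the sliders: route every value change through `trigger` and only the last position after the user stops moving reaches the heavy computation.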

Maybe I am a bit harsh on the model here, or optimistic about humans, but if I had had a human intern tasked with coding an interface around my code, I would have expected saying something like "Kid, don't restart the heavy computation hundreds of times when a parameter slider moves" to be enough for the guy to fix all those triggers.
And the model had everything in its context. We had even discussed some potential optimizations, and I had received an encyclopedic (and mostly correct) lecture on the pluses and minuses of the optimizations. It gave the impression that it "knew" about the heavy compute; it even gave me a very good ballpark estimate of the potential performance gains. But it did not see a problem with calling the heavy part constantly.

It could be, as Steve Jobs once famously said, that "I am using it wrong". It could be that if, instead of going at it in a conversational way, I had offloaded the whole thing to a fully agentic / deep-thinking pipeline of agents, they would have figured it all out by themselves. But the drawbacks would have been up to 100-1000 times more tokens (burn, baby, burn) and my complete disconnection from the code.

Amodei recently said that soon we won't verify code, just as we don't verify compiler output (well, we do, but that's another topic; most users don't). That's a bit scary imho: even if the hallucinations/mistakes are solved, we might end up being satisfied monkeys passively watching code doing stupid things in the background.
 
I'm sure you can instruct an AI chatbot to be more robotic in its responses. It's all in the prompt, they say.
In the case of Anthropic's Claude: within its 'Settings' section, under the user 'Profile' panel, you can find:
What personal preferences should Claude consider in responses?
To set your preferences:

  1. Click your initials in the lower left corner.
  2. Select "Settings."
    • Your preferred approaches or methods
    • Common terms or concepts you use
    • Typical scenarios you encounter
    • General communication preferences
Any preferences you add here will be applied to all of your conversations with Claude.
I don't even have enough time to brush my teeth 3 times a day... let alone to experiment with the preferences I entered.
 
To me it looks like tracks from a squirrel.
A two foot, 20 pound squirrel?

I should say that this was the first snow of the year, and the ground was not frozen before the snow. There was less than two inches of snow, followed by a light rain, so much detail is lost.

The known beasts are coyotes, raccoons, bobcats, foxes, bears. Of these, the only possible size matches are foxes and coyotes.
 
I'm sure you can instruct an AI chatbot to be more robotic in its responses. It's all in the prompt, they say.

Don’t know how accurate this video is, as I’ve never used AI/LLMs, but he discusses how to phrase a request/question/statement so you get a more informative and accurate response.

 
...as I’ve never used AI/LLMs...
Are you sure about that?
I thought these things are like other invasive species and you can't avoid them; even wearing face-masks and/or gloves on the keyboard and/or protective glasses!
 
Are you sure about that?
I thought these things are like other invasive species and you can't avoid them; even wearing face-masks and/or gloves on the keyboard and/or protective glasses!
Exactly, all search engines leverage the tech these days. And that's their major (maybe only) revenue source. For the rest, there are many studies out there stating that 70% of AI projects never pay off.
 
Are you sure about that?
I thought these things are like other invasive species and you can't avoid them; even wearing face-masks and/or gloves on the keyboard and/or protective glasses!

Dunno but I've not actively used them.

I have the new iPhone 17 but have all the AI shit turned off. I use "-ai" in the search tab in Google so no AI there, but I rarely use Google anyway. AI is of no use to my existence, so I'm perfectly happy to let it pass on by.

DuckDuckGo doesn't use AI.
 