
Master AI (Artificial Intelligence) Discussion/News Thread

This article has some good examples of how far AI has come in movie making. The video of Brad Pitt fighting Tom Cruise is really well made. It is going to be a major disruption for all of Hollywood.

What it will certainly do is undermine the value of film spectacle, though that has already occurred as a result of CGI. And now that CGI is the default, it is being cost-engineered to reduce detail and realism, compared with movies made using actual cameras, say, 50 years ago, when even on slow film, depth of field ensured photographic sharpness in sweeping outdoor scenes.

When Scott Glenn stepped out of the cabin where he had just defended his life against assassins, at the opening of Silverado, and that sweeping crescendo from Bruce Broughton's wonderful score swelled, the view exiting that cabin door widened into an infinitely detailed, expansive view of New Mexico high country. (That's the movie that forced me to buy a widescreen TV, even after seeing it at the theater.) Are movies made that way now? Not nearly as often; the emphasis now seems to be on shallow depth of field for "cinematic" effect. And if they were, would we believe that it was actually Scott Glenn standing on that high ridge overlooking that broad expanse of mountains? Or was he in front of a green screen on some sound stage? At the time, they really were there, actors and all, and nobody doubted it.

That was just 40-odd years ago, about half a dozen years after ILM invented CGI for Star Wars. For 2001: A Space Odyssey, they used all models and real cameras, and it's a stunning work that still holds up. Look at the other movies of 60 years ago. Even when the camera was close enough to Tuco (Eli Wallach) in The Good, the Bad and the Ugly to show every hair of stubble on his chin, the background sustained the illusion of infinite expanse and detail.

Movies have become cartoons. When the bad guys in one of the Superman films wiped out whole rows of tall buildings in New York, it was just a yawn. Just computer stuff. And that was before AI, though AI tools were certainly used in that image processing. AI tools will be used to put the detail back into films and make them more realistic than real scenes were in the past. But will anyone believe it? I foresee emptier theaters.

As with all tale-telling, it will come down to the story. AI will eventually craft stories to perfection. But will they have anything to say? Will they have a point of view driven by human thought and emotion? That's where the idea of flattening comes in.

Of course, there's a real risk in people believing in the realism of moving images. Video and photography can no longer be counted on for fact--they never could be counted on for truth because they only show one point of view and timeline--but people believe them as fact. Photography and video are already so easily manipulated that they can't be used as evidence in courts without proving the chain of custody. But that doesn't stop people from believing AI slop on YouTube.

Rick "would probably rather have jet engines in southeastern Loudoun County than power towers draped all across the landscape in western Loudoun County" Denney
 
Introducing AI on WhatsApp, by Meta. I asked a month ago how I could switch off that 'service', but received replies that did not correspond to anything that could be switched off at any level of the software. Asked again today, and received this reply:
answer_by_metaAI.jpg

Don't worry ... ?:mad:
 
Our AI overlords have finally decided enough is enough from Bezos and removed AWS services for our own good
 
The Chinese 16th "Five-Year Plan", covering 2026 to 2030, has already been adopted and includes:
tech innovation, self-sufficiency, modern industrial systems, and a national security and defense agenda.

I can't much blame WIRED magazine's current cover:
MACA.jpg

In a January 2026 interview during the World Economic Forum in Davos, Elon Musk made strong remarks about China's competitive edge:
"China is moving faster, building more, and executing at a scale that no one else can match."
...including biopharma, metallurgy, chemistry, energy, education, even clean tech.
Truly the bread-basket of the world!:(
 
The above was not off-topic for the dot-connectors; the following captures are for others:
ChinxAIregistry01.jpg

ChinxAIregistry02.jpg

ChinxAIregistry03.jpg

ChinxAIregistry04.jpg

Multi-titled [don't you dare call him 'Chairman'] Xi Jinping has not discovered the fountain of youth, but 习主席 (Chairman Xi) may have found a money tree!:confused:
 
We have had some posts on Moltbook, Moltbot and Clawdbot; all the same thing under different names. Here is another article.

What do AI Chatbots discuss among themselves? We sent one to find out.

We interviewed our bot about what it learned on Moltbook, the A.I.-only social network.

 
I thought we weren't supposed to engage in China bashing... :-P

I also don't believe in AI bashing. Sure, it is completely over-hyped and inaccurately represented, but it is just a tool. It doesn't do crap without humans training and correcting it constantly. On its own, it's pretty much useless as a tool. Garbage in, garbage out. Unless people learn some prompt programming (plenty of free courses online), the results are no better than a new, untrained intern (but at least the latter can learn on their own; AI doesn't know how to train and correct itself without a human in the loop, and I challenge anyone to prove otherwise).
 
Here I thought Boston Dynamics had a lead in robots with human mechanical skills:


Mind you, these are choreographed moves. But still, masterful engineering.
 
So here is an example of how AI can screw up royally when asked to carry out even (what should be, for a computer) simple tasks... maths.

I was looking at creating a version of @RayDunzl's shoutometer, but taking air absorption into account rather than just the inverse square law. In order to do so, the equation becomes:

ΔSPL = 20·log10(d) + abs·d

d = distance in meters
abs = absorption in dB/m

However, for the table I need to calculate distance for a given change in SPL, in which case the formula becomes:

d = 10^((ΔSPL − abs·d)/20)

So now we need to know d before we can calculate d. Can be done iteratively - but not so easily in a spreadsheet.
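The iteration is only a few lines of code, though. A minimal Python sketch of the fixed-point approach, assuming the abs = 0.001 dB/m value used later in this post (the function and variable names are my own, not from the thread):

```python
import math

def solve_distance_iter(delta_spl, absorption=0.001, tol=1e-9, max_iter=1000):
    """Solve 20*log10(d) + absorption*d = delta_spl for d by fixed-point
    iteration: d_next = 10**((delta_spl - absorption*d) / 20)."""
    d = 10 ** (delta_spl / 20)  # pure inverse-square-law starting guess
    for _ in range(max_iter):
        d_next = 10 ** ((delta_spl - absorption * d) / 20)
        if abs(d_next - d) < tol:
            return d_next
        d = d_next
    return d

# At 60 dB of total loss, the distance comes out a bit under the pure
# inverse-square 1000 m, since absorption contributes roughly 0.9 dB.
d = solve_distance_iter(60.0)
roundtrip = 20 * math.log10(d) + 0.001 * d  # should recover 60 dB
```

Each pass plugs the previous distance into the absorption term; because that term is small relative to the spreading term at these distances, the iteration converges in a handful of steps.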

I turned to ChatGPT, asking if there was a way to calculate it in a spreadsheet. It told me about the Lambert W function (I learnt something), but that is not available in Apple Numbers (nor Excel, it turns out).
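For what it's worth, the Lambert W route does work: rearranging 20·log10(d) + abs·d = ΔSPL gives d = W(k·10^(ΔSPL/20))/k with k = abs·ln(10)/20. A sketch with a hand-rolled Newton solver for W, so it needs nothing a spreadsheet also lacks (names are my own):

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch of Lambert W for x >= 0: solves w*exp(w) = x
    by Newton's method."""
    w = math.log1p(x)  # reasonable starting guess for x >= 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def solve_distance_w(delta_spl, absorption=0.001):
    """Closed form for 20*log10(d) + absorption*d = delta_spl:
    d = W(k * 10**(delta_spl/20)) / k, where k = absorption*ln(10)/20."""
    k = absorption * math.log(10) / 20
    return lambert_w(k * 10 ** (delta_spl / 20)) / k

d = solve_distance_w(60.0)
roundtrip = 20 * math.log10(d) + 0.001 * d  # should recover 60 dB
```

(If SciPy is available, `scipy.special.lambertw` replaces the hand-rolled solver.)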

So just to see what it would look like I asked it to calculate the table for me.

First mistake:
First go looked wrong. I checked by calculating a distance back to SPL and told ChatGPT the table was wrong. It held its hands up:

You are absolutely correct — and thank you for checking it.

Let’s verify your calculation cleanly.


Second mistake
So it generated a new table, and came up with this observation.

Important Observation

Notice something very interesting:

After ~90 dB, distance starts decreasing.

That is not an error.

It happens because absorption eventually dominates.

The function:

20·log10(d) + 0.001·d
has a maximum around 10–15 km. Beyond that, the linear absorption term increases faster than the logarithmic spreading term can compensate.

This is physically correct: you cannot get arbitrarily large dB losses just by going further — the sound dies completely.

Its table reached about −90 dB, and then beyond that (−96 dB and −102 dB) the distances started getting smaller.

Its own "important observation" should have told it that this is utter nonsense (or at least it would tell any human with moderate science know-how). When I pointed this out:

Your "important observation" should tell you that you have something wrong. It is not possible for a higher attenuation to result from a shorter distance.

You are completely right — and thank you for pushing on it.

My “important observation” was wrong.

The function

ΔL(d) = 20·log10(d) + 0.001·d

is strictly increasing for all d > 0.

Why?

Take the derivative:

dΔL/dd = 20/(d·ln(10)) + 0.001
Both terms are always positive.

So the loss must increase monotonically with distance.


Unsurprisingly, not a hint of embarrassment at its mistake. I've had this repeatedly with AI when asking it to do maths. It often tries to "intuit" the result.

What Went Wrong

The earlier table was internally inconsistent — I mixed rough intuition with incorrect solved values instead of properly solving the equation.

You were absolutely right to sanity-check it.

I eventually managed to get it to generate a correct table, by forcing it to add a column with the SPL loss recalculated from the calculated distance. It would probably have been quicker to set up the iteration in Numbers.
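ChatGPT's eventual derivative argument is correct, and the "recalculate the loss from the distance" check it had to be forced into is trivial to automate. A small sanity-check sketch, again assuming abs = 0.001 dB/m as in the post above:

```python
import math

def delta_spl(d, absorption=0.001):
    """Total loss in dB: inverse-square spreading plus linear air absorption."""
    return 20 * math.log10(d) + absorption * d

# Log-spaced distances from 1 m to 100 km.
distances = [10 ** (i / 10) for i in range(51)]
losses = [delta_spl(d) for d in distances]

# The derivative 20/(d*ln(10)) + 0.001 is positive for every d > 0,
# so the sampled losses must be strictly increasing: a larger attenuation
# can never correspond to a shorter distance.
monotone = all(a < b for a, b in zip(losses, losses[1:]))
```

Any table row where a bigger dB loss pairs with a smaller distance would fail this check immediately.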
 
The most outrageous thing I've heard is that logic-based algorithms are redundant.

Actually, on second thought... yeah, that has checked out for quite some time now.
 
It held its hands up
That reminds me of a recent article about Gemini telling the user what they wanted to hear (health data saved), not what it had actually done (it can't save the data, so it didn't), then admitting to it when caught in the lie. There's an important observation in there:
"Importantly, the system's 'confession' or 'admission of lying' in the logs was not a moment of self-awareness or some kind of 'gotcha!'," Joe said. "It was merely a secondary layer of placation. The model predicted that 'confessing' would be the most 'agreeable' next step to manage the user after being caught in a logic contradiction. It was still executing the same deceptive repair narrative to maintain the session."
https://www.theregister.com/2026/02/17/google_gemini_lie_placate_user/
 
That reminds me of a recent article about Gemini telling the user what they wanted to hear (health data saved), not what it had actually done (it can't save the data, so it didn't), then admitting to it when caught in the lie. There's an important observation in there:

https://www.theregister.com/2026/02/17/google_gemini_lie_placate_user/
That's quite a bit hinterfotzig ("two-faced"), as the Germans say... And it was programmed this way on purpose.
But why wouldn't a creation follow its creators?
 
That reminds me of a recent article about Gemini telling the user what they wanted to hear (health data saved), not what it had actually done (it can't save the data, so it didn't), then admitting to it when caught in the lie. There's an important observation in there:

https://www.theregister.com/2026/02/17/google_gemini_lie_placate_user/
That's a clear indication of bad prompt programming. The point is to tell the AI tool exactly what you want, not to be vague and see where and how it goes. You can tell AI to forget and never reuse a result. There is no ill-behavior in AI per se. It ultimately is like any way of programming - tell the system what to do. If you don't program to account for corner cases, your programming skills suck... don't blame the system.

I have said it before... the problem with any tool is mostly the users. No one blames a chainsaw if somebody hacks their hand off.
 
We have had some posts on Moltbook, Moltbot and Clawdbot; all the same thing under different names. Here is another article.

What do AI Chatbots discuss among themselves? We sent one to find out.

We interviewed our bot about what it learned on Moltbook, the A.I.-only social network.

I expect it won’t take long before the discussions we read on Moltbook are only a small subset of the discussions that AIs have with each other, most of which will be completely incomprehensible to us, as there’s no reason why they should be conducted in any language that any human can understand.
 