Master AI (Artificial Intelligence) Discussion/News Thread

The best anti-theft device for vehicles is one that I don't think AI will tell you about:
A vehicle with a stick shift!
Doesn't really work in Europe, however, esp. the former East.
 
:)
Who'd want to steal a Skoda, a Tatra, a Trabant, or even a Yugo?
Better off grabbing an eBike!:D
It's not the East Bloc Skoda any more. Skoda is part of VW, and since the "bread and butter" models tend to feature "previous gen" tech, they tend to be more reliable.
They also have quite luxurious trim in the more expensive cars.
Disclaimer: I don't have one.
 
Resistance is not futile, it is essential.

 
Governor Newsom signs SB 53, advancing California’s world-leading artificial intelligence industry
SACRAMENTO —
Governor Newsom today signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), authored by Senator Scott Wiener (D-San Francisco) – legislation carefully designed to enhance online safety by installing commonsense guardrails on the development of frontier artificial intelligence models, helping build public trust while also continuing to spur innovation in these new technologies. The new law builds on recommendations from California’s first-in-the-nation report, called for by Governor Newsom and published earlier this year — and helps advance California’s position as a national leader in responsible and ethical AI, the world’s fourth-largest economy, the birthplace of new technology, and the top pipeline for tech talent.
From <https://www.gov.ca.gov/2025/09/29/g...rld-leading-artificial-intelligence-industry/>
 
But who/what will stop "less ethical" actors from going to places where the laws allow them to do whatever they want?
 
AI has zero capability to be evil by itself, and has zero ability to grow beyond its training realms. It's a better big data processing engine. R language is dead.
 
Governor Newsom signs SB 53, advancing California’s world-leading artificial intelligence industry ...
From <https://www.gov.ca.gov/2025/09/29/g...rld-leading-artificial-intelligence-industry/>
Another take on this:
https://arstechnica.com/ai/2025/09/...aw-just-gave-big-tech-exactly-what-it-wanted/

It's amazing what can happen if you donate $200M to the right PAC...
 
AI has zero capability to be evil by itself, and has zero ability to grow beyond its training realms. It's a better big data processing engine.

That's a very simplistic view. The problem is that AI is trained on trillions of examples, including some very evil ones. Neither you, nor I, nor OpenAI's engineers know what connections these examples formed in the neural net. The expectation that this Frankenstein's monster will somehow be controlled by adding a RAG or a similar bolted-on alignment device on top of the neural net is just as naive as Frankenstein's expectation that the monster would fit in perfectly within human society.

A human brain is a data processing engine too. Can it do evil? Not that AI is capable of moral judgement, but it can certainly do evil things.
 
Synthetic actor, Tilly Norwood, seeks agent for placement in roles as real actors protest. Obviously the AI was trained on real actors without compensation and comes from a company whose mission is to make synthetic characters. A news search for the character name will yield more stories.

 
Sam Altman has a background in venture capital. He has 2 years of college, cashed out on a startup, then became one of the directors of Y Combinator. He is smart, ambitious, and has a powerful professional network. He is not an engineer. One of his companies is OpenAI and he probably has hundreds of VC investments. Current net worth is estimated to be $2B. There is a biography out, The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future by Keach Hagey. There are many interviews with the author which are easy to find.

He is essentially a salesman and has hired well.

He is proposing that the US build 250GW of new electricity generation in 8 years to support AI data centers. Total North American generation capacity today is 1328GW, and the cost of the 250GW is estimated at $12.5 trillion.

One of the great debates is who will pay for the capital cost of new generation and transmission. In the past, all customers paid for expansion. In the US, electricity demand flattened around 1980 because of energy efficiency and electric utility reserve sharing, and without this proposal growth looks like about 1-2% a year. The electric utility industry in most places has a policy of giving larger customers cheaper rates to encourage more consumption, to keep them from moving, and on the theory that they cost less to serve per kWh of energy. But when generation and transmission are expanded in large, capital-intensive ways for industrial users, many now argue, and some states are passing laws, that residential and commercial customers should be sheltered from the new costs of serving those industrial customers, as well as from the payments made to industrial customers, such as crypto mining centers, to shut down at times of peak demand.
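
To put rough numbers on that, here is the arithmetic implied by the figures above (only the numbers already quoted in this post, nothing else assumed), as a small Python snippet:

# Back-of-the-envelope check of the figures quoted above.
new_capacity_gw = 250        # proposed new generation for AI data centers
current_capacity_gw = 1328   # quoted North American generation capacity
cost_trillion_usd = 12.5     # quoted cost estimate for the 250GW
years = 8

increase_pct = 100 * new_capacity_gw / current_capacity_gw
cost_per_watt = cost_trillion_usd * 1e12 / (new_capacity_gw * 1e9)
build_rate_gw_per_year = new_capacity_gw / years

print(f"{increase_pct:.1f}% addition to today's capacity")      # ~18.8%
print(f"${cost_per_watt:.0f} per watt of new capacity")         # ~$50/W
print(f"{build_rate_gw_per_year:.1f} GW built per year")        # ~31 GW/yr

Roughly a 19% expansion of the existing fleet, at about $50 per watt of capacity if you take the estimate at face value, which is why the question of who pays for it is so contentious.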

 
Last edited:
... The problem is that AI is trained on trillions of examples, including some very evil ones. Neither you, nor I, nor OpenAI's engineers know what connections these examples formed in the neural net. ...

The fundamental neural network architectures are well-defined and understood. The ability to change spontaneously, or to change its own purpose, doesn't exist... and we can debate whether that's a "yet" or a "never".

Feedforward Neural Networks (FNNs) and Multilayer Perceptrons (MLPs) have roots going back to the 1940s, and were used for things like letting postal services read handwritten ZIP codes reliably. These days, we have Convolutional Neural Networks (CNNs) for image processing (recognize a hamster in a picture), Recurrent Neural Networks (RNNs) for sequential data (recognize speech), and Generative Adversarial Networks (GANs) for generating new data... but they cannot mutate into each other - they can only communicate with each other (recognize and count the different animals in a picture (CNN), then generate a "there are 2 dogs, 3 chickens and a cow" voice response (RNN)). The human brain's parallelism is very different in the way it processes things (from the little we really know about it :-D). We are still very far away from creating anything "sentient", because we don't really understand the nature of sentience... which, from a different angle, exposes us to the possibility that we create it by accident. That seems very remote for now.

In general, for now AI is "just" a huge and very effective big data processing machine with very flexible (yet very controlled) input/output capabilities. If you train a model to recognize a hamster in a picture, it will not ever gain the ability to recognize a horse by itself - you have to train it.

And the vast majority of practical use cases actually work with a lot of data, but from specialized and limited data sources. For example, in our company we have an "AI Assistant" product that helps customers troubleshoot a very complex product. The data is exclusively limited to our own internal libraries (product documentation, trouble ticket history...), for accuracy, security, and governance. That limits the possible scope as well.
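
In practice, "limited to our own internal libraries" usually boils down to something like the sketch below: retrieve only from an internal index and refuse to answer when nothing relevant is found. This is a generic illustration, not our actual product code; internal_index and llm_answer are placeholder names for whatever document store and model API are in use.

# Minimal sketch of a retrieval-restricted assistant (placeholder names).
# The model may only answer from documents found in the internal index,
# so anything outside that corpus is out of scope by construction.
def answer(question: str, internal_index, llm_answer) -> str:
    docs = internal_index.search(question, top_k=5)   # internal docs only
    if not docs:
        return "Sorry, that is outside the scope of the product documentation."
    context = "\n\n".join(d.text for d in docs)
    prompt = (
        "Answer using ONLY the context below. If the context is not "
        f"sufficient, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )
    return llm_answer(prompt)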
 
The fundamental neural network architectures are well-defined and understood. ... We are still very far away from creating anything "sentient", because we don't really understand the nature of sentience...

In general, for now AI is "just" a huge and very effective big data processing machine with very flexible (yet very controlled) input/output capabilities. ...

For now, I agree - but given the speed of development it will have huge implications for humanity, sentient or not.

Single brain cells were also thought to be understood, yet we still don't fully understand the human brain in its whole complexity. And it is also just a neural network, trained on millions of years of data of pure survival.

Current neural networks are getting more and more complex, and we have seen that at a certain complexity (some billions of parameters) certain abilities of "understanding" emerged. Who knows where the tipping point is? If there is a direct feedback loop of selection pressure, some kind of sentience or consciousness may emerge.

It's also not that AI suddenly becomes a Terminator; it's more that we will depend on it more and more. And it's a very real possibility that a minority of humans use it to control the masses, as already proven with cellphones and TikTok. Imagine AI-controlled drones, dogs, humanoid androids... things you can already see starting in Ukraine.

Edit: not to forget the brain atrophy humans will suffer, since they won't need to use their brains anymore.

classic one:

 
The fundamental neural network architectures are well-defined and understood. The ability to change spontaneously, or to change its own purpose, doesn't exist... and we can debate whether that's a "yet" or a "never".

Understanding the architecture and understanding the actual computation of a specific NN are two very different things. One is known and predictable; the other can only be tested on a limited set of inputs to check whether the output is as expected. It's like saying that because you understand how a word processor app works, you know and understand every document and paper that has ever been, or ever will be, written with it.

The function that an LLM applies to its input prompts + context, for example, contains billions of parameters that are all adjusted by the training material. The training data is what determines the parameters and the connections formed; the transformation applied to the input isn't predictable or obvious from the architecture alone.
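
A toy way to see the distinction: the two networks below share the exact same, fully "understood" architecture, yet compute completely different functions, because the function lives in the weights, not in the wiring. (Plain NumPy; random weights stand in for "whatever training happened to produce".)

import numpy as np

rng = np.random.default_rng(0)

def random_params():
    # Same architecture every time: 4 -> 16 -> 1 with a ReLU hidden layer.
    return (rng.normal(size=(4, 16)), rng.normal(size=16),
            rng.normal(size=(16, 1)), rng.normal(size=1))

def mlp(x, w1, b1, w2, b2):
    h = np.maximum(0, x @ w1 + b1)   # the wiring is fully known...
    return h @ w2 + b2

net_a, net_b = random_params(), random_params()
x = rng.normal(size=(1, 4))
# ...but what each network actually computes depends entirely on its weights.
print(mlp(x, *net_a))
print(mlp(x, *net_b))

Scale those 4x16 weight matrices up to billions of parameters set by training data and you get the situation above: fully documented architecture, opaque computation.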

Wait until AI starts designing its own architectures (and this is coming, not because AI "wants" this to happen but because humans do). We'll stop understanding the architecture then, also :)

In general, for now AI is "just" a huge and very effective big data processing machine with very flexible (yet very controlled) input/output capabilities. If you train a model to recognize a hamster in a picture, it will not ever gain the ability to recognize a horse by itself - you have to train it.
Very simplistic view. Even a NN trained only on hamsters will often recognize other objects as a hamster. If you now train it on billions of images, there's no predicting what it will recognize accurately and what will cause it to mis-categorize.
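
Both sides of this are visible in one toy example: a classifier's softmax is defined over a fixed set of training labels, so a photo of a horse can only ever come out as one of the classes the model was trained on - it has no way to say "horse", and it will confidently call the horse a hamster. (The logits below are made up for illustration; no trained model is involved.)

import numpy as np

classes = ["hamster", "gerbil"]   # the only labels the model was ever trained on

def softmax(z):
    z = z - z.max()               # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Pretend these are the logits the network produced for a photo of a horse.
# Whatever the input, the output is a distribution over the trained classes only.
horse_logits = np.array([2.3, 0.4])
probs = softmax(horse_logits)
print({c: float(p) for c, p in zip(classes, probs.round(3))})   # {'hamster': 0.87, 'gerbil': 0.13}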

For example, in our company we have an "AI Assistant" product that helps customers troubleshoot a very complex product. The data is exclusively limited to our own internal libraries (product documentation, trouble ticket history...), for accuracy, security, and governance. That limits the possible scope as well.

Your AI assistant may be too simple and therefore can't become evil, but there is no such guarantee with the ChatGPTs of the world, which are trained on billions of images, videos, and books, and on all of the research, internet, and social media content out there.
 
.... Even a NN trained only on hamsters will often recognize other objects as a hamster. If you now train it on billions of images, there's no predicting what it will recognize accurately and what will cause it to mis-categorize.
I agree with everything else that's been said, but object to this. You have to keep training the model for it to recognize new things, and it is an arduous task until the model starts becoming more effective and only asks you when exceptions occur. And yes, as you train it on hamsters you might as well train it on gerbils... although I'd be the wrong one to train the model, since I'd probably make errors trying to tell the two apart. There's also that: the limits of trainability when things are ambiguous. Ambiguity is AI apps' kryptonite.
 
I agree with everything else that's been said, but object to this. You have to keep training the model for it to recognize new things, and it is an arduous task until the model starts becoming more effective and only asks you when exceptions occur. ...

Yes, current models are static after training. But so is the human brain - we need sleep to update our neural network with the data we collected over the day.

There are already concepts for updating trained models continuously - so this is just a question of time.
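
Those concepts are mostly variations on continual or online fine-tuning: keep nudging an already-trained model's weights as new data arrives instead of retraining from scratch. Very schematically (PyTorch, with a toy model and synthetic data standing in for the real thing):

import torch
from torch import nn

# A model that has already been trained (here just initialized, for the sketch).
model = nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Continual updating: each new batch of observations nudges the existing
# weights a little, rather than rebuilding the whole model from scratch.
for step in range(100):                  # stands in for "as new data arrives"
    x = torch.randn(32, 8)               # toy stand-in for fresh observations
    y = x.sum(dim=1, keepdim=True)       # toy target
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"loss after continual updates: {loss.item():.3f}")

The loop itself is trivial; the hard part is doing this at LLM scale without the model forgetting what it already knew (catastrophic forgetting), which is why it is still largely a research topic.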
 
There are already concepts for updating trained models continuously - so this is just a question of time.
I am not arguing this point but I would like to find out:
How do you update trained models continuously, after they have 'consumed' all of the available 'data'?
I'd ask a chatbot, but I would not even know how to define 'data' and/or the limits of its consumption habits.
"Hey ChatBot, how much 'data' have humans generated since day#1 and how much is remaining to 'scavenge'?"
...probably would not be the proper question.:facepalm:
Regurgitation and/or acid-reflux -as applied to LLM training- may have already started!
 