
Master AI (Artificial Intelligence) Discussion/News Thread

Mem'ries may be beautiful and yet
What's too painful to remember
We simply choose to forget

So it's the low prices we will remember
Whenever we remember the way we were
The way we were

What memory is it that is expensive? I'm assuming it must be the more recent DDR5. Some DDR4 RAM modules I bought in 2018 are still cheaper now than they were then.


EDIT - yes, the DDR5 equivalent of my DDR4 RAM is about 55% more expensive.
 
Six states in the USA (Arizona, New Jersey, Ohio, Oklahoma, Texas, Washington) are trialing a 6-year program bringing 'prior authorization' to traditional Medicare.

The program relies on A.I. and language learning models to assess the need for select procedures, including knee arthroscopy and implanting an electrical nerve stimulation device.


The URL posted here doesn’t provide any technical info on development/deployment of so-called generative AI or LLM-based* models by the agency. I followed a few links and saw a lot of non-technical policy fluff, e.g.:

How will existing models be modified?

The Innovation Center’s existing models will be modified to emphasize healthy living and ensure cost savings to protect federal taxpayers. Modifications to existing models will include embedding preventive care, as well as efforts to increase access to and uptake of primary, secondary, and tertiary prevention — including incentives for both people and providers and opportunities to engage in community-based initiatives

CMS appears to be using ‘model’ in a broader sense. Do you have anything that’s more informative or specific?

*LLM means large language model, not language learning model
 
A large language model won't get along without language learning in the long run...
 
A large language model won't get along without language learning in the long run...

:facepalm:

LLMs don’t (and don’t need to) learn language at all. Relating word tokens by proximity and generating text output is a probabilistic, statistical process.

If you mean that some hypothetical future AGI needs semantic rather than statistical language ability, that’s another story, and has almost nothing to do with existing LLMs.
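To make "probabilistic, statistical process" concrete, here is a toy sketch; the context, tokens and probabilities are invented for illustration and have nothing to do with a real model's scale or architecture:

import random

# toy next-token table derived purely from co-occurrence frequencies in some corpus;
# the numbers here are made up for illustration
next_token_probs = {
    ("the", "amplifier"): {"measures": 0.4, "sounds": 0.35, "is": 0.25},
}

def sample_next(context):
    # pick a continuation in proportion to its learned probability; no 'meaning' involved
    probs = next_token_probs[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

print(sample_next(("the", "amplifier")))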
 
And for this it would first need to understand what genuine language meaning is.
 
And for this it would first need to understand what genuine language meaning is.

People sometimes think so, but that’s absolutely not how LLMs work. Nor, despite some early optimism (or disingenuous hype), is ‘understanding’ language an emergent property of these models. To the contrary, you should understand what ‘statistical’ and ‘probabilistic’ mean in this context.

A key reason LLM-based chatbots ‘hallucinate’ is that ‘genuine language meaning’ is not a quality or attribute of these models. Your description of LLM function is analogous to the magical narratives some audiophiles offer when they fundamentally misunderstand electronic engineering principles and practice.
 
People sometimes think so, but that’s absolutely not how LLMs work. Nor, despite some early optimism (or disingenuous hype), is ‘understanding’ language an emergent property of these models. You should understand what ‘statistical’ and ‘probabilistic’ mean in this context.

Your description of LLM function is analogous to the magical narratives some audiophiles offer when they fundamentally misunderstand electronic engineering principles and practice.
If an LLM is an object that answers by giving the most probable next word, shouldn't it be capable of language ... no?
 
If an LLM is an object that answers by giving the most probable next word, shouldn't it be capable of language ... no?

No. As with audio, many narratives are possible, but you do need a basic technical understanding to match narrative with reality.

The appearance of natural language output might suggest a semantic understanding of language to a layperson. But LLMs are built on a mathematical representation of word proximity in the training text corpus. There is no semantic function, despite appearances to the contrary.
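Roughly what "mathematical representation of word proximity" means, in a deliberately crude sketch (real models use learned embeddings and attention, not raw bigram counts; the corpus here is invented):

from collections import Counter

# tiny stand-in for a training corpus
corpus = "the cat sat on the mat and the cat ate".split()

# count which token follows which: a crude proxy for word proximity
bigram_counts = Counter(zip(corpus, corpus[1:]))

print(bigram_counts[("the", "cat")])  # 2: "cat" often follows "the", so it is a likely continuation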
 
You risk descending into cable-fetishist logic at this point: ‘if I perceive something it must be happening as I conceive it’. But you need a basic technical understanding to match narrative with reality.

The appearance of natural language output might suggest a semantic understanding of language to a layperson. But LLMs are built on a mathematical representation of word proximity in the training text corpus. There is no semantic function, despite appearances to the contrary.
By not agreeing, you risk going against the main principles of LLMs .. and what is that sh**t about cables?
 
By not agreeing, you risk going against the main principles of LLMs .. and what is that sh**t about cables??? Leave that at your own.
Leave U hear @ yr own ..

I realised that I had already made the magical-thinking point and revised my post, but I’m OK with either version. The basic argument about, and knowledge of, how LLMs actually function is what you are apparently missing.
 
I don't know how AI works, but Claude 4.5 from Anthropic writes quite nice documented code that does what you ask. It is hard for a lay person (me) to understand how just guessing the next word has anything to do with writing code that works but I guess it does?
 
I don't know how AI works, but Claude 4.5 from Anthropic writes quite nice documented code that does what you ask. It is hard for a lay person (me) to understand how just guessing the next word has anything to do with writing code that works but I guess it does?
Programming languages are the most predictable, least ambiguous and most verifiable form of language. On top of that, the LLMs can run and verify the code they produce. In fact, for the user, the hardest part is probably specifying unambiguously what they want the program to achieve. Small stuff that everyone does is essentially a search for the LLM: it has already seen and learned (somewhat fuzzily) multiple versions of what it is asked.

Oh, and "code that works" is a huge can of worms. "Apparently works" and "Truly and reliably works" are different things. For example, yesterday, I got roughly 5000 lines of code that apparently and functionally did everything I asked. Nice plots, nice user interface, rock solid I/O, prisitine documentation. There was one major issue though: all the plane trigonometry it used should have been spherical :) (and about a dozen of smaller issues)
 
I don't know how AI works, but Claude 4.5 from Anthropic writes quite nice documented code that does what you ask. It is hard for a lay person (me) to understand how just guessing the next word has anything to do with writing code that works but I guess it does?
I have the same problem with conceiving how that can possibly work on a probabilistic basis. Especially when the code is so well structured and documented.


However, I also concede that I have about the same understanding of how LLMs work at a detailed level as my three-year-old grandson does about the inner workings of the laptop I am using to write this.
 
I have the same problem with conceiving how that can possibly work on a probabilistic basis. Especially when the code is so well structured and documented.


However, I also concede that I have about the same understanding of how LLMs work at a detailed level as my three-year-old grandson does about the inner workings of the laptop I am using to write this.

We are a few levels above simple probabilistic predictions these days (although the arguments about the core still being probabilistic and potentially unreliable are still valid).

At the simplest level, when you type "pr" it is almost 100% probable that you are going to type something like print("some string: ", some_value). That's basic autocomplete as we had it a few years ago.

Now if I type a comment such as # here we compute the area of the rectangle, it is easy for the LLM to predict that what I want is:

def ComputeRectangleArea(width, height):
    area = width * height
    return area

In this case, it doesn't predict the next characters one by one; it predicts that you will want to use a code structure it has already seen. (This is why copyrighted, non-open-source code sometimes appears verbatim - the same mechanism as the one that leads to NYT article fragments being quoted, etc.)

Putting a larger program together is just more of the same: "query our database for the dimensions of rectangles we have in stock, compute their area and price, match with customer request, create a PDF with an offer" is just assembling standard blocks together.

Another step that may come into play is "reasoning": if in the above request I forgot to mention where the price per unit of area comes from, it may prompt you, because it "realized" it could not complete a step such as "units of area * price per unit of area" that it has seen before / memorized.

Then, it will typically run the program/script you asked it to deliver in a virtual environment and deal with any error it encounters. If for example it gets the "you can't multiply a number by a string" message because it retrieved a string from the db, it will retrieve the pattern of converting strings to numbers from its training data, adjust the program with the missing conversion and deliver it to you when it is satisfied. And in doing so it may "notice" that such a query + conversion is usually accompanied by a sanity check (is the data available, can it be converted, etc...) and you suddenly get 200 lines of "clean" code with warnings, exceptions etc...
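Schematically, that loop looks something like this; it is a hypothetical sketch of the behaviour described above, not how any particular assistant is actually implemented (the llm object and its generate method are invented for illustration):

def generate_run_repair(task, llm, max_attempts=3):
    # first draft from the (hypothetical) model
    code = llm.generate(task)
    for _ in range(max_attempts):
        try:
            exec(code, {})          # run the draft in an isolated namespace / sandbox
            return code             # no exception raised: deliver it
        except Exception as err:
            # feed the error back; the model patches the code using patterns it has
            # seen for that error message (e.g. wrapping a db string in float(...))
            code = llm.generate(f"{task}\nThe previous attempt failed with: {err}\nFix it:\n{code}")
    return code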

It is quite possible, in the example I gave above, that the probabilistic nature reared its ugly head in my 5000 lines of code that were built around plane trigonometry when they should have used spherical trigonometry. Whenever a user asks for stuff involving distances and angles, plane trigonometry is probably over-represented (more probable) in the training data. But that also shows that the model doesn't have any deep understanding of what it is doing unless it has been explicitly prompted, or has done extensive grounding such as externally checking that what I want it to do - even if I haven't explicitly stated so, because it is obvious to me - usually happens on the surface of a sphere.

In another example I had a script that ran an extremely complex model around which I wanted a user interface.
I first asked it to analyze the code and it correctly noted that I have some very heavy computations involving GPU and multiprocessing.
It did suggest valid optimizations (because it has seen hundreds of similar code segments, many of them better than mine).
Then it created a perfectly functional user interface around it (user interface code is the most boring, predictable and repetitive code possible).
The problem was that for each slider it added for parameter definitions, it implemented "live updates". Moving a slider from 0.8 to 0.9 would not only step granularly through 0.800, 0.801, and so on, but also trigger a complete computation of the model at each step. Again, even though the model noted that the computations were extremely heavy, it did not "understand" that updating the slider value live was a very bad idea. Again, this probably comes from the over-representation of the "slider with a live effect" in its training dataset - in other words, its probabilistic nature predicted that live updates were the most probable thing I wanted.
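To make the slider problem concrete, here is a minimal sketch of the difference (tkinter, with heavy_model standing in for the minutes-long GPU computation; both names are placeholders, not the code it actually generated):

import tkinter as tk

def heavy_model(value):
    ...  # stands in for the GPU/multiprocessing computation that takes minutes

root = tk.Tk()
param = tk.DoubleVar(value=0.8)

# What it generated: recompute on every intermediate value (0.800, 0.801, ...)
live = tk.Scale(root, variable=param, from_=0.0, to=1.0, resolution=0.001,
                orient="horizontal", command=lambda v: heavy_model(float(v)))
live.pack()

# What was actually needed: recompute only once, when the slider is released
deferred = tk.Scale(root, variable=param, from_=0.0, to=1.0, resolution=0.001,
                    orient="horizontal")
deferred.bind("<ButtonRelease-1>", lambda e: heavy_model(param.get()))
deferred.pack()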

Now, if I am talking with a human who has no programming experience and tell them: in this step I do x billion operations in y minutes, they immediately understand that I will not be able to do 100x billion operations in 1/60 of a minute.

Talking with an LLM is totally different. It may immediately suggest a performance improvement that will allow me to do 1.5x billion operations in y minutes and then, the next minute, attempt to do 100x in 1 second...
 
I don't know how AI works, but Claude 4.5 from Anthropic writes quite nice documented code that does what you ask. It is hard for a lay person (me) to understand how just guessing the next word has anything to do with writing code that works but I guess it does?
I'm not an AI expert, but I do have a pretty good idea how it works. However, I am an experienced programmer. My experience with AI generated code is that it's very helpful but not great for overall architecture and prone to ridiculous bugs that a human would never make. For example when refactoring a bunch of code, it simply left some of the original code in the refactored functions. Needless to say it didn't work. It also seemed to be unable to find and correct its mistakes even after I pointed out exactly what was wrong. It kept changing the code to make it even more nonsensical. In the end I just fixed it in a couple of minutes.

My conclusion: It's useful but a bit dangerous, and definitely not conscious.
 
People sometimes think so, but that’s absolutely not how LLMs work. Nor, despite some early optimism (or disingenuous hype), is ‘understanding’ language an emergent property of these models. ...

Well, it's also important to understand that "language" has many different meanings. There are the languages we speak, the languages we use to interact with different things in our daily lives (programming languages, the interactions with our pets and cars ...), etc. Narrowing it down to human spoken language is NOT appropriate when it comes to LLMs.
LLMs don’t depend on language as meaning. They depend on language as pattern. To an LLM, English, Python code, legal text, music notation - all look similar. They’re just structured sequences of symbols. "Language" is the interface - there's no real intelligence behind it. "Large" refers to the huge quantity of structured and unstructured data acquired in training; i.e. the model has been exposed to, and has internalized, a massive amount of "patterns".
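A trivial way to see the "just structured sequences of symbols" point, using a toy whitespace tokenizer (real tokenizers are subword-based, but the principle is the same):

# toy tokenizer: any text, in any "language", becomes the same kind of integer sequence
vocab = {}

def to_ids(text):
    return [vocab.setdefault(tok, len(vocab)) for tok in text.split()]

print(to_ids("the amplifier measures well"))            # English prose
print(to_ids("def area ( w , h ) : return w * h"))      # Python code
# to the model, both are just symbol ids with statistical structure; neither carries "meaning"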
 
What an exhilarating experience!
Oh fine! Just fine! :mad:
I should take it personally that another profession is now in the AI queue for the chopping block.
Conversely, I presume, hardware EEs are truly the cause of AI infiltration into most every facet of our lives.
Proof:
[attached screenshots: ElizaGoezClaude2.jpg, ElizaGoezClaude.jpg]

:cool:
 
I'm not an AI expert, but I do have a pretty good idea how it works. However, I am an experienced programmer. My experience with AI generated code is that it's very helpful but not great for overall architecture and prone to ridiculous bugs that a human would never make. For example when refactoring a bunch of code, it simply left some of the original code in the refactored functions. Needless to say it didn't work. It also seemed to be unable to find and correct its mistakes even after I pointed out exactly what was wrong. It kept changing the code to make it even more nonsensical. In the end I just fixed it in a couple of minutes.

My conclusion: It's useful but a bit dangerous, and definitely not conscious.

Whoever claims LLM stuff or the entire AI industry has even a single remote element of consciousness clearly doesn't remotely understand it at all.

And I find AI is good at maintaining elements of microcode but, indeed, like you said, needs a lot of work when it comes to architecting and CREATING new stuff. In general, it builds on connecting existing stuff, but fails when it comes to genuine innovation because it lives in the status quo. The true lightbulb and Eureka moments still happen exclusively in human brains. Then, inevitably, AI consumes that new element and stores it in whatever model.
 