The conceptual breakthrough is that the NN uses tensor maths to emulate biological neurons. By throwing more hardware at it, you are effectively emulating the brain power of more complex species. If we haven't already, we will soon reach model sizes that rival the number of neurons in the human brain.
Well, once you have reached that level, multiply it by roughly 7,000, give or take: each biological neuron carries on the order of 7,000 synaptic connections, so the relevant count is synapses, not neurons.
Oh, and run it on a 20W budget.
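To put rough numbers on that, here is a back-of-envelope sketch. The neuron and synapse counts are commonly cited estimates, and the model parameter count is my own assumption for illustration, not any vendor's published figure:

```python
# Back-of-envelope comparison of brain scale vs. a large NN.
# The figures below are commonly cited estimates, not precise measurements.
NEURONS = 86e9              # ~86 billion neurons in the human brain
SYNAPSES_PER_NEURON = 7e3   # ~7,000 synaptic connections per neuron
BRAIN_POWER_W = 20          # the brain runs on roughly 20 watts

MODEL_PARAMS = 1e12         # assumed frontier-LLM parameter count (illustrative)

synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"Estimated synapses: {synapses:.1e}")                        # ~6.0e14
print(f"Synapses per model parameter: {synapses / MODEL_PARAMS:.0f}x")
print(f"Power budget to beat: {BRAIN_POWER_W} W")
```

Even treating one parameter as one synapse (a generous simplification), the gap is a couple of orders of magnitude, before you even get to the power budget.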
What is the difference between a complex NN that appears to reason and understand abstractions, and a human brain that you claim can?
Which type of NN? Assuming you mean LLMs, which are just one family of architectures, the main difference is that biological brains are able to update their world models on the fly.
LLMs in the current architecture can't do that. If they could, GPT-4 would have self-improved into GPT-5 and so on. That is ultimately what LLM developers hope will happen, but we aren't there just yet. In fact, extremely reputable people believe there are fundamental obstacles in the current LLM architecture (LeCun, Chollet, Marcus and many others).
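To make the "frozen weights" point concrete, here is a minimal toy sketch (my own illustration, not how any particular LLM is implemented): a linear model whose weights can be nudged by a fresh example when online learning is allowed, versus weights that stay frozen, as a deployed LLM's do between training runs.

```python
import numpy as np

# Toy linear model: y_hat = w . x. A biological learner adjusts its
# connections after every surprising observation; a deployed LLM does not.
rng = np.random.default_rng(0)
w = rng.normal(size=3)

def predict(x):
    return w @ x

def online_update(x, y, lr=0.1):
    """One gradient step on a single fresh example (continual learning)."""
    global w
    error = predict(x) - y
    w -= lr * error * x  # gradient of squared error w.r.t. w

x, y = np.array([1.0, 2.0, -1.0]), 3.0

# Frozen weights: the same input keeps producing the same wrong answer.
print("frozen:", predict(x), predict(x))

# On-the-fly learning: a few updates pull the prediction toward the truth.
for _ in range(20):
    online_update(x, y)
print("after online updates:", predict(x))  # close to 3.0
```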
At this point, the deployed LLMs (GPT-4o, Claude, etc.) are mostly improved by "hidden chain-of-thought", which is, very roughly, exploring many possible branches at every step of a reasoning process, evaluating them, and advancing step by step. To some extent, that leads to combinatorial explosion and hugely increased computing-power demands.
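Roughly, that branch-and-evaluate loop looks like the sketch below: a generic beam-search pattern, assumed here for illustration. The branching factor, depth, and scoring function are placeholders, not anyone's actual system; in a real setup an LLM would play both the proposer and the evaluator.

```python
import itertools

BRANCHES_PER_STEP = 4   # candidate next thoughts per step (assumed)
DEPTH = 6               # reasoning steps (assumed)
BEAM_WIDTH = 3          # partial chains kept after each evaluation

def propose(chain):
    """Placeholder: an LLM would generate candidate next steps here."""
    return [chain + (b,) for b in range(BRANCHES_PER_STEP)]

def score(chain):
    """Placeholder evaluator: an LLM/verifier would score the chain."""
    return -sum(chain)  # pretend lower step-ids are 'better'

beam = [()]
explored = 0
for _ in range(DEPTH):
    candidates = list(itertools.chain.from_iterable(propose(c) for c in beam))
    explored += len(candidates)
    beam = sorted(candidates, key=score, reverse=True)[:BEAM_WIDTH]

print("chains actually scored:", explored)                  # beam keeps this manageable
print("full tree would have:", BRANCHES_PER_STEP ** DEPTH)  # 4^6 = 4096 leaves
```

The beam keeps the cost linear-ish in depth, but every extra branch or step still multiplies the number of model calls, which is where the compute bill comes from.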
The buzzwords in 2025 will be swarms of AI agents, some of them very simple, some of them full models implementing the above, and that will lead to a further increase in computing/energy demands, which is why the leading players are jockeying for nuclear power plants right now.
Can you actually prove that your human brain is capable of reasoning and understanding abstractions, rather than simply providing the trained output for a given state and input parameters?
Yes, because at any moment, should you be willing to do so, you are able to _learn_ something new and rewire large parts of your wetware connections.
One very simple example of this is the ARC challenge. There is a one-million-dollar prize for you to grab if you manage to show your model performs at a human level on it. Kids as young as 6 can do it once they have grasped the concept (a toy task in the ARC format is sketched after the link below).
ARC Prize is a $1,000,000+ nonprofit, public competition to beat and open source a solution to the ARC-AGI benchmark.
arcprize.org
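To give a flavour of the format, here is a hand-made toy task in the ARC style. The grids and the rule are my own invention for illustration, not taken from the real benchmark: each task gives a few input/output grid pairs, and the solver must infer the transformation and apply it to a held-out input.

```python
# Toy task in the ARC style (invented for illustration, not from the benchmark).
# Rule to infer: every 1 becomes 2; everything else stays the same.
train = [
    ([[0, 1], [1, 0]], [[0, 2], [2, 0]]),
    ([[1, 1], [0, 0]], [[2, 2], [0, 0]]),
]
test_input = [[1, 0], [0, 1]]

def apply_rule(grid):
    """Candidate rule a solver might hypothesise from the training pairs."""
    return [[2 if cell == 1 else cell for cell in row] for row in grid]

# A solver is only credited if its rule reproduces every training pair...
assert all(apply_rule(x) == y for x, y in train)

# ...and then generalises to the held-out test input.
print(apply_rule(test_input))  # [[2, 0], [0, 2]]
```

Humans infer this kind of rule from two examples in seconds; the benchmark's whole point is that models trained on vast corpora still struggle to do the same on genuinely novel tasks.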