It's even in your avatar.
I am well aware of photography technique.
Amusing, but I'm unsure what you used there, as this one seems to be OK with that request:
View attachment 402390
AI Image Generator (deepai.org): "This is an AI Image Generator. It creates an image from scratch from a text description."
JSmith
A paper from the University of Massachusetts Amherst stated that “training a single AI model can emit as much carbon as five cars in their lifetimes.” Yet, this analysis pertained to only one training run. When the model is improved by training repeatedly, the energy use will be vastly greater. Many large companies, which can train thousands upon thousands of models daily, are taking the issue seriously. This paper by Meta is a good example of one such company that is exploring AI’s environmental impact, studying ways to address it, and issuing calls to action.
The latest language models include billions and even trillions of weights. GPT-4, the LLM that powers ChatGPT, has 1.7 trillion machine-learning parameters and is said to have taken 25,000 Nvidia A100 GPUs, 90-100 days, and $100 million to train. While its energy usage has not been disclosed, GPT-4 is estimated to have consumed between 51,773 MWh and 62,319 MWh, over 40 times more than its predecessor, GPT-3. That is equivalent to the electricity 1,000 average US households consume over 5 to 6 years.
It is estimated that inference costs and power usage are at least 10 times higher than training costs. To put this into perspective, in January, ChatGPT consumed roughly as much electricity per month as 26,000 US households for inference. As the models get bigger and bigger to handle more complex tasks, the demand for servers to process the models grows exponentially.
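To make those household comparisons concrete, here is a rough back-of-the-envelope check. The ~10,500 kWh/year figure for an average US household is my own assumption (a commonly cited ballpark), not a number from the article:

```python
# Sanity check of the household-equivalence figures quoted above.
# Assumption (not from the article): an average US household uses
# roughly 10,500 kWh, i.e. 10.5 MWh, of electricity per year.
HOUSEHOLD_MWH_PER_YEAR = 10.5

# Estimated GPT-4 training energy range (MWh) from the article.
training_low_mwh, training_high_mwh = 51_773, 62_319

# How many years of electricity for 1,000 households does that cover?
years_low = training_low_mwh / (1_000 * HOUSEHOLD_MWH_PER_YEAR)
years_high = training_high_mwh / (1_000 * HOUSEHOLD_MWH_PER_YEAR)
print(f"Training: ~{years_low:.1f} to {years_high:.1f} years for 1,000 households")
# -> roughly 4.9 to 5.9 years, consistent with the "5 to 6 years" claim.

# Inference: "as much electricity per month as 26,000 US households".
monthly_inference_mwh = 26_000 * HOUSEHOLD_MWH_PER_YEAR / 12
print(f"Implied inference load: ~{monthly_inference_mwh:,.0f} MWh per month")
# -> roughly 22,750 MWh (about 23 GWh) every month, just for serving queries.
```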
First, it is worth understanding how a deep learning model works in simple terms. Deep learning models, like LLMs, are not intelligent the way your brain is intelligent. They don’t learn information in a structured way. Unlike you, they can’t interact with the world to learn cause-and-effect, context, or analogies. Deep learning models can be viewed as “brute force” statistical techniques that thrive on vast amounts of data.
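As a concrete (and deliberately oversimplified) illustration of what "brute force statistical technique" means, here is a toy model that has nothing to do with LLMs specifically: it recovers a hidden numeric rule purely by minimizing error over many examples, with no notion of what the numbers mean.

```python
import numpy as np

# Toy example of statistical "learning": fit y = w*x + b by gradient descent.
# The model never understands the rule; it only adjusts weights to reduce error.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=200)  # hidden rule plus noise

w, b = 0.0, 0.0          # the model's "weights", initially arbitrary
lr = 0.1                 # learning rate
for _ in range(500):     # repeatedly nudge the weights to reduce squared error
    pred = w * x + b
    grad_w = np.mean(2 * (pred - y) * x)
    grad_b = np.mean(2 * (pred - y))
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # ~3.0 and ~0.5, extracted purely from data
```

LLMs do the same kind of error-minimizing weight adjustment, just with billions of parameters and vastly more data, which is where the energy cost discussed above comes from.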
IMO, AI is a fad that will die out, or at least all the general stuff that's been produced to date will.
The reason why:
AI is harming our planet: addressing AI's staggering energy cost (2023 update) (www.numenta.com): "AI models consume massive energy levels, accelerating the climate crisis. Read how neuroscience-based techniques dramatically reduce AI's energy footprint."
And this is probably the best way of explaining why they will likely die out.
She killed my father. Prepare to die.
Somehow (at least to me) this seems apt vis-a-vis this thread.
It's not half-bad, kind of clever, actually, until one gets to John's glasses... and, of course, the fingers.
What is it about fingers?
View attachment 402558
source: https://www.deviantart.com/interdimensionaldave/art/Norwegian-Wood-2-1115330046
Apparently the woman's an... umm... adaptation of Prudence Farrow (of Dear Prudence fame).
I am rolling on the floor here.
She killed my father. Prepare to die.
Note that she has.... umm... six.
The fingers look bloated and too smooth (no wrinkles on the knuckles etc.), and he doesn't look to have fingernails.
Well, first of all, there is no clear definition or understanding of what "intelligence" exactly is. And further, deep learning models work with neural networks, which were devised to mimic neurons and the workings of the brain. So it is a bit premature, and not based on evidence, to state that it is different from how the brain works. When AI first delivers mathematical proofs no human has been able to come up with, we will think differently, I guess.
I am just wondering how you can be so sure about this. There was once the projection that the market for home computers was five; now we know how wrong this was. Once, the Turing test was devised to check whether someone is conscious; AI has already passed this test. Chess and Go were considered to be milestones of "intelligence". Now average programs demolish even the best humans. So how do you know where the limits are?
I know what neural networks are; I worked with them over 20 years ago. Neural networks and the transformer stuff they use now don't learn, they're trained. The only way to make it "smarter" is to make the model more complex and throw exponentially larger amounts of data at it.
Lots of people in the field will tell you that intelligence means having the ability to learn, reason, understand abstractions, and so on.
For example, if you teach a child to count and the concepts of addition, subtraction, and multiplication, they can apply them to any combination of numbers given to them.
AI can't do that, because it hasn't learned the concepts and can't extrapolate them. What it does is recognize it's being asked a math problem by keyword association and then use custom code to do the actual math.
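For what it's worth, here is a toy sketch of the kind of keyword/pattern routing being described: spotting that a prompt "looks like math" and handing it to ordinary code. This is a purely hypothetical illustration of that claim, not how any particular chatbot is actually implemented:

```python
import re

def answer(prompt: str) -> str:
    """Toy dispatcher: detect a simple arithmetic question and compute it with
    ordinary code; otherwise fall back to statistical text prediction."""
    match = re.search(r"(-?\d+)\s*([+\-*/])\s*(-?\d+)", prompt)
    if match:
        a, op, b = match.groups()
        ops = {"+": lambda x, y: x + y, "-": lambda x, y: x - y,
               "*": lambda x, y: x * y, "/": lambda x, y: x / y}
        return str(ops[op](int(a), int(b)))
    return "(no math detected: fall back to the language model's text prediction)"

print(answer("What is 127 * 46?"))   # -> 5842, computed by code, not "understood"
```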
I am just wondering how you can be so sure about this.
There was once the projection that the market for home computers was five; now we know how wrong this was.
Once, the Turing test was devised to check whether someone is conscious; AI has already passed this test. Chess and Go were considered to be milestones of "intelligence". Now the best programs demolish even the best humans.
Anthropic CEO Dario Amodei said in the In Good Company podcast that AI models in development today can cost up to $1 billion to train. Current models like ChatGPT-4o only cost about $100 million, but he expects the cost of training these models to go up to $10 or even $100 billion in as little as three years from now.
"Right now, 100 million. There are models in training today that are more like a billion." Amodei also added, "I think if we go to ten or a hundred billion, and I think that will happen in 2025, 2026, maybe 2027, and the algorithmic improvements continue a pace, and the chip improvements continue a pace, then I think there is in my mind a good chance that by that time we'll be able to get models that are better than most humans at most things."
That is just not true, at least not in this generalization. I can run Stockfish on my iPhone and beat Magnus Carlsen every time. I could probably run it on models from many years ago.
Those were all made possible by the fact that processing power and memory capacity have grown almost exponentially over time.
Maybe we are just now living in this disruptive period of time? Probably this will only be clearly visible in hindsight.
IMO, what's needed is a concept-level breakthrough on par with Einstein's various contributions to science. What we have now is researchers backed by big companies that are throwing ludicrous amounts of hardware, resources, and power at the problem. Aka, they are just brute-forcing it.
Kind of an odd way to think about it (Moore's Law... or death, for that matter).
The problem is Moore's law is dead, depending on who you ask.
Industry experts have not reached a consensus on exactly when Moore's law will cease to apply. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, slightly below the pace predicted by Moore's law. In September 2022, Nvidia CEO Jensen Huang considered Moore's law dead, while Intel CEO Pat Gelsinger was of the opposite view.
Sounds like it (Moore's Law) is clinging to life, then!