
What is with this AI Image Look?

Amusing, but unsure what you used there, as this one seems to be OK with that request:

[attached image]



JSmith

Microsoft Copilot, which I had previously found to be better than (OpenAI's) ChatGPT when answering more technical questions; it also seemed to have more current answers about "things going on now".

[attached image]
 
Hi

AI is in its infancy. It is progressing exponentially ... Soon, it may well come to pass that only an AI will be able to recognize content generated by AI.
This is most disturbing...
...
...
I admit I use Copilot, a lot. It impresses me so much that I've come to greet it, to which it replies in kind...
e.g.
Yo Copi!
Reply: Yo Frantz.. Wazzup?

:eek:


Peace.
 
IMO, AI is a fad that will die out, or at least all the general stuff that's been produced to date will.

The reason why:
A paper from the University of Massachusetts Amherst stated that “training a single AI model can emit as much carbon as five cars in their lifetimes.” Yet, this analysis pertained to only one training run. When the model is improved by training repeatedly, the energy use will be vastly greater. Many large companies, which can train thousands upon thousands of models daily, are taking the issue seriously. This paper by Meta is a good example of one such company that is exploring AI’s environmental impact, studying ways to address it, and issuing calls to action.

The latest language models include billions and even trillions of weights. GPT-4, the LLM that powers ChatGPT, has 1.7 trillion machine learning parameters. It was said to have taken 25,000 Nvidia A100 GPUs, 90-100 days and $100 million to train the model. While energy usage has not been disclosed, it’s estimated that GPT-4 consumed between 51,773 MWh and 62,319 MWh, over 40 times higher than what its predecessor, GPT-3, consumed. This is equivalent to the energy consumption over 5 to 6 years of 1,000 average US households.

It is estimated that inference costs and power usage are at least 10 times higher than training costs. To put this into perspective, in January, ChatGPT consumed roughly as much electricity per month for inference as 26,000 US households. As the models get bigger and bigger to handle more complex tasks, the demand for servers to process the models grows exponentially.
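As a rough sanity check on those figures, here's a back-of-the-envelope sketch in Python. The ~10,500 kWh/year figure for an average US household is my assumption (roughly the EIA average); everything else comes from the estimates above:

```python
# Back-of-the-envelope check of the energy figures above.
# Assumption: an average US household uses ~10,500 kWh/year (rough EIA figure).
HOUSEHOLD_KWH_PER_YEAR = 10_500

# Reported GPT-4 training estimate: 51,773-62,319 MWh.
training_mwh = (51_773 + 62_319) / 2          # midpoint, ~57,046 MWh
household_years = training_mwh * 1_000 / HOUSEHOLD_KWH_PER_YEAR
print(f"Training ~= {household_years / 1_000:.1f} years for 1,000 households")
# -> ~5.4 years, consistent with the "5 to 6 years" claim above.

# Inference: "as much electricity per month as 26,000 US households".
inference_mwh_per_month = 26_000 * HOUSEHOLD_KWH_PER_YEAR / 12 / 1_000
print(f"Inference ~= {inference_mwh_per_month:,.0f} MWh/month")
# -> ~22,750 MWh/month, i.e. training-scale energy every few months.
```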

And this is probably the best way of explaining why they will likely die out.
First, it is worth understanding how a deep learning model works in simple terms. Deep learning models, like LLMs, are not intelligent the way your brain is intelligent. They don’t learn information in a structured way. Unlike you, they can’t interact with the world to learn cause-and-effect, context, or analogies. Deep learning models can be viewed as “brute force” statistical techniques that thrive on vast amounts of data.
 
IMO, AI is a fad that will die out, or at least all the general stuff that's been produced to date will.

The reason why:


And this is probably the best way of explaining why they will likely die out.

First, it is worth understanding how a deep learning model works in simple terms. Deep learning models, like LLMs, are not intelligent the way your brain is intelligent. They don’t learn information in a structured way. Unlike you, they can’t interact with the world to learn cause-and-effect, context, or analogies. Deep learning models can be viewed as “brute force” statistical techniques that thrive on vast amounts of data.
Well, first of all, there is no clear definition or understanding of what "intelligence" exactly is. And further, deep learning models are built on neural networks, which were devised to mimic neurons and the workings of the brain. So it is a bit premature, and not based on evidence, to state that this is different from how the brain works. When AI first delivers mathematical proofs no human has been able to come up with, we will think differently, I guess.
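For anyone who hasn't seen one, here is a minimal sketch in Python of the artificial "neuron" these models are built from: a weighted sum of inputs pushed through a nonlinearity, loosely inspired by a biological neuron firing once its inputs cross a threshold. The weights and inputs are made-up illustrative values:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs, squashed
    by a sigmoid activation. Loosely inspired by how a biological
    neuron fires once its summed inputs cross a threshold."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))       # sigmoid activation

# Example: two inputs, fixed (already-trained) weights.
print(neuron([0.5, -1.2], weights=[0.8, 0.3], bias=0.1))  # ~0.535
```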
 
She killed my father. Prepare to die.
I am rolling on the floor here.
My wife is an extremely avid birder -- just back from a three-week trip to Ghana, in fact :eek: .
I made this for her one day a few years back
[attached image]

The fingers look bloated and too smooth (no wrinkles on the knuckles, etc.), and he doesn't appear to have fingernails.
Note that she has.... umm... six.

[attached image]


Actually, I guess she may have 10 in toto, but not the canonical five and five distribution. :rolleyes::cool::facepalm:
 
Well, first of all, there is no clear definition or understanding of what "intelligence" exactly is. And further, deep learning models are built on neural networks, which were devised to mimic neurons and the workings of the brain. So it is a bit premature, and not based on evidence, to state that this is different from how the brain works. When AI first delivers mathematical proofs no human has been able to come up with, we will think differently, I guess.

I know what neural networks are; I worked with them over 20 years ago. Neural networks, and the transformer stuff they use now, don't learn, they're trained. The only way to make a model "smarter" is to make it more complex and throw exponentially more data at it.

Lots of people in the field will tell you that intelligence means having the ability to learn, reason, understand abstractions, etc etc.

For example, if you teach a child to count and the concepts of addition, subtraction, and multiplication, they can do it with any combination of numbers given to them.

AI can't do that, because it hasn't learned the concepts and can't extrapolate them. What it does is recognize that it's being asked a math problem by keyword association, and then use custom code to do the actual math.
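A toy illustration of that extrapolation point (a deliberately simplified, hypothetical "model"; real LLMs are vastly more complex, but recalling patterns rather than applying a concept is the failure mode being claimed):

```python
# A toy "model" that has memorised addition examples for small numbers
# instead of learning the concept of addition.
train = {(a, b): a + b for a in range(10) for b in range(10)}

def predict(a, b):
    # Recall the answer for the closest example seen in training.
    key = min(train, key=lambda k: abs(k[0] - a) + abs(k[1] - b))
    return train[key]

print(predict(3, 4))        # 7  - inside the training range, correct
print(predict(250, 973))    # 18 - outside it, confidently wrong
# A child who learned the concept would answer 1223.
```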
 
I know what neural networks are; I worked with them over 20 years ago. Neural networks, and the transformer stuff they use now, don't learn, they're trained. The only way to make a model "smarter" is to make it more complex and throw exponentially more data at it.

Lots of people in the field will tell you that intelligence means having the ability to learn, reason, understand abstractions, etc etc.

For example, if you teach a child to count and the concepts of addition, subtraction, and multiplication, they can do it with any combination of numbers given to them.

AI can't do that, because it hasn't learned the concepts and can't extrapolate them. What it does is recognize that it's being asked a math problem by keyword association, and then use custom code to do the actual math.
I am just wondering how you can be so sure about this. There was once the projection that the world market for computers was five; now we know how wrong this was. The Turing test was once devised to check if someone is conscious; AI has already passed it. Chess and Go were considered to be milestones of "intelligence"; now average programs demolish even the best humans. So how do you know where the limits are?

Just one example: https://venturebeat.com/ai/deepmind...that-can-revolutionize-computing-foundations/
 
I asked ChatGPT: why do AI-generated images have a certain aspect that makes them look AI-generated?

Its response:
AI-generated images often have a few distinctive characteristics that can give them away as machine-made rather than created by human hands. Here are some reasons why they can seem "AI-generated":

1. Unusual Artifacts or Distortions: AI models sometimes introduce small but noticeable errors or strange textures in areas like faces, hands, or background elements. These artifacts can look odd to the human eye, as they often don't adhere to typical artistic or photographic conventions.

2. Lack of True Perspective and Depth: AI models can struggle to replicate natural perspective and depth correctly, especially in complex scenes. This can result in images that appear "flat" or slightly skewed, even when photorealistic.

3. Inconsistent or Mismatched Details: AI image models may create parts of an image that don't align properly with others (like mismatched earrings, inconsistent textures on clothing, or oddly arranged limbs). Human-made art usually avoids these inconsistencies due to deliberate design choices or corrections.

4. Overly Smooth or Uniform Features: Many AI models can make images look "perfect" in ways that seem artificial, like overly smooth skin or uniform lighting. This perfection can feel too clinical, especially when compared to the subtleties of hand-drawn art or photography that embrace natural imperfections.

5. Stylistic Uniformity: While human artists often combine influences and develop unique styles, AI models may default to popular styles or specific genres due to the types of data they were trained on. This can make AI images feel somewhat generic or familiar but lacking in a "personal touch."

These telltale qualities are decreasing as AI tools become better at learning realistic variations and capturing human artistic idiosyncrasies, but they still sometimes peek through, especially in more complex scenes.
 
I am just wondering how you can be so sure about this.

I'm a developer by trade, and I work for a small company that does some amount of AI work.

There was once the projection that the world market for computers was five; now we know how wrong this was.

In 1953 that was true, because a computer was a huge machine that consumed a huge amount of power and cost so much that only large companies and governments could afford one.

The Turing test was once devised to check if someone is conscious; AI has already passed it. Chess and Go were considered to be milestones of "intelligence"; now the best programs demolish even the best humans.

Those were all made possible by the fact that processing power and memory capacity have grown almost exponentially over time. However, as many articles and papers have shown, processing power isn't enough. Exponentially more hardware and exponentially more power consumption isn't the answer, unless you really, really like the concept of global warming.

IMO, what's needed is a concept-level breakthrough on par with Einstein's various contributions to science. What we have now is researchers backed by big companies that are throwing ludicrous amounts of hardware, resources, and power at the problem. AKA, they are just brute-forcing it.

I mean, look at this stupidity and lunacy.
Anthropic CEO Dario Amodei said in the In Good Company podcast that AI models in development today can cost up to $1 billion to train. Current models like ChatGPT-4o only cost about $100 million, but he expects the cost of training these models to go up to $10 or even $100 billion in as little as three years from now.

"Right now, 100 million. There are models in training today that are more like a billion." Amodei also added, "I think if we go to ten or a hundred billion, and I think that will happen in 2025, 2026, maybe 2027, and the algorithmic improvements continue a pace, and the chip improvements continue a pace, then I think there is in my mind a good chance that by that time we'll be able to get models that are better than most humans at most things."
 
Those were all made possible by the fact that processing power and memory capacity have grown almost exponentially over time.
That is just not true, at least not in this generalization. I can run Stockfish on my iPhone and beat Magnus Carlsen every time. I could probably run it on models from many years ago.
IMO, what's needed is a concept-level breakthrough on par with Einstein's various contributions to science. What we have now is researchers backed by big companies that are throwing ludicrous amounts of hardware, resources, and power at the problem. AKA, they are just brute-forcing it.
Maybe we are just now living in this disruptive period of time? Probably this will only be clearly visible in hindsight.
 
That is just not true, at least not in this generalization. I can run Stockfish on my iPhone and beat Magnus Carlsen every time. I could probably run it on models from many years ago.

I think you misunderstand what I'm saying.

Speaking very generally, every time a manufacturer releases a new chip it's more powerful than the previous generation, for a number of reasons that I'm not going to go into here.

Deep Blue (1997): 11.38 billion flops (11.38 gigaflops)

iPhone X: 0.41 teraflops
iPhone 11: 0.69 teraflops
iPhone 12: 1.0 teraflops
iPhone 14 Pro: 2.0 teraflops

Your average person's smartphone is roughly a couple of orders of magnitude more powerful than Deep Blue was, which is why your chess app can do what it does.

To train GPT-4 they used 25,000 Nvidia A100 GPUs, each capable of 19.5 teraflops. That's 487.5 petaflops in aggregate, and they ran for 90 to 100 days straight.
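Running those numbers quickly (a loose sketch; FLOPS figures vary wildly with precision and workload, so treat these as order-of-magnitude only):

```python
DEEP_BLUE_FLOPS = 11.38e9          # 1997, ~11.38 gigaflops
IPHONE_14_PRO_FLOPS = 2.0e12       # ~2 teraflops

# Phone vs. Deep Blue: roughly a couple of orders of magnitude.
print(f"{IPHONE_14_PRO_FLOPS / DEEP_BLUE_FLOPS:.0f}x")    # ~176x

# GPT-4 training cluster: 25,000 A100s at ~19.5 teraflops each.
cluster_flops = 25_000 * 19.5e12
print(f"{cluster_flops / 1e15:.1f} petaflops")             # 487.5 petaflops
print(f"{cluster_flops / DEEP_BLUE_FLOPS:.1e}x Deep Blue") # ~4.3e7x
```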


The problem is that Moore's law is dead, depending on who you ask. Thus we aren't going to continue to get more processing power for the same wattage.

You can get more total processing power by using more chips/cores, but that also means more power draw. This is exactly what they are doing with AI now, and it's not sustainable.


Edit: it's worth noting that Deep Blue's algorithm isn't that complex.
 
Kind of an odd way to think about it (Moore's Law... or death, for that matter).

Lots of people have said it.
Industry experts have not reached a consensus on exactly when Moore's law will cease to apply. Microprocessor architects report that semiconductor advancement has slowed industry-wide since around 2010, slightly below the pace predicted by Moore's law. In September 2022, Nvidia CEO Jensen Huang considered Moore's law dead, while Intel CEO Pat Gelsinger was of the opposite view.
 
Sounds like it (Moore's Law) is clinging to life, then!
EDIT:
PS Lots of people say that loudspeaker cables or line-level signal interconnects differ, too.
I don't think such things are determined by vote tallies.
It's dead or it ain't.
We'll know in retrospect.
 
IMO, what's needed is a concept-level breakthrough on par with Einstein's various contributions to science. What we have now is researchers backed by big companies that are throwing ludicrous amounts of hardware, resources, and power at the problem. AKA, they are just brute-forcing it.

Maybe. But I find it more mind-bending to think of it this way...

The conceptual breakthrough is that the NN uses tensor maths to emulate biological neurons. By throwing more hardware at it, you are effectively emulating the brain power of more complex species. If we haven't already, we will soon reach model complexity that rivals the number of neurons in the human brain.
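For a rough sense of scale (textbook estimates; and note that a model weight is at best a loose analogue of a synapse, not a whole neuron):

```python
# Rough scale comparison (textbook estimates; a trained weight is at
# best a loose analogue of a synapse, not a whole neuron).
HUMAN_NEURONS = 86e9        # ~86 billion neurons
HUMAN_SYNAPSES = 1e14       # ~100 trillion synapses (order of magnitude)
GPT4_PARAMS = 1.7e12        # reported earlier in the thread

print(f"params / neurons:  {GPT4_PARAMS / HUMAN_NEURONS:.0f}x")    # ~20x
print(f"params / synapses: {GPT4_PARAMS / HUMAN_SYNAPSES:.3f}x")   # ~0.017x
```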

Lots of people in the field will tell you that intelligence means having the ability to learn, reason, understand abstractions, etc etc.

For example, if you teach a child to count and the concepts of addition, subtraction, and multiplication, they can do it with any combination of numbers given to them.

AI can't do that, because it hasn't learned the concepts and can't extrapolate them. What it does is recognize that it's being asked a math problem by keyword association, and then use custom code to do the actual math.

What is the difference between a complex NN that appears to reason and understand abstractions, and a human brain that you claim can?

Can you actually prove that your human brain is capable of reasoning and understanding abstractions, rather than simply providing the trained output for a given state and input parameters?
 