
Master AI (Artificial Intelligence) Discussion/News Thread

The next generation will be rolling their proverbial eyes at our current infatuation with AI.

Some extracts from Professor Cory Miller (UCSD, psychology):
The human brain runs on about 20 watts of power...
*Re: +10dB from your phone.
Thus far, the numbers show that it outperforms AI in the types of intelligence we -humans- seek...
*Sentience, abstract reasoning, morals, ethics, intuition, sagacity, creativity, empathy, imagination, compassion...
To match the computational power of a single human brain, the leading AI system would require nearly the amount of power needed for the whole city of Dallas...
*...in the spring.
Professor Miller contends that "The first AI models were inspired by research on the visual system of monkeys, but this approach was abandoned in favor of the simpler models at the heart of today's AI."

He states that the 2013 bipartisan Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative to map and understand the brain's 'neural circuits' is the most significant and successful scientific endeavor of recent history. He believes that research into and understanding of human neural networks, which drive biological cognition, human thought, emotion, and behavior, are the true transformative tools of the future.

He argues that approximating true human 'neural networks' won't be achieved by stacking more GPUs in 3D... especially since "The human brain remains the most powerful, adaptable, and efficient computing system on the planet."

"The country that unlocks the principles of biological-intelligence [cognition?] will shape the next-century of technology."

imo: AI is just doing stop-gap, hard-wired, tubed Class-A circuits, while we are in need of some efficient Class-D neural networks.:confused:
 
I agree. Seems to me it's currently kinda reminiscent of a human with a very, very good memory.
Mostly well spoken, and extremely knowledgeable.

But occasionally you get the distinct impression that at some point there's been a serious head injury, drug habit or other undisclosed trauma.

And you also become concerned about the possible lack of a moral compass. YMMV.
 
From what members have posted and the few news reports I have seen, it is not the creation of new tools for humanity* that is at issue, but the corporate/legislative management of it, which is harmful.

*once humanity pays their subscription fee.
 
Isn't this the usual concern tho?

ie Nothing new here...
Same shit.
Different tool/power/technology.
 
imo: AI is just doing stop-gap, hard-wired, tubed Class-A circuits, while we are in need of some efficient Class-D neural networks.:confused:
AI is doing a digital emulation of an analog architecture, and as long as it remains digital, will be massively less efficient. The bit width required for parameters chews up power and processors.

Also, regardless of how it is implemented, intelligence will always be probabilistic and not entirely reliable. That includes humans.
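The bit-width point can be made concrete with a rough sketch. Everything here is an illustrative assumption (the hypothetical 70B parameter count, the precisions compared), not a figure for any particular model:

```python
# Illustrative sketch: memory needed just to hold model weights at
# different bit widths. The 70B parameter count is a hypothetical
# example, not a reference to any specific model.

def weight_gigabytes(n_params: float, bits_per_param: int) -> float:
    """Gigabytes needed to store n_params at the given precision."""
    return n_params * bits_per_param / 8 / 1e9

n_params = 70e9  # assumed model size

for bits in (32, 16, 8, 4):
    print(f"{bits:>2}-bit weights: {weight_gigabytes(n_params, bits):6.1f} GB")
```

Every pass over those weights costs energy, which is why quantizing to fewer bits (where accuracy allows) cuts both memory traffic and power.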
I agree. Seems to me it's currently kinda reminiscent of a human with a very, very good memory.
Mostly well spoken, and extremely knowledgeable.

But occasionally you get the distinct impression that at some point there's been a serious head injury, drug habit or other undisclosed trauma.

And you worry about the possible lack of a moral compass. YMMV.
I like the brain injury bit. I think AIs are autistic savants. They know everything they’ve read, but can’t really reflect on it.
 
Yes. I also observe occasional indicators of classic psychopathy.
Military applications/weaponisation, anyone? Lol
 
There is no generic AI. Specialized tools for different, focused purposes.

And when it makes mistakes -which is not uncommon- it is because the tool was trained poorly... by humans.
:)
 
The way I see it, the main issue is that we trained it. Now we're using it widely, without fully understanding its limitations and implications... or even its obvious flaws.
Again, my 2c. YMMV.
 
The way I see it, the main issue is that we trained it. Now we're using it widely, without fully understanding its limitations and implications... or even its obvious flaws.
Again, my 2c. YMMV.

Training is never over, though.

And there are use cases where anything less than 99.999% success is a liability. In many things, a 95% success rate will keep you close to 5 stars on Amazon. But that doesn't cut it for healthcare, banking, the military, etc.
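To put those success rates in perspective, here's a minimal sketch; the one-million request volume is a hypothetical number, chosen only to make the difference between "good" and "five nines" visible:

```python
# Illustrative arithmetic: what a success rate means at scale. The
# request volume is an assumed figure for illustration only.

def expected_failures(requests: int, success_rate: float) -> float:
    """Expected number of failed requests at a given success rate."""
    return requests * (1 - success_rate)

requests = 1_000_000  # assumed request volume

for rate in (0.95, 0.999, 0.99999):
    print(f"{rate * 100:.3f}% success -> ~{expected_failures(requests, rate):,.0f} failures")
```

A 95% system produces tens of thousands of failures at that volume; five nines produces about ten.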

To me the problem is we discuss AI as if it's a homogeneous thing, which it is not at all. Recognizing a hamster in a picture is not even the same as recognizing a squirrel, let alone detecting cancer, controlling a drone, detecting a trading pattern or establishing the root cause of equipment issues.

AI doesn't plot to replace humans - it has no ambitions, nor is it going to develop them anytime soon, probably ever (it'll just fake it based on our programming). Humans consistently invent new tools that benefit their enterprising spirit. AI is not truly more "intelligent" than voice switches were when human switchboard operators were replaced. It's a history that's repeated itself a million times.

That said, the impact and ethics of AI do need attention. It's basically big data on steroids, which is powerful but also amplifies the issues of data privacy and intellectual-property violations a thousandfold.
 
Agree.

It is just another tool with clear and great potential.
For good and bad.

Which is still in development/testing.

Apologies. Too much afternoon wine. I'm clearly rambling.
(And out. Ahem.)
 
And when it makes mistakes -which is not uncommon- it is because the tool was trained poorly... by humans.
Or because humans have picked an inappropriate tool for the job, although given the inflated claims from the companies with AI products it's hard to blame people for not knowing their limitations.
 
Or because humans have picked an inappropriate tool for the job, although given the inflated claims from the companies with AI products it's hard to blame people for not knowing their limitations.
This is an Advocatus Diaboli claim ... some recent incidents might prove this right, no?
 
This is an Advocatus Diaboli claim ... some recent incidents might prove this right, no?

We seem to be discussing vastly different topics under the banner of AI technology. There's the fact that AI can and will replace (or "augment," in positive speak) some jobs. There's the suspicion we may end up in a Terminator sequel. There's the fact that ChatGPT 5 was launched just as people were keen to make it cough up hallucinations and find its weaknesses. And of course there is the fact that we're living in an AI tech bubble, and all these companies need to make major $$$ for their investors, with the resulting PR noise and inflated claims (which feed inflated expectations).
 
We seem to be discussing vastly different topics under the banner of AI technology. There's the fact that AI can and will replace (or "augment," in positive speak) some jobs. There's the suspicion we may end up in a Terminator sequel. There's the fact that ChatGPT 5 was launched just as people were keen to make it cough up hallucinations and find its weaknesses. And of course there is the fact that we're living in an AI tech bubble, and all these companies need to make major $$$ for their investors, with the resulting PR noise and inflated claims (which feed inflated expectations).
You caught the thread of thinking: it's a bubble while being thrown at the public (without need), but it could be very useful if trained and then used in (technical) niches.
(Medical research into which available drugs might help beyond their basic indications is one example.)
 
An investigation has been launched into Meta after a leaked document reportedly showed the tech giant's artificial intelligence (AI) was permitted to have "sensual" and "romantic" chats with children.

Meta denies everything, despite Reuters having a copy of the leaked document.

https://www.bbc.co.uk/news/articles/c3dpmlvx1k2o

What a surprise, I am shocked….shocked I say. :rolleyes:

Remember the Cambridge Analytica scandal? Yeah... what a surprise to see these utter fu***ng c**ts are still at it.


If you ever see or meet Zuckerberg, give him a kick in the nuts from me, then knock his fu***ng teeth out
 
You caught the thread of thinking: it's a bubble while being thrown at the public (without need), but it could be very useful if trained and then used in (technical) niches.
(Medical research into which available drugs might help beyond their basic indications is one example.)

There are so many examples one can easily use to show that even a $500B tool like ChatGPT 5 can't beat a smart human playing a game like the NYT Strands. The models think in very specialized ways, and they can't grow outside those parameters.

In a nutshell, I think we need to supervise Apes more than AI when it comes to human doomsday discussions. :-)

PS:

[attached screenshot: ChatGPT 5's attempted Strands solution]


Took ChatGPT 5 over 4 minutes. And it got it wrong.

The solution is this... (in 2.5 minutes):

[attached screenshot: the correct Strands solution]
 
A factoid for all of us who are environmentally concerned:
A basic query via a generative AI platform like ChatGPT can use at least 10 times as much energy as a Google search! :oops:

When it comes to AI's energy demands and costs, all of us will be on the hook for them... in addition to the authorized costs of the topical green paint!
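A back-of-envelope sketch of that "at least 10x" claim; both per-query energy figures and the daily query volume are rough, assumed values for illustration, not measurements:

```python
# Back-of-envelope sketch of the "at least 10x a Google search" claim.
# All figures here are assumed values for illustration, not measurements.

search_wh = 0.3               # assumed Wh per conventional web search
ai_query_wh = 10 * search_wh  # the thread's "at least 10x" claim

queries_per_day = 100_000_000  # hypothetical daily AI query volume

extra_kwh_per_day = queries_per_day * (ai_query_wh - search_wh) / 1000
print(f"Extra energy vs. plain search: {extra_kwh_per_day:,.0f} kWh/day")
```

Even with these conservative assumptions, the overhead adds up to hundreds of megawatt-hours per day.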
 
A factoid for all of us who are environmentally concerned:
A basic query via a generative AI platform like ChatGPT can use at least 10 times as much energy as a Google search! :oops:

When it comes to AI's energy demands and costs, all of us will be on the hook for them... in addition to the authorized costs of the topical green paint!

Energy consumption is a genuine concern, but it is also a self-contained one.

Those making profits from AI queries (no matter how frivolous) naturally have a huge vested interest in improving sustainability. That keeps the cost-benefit ratio in some rational check. And it's not an entirely new trend: data-center power consumption has been a big topic for quite a few years, but now everything comes up under the "AI" banner. :-) Let's admit it, voice recognition on our smartphones has become slightly better, but not because it was magically renamed to fall under the AI hype banner. Or the photography app on your smartphone - it was pretty good a few years ago, even before it became an AI app. :)
 
... and in today's news; we have:

"Planned AI Centers Strain Energy Grid

…In some cases, the collective requests equal or surpass— by multiples—the existing electricity demand in a utility’s entire service region.

Take American Electric Power, a big utility that serves 11 states, and Sempra’s Texas utility Oncor. Combined, they have received requests to connect projects, many of them data centers, to the grid requiring almost 400 gigawatts of electricity...

That is an astronomical amount that represents more than half the peak electricity demand in the Lower 48 states on two hot days in July.

Part of the problem is the electricity needs of the same potential projects are being double, triple or quadruple counted by different utilities. Data-center developers and tech companies are peppering utilities around the country with requests for service while scouting locations where they can quickly construct massive data centers and connect to the grid...

“A lot of it is real, but how much?” asked Tom Falcone, president of the Large Public Power Council, a trade association for the nation’s largest not-for-profit electric utilities.

In Texas, Oncor had 552 requests from large customers such as data centers or industrial facilities in its interconnection queue by the end of June, up 30% from the end of March.

Its current system peak, or the power consumption at the moment customers require the most electricity, is 31 gigawatts...

The requests in the queue from data centers require about 186 gigawatts, while industrial firms have requested about 19 gigawatts...

But AEP has an additional 190 gigawatts of potential demand in line—roughly five times its current system size, and the equivalent power use of at least 48 million homes.

“We know not all of that is going to come online, but even a fraction of that is significant,” Trevor Mihalik, AEP’s chief financial officer, told analysts on the company’s recent earnings call...."
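The Oncor numbers quoted above can be sanity-checked with simple arithmetic; the figures are taken from the article excerpt, and the multiple is my own calculation:

```python
# Figures quoted in the article excerpt above (Oncor, end of June).
oncor_peak_gw = 31             # current system peak demand
data_center_requests_gw = 186  # data-center requests in the queue
industrial_requests_gw = 19    # industrial requests in the queue

total_requests_gw = data_center_requests_gw + industrial_requests_gw
multiple_of_peak = total_requests_gw / oncor_peak_gw

print(f"Queued requests: {total_requests_gw} GW "
      f"({multiple_of_peak:.1f}x Oncor's current peak)")
```

Even if only a fraction of the queued requests materialize, they dwarf the utility's current peak, which is the article's point about double- and triple-counted demand.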
 