
Master AI (Artificial Intelligence) Discussion/News Thread

... and in today's news; we have:

"Planned AI Centers Strain Energy Grid
…In some cases, the collective requests equal or surpass— by multiples—the existing electricity demand in a utility’s entire service region.

Take American Electric Power, a big utility that serves 11 states, and Sempra’s Texas utility Oncor. Combined, they have received requests to connect projects, many of them data centers, to the grid requiring almost 400 gigawatts of electricity...

That is an astronomical amount that represents more than half the peak electricity demand in the Lower 48 states on two hot days in July.

Part of the problem is the electricity needs of the same potential projects are being double, triple or quadruple counted by different utilities. Data-center developers and tech companies are peppering utilities around the country with requests for service while scouting locations where they can quickly construct massive data centers and connect to the grid...

“A lot of it is real, but how much?” asked Tom Falcone, president of the Large Public Power Council, a trade association for the nation’s largest not-for-profit electric utilities.

In Texas, Oncor had 552 requests from large customers such as data centers or industrial facilities in its interconnection queue by the end of June, up 30% from the end of March.

Its current system peak, or the power consumption at the moment customers require the most electricity, is 31 gigawatts...

The requests in the queue from data centers require about 186 gigawatts, while industrial firms have requested about 19 gigawatts...

But AEP has an additional 190 gigawatts of potential demand in line—roughly five times its current system size, and the equivalent power use of at least 48 million homes.

“We know not all of that is going to come online, but even a fraction of that is significant,” Trevor Mihalik, AEP’s chief financial officer, told analysts on the company’s recent earnings call...."

That's why data centers have been migrating around the globe to be closer to energy sources... or cooling sources, for that matter. Every bit counts. The migration isn't fueled by AI alone, but AI has amplified it for sure.

Also, that's why SMR (small modular reactor) companies may be a good investment for a bit.
 

A conspiracy theorist may surmise that Small Modular Reactors are built close to A.I. data centres so that the operatives can engineer a meltdown to kill the A.I entity if it develops a consciousness
 
A conspiracy theorist may surmise that Small Modular Reactors are built close to A.I. data centres so that the operatives can engineer a meltdown to kill the A.I entity if it develops a consciousness
That statement deserves some sort of a smilie...;)
 
At the moment, no positive outcome of a.i. implementation has come into my life. Maybe google search summaries. Mostly they are accurate when I’ve searched something I know about.

I would not be able to tell if a google search ai summary was accurate if the subject was something I did not know a bit about.

If the ai companies are looking to make money from the general public using ai, it’s got to be more useful than generating images of Taylor Swift holding a fish across her bikini bust or generating other ai slop.
 
I was hoping a.i. would quickly be able to offer support for medical diagnosis.

As that seems a prime area where searching and collating knowledge widely would be effective.
 
I was hoping a.i. would quickly be able to offer support for medical diagnosis.

As that seems a prime area where searching and collating knowledge widely would be effective.

Medicine is one of those very interesting AI topics.

In a previous message (https://www.audiosciencereview.com/...nce-discussion-news-thread.62579/post-2390837) I shared an example where AI got things very wrong, even with all that GPU cluster power at its disposal. The issue is that AI is not good at solving new problems characterized by new patterns and what we could call a bit of semi-logic thrown in. But it's very good at solving repeatable, thoroughly investigated issues and combining existing stuff to solve a problem it was trained on for the millionth time.

We humans, on the other hand, get bored and negligent when we solve the exact same repeatable task for the umpteenth time. But we are very good at dealing with semi-logic, because we have something called intuition, we have experience, and our neural patterns work very differently from the strict rule-based parallelism that characterizes AI processing.

Medicine is a classic use case for a grey area that defies completely linear logic. I am no MD, but it probably is a combination of knowledge *and* intuition that leads to optimal outcomes with so many factors at play.

Humans are not as replaceable as the media want you to think. My personal conspiracy theory is that this AI fear-mongering is to make knowledge workers accept sacrifices in compensation to keep their jobs in an increasingly automated world.

I have experience with creating AI solutions at work - I have helped train AI models for troubleshooting very complex, large networks. And even though it's award-winning and effective, what it really does is ask you to cover the basics and gather information. Basically it performs somewhat above the level of one of those support people you'd call to report a problem, asking the obvious and wasting your time. As for the truly complex multi-factor software/firmware/hardware design bugs that rear their ugly heads on occasion... AI is clueless, because it is not possible to train AI on what you *don't* know yet.

I think in this forum we should be thinking more about what AI can or can't do for our audio hobby. Based on my Spotify experience it is pretty good at suggesting new music I should listen to. It may be exploring music preferences like BPM and genre... or it may be creating lists of what people like me listen to... or both. And it's not inconceivable that there's room for a device or service that you feed your current audio config into, play your favorite reference songs, and it detects whether or not you are extracting the ideal performance your system is capable of.
 
Medicine is a classic use case for a grey area that defies completely linear logic. I am no MD, but it probably is a combination of knowledge *and* intuition that leads to optimal outcomes with so many factors at play
This conclusion is correct and was examined in a study; the result was that the more information was gathered about the individual case, the more often the AI missed the real diagnosis.
 
The manager of the 'Oakland Ballers' of the independent baseball Pioneer League is going to rely on A.I. this Saturday, September 6, 2025 to decide when to change pitchers and use pinch-hitting batters:

 
Watch this on the back-end infrastructure of AI:


It is scary what is involved there. The data centers will approach the power consumption of an entire country in the not-too-distant future! I heard the other day that things like generators have a backlog of years due to the AI boom. So even if you decided today to get into AI, you simply can't. The big players are racing to gather up their supplies faster than each other...
 
Would anyone else be interested in some AI 101 discipline?
[I graduated but I still waddle in the shallow part of this new pool, with new concepts, techniques and terminology.]
AI Fluency: Framework & Foundations
Anthropic has partnered with academic experts Prof. Joseph Feller (University College Cork) and Prof. Rick Dakan (Ringling College) to launch an AI fluency course that teaches practical skills for effective, efficient, ethical, and safe AI interaction. This course has something for everyone, whether you're new to Claude or a seasoned AI practitioner.

Course preview:
01 - Introduction to AI Fluency
02 - The AI Fluency Framework 2A / 2B
03 - Deep Dive 1: What is Generative AI? 3A / 3B
04 - Delegation
05 - Applying Delegation [link=MIA]
06 - Description
07 - Deep Dive 2: Effective Prompting Techniques
08 - Discernment
09 - The Description-Discernment Loop [link=MIA]
10 - Diligence
11 - Conclusion
From <https://www.anthropic.com/learn/claude-for-you>
AI101C02.jpg

and
Google's 9 Hour AI Prompt Engineering Course In 20 Minutes
From < >
AI101g01.jpg


If you take just two things from these lessons, they should be:
"AI tools are not databases or vending machines.
They are interactive systems that can behave differently in different contexts much like humans…"
;)
 
Would anyone else be interested in some AI 101 discipline?
[I graduated but I still waddle in the shallow part of this new pool, with new concepts, techniques and terminology.]

View attachment 474369
and

View attachment 474368

If you take just two things from these lessons, they should be:

;)
Prompt engineering (basically learning what makes inquiries to LLM tools more effective) is a vital skill if you want to extract great results from interacting with AI tools. And you can do some fun stuff with it. As part of my prompt engineering class on Coursera, this was the sequence to format outputs and contain the tool to do exactly what you want:

Me:
".. imagine i am a british shorthair cat. i observe a fat bird on a tree branch. i want to catch that bird, but i cannot jump that high. i am considering a few approaches, please tell me the pros and cons of every approach. but please also feel free to suggest a better approach.
provide your answers in the following format:
***Approach:*** APPROACH
***Pros:*** PROS
***Cons:*** CONS
here are the approaches i am considering:
Approach 1: i ask my owner to built a catapult and launch me towards the bird
Approach 2: i ask my owner to build a model plane and teach me how to fly it. i'd also like a machine gun built into the plane i can easily operate.

ChatGPT:


Alright, Boli [ChatGPT internalized the name of my cat :-)], this bird-catching mission sounds thrilling! Let’s break down each approach (plus a couple of suggestions you might like).

Approach: Build a Catapult
Pros:
  • Excitement Level: High. Imagine soaring through the air towards your prey!
  • Direct Route: Straight line to the bird—no zigzagging needed!
  • Owner Bonding: Involves your human, enhancing the owner-cat relationship.
Cons:
  • Control: Once you’re launched, there’s no way to control your direction—might miss the branch or get stuck in the tree!
  • Safety Risk: Could end up in a bush, fence, or other undignified places if trajectory calculations are off.
  • Skill Required: Your human will need some engineering prowess, and it might take several tries to get it just right.

Approach: Flying a Model Plane with a Machine Gun
Pros:
  • Range: You’ll have a new level of freedom and reach all over the yard!
  • Intimidation Factor: A machine gun would make you a fearsome foe to any nearby prey (and could scare off rival birds)
  • Impressive: It would make you the most famous bird-hunting cat in the neighborhood.
Cons:
  • Learning Curve: Flying a model plane and managing a machine gun is a bit much, even for a clever cat like you.
  • Potential Damage: The machine gun might accidentally harm plants, furniture, or even friendly humans nearby.
  • Distraction Factor: Operating the plane may distract you from focusing on the bird itself.

Alternative Approach: Stealthy Tree Climb (Enhanced with Treats and Encouragement)
Pros:
  • Control: You stay in full control of your approach, inching closer to the bird without startling it.
  • Minimal Setup: No complex machinery needed, just your stealth and climbing skills.
  • Reward Potential: Your human might reward you with treats and extra petting for this more manageable (and adorable) hunt.
Cons:
  • Risk of Alerting the Bird: Any loud rustling could scare it off.
  • Height Limitation: You’ll need to stop climbing if you get too high for comfort.

Alternative Approach: Team-Up with Pablo for a Bird-Luring Strategy
Pros:
  • Resourceful: Pablo could help lure the bird lower with a bird feeder, bringing it closer to you.
  • Bonding Opportunity: Working with your human in this way is a more relaxing experience.
  • Less Risky: No heights, no machine guns, just strategic waiting and patience.
Cons:
  • Time-Consuming: Requires patience to let the bird come closer.
  • Bird Might Catch On: After a while, the bird may learn to avoid the feeder if it senses something is amiss.

In the end, the stealthy tree climb or a teamwork-based lure strategy might be the safest and most effective for you. Now go forth, Boli, and may the hunt be in your favor!
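The quote above demonstrates a general prompt-engineering pattern: pinning the model to a fixed output template and enumerating the options you want evaluated. Below is a minimal Python sketch of how one might assemble such a structured prompt programmatically before sending it to whatever LLM API you use; the function name and template wording are my own illustrations, not from any course or library.

```python
# Minimal sketch of the "fixed output format" prompt pattern.
# No LLM API is called here; this just builds the prompt string.

def build_structured_prompt(persona: str, goal: str, approaches: list[str]) -> str:
    """Assemble a prompt that pins the model to a fixed answer format."""
    header = (
        f"Imagine I am {persona}. {goal} "
        "For every approach below, give the pros and cons, and feel free "
        "to suggest a better approach.\n"
        "Provide your answers in the following format:\n"
        "***Approach:*** APPROACH\n"
        "***Pros:*** PROS\n"
        "***Cons:*** CONS\n"
        "Here are the approaches I am considering:\n"
    )
    # Number each candidate approach, mirroring the "Approach 1/2" style above.
    numbered = "\n".join(
        f"Approach {i}: {text}" for i, text in enumerate(approaches, start=1)
    )
    return header + numbered

prompt = build_structured_prompt(
    "a british shorthair cat",
    "I observe a fat bird on a branch I cannot jump to, and I want to catch it.",
    [
        "ask my owner to build a catapult and launch me towards the bird",
        "ask my owner to build a model plane and teach me how to fly it",
    ],
)
print(prompt)
```

The point of the template lines (`***Approach:***` etc.) is that the model tends to mirror whatever scaffolding you provide, which is exactly what happened in the ChatGPT response above.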

 
I asked Claude... about the new Claude EULA/privacy changes:
Based on the official Anthropic announcement and recent coverage, here are the key privacy changes to Claude's user agreement that you mentioned reviewing:

Major Privacy Policy Changes (Effective August 2025)

The Core Change: Anthropic is now giving users the choice to allow their data to be used to improve Claude and strengthen safeguards against harmful usage ("Updates to Consumer Terms and Privacy Policy", Anthropic). This represents a significant shift from their previous approach, where they did not use consumer chat data for model training ("Claude Users Must Opt Out Of Data Training Before Sept 28 Deadline", Dataconomy).
Key Privacy Concerns:

  1. Extended Data Retention: Data retention is being extended to five years if you allow your data to be used for model training ("Updates to Consumer Terms and Privacy Policy", Anthropic), compared to the previous 30-day retention period.
  2. Opt-Out Deadline: All Claude users must decide by September 28, 2025 whether they want their conversations used to train AI models (TechCrunch, MacRumors). If you don't actively opt out, your data will be used for training.
  3. Default Data Sharing: The new policy defaults to retaining user data for up to five years unless users opt out ("Claude AI Rewrites the Rulebook: Opt-Out or Miss Out on Data Privacy by 2025!", AI News).
What Data is Affected:
  • All conversations and coding sessions on Claude Free, Pro, and Max plans
  • Feedback you submit about Claude's responses
  • This applies to new and resumed chats after you make your selection
How to Opt Out: You can opt out by going to Claude's Settings → Privacy → and toggling off "Help improve Claude," or through the popup notification that should appear in your account.
What's NOT Affected: The changes don't apply to Claude for Work, Enterprise plans, API usage, or commercial services.

These changes do represent a significant shift toward data collection for training purposes, with much longer retention periods than before. The privacy concerns you noted seem well-founded, particularly around the default opt-in approach and the five-year data retention period.
I still think (imo) Claude is the best horse in the glue-factory!;)
Claude's Windows desktop app... just works!
 

Venture capital unconstrained !!

Twas ever thus ... but we could solve actual problems at vast scale with that investment. You know, actual existential problems like decarbonisation of the means of production and distribution, etc. Instead we get fresh tunnels in the rabbit warrens of human boredom. Refer to what @pablolie has been posting (no offence) but why not just enjoy watching a real f*cking cat?

edit: I should add that there's nothing wrong (per se) with trivia/fun with imaginary cats :)
 
News of significant gender gaps in both the creation and consumption of AI:
#1: Employment - The data shows a significant gender gap in AI engineering roles specifically, with women representing approximately 22-26% of AI professionals compared to men who make up 74-78%. This gap is even more pronounced at senior levels in AI, where women hold less than 14% of executive positions.
^^Claude Key Takeaway^^
and
#2: Usage - The authors of a newly released paper (a meta-analysis of 18 scholarly and practitioner studies covering nearly 143,000 people globally) consistently found that women were about 20% less likely than men to use generative AI. Of the 3,821 special-purpose AI tools the authors studied, women accounted for only 34.3% of visits relative to men.
[Data collection for the above ran between August 2022 and July 2025]
Thus far, -imo- they seem conditioned to be patronizing in their answers.:confused:

[Hope this is relevant to the topic.]
 
AI is a disaster representing the worst in us: racist, misogynist, fascist views spew out from it in a stream.
Please don't use it.
AI has nothing to teach you, it just wants to steal from your potential.
 