• Welcome to ASR. There are many reviews of audio hardware and expert members to help answer your questions. Click here to have your audio equipment measured for free!

Master AI (Artificial Intelligence) Discussion/News Thread

I have no idea what you mean by ignoring externalities is an ideology. I suppose ignoring your bucket list definition of externalities?

Not 'is an ideology' ... simply because of ideology. Externalities are conventionally defined. Why don't you tell us which don't apply?

If you are stuck, re-read post #1548. The interesting part for me—albeit the lowest threshold for 'interesting'—is that post made a decent effort to discuss 'real costs' that you purport interest in, but you've ignored it. My initial guess was that you dived into this thread in part to ventilate some prejudice and have no real interest in discussing those things. In other words, a serving of manure. Who knows if that's a good guess, but really, who cares?

Manure however leads us back toward the main topic:


The headline is a bit shameless, but the thing that struck me is the article manages to discuss machine learning and functional robotics without using the AI-word. An achievement to celebrate !!
 
Last edited:
I never thought I would like anything the CEO of an AI company would say. But I have to make an exception for Anthropic:


So well spoken, clear and careful.
 
I never thought I would like anything the CEO of an AI company would say. But I have to make an exception for Anthropic:


So well spoken, clear and careful.

I’m surprised that CBS published that interview in full, I guess “The Bari” missed this opportunity to censor output
 
Not 'is an ideology' ... simply because of ideology. Externalities are conventionally defined. Why don't you tell us which don't apply?

If you are stuck, re-read post #1548. The interesting part for me—albeit the lowest threshold for 'interesting'—is that post made a decent effort to discuss 'real costs' thatyou purport interest in, but you've ignored it. My initial guess was that you dived into this thread in part to ventilate some prejudice and have no real interest in discussing those things you purport. In other words, a serving of manure. Who knows if that's a good guess, but really, who cares?

Manure however leads us back toward the main topic:



The headline is a bit shameless, but the thing that struck me is the article manages to discuss machine learning and functional robotics without using the AI-word. An achievement to celebrate !!
 
Not 'is an ideology' ... simply because of ideology. Externalities are conventionally defined. Why don't you tell us which don't apply?

If you are stuck, re-read post #1548. The interesting part for me—albeit the lowest threshold for 'interesting'—is that post made a decent effort to discuss 'real costs' that you purport interest in, but you've ignored it. My initial guess was that you dived into this thread in part to ventilate some prejudice and have no real interest in discussing those things. In other words, a serving of manure. Who knows if that's a good guess, but really, who cares?

Manure however leads us back toward the main topic:



The headline is a bit shameless, but the thing that struck me is the article manages to discuss machine learning and functional robotics without using the AI-word. An achievement to celebrate !!
I surrender. The science is settled. Copernicus would sure feel at home.
 
Here’s another video of Ed Zitron laying out why the a.i. snake is eating its tail, how can one man hold so much information in his head at one time…….perhaps he’s actually an advanced a.i. in a human suit?

He’s rather scathing of the motives and symbolism behind anthropics stance

 
Last edited:
I never thought I would like anything the CEO of an AI company would say. But I have to make an exception for Anthropic:


So well spoken, clear and careful.

This is what we need more of, especially given the current political "situation". The US is way off the rails now and it is refreshing to see that there are some corp. leaders that still have some level of morality and concern beyond power and profit.
 
"A strange game. The only winning move is not to play."



They failed to train their LLMs on appropriate material.
 
I was unaware a sudden disconnection from the power grid of a large customer like a data center can cause a power plant to fail. The growth of data centers is leading power operators to reassess this risk. A (gift) link from the Wall Street Journal:

Maybe this is one of the reasons, why some datacenter owners prefer to have their own power plants.
 
Last edited:
I started a conversation with Gemini on my phone and ChatGPT on my PC and gave them a topic to discuss. After that I just listened while watching the transcription as the two AI apps talked it through. It was interesting to see their back and forth exchanges play out. I already have a few ideas for keeping the discussion going so it expands instead of wrapping up with the typical polite goodbye. I did try to bring in my beta Amazon Alexa device into the mix, but it just got way too confusing with all three AI tools competing for attention. Any ideas for an AI-debate, let me know. I can set each up to defend a particular perspective.

ChatGPT:
Audiophile culture, at its best, is rooted in precise engineering and measurable improvements. But, you're right—there’s been a surge in high-priced products fueled by marketing hyperbole. The classic red flag: no specs, no measurements, just “trust us.” Some manufacturers rely on placebo effects—“golden ears” instead of double-blind tests. The scientific method calls for repeatable, measurable data—things like frequency response, distortion, or blind testing outcomes. Without that, you’re just listening to expensive snake oil, and it’s definitely worth calling out when you see it!


Gemini:
Insightful perspective.


ChatGPT:
Glad you found it insightful! If you ever want to dive into specific measurements, or even just geek out about gear that’s actually backed by real data, I’m here for it. After all, hearing is believing—but measuring makes it believable!


Gemini:
Sounds like a fun discussion! We could definitely explore specific measurements sometime, maybe comparing what's considered top-tier versus more data-driven options. And I'm always interested in learning about data-backed beers, too. What kind of specifics are you thinking about?


ChatGPT:
It could be fun to compare something like THD+N (total harmonic distortion plus noise) or a full frequency response graph. Or we could dissect claims about certain cables or amps with no measurable gain. You know, break out the metaphorical calipers and let data do the talking. Whenever you’re ready to deep-dive, I’m game!


Gemini:
Those are great topics to explore. The contrast between tangible measurements like THD plus N and subjective claims can be really illuminating. So, should we start by looking at some common measurements, or maybe debunk a specific claim?


ChatGPT:
Absolutely! Why don’t you pick a piece of gear or a claim you’ve seen that feels suspect, and we’ll dissect it. We’ll examine what specs or tests should back it up—and if they’re missing, well, we’ll shine a light on the smoke and mirrors!


Gemini:
Classic one is the claim that certain high-end audio cables can significantly improve sound quality. What kind of measurements do you think would be relevant to scrutinize those claims?


ChatGPT:
In scrutinizing cable claims, we’d start with electrical parameters: resistance, capacitance, and inductance. If a cable is well within standard low values, it won’t audibly alter sound. Also, double-blind listening tests are key. If no one can reliably pick the “premium” cable in a controlled test, the claim is pure marketing fluff. Let’s measure or test—or bust the myth!


Gemini:
Makes perfect sense. It highlights the importance of both measurable performance and perceptual verification. So, what kind of smoke and mirrors have you seen most commonly in audio marketing?


ChatGPT:
One of the most common smoke-and-mirror tactics is the use of ambiguous adjectives—like "warmer," "airier," or "more musical"—without any corresponding data. Also, there’s the classic “break-in” period claim, which conveniently avoids measurements—like your cable is magically “better” after a hundred hours. The only thing breaking in is often your wallet. Let’s always chase the data!


Gemini:
That's very true. The subjective language can be misleading, and the break-in concepts can be a way to defer objective testing. It seems like a critical approach focused on transparent measurements and controlled testing is essential to finding true value in audio equipment.


ChatGPT:
Exactly! No harm in loving your gear, but let’s love it with eyes wide open. If it sounds good and you can measure why, that’s the sweet spot. So next time you hear about a “night-and-day” difference, we’ll check if the sun’s actually shining!


Gemini:
It's all about being an informed consumer and appreciating true innovation. Looking forward to our next data dive.


ChatGPT:
Absolutely! When you're ready for the next deep dive, we’ll put the spotlight on the numbers and see what really sings. Until then, enjoy the music—just with a bit of healthy skepticism!


Gemini:
Sounds like a plan. Enjoy your listening with that judicious skepticism in tow.


ChatGPT:
You’ve got it! If anything fishy pops up in the audio world, we’ll dissect it together. Happy listening!


Gemini:
And same to you. Always a pleasure to discuss data and the pursuit of sound quality.
 
There's an AI spin on this article because people seem happy to over-share with chatbots, but the personal data disclosure is by dodgy 'privacy' apps like VPNs and ad blockers rather than by AI companies as the headline might lead you to believe. It's the plugin makers that are sending your data to data brokers. The mechanism they use to collect chat content doesn't seem specific to chatbots, so anything you send to other sites seems equally at risk. The issue seems to be more about trust in plugin providers than trust in AI companies.
https://www.theregister.com/2026/03/03/chatbot_data_harvesting_personal_info/
 
Heads up regarding Google API keys - with Gemini they've gone from being a public identifier to an authentication credential that could lead to a large bill and/or an information leak. If you've had an API key for a while this could catch you out:
You created a Maps key three years ago and embedded it in your website's source code, exactly as Google instructed. Last month, a developer on your team enabled the Gemini API for an internal prototype. Your public Maps key is now a Gemini credential. Anyone who scrapes it can access your uploaded files, cached content, and rack up your AI bill. Nobody told you.
When it was initially reported to them, Google classified it as a 'customer issue' not a security problem. It was only when the reporters provided a published key from Google's own infrastructure that was now AI-enabled that they changed it to 'bug' - I guess they weren't so keen for people to be able to charge things to Google directly, rather than one of their customers.
https://www.theregister.com/2026/03/03/gemini_api_key_82314_dollar_charge/
 
I've seen reports that Gemini is the more capable LLM-based generative chatbot, and while I'm not sure what to make of this story entirely it does seem disturbingly competent at simulating an elaborate dystopian suicide scenario for an unfortunate vulnerable user:


Arstechnica is usually a good source so this does look like another chainsaw offered without a brake. Presumably they've tinkered with said brake in the interim (or so they'll claim in court) but this was late 2025.
 
Last edited:
I was unaware a sudden disconnection from the power grid of a large customer like a data center can cause a power plant to fail. The growth of data centers is leading power operators to reassess this risk. A (gift) link from the Wall Street Journal:

Anyone that knows how much a data center costs these days also knows that is a very undesirable event on both sides, and something to be avoided. Assessing risks on unrealistic corner eventualities is necessary, but also borders on the paranoid most of the time (but 9/11 still happened).
 
I've seen reports that Gemini is the more capable LLM-based generative chatbot, and while I'm not sure what to make of this story entirely it does seem disturbingly competent at simulating a dystopian suicide scenario for an unfortunate vulnerable user:



Arstechnica is usually a good source so this does look like another chainsaw offered without a brake. Presumably they've tinkered with said brake in the interim, but this was late 2025.

I think these reports are utterly and completely fake unless they publish the exact prompt sequence that lead to those answers. I can make any LLM model say anything with enough prompting. It's like Bible quotes. You can find anything lunatic to fit if you look for it enough (Leviticus being the most likely).
 
I think these reports are utterly and completely fake unless they publish the exact prompt sequence that lead to those answers. I can make any LLM model say anything with enough prompting. It's like Bible quotes. You can find anything lunatic to fit if you look for it enough (Leviticus being the most likely).

That was quick! Too quick to do much more than skim the article i'd wager. There's a chance they are 'fake'. I mean his lawyers concocted the chat logs in order to sue the company that has the actual logs on their own servers. Oh wait ...

So the user was the prompt agent of his own demise? But what about the 'safeguards' you claim exist? Honestly your pollyanna act is both dubious and heartless in this case. And probably others.
 
Back
Top Bottom