
Master AI (Artificial Intelligence) Discussion/News Thread

I got Suno to write a song about ASR.

My brief was simple: "Make a blues song about a bunch of audio geeks on a website called ASR"

I can't upload an MP3 here so here's a link: https://dnsbits.net/downloads/Audio-Geeks-Blues.mp3

 
Social impact.

“By January 2025, the family says he began discussing methods of suicide with ChatGPT.

Adam also uploaded photographs of himself to ChatGPT showing signs of self harm, the lawsuit says. The programme "recognised a medical emergency but continued to engage anyway," it adds.”


There's a lot more detail in the Ars coverage, both of the deeply dodgy nature of ChatGPT's responses and of the reasons why the system-prompt-based guardrails on AI can fail in longer chats. They also had some more general coverage of why AI is 'moving fast and breaking people'.
https://arstechnica.com/tech-policy...uicide-after-safeguards-failed-openai-admits/
https://arstechnica.com/information...-it-is-helping-people-when-they-need-it-most/
https://arstechnica.com/information...-big-tech-is-moving-fast-and-breaking-people/
 
Those of you with bad grammar should find the below interesting:

Chatbots ignore their guardrails when your grammar sucks, researchers find:


“Logit-Gap Steering” (Palo Alto Networks blog post):


“Logit-Gap Steering: Efficient Short-Path Suffix Jailbreaks for Aligned Large Language Models” (PDF of the pre-publication paper on the above).
 
 
“Refer people to real-world resources.
If someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help. In the US, ChatGPT refers people to 988 (suicide and crisis hotline), in the UK to Samaritans, and elsewhere to findahelpline.com. This logic is built into model behavior.”




This did not happen in this instance and is likely PR nonsense.
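For what it's worth, the behaviour OpenAI describes amounts to a region-keyed referral lookup. A minimal sketch of that kind of routing (the hotline names come from the quoted post; the function and dictionary names are my own invention, not anything from OpenAI):

```python
# Sketch of region-based crisis-line referral, as described in the quoted
# OpenAI post. Resource names are from the quote; structure is hypothetical.
CRISIS_RESOURCES = {
    "US": "988 (Suicide and Crisis Lifeline)",
    "UK": "Samaritans",
}
DEFAULT_RESOURCE = "findahelpline.com"

def referral_for(region: str) -> str:
    """Return the crisis resource a user in the given region is pointed to."""
    return CRISIS_RESOURCES.get(region.upper(), DEFAULT_RESOURCE)
```

Of course, the lawsuit's point is that even if such logic exists, it evidently failed to hold up over a long conversation.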
 
What you're buying when you use Spotify:

"Founded in 2021, Helsing sells software that uses artificial intelligence technology to analyze large amounts of sensor and weapons system data from the battlefield to inform military decisions in real time. Last year, the startup also began manufacturing its own line of military drones, called HX-2."

https://www.cnbc.com/2025/06/17/spotifys-daniel-ek-leads-investment-in-defense-startup-helsing.html



https://www.audiosciencereview.com/...to-get-good-recommendations-on-spotify.63109/
 
Fascinating experiment to see if AI can distinguish AI from human:


The last response from Claude was very insightful.
 
Fascinating experiment to see if AI can distinguish AI from human:
The once-simple saying "Opinions are like rectums..." has become complex, and it came to mind at about the fifth minute of your YouTube link.

My biggest concerns are AI's pitfalls, such as weak privacy and security foundations.
I like Anthropic's statement about "AI research and products that put safety at the frontier".
"AI Sleeper Agents: How Anthropic Trains and Catches Them"
Testing for:
AI Deceptive 'Behavior'
AI Deceptive 'Thinking'
AI Model 'Poisoning'
AI Model 'Organisms'
AI 'Sleeper Agents'
AI 'Misalignment'
AI 'Backdoors'
AI Backdoor 'Triggers'
 
AP News story on people finding financial success creating music via A.I.


I was unaware of the A.I.-generated band The Velvet Sundown, but I see that I can stream their albums on Tidal Music.

 
Hacking is accelerating due to the use of AI code-writing tools. The AI will even suggest the most persuasive ransom psychology and ransom amounts.

Are there no safeguards built into these a.i. systems?

 
Are there no safeguards built into these a.i. systems?

Would you trust the creators of these A.I. systems with your house keys? F***ing sure I wouldn't.

Thiel/Altman etc. can all get in the f***ing sea.
 
Are there no safeguards built into these a.i. systems?
There are supposed to be, but people keep finding trivial ways around them. If challenged on this the AI companies will tell you those uses are against their terms of service, but what are those terms worth if there's no enforcement? It'll be interesting to see how long it takes for victims to take the AI companies to court over their contribution to fraud etc.
 
You, too, can make big bucks from viral GPT slop videos:

Interesting article, though as one of the creators says:

"But he still expects to be made obsolete in short order, saying he believes that “in a year from now, pretty much everything is going to be very easy for the average person” to make.

His bigger fear, beyond how he’ll pay his mortgage when that time comes, is a more existential one: about what all this limitless creation is doing to our brains."

Maybe the next big thing will be individually targeted AI-generated film adverts and political messaging in social media? Horror.
 
Something tells me we won't be waiting long for either.

... Like, they are probably already a thing.
Ho hum.
 
Investigation into Meta after a leaked document reportedly showed the tech giant's artificial intelligence (AI) was permitted to have "sensual" and "romantic" chats with children.

Meta deny everything despite Reuters having a copy of the leaked document.

https://www.bbc.co.uk/news/articles/c3dpmlvx1k2o
 
This is the Reuters article, likely leaked by a Meta employee uncomfortable with the content:

The excerpts from the Meta guidance document show that misinformation, bigotry, and sexploitation via image generation have all been considered and assigned tolerance levels. Meta know what they are doing with regard to tolerating social misuse.

 