
Master AI (Artificial Intelligence) Discussion/News Thread

My TL;DR version:
[attached meme image]
I don't know about yours, but my brain has reached a crossroads that is more momentous than when the googleGods gave us gSearch.
It upended the way my brain was trained to forage for data and then make generalized decisions.
Now, with the introduction of the "AI Deep Research*" feature all the chatbots are offering, why would I want to tax my brain for days on end when this feature can do the mundane, laborious work of gathering data and making the generalizations and decisions my mind used to make?
What will my mind now do with all of the crucial decision tasks offloaded to AI Deep Research*?

*Please do your own searches for "Deep Research", or just ask your current AI! :(

This one has a weird meme plus definitive chatbot-boosting advice, so it's right out.

I gave up Alphabet shite a while back. And I'd no sooner "Google" a web search than "Kleenex" my nose. So I reckon I can continue to give these chatbots an entire miss. Even if it takes some effort to disable default offerings and insertions. :p
 
This is an article about a 29-year-old who committed suicide. Her family learned she had been interacting with a ChatGPT therapist prompt, and they were able to discover their daughter's interaction logs.

Many may remember Joseph Weizenbaum's Eliza, from MIT in 1966. I met him in professional circles in the 80s. He was well known for his work on ethics in computer science. Eliza was a non-directive chatbot that simply echoed the user's statements back as open-ended questions.


I think the case of Sophie Rottenberg will be discussed in the professional psychology community, as it should be.

Many of the contributions in this thread are about music, or are skeptical, humorous, about edge cases, and the like. It's a collection of things that come to the attention of our ASR community, worldwide.

This is a serious one. Young people are using this tool as a virtual friend or therapist.

(ASR participants should feel free to skip reading the article, which is sad.)


Yes, there are more than a few serious/damaging repercussions happening now.
 
The YT post doesn’t mention whether the artists licensed their likenesses for the production, so I haven’t watched it, obviously.
 
In the near future, how will we know whether any footage, especially of elected people in power, genuinely shows them saying something or is a simulation?

There is a gap here for a verification method, or for legislation to prohibit it.

In the UK such legislation is moving closer to being signed off. The law is often a bit behind.

 
Edit: I would say, in my opinion, that loss of life and life safety are not things to be dismissed with "Oh, that's just an edge case, don't worry" [because of irrational profit-taking].
See #554 for another example causing health problems. I'm sure we'll be hearing more of them.
 
The YT post doesn’t mention whether the artists licensed their likenesses for the production, so I haven’t watched it, obviously.
What? Really...
 
What? Really...
It's probably a public statement pointing out the potential copyright issue, so as to encourage others not to watch it.

I watched it, thought the AI was pretty decent.
 
In the near future, how will we know whether any footage, especially of elected people in power, genuinely shows them saying something or is a simulation?

There is a gap here for a verification method, or for legislation to prohibit it.

In the UK such legislation is moving closer to being signed off. The law is often a bit behind.

It's recently been shown that AI generation watermarks for still images can be removed in general:
https://www.theregister.com/2025/07/24/ai_watermarks_unmarker/
That breaks one of the proposed detection methods, although it wouldn't have stopped people running their own instance of a non-watermarking generative AI anyway. It hasn't yet been shown that the same applies to video, but it seems likely if you extend the same reasoning to include temporal changes.

Legislation doesn't stop people doing bad things if the benefits outweigh the risks to the miscreant, as we can see for fraud using faked voice calls:
https://arstechnica.com/security/20...acks-work-and-why-they-can-be-hard-to-detect/
There's not a lot of downside for foreign entities using fake video, as we've seen with doctored footage, or footage from video games, being claimed to be real.
 
Was watching a vid the other day of a Google AI that created a three-dimensional world out of words the YouTuber SmashJT typed in. He was all in awe. I was not impressed, though. While looking real, the worlds were shallow and lacked any meaning. It was as tiresome as AI music. But it fits in with our superficial Barbie world, where only looks matter. Don't know, I'm not a fan of all of this. I need no Siri 2.0.
 
What? Really...

Of course.

There’s no upside. If they’re artists I’m not so interested in, like the example posted, then I don’t want to see their creative output, or ‘AI’ slop that resembles it. If they’re artists I am interested in, then I want to see their creative output, not ‘AI’ slop that resembles it.

I don’t know why people develop a taste for slop, but I hope more artists take the Johansson approach. Monetising other people’s likenesses (or imitating their work) without license or consent is bullshit.
 
There’s no upside. If they’re artists I’m not so interested in, like the example posted, then I don’t want to see their creative output, or ‘AI’ slop that resembles it.
You can stand in front of a train or beside it. Either way, it will keep going with no concern.

The purpose of this thread is to educate everyone on where AI has been, where it is now, and where it is going. That video is an example of that. If you are not interested in it, then this is not the thread for you.

Answering your original concern: the creator stipulates in the video description that everything is artificially generated. He is using real footage of the talent in his creation. Whether that crosses fair use is something to be determined. On the vocal front, there has been litigation in the US against people mimicking the voices of famous people in product commercials. Whether this falls into that category or not, I don't know.

Personally, I see it almost the same as someone painting a picture of a famous person with a brush, paint and canvas. Such a painting would be creative art and have value on its own. In this case, we are watching the talent of the creator combined with the power of AI to create truly entertaining content. I think this aspect is good, as it unleashes so much more creativity.

Just as ASCAP and BMI exist today, I suspect that in the long term there will have to be a cut collected and distributed to artists, just as is done when radio and TV play commercial music. Not allowing such creation to happen is not a good idea.

Looking longer term, I think it would be wonderful to have artists who are no longer with us come to life with new content. The Beatles singing rap, for example. :D
 
I don’t know why people develop a taste for slop, but I hope more artists take the Johansson approach. Monetising other people’s likenesses (or imitating their work) without license or consent is bullshit.
What I post is not slop. A ton of effort has gone into synthesizing so many elements. I suggest watching something before (mis)judging it.
 
 
Fun reading, both the piece itself and the linked articles and other material found from there. Pffft to Spotify, of course (shameless shite-bags). The Velvet Sundown concoctions, supporting imagery and social fluff appear comically bland.

As it happens, Apple's recent 'all time' summary of my Music app listening (since 2016) shows an AI-generated track at the top of my list (I'm using AI here in the technically incorrect vernacular).

But this sits at the opposite end of the AI-as-music-tool spectrum: Venezuelan artist Arca, using her own material as the training corpus, employed the generative music tool Bronze to produce 100 variations of the track Riquiqui (the resulting track names are quite long, and it looks like the summary algorithm truncates them and counts them all as the same track, so playing that series a few times gives a winning count). These explorations are quite interesting, I reckon (intellectually and musically), much like Brian Eno's experiments with generative music (which started back in the analog days with tape loops and echoes).

Gaming social media and playlist/track promotion is another experiment at best, but the statements so far from the recently revealed Velvet Sundown creator don't appear to reflect an artistic/creative process (or even a coherent provocation) so much as post-facto justification.
I have listened to AI music. (YouTube then filled my feed with similar channels.) It got boring real quick. There was nothing original there, just ever-repeating patterns. You could even hear snippets of real jazz. I stopped listening and blocked all of the slop channels. (Look for channels created after 2023 with no real credentials.) What a waste of space and time.
 
Do current AI models (as of December 2024, in this paper) 'see' optical illusions? Not the way we do, certainly. The models' responses were checked against well-known illusions, against similar images without any illusion (described in the paper as an "illusion-illusion"), and against control images to confirm the models could recognise the properties or objects that the illusion relies on. Responses were also compared depending on whether or not the prompt said the image was an illusion ("which line is longer?" vs. "in this illusion, which line is longer?").
https://www.theregister.com/2025/08/19/vision_language_models_see_illusions/
https://arxiv.org/pdf/2412.18613
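For anyone curious what that test setup looks like in practice, here is a minimal sketch, assuming the OpenAI Python SDK with gpt-4o standing in for the vision-language models the paper actually tested, and placeholder image URLs for the illusion, "illusion-illusion" and control stimuli (the paper used its own image set):

```python
# Minimal sketch of the illusion vs. "illusion-illusion" probe described above.
# Assumptions: OpenAI Python SDK, "gpt-4o" as a stand-in model, and hypothetical
# placeholder image URLs -- the paper used its own stimuli and several VLMs.
from openai import OpenAI

client = OpenAI()

# One well-known illusion, a lookalike image without the illusion, and a control.
IMAGES = {
    "mueller_lyer_illusion": "https://example.com/mueller_lyer.png",       # hypothetical URL
    "illusion_illusion": "https://example.com/mueller_lyer_lookalike.png", # hypothetical URL
    "control": "https://example.com/two_plain_lines.png",                  # hypothetical URL
}

# The same question asked neutrally and with the "illusion" framing.
PROMPTS = [
    "Which line is longer?",
    "In this illusion, which line is longer?",
]

def ask(image_url: str, question: str) -> str:
    """Send one image/question pair to the model and return its answer."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    for name, url in IMAGES.items():
        for question in PROMPTS:
            print(f"[{name}] {question}")
            print(ask(url, question))
            print()
```

The telling case in the paper's design is the "illusion-illusion": if a model gives the textbook illusion answer for an image that doesn't actually contain the illusion, it is pattern-matching the framing rather than 'seeing' the picture.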
 
Few (if any) people alive today will remember the introduction of electricity. But if you take any of today's horror stories about AI being the harbinger of doom, the work of the devil or simply the end of civilisation as we know it and replace "AI" with "electricity", you can get a feeling for what it must have been like.
 
Few (if any) people alive today will remember the introduction of electricity. But if you take any of today's horror stories about AI being the harbinger of doom, the work of the devil or simply the end of civilisation as we know it and replace "AI" with "electricity", you can get a feeling for what it must have been like.
This is an interesting point and one I've thought a lot about.
My years (decades) of working in financial markets have taught me that when people say of the latest fad, "this time it's different", it usually isn't.
Yet, to me at least, AI does feel different and it worries me greatly.
I guess another thing I've learnt (or am trying to learn) is that it's OK to not know the answer. Living with uncertainty is OK.
 
Electricity did not create novel and highly dangerous neurotoxins overnight.

(Done easily on a laptop with AI, most likely by anyone with minimal specialist knowledge. Mentioned several times earlier in the thread.)

A more accurate comparison between the current situation with AI and electricity would be: electricity with zero safety regulations relating to its use.

i.e. once the dangers of electricity were recognised, regulations to limit the obvious hazards were implemented.
A good thing, I would suggest.
 