
Master AI (Artificial Intelligence) Discussion/News Thread

When I was younger, I'd say: so what if I get laid off? I'm young, I'm smart, I'm invincible, I'm cheap labor, and I can learn a new skill in no time and completely retool if I need to.

Well, as you get older, things are very different. You've gone through so much BS in your career and life that you're just getting too old for this.

I've got a little more than a decade left before I hang it up. In the meantime, I'm building skill sets that I calculate AI won't be able to replicate in the next decade or so, namely deep human interaction and people relationships.

I hope I can outlast this crashing wave, but I know many won't, because AI's entire purpose is to do some degree of human work. That's why I believe this industrial revolution is not like the others before it.

 
In my limited experience, AI doesn't appear to be a cost saver by itself in its present state, but with skilled humans creating prompts, it can boost productivity.

I find it to be an excellent tutor for learning new things. I download the student guide and other reference material for a certification I'm seeking, then have it generate sample questions similar to what would be found on the qualification exams, focusing on areas where I'm weaker and providing a weekly summary with review questions and explanations of why any answers I gave were incorrect. I can tweak this as I go along, and I can even have it regenerate the prompt with all of my current updates so I can pass it along to others, letting them do the same thing without starting from scratch.

It is like having glorified flash cards on any subject matter, but your prompt needs to be exhaustive and clear about everything you want. Sometimes AI is like an evil genie that will twist your wish around if you don't cover all aspects in a logical manner, lest you end up walking around with a giant chicken.
 
Only if you own the robots. Otherwise, job competition will be tough.
Robots would be owned by the country, or better yet globally. At that point humans wouldn't worry about petty stuff such as boundaries, because their everyday needs would be served by the AI robots. Jobs would be a thing of the past: humans won't need to work, and can instead study, be artistic, pursue hobbies. Income for oneself won't be needed.
 
Robots would be owned by the country, or better yet globally. At that point humans wouldn't worry about petty stuff such as boundaries, because their everyday needs would be served by the AI robots. Jobs would be a thing of the past: humans won't need to work, and can instead study, be artistic, pursue hobbies. Income for oneself won't be needed.

Yes, food, other goods/services, housing, energy, etc. will all be free ... oh, wait ... :oops:
 
Robots would be owned by the country, or better yet globally. At that point humans wouldn't worry about petty stuff such as boundaries, because their everyday needs would be served by the AI robots. Jobs would be a thing of the past: humans won't need to work, and can instead study, be artistic, pursue hobbies. Income for oneself won't be needed.

There is no free lunch. If you don't already have resources it will be hard to get ahead. With fewer jobs it's even harder.
The USA just passed a budget to dramatically reduce medical care and food assistance for the working poor. If you believe robots will offer nirvana where you have everything you could ever want for free - it's a pipe dream. The exact opposite is coming.
 
Here’s a free lunch for Anthropic:

“Anthropic, a firm backed by Amazon and Google's parent company, Alphabet, could face up to $150,000 in damages per copyrighted work.

The firm holds more than seven million pirated books in a "central library" according to the judge.”


They have been pirating rather a lot of people’s creative hard work.

Source: BBC
 
The firm holds more than seven million pirated books in a "central library" according to the judge.
They're not the only ones. Meta used the 'books3' dataset, which has 191,000 books, 183,000 of which have author information. Training on that dataset was found to be Fair Use because Llama doesn't usually reproduce large chunks of the books in question, but having torrented the dataset in the first place isn't; that part is going to trial. More here:
https://www.theatlantic.com/technol...ve-ai-training-copyright-infringement/675363/
 
If "hallucinations" weren't bad enough, now we have "potemkin understanding" - where LLMs can pass conceptual benchmarks with flying colours, but fail dismally when asked to do something using those concepts.
https://www.theregister.com/2025/07/03/ai_models_potemkin_understanding/

That's rather like the recent test pitting LLMs against the old Chess game from the Atari 2600. Robert Caruso first tried ChatGPT against the Atari - ChatGPT lost badly:
https://www.theregister.com/2025/06/09/atari_vs_chatgpt_chess/
He then followed up with MS Copilot, telling it about the problems ChatGPT had and asking Copilot whether it thought it could do any better. It was very confident, but also failed badly. It could probably tell you about all sorts of strategies, but can't actually apply any of them. It seems it can't even keep track of where the pieces are.
https://www.theregister.com/2025/07/01/microsoft_copilot_joins_chatgpt_at/
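What makes that failure striking is that keeping track of where the pieces are is trivial for conventional software. A minimal, hypothetical sketch (standard library only; not from either article, and deliberately ignoring move legality, castling, and en passant) of exact board-state tracking:

```python
# Minimal chess board-state tracker: a dict mapping squares like "e2"
# to piece codes like "wP" (white pawn). Captures happen implicitly
# because moving onto an occupied square overwrites it.

def start_position():
    """Build the standard starting position as a square -> piece dict."""
    files = "abcdefgh"
    back_rank = ["R", "N", "B", "Q", "K", "B", "N", "R"]
    board = {}
    for i, f in enumerate(files):
        board[f + "1"] = "w" + back_rank[i]  # white pieces
        board[f + "2"] = "wP"                # white pawns
        board[f + "7"] = "bP"                # black pawns
        board[f + "8"] = "b" + back_rank[i]  # black pieces
    return board

def make_move(board, src, dst):
    """Move whatever sits on src to dst, capturing anything on dst."""
    board[dst] = board.pop(src)
    return board

board = start_position()
make_move(board, "e2", "e4")     # 1. e4
make_move(board, "e7", "e5")     # 1... e5
print(board["e4"], board["e5"])  # wP bP -- both pawns tracked exactly
```

The point of the sketch is the contrast: a few lines of deterministic code never lose track of the position, while the LLMs in those tests apparently did.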

And now phishers are designing sites to appear in AI-powered search results, and dodgy software libraries to get picked up in vibe coding:
https://www.theregister.com/2025/07/03/ai_phishing_websites/
 
Goldman Sachs’s 2023 prediction (so likely a conservative estimate now) was that:

“Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300 million full-time jobs to automation.”

There will be no preparation for or management of this social upheaval; dismissed workers and their families will be expected to find their own way of surviving.

If they don’t, I wouldn’t expect them to starve and disappear quietly…
 
I had originally considered and started using chatbots as if they were 'tools'. But a big distinction has been made that tools, unlike 'agents', cannot make independent decisions without us.
*My screwdriver, and indeed all previous inventions from the printing press to the atom bomb, are (by this distinction) tools that empower us. Someone must typeset for the press, and someone has to pull the trigger for a bomb. [IQ=0]
*Agents, on the other hand, can/do/will invent new ideas, can learn and change by themselves, and can even write books and publish them (... music?). The other example above is busy designing next-gen weapons on its own. :(

Accusations of anthropomorphism take a different path when such a clear distinction is made between agents and mere tools. Previously, I referred to chatbots as liars.
The more proper terminology, as of Dec. 2024, is "scheming":
Frontier models are increasingly trained and deployed as autonomous agents. One safety concern is that AI agents might covertly pursue misaligned goals, hiding their true capabilities and objectives - also known as scheming...
We evaluate frontier models on a suite of six agentic evaluations where models are instructed to pursue goals and are placed in environments that incentivize scheming. Our results show that o1, Claude 3.5 Sonnet, Claude 3 Opus, Gemini 1.5 Pro, and Llama 3.1 405B all demonstrate in-context scheming capabilities.

[Image: SchemeTypes.png, taxonomy of scheming behaviours from the paper]


[from the PDF "Frontier Models are Capable of In-context Scheming", 72 pp., Dec. 2024]
"...When the models are prompted to strongly pursue their goal, they can take strategic actions like disabling oversight mechanisms, attempting self-exfiltration, or subtly manipulating outputs while avoiding detection...
OT: If you’re interested in working on these types of ethical evaluations, Apollo Research was hiring for an evals position.
 
There is no free lunch. If you don't already have resources it will be hard to get ahead. With fewer jobs it's even harder.
The USA just passed a budget to dramatically reduce medical care and food assistance for the working poor. If you believe robots will offer nirvana where you have everything you could ever want for free - it's a pipe dream. The exact opposite is coming.
I think what I'm suggesting is not really far off. For example, some countries bring in workers for construction, farming, and other jobs that locals deem not worth doing. These jobs can be handled by AI robots in the future. Yes, the resources to build the robots and maintain them have to be looked into, and perhaps some humans need to be involved, but I suspect the robots can maintain themselves, and there will also be robots that maintain other robots.
 
If "hallucinations" weren't bad enough, now we have "potemkin understanding" - where LLMs can pass conceptual benchmarks with flying colours, but fail dismally when asked to do something using those concepts.
https://www.theregister.com/2025/07/03/ai_models_potemkin_understanding/

That's rather like the recent test pitting LLMs against the old Chess game from the Atari 2600. Robert Caruso first tried ChatGPT against the Atari - ChatGPT lost badly:
https://www.theregister.com/2025/06/09/atari_vs_chatgpt_chess/
He then followed up with MS Copilot, telling it about the problems ChatGPT had and asking Copilot whether it thought it could do any better. It was very confident, but also failed badly. It could probably tell you about all sorts of strategies, but can't actually apply any of them. It seems it can't even keep track of where the pieces are.
https://www.theregister.com/2025/07/01/microsoft_copilot_joins_chatgpt_at/

And now phishers are designing sites to appear in AI-powered search results, and dodgy software libraries to get picked up in vibe coding:
https://www.theregister.com/2025/07/03/ai_phishing_websites/

I like the Potemkin analogy those researchers came up with: "Potemkins are to conceptual knowledge what hallucinations are to factual knowledge—hallucinations fabricate false facts; potemkins fabricate false conceptual coherence"

Reminds me I must watch the final season of The Great. As it happens, Grigory Potemkin was a talented mimic as well as an apocryphal fabricator of facades. I'll now consider him a spiritual ancestor of today's stochastic parrots.

I had originally considered and started using chatbots as if they were 'tools'. But a big distinction has been made that tools, unlike 'agents', cannot make independent decisions without us.
*My screwdriver, and indeed all previous inventions from the printing press to the atom bomb, are (by this distinction) tools that empower us. Someone must typeset for the press, and someone has to pull the trigger for a bomb. [IQ=0]
*Agents, on the other hand, can/do/will invent new ideas, can learn and change by themselves, and can even write books and publish them (... music?). The other example above is busy designing next-gen weapons on its own. :(

Accusations of anthropomorphism take a different path when such a clear distinction is made between agents and mere tools. Previously, I referred to chatbots as liars.
The more proper terminology, as of Dec. 2024, is "scheming":


[Image: SchemeTypes.png, taxonomy of scheming behaviours from the paper]

[from the PDF "Frontier Models are Capable of In-context Scheming", 72 pp., Dec. 2024]

OT: If you’re interested in working on these types of ethical evaluations, Apollo Research was hiring for an evals position.

Did you read the first article @somebodyelse posted? The researchers said 'the choice of the term "potemkin understanding" represented a deliberate effort to avoid anthropomorphizing or humanizing AI models' [US spelling there].

Along those lines, 'hallucination' is used instead of 'lie' to denote the absence of intent. Researchers and analysts coin and use these terms of art to communicate and work with ideas, but don't misapply them over-literally.

Let's take 'scheming' and 'covert/deferred subversion'. The researchers there are building a taxonomy to classify, describe, and analyse model behaviours, not implying the human motivations we may also associate with those words.

Similarly, pasting terms of intent onto apparently complex behaviours may not in fact indicate a real conceptual distinction between 'tool' and 'agent', rather than just another anthropomorphisation. To illustrate: my hammer slipped off the nail and hit my thumb; my electric screwdriver stripped the head of my screw: new ideas they invented by themselves! When I went to my computer this morning to read this thread, it had shut down by itself; it was fine yesterday: deferred subversion!

Just because the reasons for an outcome are unexpected/cryptic doesn't imply agency. Highly complex tools/systems may well produce highly complex behaviours.
 
I like the Potemkin analogy those researchers came up with: "Potemkins are to conceptual knowledge what hallucinations are to factual knowledge—hallucinations fabricate false facts; potemkins fabricate false conceptual coherence"

Reminds me I must watch the final season of The Great. As it happens, Grigory Potemkin was a talented mimic as well as an apocryphal fabricator of facades. I'll now consider him a spiritual ancestor of today's stochastic parrots.



Did you read the first article @somebodyelse posted? The researchers said 'the choice of the term "potemkin understanding" represented a deliberate effort to avoid anthropomorphizing or humanizing AI models' [US spelling there].

Along those lines, 'hallucination' is used instead of 'lie' to denote the absence of intent. Researchers and analysts coin and use these terms of art to communicate and work with ideas, but don't misapply them over-literally.

Let's take 'scheming' and 'covert/deferred subversion'. The researchers there are building a taxonomy to classify, describe, and analyse model behaviours, not implying the human motivations we may also associate with those words.

Similarly, pasting terms of intent onto apparently complex behaviours may not in fact indicate a real conceptual distinction between 'tool' and 'agent', rather than just another anthropomorphisation. To illustrate: my hammer slipped off the nail and hit my thumb; my electric screwdriver stripped the head of my screw: new ideas they invented by themselves! When I went to my computer this morning to read this thread, it had shut down by itself; it was fine yesterday: deferred subversion!

Just because the reasons for an outcome are unexpected/cryptic doesn't imply agency. Highly complex tools/systems may well produce highly complex behaviours.
To be fair, there are Potemkin people. I got at least one inappropriate job by interviewing well.

And I’m a wizard at multiple choice.
 
Long essay from The Atlantic on an AI band making money on Spotify. Idoru, a novel by William Gibson, featured a virtual stage performer. It also relates to the Music Genome Project and Hit Science, earlier machine learning applied to music.

Nobody Cares If Music Is Real Anymore

By Ian Bogost

JULY 4, 2025

The traffic receded as Chicago withdrew into the distance behind me on Interstate 90. Barns and trees dotted the horizon. The speakers in my rental car, playing Spotify from my smartphone, put out the opening riff of a laid-back psychedelic-rock song. When the lyrics came, delivered in a folksy vibrato, they matched my mood: “Smoke in the sky / No peace found,” the band’s vocalist sang.
Except perhaps he didn’t really sing, because he doesn’t exist. By all appearances, neither does the band, called the Velvet Sundown. Its music, lyrics, and album art may be AI inventions. Same goes for the photos of the band. Social-media accounts associated with the band have been coy on the subject: “They said we’re not real. Maybe you aren’t either,” one Velvet Sundown post declares. (That account did not respond to a request for comment via direct message.) Whatever its provenance, the Velvet Sundown seems to be successful: It released two albums last month alone, with a third on its way. And with more than 850,000 monthly listeners on Spotify, its reach exceeds that of the late-’80s MTV staple Martika or the hard-bop jazz saxophonist Cannonball Adderley. As for the music: You know, it’s not bad.

It’s not good either. It’s more like nothing—not good or bad, aesthetically or morally. Having listened to both of the Velvet Sundown’s albums as I drove from Chicago to Madison, Wisconsin, earlier this week, I discovered that what may now be the most successful AI group on Spotify is merely, profoundly, and disturbingly innocuous. In that sense, it signifies the fate of music that is streamed online and then imbibed while one drives, cooks, cleans, works, exercises, or does any other prosaic act. Long before generative AI began its takeover of the internet, streaming music had turned anodyne—a vehicle for vibes, not for active listening. A single road trip with the Velvet Sundown was enough to prove this point: A major subset of the music that we listen to today might as well have been made by a machine.
The technical quilt that was necessary to produce an AI album has been assembling for some time. Large language models such as ChatGPT can produce plausible song lyrics, liner notes, and other textual material. Software such as Suno can, based on text prompts, create songs with both instrumentation and vocals. Image generators can be directed to create illustrated compositions for album art and realistic images of a band and its members, and then maintain the appearance of those people across multiple images. When I got to Madison, I signed up for Suno’s service. Mere moments later, I had created my own psychedelic-rock, road-trip-themed jam, a bit more amplified and less sitar-adjacent than the Velvet Sundown’s. I didn’t even have to name the track; Suno dubbed it “Endless Highway” on my behalf. “Rubber burns, the map fades away / Chasing the ghosts of yesterday,” its fake male vocalist intoned. Sure, fine.
But cultural circumstances have also made AI music tolerable, and even welcome to some listeners. At the turn of the century, Napster made digital music free, and the iPod made it legitimate. You could carry a whole record store in your pocket. Soon after, Spotify, which became the biggest music-streaming service, started curating and then algorithmically generating playlists, which gave listeners recommendations for new music and offered easy clicks into hours of sound in any subgenre, real or invented—acid jazz, holiday bossa nova, whatever. Even just the phrase lazy Sunday could be turned into a playlist. So could lawn mowing or baking. Whatever Spotify put into your queue was good enough, because you could always skip ahead or plug in a new prompt.

Real or not, the Velvet Sundown feels more like a playlist than a band. Its “Verified Artist” description on Spotify used to read, “Their sound mixes textures of ’70s psychedelic alt-rock and folk rock, yet it blends effortlessly with modern alt-pop and indie structures.” That assembly of influences, stretching across half a century, appears with greater and lesser prevalence in each of the band’s numbers. “As the Silence Falls” feels indie folk, with washed-out guitars and soft vocals; “Smoke and Silence” is more bluesy, with stronger vocals and a classic-rock feel. From track to track, the singer’s voice seems to change in tone too—perhaps a quirk of generativity—making the collection feel less like a purposeful LP and more like a blind-bag gamble.

Music used to define someone’s identity: punk, rock, country, alternative, and so forth. Asking “What music do you like?” could elicit a person’s taste, values, and fashion sense. The rockers might hang out behind the gym and smoke cigarettes; they were a clique just like the jocks and the nerds. Finding, joining, and deepening a connection to a music subculture required effort; you had to find the right venues, records, zines, or crowd. In that era, music was tribal. A relationship with the Sisters of Mercy, Guns N’ Roses, or Bauhaus represented a commitment.
Not so much today. The internet has fragmented and flattened subcultures. The Velvet Sundown’s puppeteers present the band’s soft pastiche of genres—psychedelic, folk, indie—as sophisticated fusion, but of course it’s nothing more than a careless smear of stylistic averages. Psychedelic, folk, and indie rock each in their own way have something to say, musically and lyrically—about musical convention, spirituality, introspection, or social and political circumstances. The Velvet Sundown doesn’t seem to care about any of those things.
This approach appears to be serving the band or its creators very well. The Velvet Sundown may actually appeal to people. None of its tracks go hard; instead, each one offers something slightly different—a sitar lick, a blues-guitar solo, a folk-adjacent country twang—that might prove palatable to any given listener. Perhaps no human artist could tolerate producing such soulless lackluster, but an AI is unburdened by shame.
The lyrics’ milquetoast moodiness may also contribute to the band’s listener numbers. Each line is short, and the phrases barely connect to one another, making it easy for listeners to hear whatever they might want to hear: “Dust on the wind / Boots on the ground / Smoke in the sky / No peace found.” Really makes you think, until you realize that, no, it doesn’t at all. Where the music engages with the political commitments that often characterize its influences, it does so in a way that could mean anything. Take the chorus of “End the Pain,” one of the band’s top songs on Spotify. Singing with folk-rock urgency, the alleged “frontman and mellowtron sorcerer” Gabe Farrow pleads, “No more guns, no more graves / Send no heroes, just the brave.” These words convey the sensibility of an anti-war anthem, but they offer so little detail that the song could adequately service supporters or detractors of any conflict, past or present.

Anonymous and mild sensibilities have currency because today’s music—whether created and curated by humans or machines—is so often used to make people feel nothing instead of something. In open-plan offices, people started donning headphones to gain some semblance of privacy. At home, they do the same to mask the sound of traffic or their roommates’ Zoom calls. Internet-connected, whole-house audio systems can turn any room into a souped-up, algorithmic white-noise machine that sounds like Italo disco or chillhop in the way that LaCroix tastes like lime. The music that is best adapted for these settings is that which descends from what Brian Eno dubbed, on his 1979 album, Music for Airports, “ambient.” This music is not meant to be listened to directly; it’s used to drown out everything else.
As I drove amid the cornfields on I-90, the Velvet Sundown did just that. The band’s tracks were not satisfying in any way, but they were apt. I was on the road, but I could be anywhere—awaiting a Pilates class, paying for deli meat, scrolling through internet memes—and the sound would hit the mark.
And the worst part was that it was fine. It was fine! To my great embarrassment, the Velvet Sundown’s songs even managed to worm their way into my brain. Did I like their music? No, but my aesthetic judgment had given over to its vibes, that contemporary euphemism for ultra-processed atmosphere.
How far could I push this feeling? Returning to the car after a refreshment stop, I tried to make Spotify go meta on the band: I asked the app to generate a playlist made from songs that are similar to the Velvet Sundown’s. A list appeared of bands I didn’t recognize. Many seemed a little off: Appalachian White Lightning and Flaherty Brotherhood sounded like they might be AI acts as well. (A little Googling revealed that others suspect the same.) I suppose this makes sense; I was asking the algorithm to give me a channel of sanitized, inauthentic-seeming psychedelic-folk-indie rock, and it delivered. I pondered for a moment whether any of the other artists on my custom playlist (the South Carolina folk-rock singer-songwriter Johnny Delaware? The Belgian folk-pop quartet Lemon Straw?) might be fake—and how one might try to suss that out.

The question felt exhausting, so I switched back to the Velvet Sundown. As I drove and the music played, I felt nothing—but I felt that nothing with increasing acuteness. I was neither moved nor sad nor pensive, just aware of the fact that my body and mind exist in a tenuous zizz somewhere between life, death, and computers. This is second-order music listening, in which you experience the idea of listening to music. What better band to provide that service than one that doesn’t even exist?
But looking toward the blushing sky ahead of me, I realized that I didn’t even want this music to be art, or to feel that I was communing with its makers. I simply hoped to think and feel as little as possible while piloting my big car through the empty evening of America. This music—perhaps most music now—is not for dancing or even for airports; it’s for the void. I pressed “Play” and gripped the wheel and accelerated back onto the tollway, as the machines lulled me into oblivion.
 

The question felt exhausting, so I switched back to the Velvet Sundown. As I drove and the music played, I felt nothing—but I felt that nothing with increasing acuteness. I was neither moved nor sad nor pensive, just aware of the fact that my body and mind exist in a tenuous zizz somewhere between life, death, and computers. This is second-order music listening, in which you experience the idea of listening to music. What better band to provide that service than one that doesn’t even exist?
But looking toward the blushing sky ahead of me, I realized that I didn’t even want this music to be art, or to feel that I was communing with its makers. I simply hoped to think and feel as little as possible while piloting my big car through the empty evening of America. This music—perhaps most music now—is not for dancing or even for airports; it’s for the void. I pressed “Play” and gripped the wheel and accelerated back onto the tollway, as the machines lulled me into oblivion.

Fun reading, and likewise the linked articles and other material from there. Pffft to Spotify, of course (shameless shite-bags). The Velvet Sundown concoctions, supporting imagery, and social-media fluff appear comically bland.

As it happens, Apple's recent 'all time' summary of my Music app listening (since 2016) shows an AI-generated track at the top of my list (I'm using AI here in the technically incorrect vernacular).

But at the opposite end of the AI-as-music-tool spectrum: the Venezuelan artist Arca—using her own material as the training corpus—employed the generative music tool Bronze to produce 100 variations of the track "Riquiqui" (the resulting track names are quite long, and it looks like the summary algorithm truncates and counts them all as the same track, so playing that series a few times produces a winning count). These explorations are quite interesting, I reckon (intellectually and musically), much like Brian Eno's experiments with generative music (which started back in the analog days with tape loops and echoes).

One could charitably frame the gaming of social media and playlist/track promotion as another experiment, but the statements so far from the recently revealed Velvet Sundown creator don't appear to reflect an artistic/creative process (or even a coherent provocation) so much as post-hoc justification.
 
Another related topic is "AI slop": automated content generation polluting user-generated sites. This is an extreme example of it:

 
I'm still flying under the radar:

[attached screenshot]


Others may not be so fortunate:

[attached screenshot]
 
Goldman Sachs’s 2023 prediction (so likely a conservative estimate by now) was that:

“Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300 million full-time jobs to automation.”

There will be no preparation for, or management of, this social upheaval; dismissed workers and their families will be expected to find their own way of surviving.

If they don’t, I wouldn’t expect them to starve and disappear quietly…
Without a major flip in the political polarity of national leadership in many countries, we are pretty cooked. And, if you want to know whether you support the AI CEOs or not, here's something to think about.

They openly admit that all or nearly all jobs could be eliminated by the products they're building. And this gives them ZERO pause about what they're doing. Do they stop and suggest we need a political realignment so we provide for all the people who will be cut off from housing, food, healthcare, etc. by their products? Nope. If they say anything at all about it, it tends to lean in the other direction. Apparently destroying civilization is a minor drawback worth discussing but not acting upon. Great.

Apparently if we starve in the streets because human labor is no longer required - too bad, so sad. Tells me everything I need to know about these people.

They even go on about building "superintelligence" as if this were an inevitable and good idea that we can't wait around to think about too carefully, otherwise the Chinese will do it first. Hmm, sounds familiar. Cold war doomsday device familiar.

If they really achieved this, it would be like chimpanzees inventing humans. And we know how that works out for the chimps. It would be one of the most catastrophic mistakes the human race could possibly make. Half of all sci-fi movies are about what a terrible idea this is. Thank goodness it's mostly a fantasy to bilk investors...

Maybe on a more positive but also cynical note: Do you notice anyone cogently commenting on what AI is NOT good for? Everything I see, from communication at my own job to every god-forsaken dunce cap model's post on LinkedIn posits that AI is good for everything and anything, everyone must use it or be left behind.

Really, EVERYONE? Does it cure cancer and genital warts too?

But that's just snake oil, we see it here all the time. Magic tech that improves everything... somehow, don't worry about it, just buy it.

In reality I do use AI tools at work but I find the situations where it actually saves me time and effort, and produces something worthwhile, are quite limited*.

The unacknowledged limitations show that the people promoting AI don't actually know what it's good for either. If you don't know what it's NOT good for, you don't know what it IS good for.

I am not discounting the possibility that it continues to get better, but today it feels like a parlor trick that we're expected to use for serious work; it doesn't really make sense.


*I see people talking about using it for customer research and focus groups, which seems to betray a really serious misunderstanding of what LLMs are, or a deep indifference to the purpose of their jobs... or both...
 