
OH, NO! AMIR IS UNDER ATTACK!!!

Status
Not open for further replies.
I dunno, I find AI can be quite creative in its approach when you're trying to solve problems. I agree with you that it's not always right, but I do find it quite creative.
It's not "creating". It describes itself as "generative". It uses its provided training data to answer prompts. If you ask it "why did my car stop?" it can't go beyond what it was programmed with in training. It will not come up with a single original response on its own on why the car stopped. And if for some reason "check gas tank isn't empty" wasn't in its original training, it will not go discover it as a possible cause on its own.
 
The issue isn't whether LLMs need human inputs vs AI inputs in order to "learn." That is an understandable question to pose, but in my opinion it's not the core issue, even though it quite reasonably seems to be.

The more important issue, IMHO, is that from what I understand, LLMs "learn" but they do not think. They are always reliant on, and their knowledge is always very shapable by, the "wisdom of the crowd."

Of course I understand that there is no lack of examples of us humans being mistaken, influenced by others, biased by social pressure and "what everyone else is saying." But my understanding of human cognition is that we are not as fully or simply directed or shaped by these factors as LLMs are.

In this regard I also think it's important to note that LLMs are a type of AI, but "LLM" is not synonymous with "AI." I'm sure there are other types of AI that might very well be capable of human-style thought. But to my knowledge that's not what LLMs do and it's not what they can do.

However, I'd have no problem believing that LLMs might be capable of communicating in ways that at least some humans would find indistinguishable from how a thinking human communicates. I don't think they're there yet. I think the main way they might "fool" us presently is by conning us: that is, sounding convincing in situations where we're not thinking to pay attention, or (as in advertising) where we don't care because the nature or context of the communication isn't important to us, or where we already hold real human communication in that context in low regard.

Part of my job is teaching writing at the college level, and while I am fortunate to be at an institution where student use of AI for essays is minimal, it is not zero. So far every time a student has used AI in a paper, my "spidey sense" has immediately been triggered. I have cross-checked my suspicions by running the student's work through 4 different AI checkers (using different back ends), and also by running a half-dozen other students' same assignments through the same checkers. So far, the student paper I've instinctively thought was AI has always been AI, and when I've broached the subject, the students have always admitted it without protest or denial. And the ones I had no suspicion of and was using as controls have always come up as human generated.

One thing I have found interesting - and which has given me further confidence in the results - was that while the 4 AI checkers have agreed on what's AI, the ones that tell you what percentage of the text is AI-generated have not always agreed - one might say 75% while the other might say 66% and the other might say 50%. So they clearly are detecting it using different algorithms, thresholds, or standards, and yet they're all spitting out the same conclusion.

Pride cometh before the fall so I'm sure the day will come when I'm fooled. But so far I have been more struck by how easy it is to detect it, not by how difficult it is.
 
Yeah, please, there's enough tiresome discourse about AI everywhere, don't need more right here.

As for Audiogon -- delusional feebs/creeps have an online audio forum? Alert the media.
 
It's not "creating". It describes itself as "generative". It uses its provided training data to answer prompts. If you ask it "why did my car stop?" it can't go beyond what it was programmed with in training. It will not come up with a single original response on its own on why the car stopped. And if for some reason "check gas tank isn't empty" wasn't in its original training, it will not go discover it as a possible cause on its own.
I've found it to be creative in its approach to viewing and solving problems. If a person had come up with those ideas, I would have described them as creative. But yeah, we don't have to discuss it more in this thread.
 
When I read something like this thread at Audiogon, I ask myself once again why there is so much controversy and resentment about the sense of hearing, of all things.
Really, I don't understand it.

Are you familiar with optics forums where people accuse each other of looking through the wrong glasses? Are there arguments about the right way to measure or represent the color blue?

Are there forums where people accuse each other of having measured the smell incorrectly or that the smell of cheese can only be measured at 20 degrees Celsius?

So why are people arguing about all these things in connection with hearing?
The goal is the construction of a grandiose self-image and social status, where high-fidelity music reproduction is not the goal itself but the conduit to validate your worthiness; this conduit can be manipulated at your discretion to achieve your goals, regardless of any factual truth. Any attempt to destabilize this deluded construction will trigger a backlash; after all, accepting the facts would prove you are just a fool, and that's not an easy pill for anyone to swallow.
 
I never visit anymore, but my recollection of Audiogon is that the forums were never particularly active, and a lot of the exchanges were super flat and basic. I also wasted time on AVSforums: more active, but equally stupid overall.
 
Yes, I wouldn't use ChatGPT for hi-fi research.
Case in point:
 
OH, NO! AMIR IS UNDER ATTACK!!!

This heading should be changed. It's not about a physical attack on our host, but about completely irrelevant opinions in a completely irrelevant forum.
 
Who ever got physically attacked in an online forum?
That's exactly why I wrote: It's not about a physical attack

Precisely to emphasize that the headline is exaggerated in every respect.
Capital letters
Content
Three exclamation marks

Like a message for the country's emergency services.

But it's just an exchange of opinions in an irrelevant forum
 
It looks like we no longer get to 'enjoy' the marvel of spouted nonsense, both in the OP and in the comments by some rude commenters (not Amirm, but 3 or 4 outspoken individuals who clearly lacked any technical knowledge).

[Screenshot attachment, 2025-09-28]
 
Any idea as to why it got removed? Was Amir too convincing?
 
Any idea as to why it got removed? Was Amir too convincing?

I would guess so - too much clearly and calmly stated, actual scientific truth being posted on their own forum - bound to make them nervous. :facepalm:
 