The issue isn't whether LLMs need human inputs versus AI inputs in order to "learn." That's an understandable question to pose, but in my opinion it's not the core issue, however reasonable it seems.
The more important issue, IMHO, is that from what I understand, LLMs "learn" but they do not think. They are always reliant on, and their knowledge is always highly shapeable by, the "wisdom of the crowd."
Of course I understand that there is no lack of examples of us humans being mistaken, influenced by others, and biased by social pressure and "what everyone else is saying." But my understanding of human cognition is that we are not as fully or as simply directed and shaped by these factors as LLMs are.
In this regard I also think it's important to note that LLMs are a type of AI, but "LLM" is not synonymous with "AI." I'm sure there are other types of AI that may well be capable of human-style thought. But to my knowledge that's not what LLMs do, and it's not what they can do.
However, I'd have no problem believing that LLMs might be capable of communicating in ways that at least some humans would experience as indistinguishable from how a thinking human communicates. I don't think they're there yet - I think the main way they might "fool" us at present is by conning us: that is, sounding convincing in situations where we aren't thinking to pay attention, or (as in advertising) where we don't care because the nature or context of the communication isn't important to us, or where we already hold real human communication in that context in low regard.
Part of my job is teaching writing at the college level, and while I am fortunate to be at an institution where student use of AI for essays is minimal, it is not zero. So far, every time a student has used AI in a paper, my "spidey sense" has been triggered immediately. I have cross-checked my suspicions by running the student's work through 4 different AI checkers (using different back ends), and also by running the same assignment from a half-dozen other students through the same checkers as controls. So far, every paper I've instinctively suspected was AI has turned out to be AI, and when I've broached the subject, the students have always admitted it without protest or denial. And the papers I had no suspicion of and was using as controls have always come back as human-generated.
One thing I have found interesting - and which has given me further confidence in the results - is that while the 4 AI checkers have agreed on what's AI, the ones that report what percentage of the text is AI-generated have not always agreed: one might say 75% while another says 66% and a third says 50%. So they are clearly detecting it using different algorithms, thresholds, or standards, and yet they all reach the same conclusion.
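For what it's worth, here is a minimal sketch in Python of the cross-checking logic I'm describing. The checker names, the CheckerResult fields, and the sample numbers are all hypothetical (the real tools are web services, and I'm not modeling their actual APIs); the point is just that a majority vote across several independent detectors can be unanimous on the binary verdict even when their percentage estimates scatter.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class CheckerResult:
    checker: str        # name of a hypothetical detection service
    flagged_ai: bool    # the checker's binary verdict
    ai_percent: float   # claimed share of AI-generated text, 0-100

def consensus(results: list[CheckerResult]) -> tuple[bool, float]:
    """Majority vote on the binary verdict; average the percentage estimates."""
    votes = sum(r.flagged_ai for r in results)
    verdict = votes > len(results) / 2
    avg_percent = mean(r.ai_percent for r in results)
    return verdict, avg_percent

# Hypothetical outputs for one suspect paper and one control paper.
suspect = [
    CheckerResult("checker_a", True, 75.0),
    CheckerResult("checker_b", True, 66.0),
    CheckerResult("checker_c", True, 50.0),
    CheckerResult("checker_d", True, 80.0),
]
control = [
    CheckerResult("checker_a", False, 2.0),
    CheckerResult("checker_b", False, 0.0),
    CheckerResult("checker_c", False, 5.0),
    CheckerResult("checker_d", False, 1.0),
]

print(consensus(suspect))   # (True, 67.75) - unanimous AI verdict, noisy percentages
print(consensus(control))   # (False, 2.0)  - unanimous human verdict
```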
Pride goeth before a fall, so I'm sure the day will come when I'm fooled. But so far I have been more struck by how easy it is to detect than by how difficult.