JustAnandaDourEyedDude
Addicted to Fun and Learning
I can imagine many theses and dissertations being written on research into how users think of LLMs and AI agents. Much of this research would be funded by AI companies, of course. Does the proportion of users who think of an LLM as a gendered person, and who prefer a male or a female voice for its replies (I'm not talking sex robots here), depend on whether the user's native language is strongly or weakly gendered? How does it depend on the personality of the user? I haven't figured out why, when talking about an AI or chatbot, I keep referring to it as a "she".
Could this tendency be construed as some kinky kind of misogyny?
Will AI agents be provisioned with a default name that the user can override, or will users always refer to LLMs and agents as "you"? How many users currently name their multiplicity of agents "Agent Smith"? How will agents refer to other agents when speaking with the user? What proportion of users give a name to an AI agent? Naming seems more likely when using multiple agents for different functions, since a name would be easier to associate with a function than referring to agents by number. I would guess it would be much more likely for people to name a humanoid or animal-resembling robot than an AI agent.
Will agents be granted degrees by universities if they pass the requisite exams, for a fee natch, and will some users refer to such agents as "Dr." or "Professor" so-and-so? Will users employ honorifics such as "right honorable", "officer", "your highness" or "your excellency" when interacting with specialized AI agents, or just go with "hey you" or "mate"?
Will some people who are dating (other people) judge whether their partner is marriage material by how the partner treats their AI agents? Will some bosses become less harsh on employees because the bosses have AI agents they can abuse, assuming the employees have not yet been replaced by AI agents? Will some people snoop on their partner's interaction logs with AI agents or LLMs to learn the deepest secrets that the partner would never share with a human? Will solving the hallucination problem result in some users rage-switching to a more sycophantic LLM, because their current LLM repeatedly tells them that they are wrong and proceeds to prove why?
Last edited: