@ngaylinn @DrewKadel
Actually we as a species have had to deal with that before.
We call them grifters.
@bornach @DrewKadel I dunno. A grifter still wants something from you. They're hiding their intent, but it's something you can imagine, understand, and detect if you're paying attention.
LLMs are unreadable and unpredictable because they have no intent. They may switch between friend and grifter depending on what sounds right in the flow of the conversation, without any conception of what's good or bad for you or for them.
On the other hand, if a grifter asks an LLM to write them a script to achieve something specific, that's another thing entirely...
@ngaylinn
LLMs are never your friend
They are always in what can best be described as "grifter" mode. The entire training regime of a generative AI chatbot is geared towards getting one thing: an upvote from a human rating the quality of the conversation.
Admittedly this is an over-simplification. Reinforcement Learning from Human Feedback involves training a reward model, a 2nd neural network that is ultimately responsible for rewarding the chatbot for giving "good" responses.
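For anyone curious what that looks like, here's a minimal sketch of reward-model training in PyTorch. The layer sizes and the random tensors standing in for response embeddings are made up for illustration (a real reward model is a full language model with a scalar head), but the core idea is genuine: the model only ever sees pairs of responses where a human preferred one over the other, and learns to score the preferred one higher via a Bradley-Terry style pairwise loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: a small MLP over pre-computed response embeddings
# stands in for the full language-model-with-scalar-head used in practice.
class RewardModel(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):
        # One scalar "how good does this response look" score per input.
        return self.net(x).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Each pair encodes one human judgment (the "upvote"): the rater preferred
# `chosen` over `rejected`. Random tensors here are illustrative stand-ins
# for real response embeddings.
chosen = torch.randn(32, 128)
rejected = torch.randn(32, 128)

# Pairwise loss: push the score of the preferred response above the
# score of the rejected one.
opt.zero_grad()
loss = -F.logsigmoid(model(chosen) - model(rejected)).mean()
loss.backward()
opt.step()
```

The chatbot is then fine-tuned to maximize that learned score, which is exactly why "sounds good to a human rater" is what it ends up optimizing for, rather than anything like your actual interests.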