@iam_jfnklstrm No, an LLM is just statistics and combinatorics. That's why you need a training set, which is why they scrape (steal) lots of books and other text from people. At its core, the LLM is just an engine putting out the most likely next part of a sentence, given the text it already has.
You can think of it as a more complex, resource-hungry and powerful Markov chain.
There is no goal; it only produces whatever is most likely to come next, based on a huge pile of statistics built up from ingesting and analysing tons and tons of text.
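The "most likely next thing" idea can be shown with a toy word-level Markov chain. This is only a sketch of the analogy above (an LLM conditions on a long context with a neural net, not a bigram table); the tiny corpus here is made up for illustration.

```python
from collections import defaultdict

# Count which word follows which in a tiny corpus, then always emit
# the most frequent successor. Same basic idea as "predict the next
# token from statistics", just vastly simpler.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    followers = counts[word]
    return max(followers, key=followers.get)

print(most_likely_next("the"))  # "cat": it follows "the" twice, the others once
```

There is no plan or goal in that loop, just counting and picking the statistically favoured continuation, which is the point being made here.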
As far as I can surmise, the only other things that can really "guide" an LLM to not show certain text are: keeping material out of the training sets in the first place (this is part of what they use really cheap labour from Africa for, that and tagging data so they can build better statistics), or running other statistics over the output afterwards that basically replace the response with a "friendly message saying the result wasn't good".
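That second guardrail, filtering the output after the fact, can be sketched like this. The keyword check is a stand-in of my own; real systems use a trained classifier rather than a blocklist, and the refusal message is invented:

```python
# Post-hoc output filter: a separate check runs over the model's
# response and swaps it for a canned refusal if it trips.
BLOCKLIST = {"secret", "forbidden"}  # placeholder for a real classifier

def looks_unsafe(text):
    return any(word in text.lower() for word in BLOCKLIST)

def moderate(response):
    if looks_unsafe(response):
        return "Sorry, I can't help with that."
    return response

print(moderate("Here is the forbidden recipe"))  # replaced with the refusal
print(moderate("Here is a cake recipe"))         # passes through unchanged
```

Note the model itself is untouched; the filter only decides whether you get to see what it produced.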