@daringfireball LLMs are actually deterministic at their core. An overly simplistic way to think about it: the LLM predicts the next word of the output over and over again. With no randomness added, it always picks the most likely next word, so the same prompt produces the same output. Randomness is introduced by raising the "temperature", which gives the model a higher chance of selecting a less likely next word.
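To make that concrete, here's a toy sketch (not a real LLM, just made-up scores for three candidate words): at temperature 0 the pick is always the top-scoring word; at higher temperatures less likely words get a real chance.

```python
import math
import random

def pick_next_word(scores, temperature):
    """Pick an index from raw scores (logits).

    temperature == 0 -> greedy: always the highest-scoring word (deterministic).
    temperature > 0  -> scale scores, softmax into probabilities, then sample.
    """
    if temperature == 0:
        return max(range(len(scores)), key=lambda i: scores[i])
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(scores)), weights=probs)[0]

scores = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate words
print(pick_next_word(scores, 0))    # temperature 0: always index 0
print(pick_next_word(scores, 1.5))  # temperature 1.5: could be 0, 1, or 2
```

Running the temperature-0 line repeatedly always prints the same index; the sampled line varies run to run, which is the randomness being described.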