@siracusa hello! On the last ATP you suggested LLMs were deterministic, but in practical terms that's unfortunately not true. And def not true of GPT-4 (which is where all my professional experience lies). Even setting a temperature of 0 and using a seed value doesn't lead to identical output from the same input. Folks think it might be things like variability in floating-point operations, or because GPT-4 uses MoE (Mixture of Experts).
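To show what I mean by the floating-point hypothesis, here's a toy example of my own (nothing to do with GPT-4's actual internals): IEEE-754 addition isn't associative, so if a GPU kernel sums the same logit contributions in a different order between runs, the result can differ slightly, and at temperature 0 a tiny logit shift can flip which token wins.

```python
# Floating-point addition is not associative: summing the same numbers
# in a different order can give a different result on IEEE-754 doubles.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one summation order
right = a + (b + c)  # the other order

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

Same three numbers, different grouping, different answer. Scale that up to millions of parallel reductions per token and you can see how run-to-run output drift sneaks in even with a fixed seed.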