@knowmadd I tried to reproduce the result with Gemini and ChatGPT. Either the AI has learned something new, or there is another reason for this. Neither fell for the trick question, and in some cases they even responded with irony.
@roblen @weizenspreu @knowmadd Don't waste your time fact-checking a joke. With the right system prompt you can get any LLM to say wild things. The point of the joke is that you shouldn't trust their output, and it's been well made, imho.
@iwein @roblen @knowmadd But it's still a nice learning opportunity. I often see people saying that their LLM answered differently, applying the deterministic assumption that the responses will be the same each time.
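To illustrate why that deterministic assumption fails: LLMs sample the next token from a probability distribution, and at temperature > 0 two runs can pick different tokens, while greedy decoding (temperature 0) is repeatable. A minimal sketch with made-up next-token scores, not any real LLM API:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from logits; temperature 0 means greedy (argmax)."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature scaling turns scores into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one token according to those probabilities.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.9, 0.5]  # hypothetical scores for three candidate tokens
greedy = [sample_token(logits, 0, random.Random(s)) for s in range(10)]
sampled = [sample_token(logits, 1.0, random.Random(s)) for s in range(10)]
print(greedy)   # same token every run
print(sampled)  # a mix of tokens across runs
```

With sampling enabled, asking the "same" question ten times gives a mix of answers; only greedy decoding is repeatable, and most chat frontends don't use it.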