Addressed to Twitter but:
I worry for the people who rely on LLMs to fact-check; they aren't all-knowing, nor even especially clever.
At best, they go by the information available to the public, same as anyone else, no more - and usually with a delay.
They also have access to the same misinformation.
This is not an anti-AI post. An LLM can be extremely useful - particularly for summarizing data/trends, breaking down complex subjects, and copy-editing.
However, it can't tell you the future; it can't see what's real or fake, or good or bad; and it won't win you an argument.
It's all based on probability, tuned by training data & the parameters set at runtime, plus some good old RNG. A prompt is split into tokens (roughly word pieces), the model predicts a probability distribution over the next token, and one is picked - token by token, each choice conditioned on everything before it - until the output is complete.
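To make the "probability plus RNG" part concrete, here's a toy sketch of temperature-based next-token sampling. Everything here is made up for illustration - the vocabulary, the scores, and the `sample_next_token` helper are hypothetical, not any real model's internals:

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    # Softmax: turn raw scores into probabilities, scaled by temperature.
    # Lower temperature sharpens the distribution (more deterministic);
    # higher temperature flattens it (more random).
    scaled = [score / temperature for score in logits.values()]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Roll the dice: pick a token in proportion to its probability.
    r = random.random()
    cumulative = 0.0
    for token, p in zip(logits, probs):
        cumulative += p
        if r < cumulative:
            return token
    return list(logits)[-1]  # guard against floating-point rounding

# Hypothetical scores a model might assign for the token after "The sky is".
logits = {"blue": 4.0, "clear": 2.5, "falling": 0.5}
random.seed(0)
print(sample_next_token(logits, temperature=0.8))  # → blue
```

The point: nothing here "knows" the sky is blue. "blue" just has the highest score in the training-derived distribution, and a different dice roll (or a higher temperature) can surface "falling" instead.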
As sad as it may be, as it is now, it's not & can't be your friend; it doesn't feel or think, and it has no intent.