LLMs are mansplaining as a service, but more specifically that type of mansplainer who googles your question and replies authoritatively with the first result that comes up, despite having zero understanding himself.
@Tattie now there's a worse type of mansplainer: the type who puts your question into ChatGPT and replies authoritatively with the first answer he gets, without questioning it, despite having zero understanding himself.

@daniel_bohrer I work with a few of those people. They'll plop an AI answer into the chat (sometimes one that's only tangentially related to the actual question), and the number of times they'll come back a few minutes later with "ooops, it lied" is astounding.

@Tattie