LLMs are mansplaining as a service, but more specifically that type of mansplainer who googles your question and replies authoritatively with the first result that comes up, despite having zero understanding himself.
@daniel_bohrer I work with a few of those people. They'll plop an AI answer into the chat (sometimes one that's only tangentially related to the actual question), and the number of times they come back a few minutes later with "oops, it lied" is astounding.