At this point it will surprise no one, but I asked #ChatGPT to define bullshit and to cite its sources.

It provided definitions from the Cambridge English Dictionary and the Merriam-Webster Dictionary.

The definitions it provided were entirely reasonable, but they were decidedly not from the sources it claimed.

This highlights the fact that ChatGPT and other LLMs are not knowledge models; they are engines trained to produce convincing bullshit.

Below: ChatGPT, CED, MW.

@ct_bergstrom

How about putting #ChatGPT on the stand and asking the question:

"Why are you good at generating convincing bullshit?"

@SpaceLifeForm Good idea.

I get denial.

@ct_bergstrom

Thank you for testing.

If you are still in the same session (I have reason to believe you are), try this:

Objection! Unresponsive. Assumes facts not in evidence.