At this point it will surprise no one, but I asked #ChatGPT to define bullshit and to cite its sources.

It provided definitions from the Cambridge English Dictionary and the Merriam-Webster Dictionary.

The definitions it provided were entirely reasonable, but they were decidedly not from the sources it claimed.

This highlights the fact that ChatGPT and other LLMs are not knowledge models; they are engines trained to produce convincing bullshit.

Below: ChatGPT, CED, MW.

@ct_bergstrom
Could you elaborate on "convincing bullshit"? Is GPT marketed as a knowledge model? I think it's supposed to be taken at face value: it's a model that spits out convincing text. And most of the time it gets it right, at least for simple things.
@MrHedmad #Galactica was marketed as an interface to humanity's knowledge or some such bullshit; you can google the exact text. ChatGPT is much more careful in its presentation
@ct_bergstrom
I noticed that if you ask GPT what it is, it gives you very careful answers, underlining that it is not a general intelligence and that it does not have feelings or real knowledge. I wonder if these answers were "directed" somehow during training.
@MrHedmad @ct_bergstrom Judging from the (extremely easily bypassed) "safeguards" OpenAI has put in ChatGPT when you ask it about a lot of things, I find that to be very likely.
those "safeguards" trip up so inconsistently that they seem counterproductive in practice...
@MrHedmad @ct_bergstrom I believe ChatGPT gives preprogrammed responses to certain inputs, like questions about what it is, current events, or how to make dangerous things. People have been able to bypass the restrictions pretty easily, though (or you can just pay for the full GPT that doesn't have them).