Hey y'all, I know you know this, but while you definitely shouldn't use GPTs for legal research, also don't rely on GPTs for RESEARCH, PERIOD.

They are neither giving nor TRYING to give you intersubjectively associated and derived facts; they are not even remixing factual CONCEPTS into new forms.

They are modelling human biases out into digestible bullshit with a statistically-determined high probability of being swallowed.

That is all.

They don't have to be this way, but, at present, the people making them have no incentive to change them. So. Don't lean on them for fact stuff. It's not what they do.

@Wolven I only somewhat differ from you here. Even ChatGPT 3 is usually pretty accurate. It's just that, all too often, it's so elaborately, convincingly, misleadingly, and even dangerously wrong as to be useless as a research tool. In my own experiments using it as such, I spent more time fact-checking its responses (because I was chasing down non-existent info) than I would have spent doing old-fashioned research without the "AI".

I've previously argued that as long as LLMs aren't your primary tool, they could be useful - kinda like a fancy Wikipedia (also unreliable on its own). But at least Wikipedia's references are genuine, and there's a community to flag up errors. What kind of tool not only creates more work by sending you off up blind alleys, but has no mechanism of accountability to its users for mistakes?

Nah, I'm done with it.