A map of Northern Ireland, as horked up by ChatGPT. (See alt text for the true horror.)
Do not trust ChatGPT to do your geography homework!
That is all.
@tedmielczarek @fskornia Latest word elsenet is that RFK Jr. used an LLM to write a whole chunk of healthcare policy claiming vaccines are poisonous and that there's a children's health crisis caused by (waves hands).
We're all going to die to pad out Sam Altman's bank account.
I have never seen a clearer example of how ChatGPT & co are, at root, a distillation of a short-attention-span American teenager bullshitting his way through a college undergrad presentation.
Philomena Cunk would *never*
Are you telling me ARRMAGH isn't a real place? Do better Ireland smh
I just didn't know Londongrl was in Unster, is all.
https://www.youtube.com/watch?v=PSi4CDANuUY
@cstross Looks perfectly accurate to me.
Here's what Claude (Sonnet 4) produced for me. It's both much better and much worse.
@tautology @gjm @cstross no idea what those rivers are doing, but at least "Here be dragons" is correct!
(I was born in one of those cities and grew up in another - they're the right cities but in the wrong order, to paraphrase someone from the aforementioned dragon territory)
Credibly attributed to Upton Sinclair:
“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”
My first impression was that Unster wasn't bad, but then I found Northern Ireland, which is indeed somewhat doubtful.
Points for Londogrl. Neither one thing nor the other, as Churchill remarked on a very different occasion. Which is more or less ChatGPT's superpower.
Not bad for a fantasy novel frontispiece; needs more mountains. Or cowbell, I don't know.
@cstross When I show stuff like this to "AI" boosters, their response is, "Sure, right, that's terrible, but you're asking it to do something it's not good at! Let's focus on what it *is* good at!"
To which I respond: The thing it's "good" at is lying with confidence. Which isn't, like, actually *good*.
@orionkidder But it's not "lying". To lie you need a model of truth and falsehood. LLMs don't have an internal model of the external world to provide context, all they've got is a probability distribution of one fragment of text being followed by another, entirely divorced from context. They have no means to determine whether the text they're producing in response to a prompt is a valid answer or just an answer-shaped string.
There is no intellect here, just text.
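A toy sketch of that point, in Python, with entirely made-up bigram-style probabilities (not any real model or API): the only thing consulted at each step is "which token tends to follow which", never whether the finished string is true.

```python
import random

# Hypothetical next-token probabilities, invented for illustration only.
# A real LLM's distribution is learned from text co-occurrence at vast scale,
# but the principle is the same: likelihood of the next fragment, nothing more.
next_token_probs = {
    "the capital of": {"Northern": 0.4, "Ireland": 0.3, "Ulster": 0.3},
    "Northern": {"Ireland": 0.9, "lights": 0.1},
    "Ireland": {"is": 0.7, "was": 0.3},
    "is": {"Belfast": 0.5, "Armagh": 0.3, "Londonderry": 0.2},
}

def generate(prompt, steps=4):
    """Sample a plausible-looking continuation; plausibility is all there is."""
    text = prompt
    key = prompt
    for _ in range(steps):
        dist = next_token_probs.get(key)
        if dist is None:
            break
        tokens, weights = zip(*dist.items())
        key = random.choices(tokens, weights=weights)[0]
        text += " " + key
    return text  # an answer-shaped string, not a checked fact

print(generate("the capital of"))
```

Run it a few times and it will happily emit both "the capital of Northern Ireland is Belfast" and "the capital of Ireland is Armagh" with equal confidence, because nothing in the procedure distinguishes a valid answer from an answer-shaped one.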
LLMs are at best agnostic, but in real terms they reflect the biases and ideology of their creators.
@geoglyphentropy To be frank, I did not expect Lem to be the SF author I read when young who'd turn out to be so helpful for understanding the world we built.
I probably should have.
Yes, of course you are technically correct, the 'best kind' of correctness - but this is unhelpful. As you well know, language uses shortcuts, and 'lying' is close enough.
To be particularly useful an LLM should be able to cite sources for each section of its responses, but then I suppose it wouldn't be an LLM, and it'd be even more obvious there's plenty of A and no I.
@ubersoft @syllopsium @cstross No, I hear you. The word "lie" has a *punch* to it that very few other words do. That's why I initially used it. It's why journalists are so unwilling to use it to describe what powerful people say.
I don't have a good rhetorical solution for this situation.