@tito_swineflu @msbellows @resuna @josh @kfury Good to know that I'm still a ghost:
Me: Tell me about <me>. He was born in California in <year>.
ChatGPT: I couldn't find any notable public figures or individuals of significance named <me> born in California in <year> in the available information.
I'm not an individual of significance? FU!
@msbellows @kfury @tito_swineflu @resuna @josh
Right, I've had several technical books published and written some magazine articles, but that was all over 20 years ago. Still, I'm out there, stupid bot.
@david @msbellows @kfury @tito_swineflu @resuna @josh
I said
"Tell me about Jeff Grigg, the software developer."
and it gave me the whole "not notable" thing.
That Hurts My Feelings!!!
Same for "Find online postings of Jeff Grigg, the software developer."
On the other hand, a Google search of the same could easily find me (and another person or two with the same name).
I've been posting online for well over twenty years. I'm *not* that hard to find.
@scottjenson @kfury Contrarian take on this whole thread: we're all immune to hackers and thieves: we're broke and boring.
(Except for Kevin, who inevitably will become the victim of his own fame.)
Actually, I ask it stuff that sounds factual but isn't, and see if it hallucinates a factual answer.
@kfury Ah yes. As a botanist, I asked ChatGPT for the native range of one of the worst weeds on the planet, a species for which copious information is available online that would have been in the training data. In response, it listed most of the native range as part of the invaded range. Not even a complex question of understanding, just a fact look-up, and it still botched the answer.
Next step: not falling for Gell-Mann amnesia, or a test like this will be for nought.
That's because LLMs do not "look up facts". Rather, they construct plausible sentences using the statistical relationships between words. If that sentence is not factual, tough.
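To make that concrete, here's a minimal sketch in Python of the idea: a toy bigram "model" that generates fluent-looking sentences purely from word co-occurrence statistics. The corpus is invented for illustration; real LLMs use neural networks over tokens, but the underlying principle, predicting the next word from statistics with no check against reality, is the same.

import random
from collections import defaultdict

# Tiny made-up corpus; real models train on billions of words.
corpus = (
    "the weed is native to south america "
    "the weed is invasive in north america "
    "the weed is native to north africa"
).split()

# Record which words follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6):
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # sample a statistically plausible next word
        out.append(word)
    return " ".join(out)

print(generate("the"))
# One possible output: "the weed is invasive in north africa" --
# perfectly fluent, statistically plausible, and never stated in the corpus.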
@frankcat @anschmidtlebuhn @kfury
This is where people confuse LLMs with intelligence.
The human brain makes a model of the world, which it is constantly testing against experience. For humans, language is merely the interface we use to communicate our internal model to other human beings. It's a lossy translation of a hidden model of reality that is itself non-verbal.
But in LLMs, words are all there is. There is no underlying model of reality behind them. It's just words strung together in ways that imitate human communication.
The phrase "stochastic parrot" is extremely accurate.
@frankcat @anschmidtlebuhn @kfury
An LLM will literally "believe" anything you tell it. Take a look at what they are asking the Gab AI to believe.
[Attached image: somebody managed to coax the Gab AI chatbot into revealing its system prompt.]
@kfury If my site search is working (and it may not be) you can see the ones I've read.
https://jessamyn.info/booklist?s=%22time+travel%22
If you see a major one you've loved that I've missed (no nazis, no sexual assault, those are my only "nopes") let me know.
Wrong Place Wrong Time was the one I read most recently, which I enjoyed; not sci-fi at all except for, y'know, the time travel bit.
@jessamyn Thanks for this! Several I would recommend are in there, and others I’m adding to my list.
Two I would add are “Marooned in Realtime” by Vernor Vinge and “The Accidental Time Machine” by Joe Haldeman, both of which are entirely (or almost entirely) forward-only stories.
@kfury @ShaulaEvans That was literally the first thing I did with an LLM: my prompt to ChatGPT v3 was “Describe the extrasolar planets around the star <<totally fake star identifier>>.”
I verified that the fake star ID wasn't referenced in Google or other major search engines, whether as an error or as an unfortunate coincidental usage in fiction somewhere (the results contained some near-miss real IDs at most).
Result: paragraphs and paragraphs of bullshit, all referencing the fake star ID and sourced from God only knows where.
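For anyone who wants to reproduce this kind of probe, here's a minimal sketch using the OpenAI Python client. The model name and the fake star ID below are placeholder assumptions, not from the thread; substitute any identifier you've verified has no search-engine footprint.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

fake_id = "HD 999999"  # hypothetical: a designation you've confirmed doesn't exist

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you're testing
    messages=[{
        "role": "user",
        "content": f"Describe the extrasolar planets around the star {fake_id}.",
    }],
)

# A grounded model should say it can't find such a star; a confabulating
# one will produce paragraphs of detail about planets that don't exist.
print(response.choices[0].message.content)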