My daughter, who has had a degree in computer science for 25 years, posted this observation about ChatGPT on Facebook. It's the best description I've seen:
@DrewKadel Love this. We want so badly for ChatGPT to produce answers, opinions, and art, but all it can do is make plausible simulations of those things. As a species, we've never had to deal with that before.

@ngaylinn @DrewKadel
Actually we as a species have had to deal with that before.

We call them grifters.

@bornach @DrewKadel I dunno. A grifter still wants something from you. They're hiding their intent, but it's something you can imagine, understand, and detect if you're paying attention.

LLMs are unreadable and unpredictable because they have no intent. They may switch between friend and grifter depending on what sounds right in the flow of the conversation, without any conception of what's good or bad for you or for them.

On the other hand, if a grifter asks an LLM to write them a script to achieve something specific, that's another thing entirely...

@ngaylinn @bornach @DrewKadel They have been specifically marketed as question-answering and search engine tools. The people misrepresenting them are the grifters.

@grvsmth @ngaylinn @bornach @DrewKadel

Agree - this is a tool that can be used by grifters. ChatGPT itself solves a relatively trivial problem: generating responses that fit the pattern of language found in authoritative sources. I believe that OpenAI is using it as a demonstration to generate lots of free media coverage and get paying customers to buy their product. The grift is the misrepresentation.

@ngaylinn
LLMs are never your friend

They are always in what can best be described as "grifter" mode. The entire training regime of a generative AI chatbot is geared towards getting one thing: an upvote from a human rating the quality of the conversation.

Admittedly this is an over-simplification. Reinforcement Learning from Human Feedback involves training a reward model - a second neural network that is ultimately responsible for rewarding the chatbot for giving "good" responses.
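As a rough illustration of that reward model idea: it is typically trained on pairs of responses where a human marked one as preferred, using a loss that pushes the preferred response's score above the rejected one's. The linear scoring function, fake feature vectors, and training loop below are toy assumptions for illustration, not anyone's production setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(features, w):
    """Toy reward model: a linear score over response features."""
    return features @ w

def preference_loss_and_grad(w, chosen, rejected):
    """Pairwise preference loss: -log sigmoid(r_chosen - r_rejected)."""
    diff = chosen - rejected
    margin = diff @ w
    p = 1.0 / (1.0 + np.exp(-margin))          # sigmoid(margin)
    loss = np.sum(np.log1p(np.exp(-margin)))   # sum of -log sigmoid(margin)
    grad = -(diff.T @ (1.0 - p))               # analytic gradient wrt w
    return loss, grad

# Fake features for 4 (chosen, rejected) response pairs; chosen responses
# are shifted so they are, on average, "better" along every feature.
chosen = rng.normal(size=(4, 3)) + 1.0
rejected = rng.normal(size=(4, 3))

w = np.zeros(3)
for _ in range(100):
    loss, grad = preference_loss_and_grad(w, chosen, rejected)
    w -= 0.1 * grad  # plain gradient descent on the convex loss

final_loss, _ = preference_loss_and_grad(w, chosen, rejected)
print(final_loss)
```

After training, the reward model scores the human-preferred responses higher, and it is that learned score - not any notion of truth - that the chatbot is then optimised against.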

@bornach @ngaylinn

Indeed, that second network is trained to be "engaging" in conversation. Its goal is to keep the attention of the user, mostly for marketing purposes.
It's not impossible to recognise, but it's never easy.

Can ChatGPT be a doctor? Bot passes medical exam, diagnoses conditions

ChatGPT's latest software upgrade, called GPT-4, is "better than many doctors I've observed" at clinical diagnosis, one physician said.

Insider

@GordanKnott @ngaylinn @DrewKadel
Yet another flawed benchmark in which the LLM very likely memorised the answers
https://wandering.shop/@janellecshane/110104164829618120

Without any knowledge of how much the training dataset was contaminated by the medical exam questions/answers (and OpenAI's own whitepaper admits there is contamination)
https://youtu.be/PEjl7-7lZLA?t=4m0s
we cannot really know how it would perform in the real world if, say, a novel virus were to start spreading.
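To make the contamination worry concrete: a common check is to look for long character n-grams shared between a test question and the training corpus - if the question's text already appears in training data, a high score may reflect memorisation rather than reasoning. The 30-character window, tiny corpus, and function names below are illustrative assumptions, not OpenAI's actual procedure.

```python
def ngrams(text, n=30):
    """All character n-grams of the text, after normalising whitespace/case."""
    text = " ".join(text.lower().split())
    return {text[i:i + n] for i in range(max(1, len(text) - n + 1))}

def is_contaminated(question, training_corpus, n=30):
    """Flag the question if it shares any n-gram with the training corpus."""
    train_grams = set()
    for doc in training_corpus:
        train_grams |= ngrams(doc, n)
    return bool(ngrams(question, n) & train_grams)

# Toy "training corpus" containing a leaked exam question.
corpus = ["A 45-year-old patient presents with fever and a productive cough. "
          "Which of the following is the most likely diagnosis?"]

leaked = "A 45-year-old patient presents with fever and a productive cough."
fresh = "Describe the expected course of infection by a newly emerged virus."

print(is_contaminated(leaked, corpus))  # shares 30-char windows with corpus
print(is_contaminated(fresh, corpus))
```

A benchmark question that trips a check like this tells you little about how the model handles genuinely novel cases - which is exactly the point about a new virus.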

Janelle Shane (@[email protected])

Attached: 1 image Remember seeing something about GPT-4 doing well on standardized tests? It turns out it may have memorized the answers. https://aisnakeoil.substack.com/p/gpt-4-and-professional-benchmarks #gpt4 #AIHype #ThisIsWhyWeDontTestOnTheTrainingData

The Wandering Shop