This essay is an utterly brilliant take on #AIhype. I'll put a few excerpts here, but you should definitely go read the whole thing:
https://karawynn.substack.com/p/language-is-a-poor-heuristic-for
>>
"Advances over the past year in the misnamed field of “artificial intelligence” have activated the inverse form of the heuristic that haunts so many disabled humans: most people see the language fluency exhibited by large language models (LLMs) like ChatGPT and erroneously assume that the computer possesses intelligent comprehension — that the program understands both what users say to it, and what it replies."
>>
"Not only are we not close to developing “artificial general intelligence”, we are not even far away from developing AGI, because we haven’t even found a path that could conceivably lead to AGI."
>>
"One thing that particularly seems to lead people astray is the way that ChatGPT gives the impression of “apologizing” in response to exterior challenges. OpenAI’s claim that ChatGPT will “admit its mistakes” is worded to suggest that the algorithm both understands that it has made an error and is in the active process of improving its understanding based on the dialogue in progress."
>>
"Making chatbots that seem to apologize is a choice. Giving them cartoon-human avatars and offering up “Hello! How can I help you today?” instead of a blank input box: choices. Making chatbots that talk about their nonexistent “feelings” and pepper their responses with facial emojis is another choice."
>>
"As a society, we’re going to have to radically rethink when and how and even if it makes sense to trust any information that either originates from, or is mediated by, any kind of machine-learning algorithm — which, if you think about it, currently encompasses nearly All The Things."
>>
As I said -- utterly brilliant. Go read the whole thing!
https://karawynn.substack.com/p/language-is-a-poor-heuristic-for
The history of AI is full of supposed breakthroughs that shortly afterwards turned out to be suboptimal solutions and then went on to be developed as non-AI products.
It's getting a bit tiresome. But thanks for sharing a great article!
@emilymbender Excellent text.
As an animal researcher, I initially thought from the picture this was going to be about a similar struggle within the cog. sciences, to have non-linguistic intelligence recognized for its true worth in animals! For example, many still say that animals cannot have "declarative memory" capacities simply because they cannot (linguistically) declare anything.
And indeed, when I got to the end of this article, I saw the animal intelligence dimension was broached🙂
@emilymbender Thanks for posting this.
I stole it for discussion on a *spora forum, and in that discussion I became worried that some language technology practitioners might still let some blatantly false claims by Chomsky live rent-free in their heads.
Talking here about the blatantly false "Poverty of the Stimulus", as well as a bunch of implications thereof.
I believe it's well accepted now that POS is neither a valid nor a sound argument, and that its conclusion has by now been shown to be false.
But as is often the case with Chomsky, there has been no retraction, and hardly anyone outside non-generativist linguistics is aware that it was bunk all this time.
What do you think?
Except for books. Which people probably will be willing to allow machine-learning algorithms to write as well, if they can use them to make a quick buck.
@thorne @emilymbender Right. This article nails a point that makes me go from zero to screaming swear-words at tech-bros. The author calls out how, in order for tech-bros to hype their chat-bots (okay, to give them some credit, *fancy* chat-bots), they've unilaterally redefined the meaning of "AI" to include ML. Following that, their talk of "AGI" is marketing mumbo-jumbo to disguise how they've moved the goalposts of AI. You may ask: what is "AGI"? They'll answer with the sort of handwaving and circular reasoning that would make a Catholic priest nod approvingly.
Disclosure: circa 1999-2001, I multiclassed as a Software Engineer and Technical Writer at a startup that commercialised Machine Vision and Machine Learning, and I knew enough back then to take care never to claim our project was AI.
@emilymbender Some MLD people at my company gave a talk the other day, and someone asked how the models are trained. Their response was: First, we turn all the words into numbers.
Step 1: Remove semantic meaning.
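To illustrate the joke: the "turn all the words into numbers" step really is just assigning arbitrary integer IDs. Here's a minimal toy sketch (a made-up vocabulary mapping, not any real tokenizer); the IDs themselves carry no meaning at all:

```python
# Toy version of "turning words into numbers": each word gets the
# next unused integer ID. The numbers are arbitrary labels -- nothing
# about them encodes what the words mean.
def build_vocab(texts):
    vocab = {}
    for text in texts:
        for word in text.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)  # next unused ID
    return vocab

def encode(text, vocab):
    # Replace each word with its integer ID.
    return [vocab[w] for w in text.lower().split()]

vocab = build_vocab(["the cat sat", "the dog sat"])
print(encode("the dog sat", vocab))  # prints [0, 3, 2]
```

Real systems then learn vectors for these IDs from co-occurrence statistics, but the starting point is exactly this: meaning stripped out, arbitrary numbers in.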
This is a very interesting take.
@emilymbender Thanks for sharing, very interesting article.
Among the many thought pieces, this one is particularly interesting to me:
> But LLMs have now shattered the usefulness of the “computers are reliably accurate” heuristic
We will all have to unlearn this "computers are always right" which might be for the better...
A great essay on #LLM and why it renders obsolete some of our heuristics on what is accurate or intelligent.
As a non-native English speaker, I’ve been on both sides of the problem posed by the heuristic that good and fluent language implies intelligence. First, being dismissed as stupid or arrogant or aggressive because of a failure to communicate with the right words conveying the correct nuance. Second, when recruiting, myself falling for this, assuming that fluent applicants are more intelligent than non-fluent ones. These biases are real and require awareness, training, and deliberate intervention to overcome them.
It is overhyped.
But I can't shake the idea that this is a huge leap, and potentially a piece of what makes up AGI.
Prior to this, no software system could answer theory-of-mind questions. And it's clearly capable of computations that were previously not possible.
The big question is reliability, and whether we can improve it. But we have seen incremental improvements.
Then, historically, we have seen huge leaps and then nothing. So 🤷 who knows.
> historically, we have seen huge leaps and then nothing.
Not only leaps and then nothing, but "leaps" and then dead ends. This could very well be one.
My hunch, though, is that this isn't nothing. For whatever that's worth (not a lot).
In the meantime we have a bunch of ethical and legal problems to work through, which is arguably far more important right now.
Thanks for reminding me to finish reading it!
@emilymbender Wow, this was really insightful reading. And very important too.
Really liked your take on it as well, "We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them."
Reminded me of Jascha Sohl-Dickstein’s article about how optimizing for a proxy task is bad in the long run. And if we’re optimizing for better LLM performance as a proxy task for intelligence, it’s going to get really bad.
https://sohl-dickstein.github.io/2022/11/06/strong-Goodhart.html