I think Stochastic Parrots was right, in 2021, to say that the #LLMs of that era were not grounded in any communicative situation. But it’s a lot less clear to me that that’s still true of #ChatGPT. The point of tuning with human feedback is very explicitly to train the model in a specific dialogic situation & for a particular audience.
In principle that’s grounds for rapprochement. Critics could say “look, we were right about what was missing, and they’ve had to adjust their strategy, which is now better.” But that doesn’t seem to be happening; instead, a critique that was originally “this won’t work” is hardening into “this must not be allowed to work.” https://irisvanrooijcogsci.com/2023/01/14/stop-feeding-the-hype-and-start-resisting/
[Link preview: Iris van Rooij, “Stop feeding the hype and start resisting”]
@TedUnderwood
I find the argument still stands, though admittedly to a lesser extent, because we’re starting to give systems a mental model of their human counterparts.
The latent issue we’ve always had, and which is now exploding, is humans applying a naive, anthropomorphic mental model to machines.
@ideaferace We do badly need to be helping students and others understand how the systems work: context windows, word-piece tokenization, etc. I’m a little less convinced that it’s a crisis when journalists use the verb “understand.”
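As an illustration of the word-piece idea mentioned above, here is a minimal sketch (not from the thread) of greedy longest-match word-piece tokenization over a made-up toy vocabulary. Real tokenizers such as BERT’s WordPiece learn a vocabulary of tens of thousands of pieces from data, but the lookup step that splits a word works roughly like this.

```python
# Minimal sketch (illustrative only): greedy longest-match word-piece
# tokenization over a tiny, made-up vocabulary. Real tokenizers learn the
# vocabulary from data; this only shows the lookup step that splits a word
# into known sub-word pieces.
VOCAB = {"under", "##stand", "##standing", "parrot", "##s",
         "token", "##ization", "[UNK]"}

def wordpiece(word: str) -> list[str]:
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        # Try the longest candidate first, shrinking until a known piece is found.
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # "##" marks a piece that continues a word
            if piece in VOCAB:
                pieces.append(piece)
                break
            end -= 1
        else:
            return ["[UNK]"]  # nothing matched: the whole word is unknown
        start = end
    return pieces

print(wordpiece("understanding"))  # ['under', '##standing']
print(wordpiece("parrots"))        # ['parrot', '##s']
print(wordpiece("tokenization"))   # ['token', '##ization']
```

The “##” prefix marks pieces that continue a word, which is why a word the model has never seen whole, like “tokenization”, still maps onto pieces it has seen many times.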