@pawsplay

But also the Turing test: Every time a machine or animal does something new, we move the goalposts on what it means to be human.

We cope so hard because we just *have* to be special. It's a trip watching that slowly collapse.

@Phosphenes So, it turns out, we're just animals with an uncomfortable talent for thinking about the nature of reality.

@pawsplay

Maybe being uncomfortable with reality will be our new Turing test! 😀

@Phosphenes Hey, if we can demonstrate that software can both understand existential issues and feel anxiety, I feel like we ought to at least call them cousin and give them a hug.
@Phosphenes @pawsplay Genuinely convincing a skilled interrogator that the bot is human, in an unrestricted text conversation, was where the original goalposts were (Turing, 1950). I think we’re currently getting *close*. But in the 1980s people got excited because there was success on a considerably wider set of goalposts (sounds pretty human to an idiot in a hurry, when talking about specific topics/roleplaying a specific scenario). 1/2
@Phosphenes @pawsplay Since then we’ve been moving the goalposts slowly *back* to where they were to start with. Like training a striker first to kick the ball far enough to get it over the line at all, then getting them to be more and more consistent at getting near the middle…
@johnaldis @Phosphenes Right. When I was a youth, I was actually interested in natural language interpretation. But during my burnout years, a lot of the interesting problems were "solved." Generating an actual conversation is next level, though, and we are not there. Fundamentally, LLMs are limited because they don't know anything. They don't get why you shouldn't eat glue, because they don't eat, and they haven't learned enough about us to guess. They have no model for it.