I’m all for enthusiasm and all that jazz, but this is semi-obviously personal projection ideology, and a direct result of the type of work he was doing. It’s not like he caught a cold; he developed an anthropomorphic response to his programmed object. Having said that, the whole “she’s real!” thing isn’t an impossibility — nay, it’s an inevitability. He’s just got the cart before the horse here, and needs to watch Her and go touch grass. We’re a few years away from where he thinks we are now. Like that Google engineer from Bard’s days who jumped the shark claiming they had AGI too…
LLMs will never be conscious.
LLMs are what happens when someone gets hyperfocused on a single metric. On the plus side, they’ve shown us a flaw in the Turing test.
To be fair, LLMs can be quite useful tools to fill the gaps around traditional tooling for writing and coding. But I agree with you that they will never become AGI, just by their very design.
When a metric becomes a target, etc.

Fuck no. It’s only because of the Turing test that we can say they’re not conscious. Have someone question a bot and a person at the same time, and they’re gonna figure out who’s who in short order. See: how many Rs are in “strawberry,” name states without an E, should I walk to the car wash.

If a program was indistinguishable from a person, what basis would we have to say the person is intelligent but the program is not?