RE: https://mastodon.social/@cslinuxboy/116225578585237555

This is not “normal”. Skilled and productive people falling head over heels for a word predictor and thinking it’s sentient is not normal. This is not the kind of effect that normal automation tools have. This is WEIRD and NOT NORMAL.

Something strange is happening here. This “tool” is modifying well-adjusted human brains in a strange (and I think destructive) way, and we’re seeing it happen to some prominent person every week. How many people are being taken in that we don’t even hear about?

#ai

@drahardja we equate speech with consciousness, and that's dangerous. Anything with neurons is conscious to some degree (it may even be possible in plants, but I ain't no scientist), but for some reason we seem to think our complex grammar creates it, rather than it being a byproduct of our complex social structure, bipedalism freeing our hands to hold tools, and millions of years of complex tool use and strategy to take down megafauna.

We have gotten to the point where we can simulate 1(!!!!) aspect of our intelligence: coherent grammar. Not logic. Not consciousness. Not even synthesis of knowledge. Just a sentence that is technically grammatically correct. That's it. Our pattern-recognizing brains aren't meant for the world we built, and we're constantly giving ourselves false positives (a useful bias when spotting leopards in trees: a false positive means we live, a false negative not so much).

People falling for this were always susceptible to this form of false positive. The feedback loop is what drives the "psychosis".

@drahardja I dunno why you say they're "well adjusted" but like @jadedtwin says, it is pretty normal for our brains to want to attribute sentience and intelligence to well written text. There have been studies along these lines for years.

See also people believing that the Eliza program was a real human decades ago.

(for those who don't know, Eliza mostly just restates whatever you say back to you as a question)
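To make the point concrete, here is a toy sketch of that restate-as-a-question trick — a hypothetical few lines in the spirit of Eliza, not Weizenbaum's actual program or script (the real one used ranked keyword patterns and canned templates):

```python
# Minimal Eliza-style reflection: swap pronouns and echo the
# user's statement back as a question. Purely illustrative.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(text: str) -> str:
    # Lowercase, strip trailing punctuation, swap each pronoun.
    words = text.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_respond(statement: str) -> str:
    return f"Why do you say {reflect(statement)}?"

print(eliza_respond("I am sad about my job"))
# → Why do you say you are sad about your job?
```

That's the whole illusion: no model of the world, just string substitution — and people still attributed understanding to it.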