I should probably read Weizenbaum's book. As early as 1976, just ten years after ELIZA (and probably earlier), he understood that AI could not avoid having potentially dangerous biases.
Good grief, Weizenbaum wrote this ~50 years ago. Remind you of anything in particular currently happening in tech?
@stevenf an easier way to make money than delivering real capabilities.

@stevenf @nicklockwood

💯 “Decisions made by the general public about emergent technologies depend much more on what that public attributes to such technologies than on what they actually are or can and cannot do.”

@stevenf Apart from anything else, I love the drive-by insight that people don't understand language entirely, either – something that most #ActuallyAutistic people, for example, experience first hand. (I wish I had 10p for every time I think I've said something perfectly rational only to be met with blank looks.)
@fishidwardrobe @stevenf That’s a very reversed way of looking at it. Language is not some abstract entity that exists on its own, and language has no meaning outside humans. Language is communication. Language is interaction. Sender and receiver cooperate, and a failure to communicate is a failure of both parties.

@ahltorp @stevenf Well, yes, I think that's exactly what he (and I) are saying. That's what "contextual frameworks" means, I think? We understand each other only because of shared context?

Ironic that we are not understanding each other, yes? :D

@fishidwardrobe @stevenf Yes, of course it’s quite probable that I don’t express myself clearly enough (as you noted).

The thing I think I object to is the “people don't understand language entirely”, because as a linguist, I define language loosely as “the thing that people understand”, and that’s why I found your reasoning “reversed”.

If it’s “people don’t understand everything everyone says”, then that’s trivially true: not all humans know all languages, not all subjects, etc.

@ahltorp @stevenf Well, no. I don't think Weizenbaum is saying either of these things. He's saying that even given a common language and subject, people STILL often fail to understand each other, because they have a different context for the same words and phrases.

For example, the word "woke" will have very different connotations for you depending on your politics. But essentially it still means the same thing.

And of course a computer program can have no understanding of it at all.

@stevenf oh come on, the tools have come a long way. They barely need to fake product demos anymore.
@stevenf The only difference now is the VC money involved.
@stevenf @vmbrasseur By a curious coincidence, I just reviewed Weizenbaum’s book this week: https://ockham.online/computer-power-and-human-reason-6d9432cc2850

@stevenf

So true.

And, thanks for alt-texting it!

@stevenf Boy-howdee. And ELIZA ran on the Rogerian method of open-ended questions – that's it! As for language and context, one would think that #Lojban would be useful in this regard. No! #aiconfab galore! (LLMs hallucinate more than I ever did!)
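For anyone who hasn't seen how little machinery that Rogerian trick actually takes: below is a minimal sketch of ELIZA-style keyword matching and pronoun reflection. The rules and the `respond`/`reflect` names are illustrative inventions, not Weizenbaum's original DOCTOR script, but the shape is the same: match a keyword pattern, flip first and second person, and fall back to an open-ended prompt when nothing matches.

```python
import re

# Toy ELIZA-style rules: (pattern, response template).
# Illustrative only - not Weizenbaum's original DOCTOR script.
RULES = [
    (re.compile(r"\bi need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # Rogerian fallback: an open-ended prompt

# Swap first/second person so the echoed fragment reads naturally.
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
           "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    """Flip pronouns word by word ("my job" -> "your job")."""
    return " ".join(REFLECT.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's reflected response."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return DEFAULT

print(respond("I need a holiday"))  # -> Why do you need a holiday?
print(respond("What a nice day"))   # -> Please go on.
```

That the fallback line alone ("Please go on.") was enough to make people confide in the program is exactly the misattribution Weizenbaum was warning about.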
@[email protected] what's the source for that page?
@stevenf @drewmccormack
I believe I never gave a single computer course at the university without mentioning this very example. Well, the first time I read it was ten years after it was published. But I still remember the red cover of the book (the Russian translation; at the time I was behind the Iron Curtain).

@stevenf

People have an incredibly strong cognitive bias toward ascribing personhood to phenomena. It's why we assumed that there had to be a thunder god behind the thunder. It's why people project complex cognition onto their pets. It's why I call my Google Home voice assistant "she".

With systems that can mimic personhood as well as current AI, most of us don't stand a chance of avoiding this misperception. And they're going to keep getting better.