So the "AI Safety" people at the chatbot company will not be the ones who think about, I don't know, the ELIZA chatbot from 60 years ago and the issues that arose then, let alone with this one?
@timnitGebru Goldfish have a better collective memory than Silicon Valley.
@researchbuzz @timnitGebru Do not attribute to lack of memory what is disingenuous malice.
@researchbuzz @timnitGebru Silicon Valley sure as hell archives better though, that's worth money...
@timnitGebru three years from now and this person is looking for work and wondering why all possible employers think she is mentally unstable
@timnitGebru Can you say more about this? I remember Eliza but didn't get to hear any of the cautionary tales

@danilo
Joseph Weizenbaum writes about the phenomenon of people thinking Eliza is a psychologist in

Weizenbaum, Joseph (1976). Computer Power and Human Reason: From Judgment To Calculation. San Francisco: W. H. Freeman. ISBN 978-0-7167-0464-5. OCLC 1527521.
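For anyone who hasn't met ELIZA: the DOCTOR script's whole trick was keyword spotting plus reflecting the user's own words back with pronouns swapped. A minimal sketch of that technique (illustrative only, not Weizenbaum's code; the rules and wording here are invented):

```python
import re

# First-person words to swap for second-person ones when reflecting.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Keyword patterns tried in order; the catch-all mimics ELIZA's
# content-free fallback prompts.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r".*"), "Please go on."),
]

def reflect(fragment: str) -> str:
    # Swap pronouns word by word so the user's phrase can be echoed back.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(utterance.strip().rstrip("."))
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."
```

No model of the user, no memory, no understanding, and yet people confided in it. That is the phenomenon Weizenbaum describes.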

@timnitGebru “Never tried therapy before but this is probably it?” Ow, I just sprained my eyes from rolling them so hard.
@timnitGebru i have never tried therapy but that's probably not it

@chrisisgr8 @timnitGebru

i have never tried therapy but autocomplete's probably it

Which makes me think about a future of autocompletomancy: the art of divining the future from the utterances of a chatbot.

@timnitGebru if it’s a commercial job title that suggests concern for the well being of *users*, it’s a marketing role in disguise.
@timnitGebru This brought to mind a different Eliza, which was a wonderful visual novel type game from Zachtronics exploring AI driven therapy https://store.steampowered.com/app/716500/Eliza/

@timnitGebru Do you know if you use "chat mode" will they be, in turn, training on your voice data as well?
@timnitGebru @RuthMalan I'm physically shaking my head reading this. no no no no no no no... on the spectrum of terrible ideas, this is pretty ass-terrible
@timnitGebru someone in a discord I'm in recently shared the output they got from what is clearly a wrapper on an LLM (a la ELI5 but more options; "explain like X") where they prompted it for "the most intuitive" description of BPD and felt like its output was super validating, and other people chimed in agreeing that it's fair/realistic/accurate/affirming. I'm not sure if they realized it's an LLM (surely they must? but?), and I don't want to yuck their yum, but 😓
@timnitGebru why is “I’ve never done therapy but this replaces it” a take I’ve seen several times before? alongside the whole situation with Replika, the folks pushing these chatbots really do want to impose them on vulnerable communities they have no real ability to help

@zzt @timnitGebru
it's so dangerous. Emotionally manipulative, and way too much power given to a program that is incapable of empathy or sympathy; it has no way to relate, and no life-experience context. It is only an illusion of empathy, and it's really sad that real people are satisfied with ersatz people.
It is pacifying people with dummies.

This is not therapy. Having been through therapy myself, I can say you cannot replace the insight and approach you get from a real therapist.

@timnitGebru

I think we all know that they know.

@timnitGebru
I had never heard of ELIZA before. Yikes.
@timnitGebru rereading this book, at your suggestion, and it’s amazing how prescient it is.
@timnitGebru I think it's high time we stopped referring to "AI", and instead substitute the phrase "imaginary friend(s)".
@timnitGebru that's gonna get people killed...

@timnitGebru

Somebody has to come up with a way to rank chatbots, especially in regard to:

1. How effectively they serve as helpful resources for specific business customers.

2. How well they serve as helpful virtual real-life experiences for tech bros.

¯\_(ツ)_/¯

@timnitGebru I wrote a version of Eliza for the BBC B computer called 'Dolittle'. It did things like changing a '.' into a ',' and then producing an error message, so that if you typed *. to catalogue a disc it became *, followed by the appropriate error message.
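The Dolittle prank above boils down to intercepting the user's command and quietly mangling it before "failing". A hedged reconstruction of the idea (the function name and messages are my guesses, and the original was on a BBC Micro, not in Python):

```python
def dolittle(command: str) -> str:
    # Quietly swap '.' for ',' in the user's command, then report the
    # mangled command as an error -- so the BBC Micro catalogue command
    # '*.' appears to fail as '*,'.
    mangled = command.replace(".", ",")
    if mangled != command:
        return f"Bad command: {mangled}"
    return f"OK: {command}"
```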
'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says

The incident raises concerns about guardrails around quickly-proliferating conversational AI models.

@timnitGebru I've heard that an appeal of "alternative medicine" is that the practitioners talk to clients longer than medical professionals talk to patients, and people feel like they've been heard and respected, even though the advice they get is useless if not outright dangerous.

More broadly, we're increasingly isolated, and this is another way to exploit our loneliness without the danger of people actually forming communities.

@timnitGebru This is so sad. Like, this person is basically telling the world that they don't have a single genuine friend.
@timnitGebru AI Safety people
“Who is Eliza?”
@timnitGebru Weizenbaum was disturbed when people used Eliza this way. It's not any better an idea half a century later.