Ugh -- agreed to speak to a journalist about whether people (actually women in particular, because it's a women's magazine type publication) should use chatbots for health information. I said in no uncertain terms that they should not, but they found someone else to quote too saying it could be beneficial blah blah blah. And then the article ends with a quote from me saying "Don't do this" followed by a "quote" from ChatGPT --- i.e. more synthetic text published as news. Grr.
@emilymbender
Out of curiosity, does this also apply to non-ChatGPT applications, like #gpt4all?
@paninid
DO NOT USE SYNTHETIC TEXT MACHINES IF YOU CARE ABOUT THE ACCURACY OF THE INFORMATION. Period.
@emilymbender @paninid It’s mind boggling that no one has thought to talk to professional translators about this. Every shitty, incomprehensible, incorrect translation that you’ve seen in the last 10 years is machine translation, which is just another kind of LLM.

@emilymbender

You’re still doing the world a service by calling out the chatbot monkey business. We appreciate having your expert opinion.

Journalism is broken.

@emilymbender “Journalist” deserves scare quotes, with that one upping “he said, she said” journalism to “she said, AI said”…
@emilymbender It is SUPER common for people to say "let's see what ChatGPT thinks!" and it Does.Not.Think. And apparently neither do the people who think that's clever.
@emilymbender I can’t believe journalists are STILL doing the false balance thing. “Here’s an independent expert’s opinion. Now for a counterpoint from an unqualified shill. YOU DECIDE.”
@emilymbender It's extra work to use ChatGPT, because you have to both read its output and vet every point of what it says. It's more difficult to detect ChatGPT's lies than those coming from a human, because you cannot read its body language or trace the provenance of its answers. ChatGPT doesn't have to contemplate the consequences of its lies because it isn't aware of its own mortality. Using ChatGPT adds multiple items to a patient's due-diligence workload.

@tracingcovid @emilymbender
It doesn't just not have awareness of its own mortality, it literally doesn't have any mechanism to understand that the words that it's saying have meaning. All it's doing is calculating a statistical likelihood that a given numerical token will come after a sequence of numerical tokens, and then the numerical tokens get converted into words that we read.

ChatGPT can't be trusted because ChatGPT literally doesn't know that it's saying ANYTHING, let alone something that has meaning, or that said meaning might have a real-world consequence.
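
For anyone who wants to see that mechanism concretely, here is a toy sketch of next-token prediction in Python. Everything in it (the vocabulary, the bigram counts, the starting word) is invented for illustration and says nothing about any real model's internals, and it conditions only on the single previous token where real models condition on the whole preceding sequence. It only shows that "generation" can be nothing more than picking a statistically likely next token ID and mapping IDs back to words.

```python
# Toy illustration (not any real model's code): next-token prediction as
# "which token ID is statistically likely to follow this token ID".
# The vocabulary, counts, and starting word are all invented for the example.
import random
from collections import defaultdict

# Map words to integer token IDs and back (real systems use subword tokens).
vocab = ["take", "two", "aspirin", "antibiotics", "daily", "<end>"]
tok_id = {w: i for i, w in enumerate(vocab)}

# Fake bigram counts: how often token b followed token a in the "training data".
counts = defaultdict(lambda: defaultdict(int))
for a, b in [("take", "two"), ("take", "antibiotics"), ("two", "aspirin"),
             ("aspirin", "daily"), ("antibiotics", "daily"), ("daily", "<end>")]:
    counts[tok_id[a]][tok_id[b]] += 1

def next_token(prev_id):
    """Sample the next token ID in proportion to how often it followed prev_id."""
    followers = counts[prev_id]
    ids = list(followers)
    weights = [followers[i] for i in ids]
    return random.choices(ids, weights=weights)[0]

# "Generate" a sentence: repeatedly pick a likely next token ID, then
# convert the IDs back into words for humans to read (and over-interpret).
seq = [tok_id["take"]]
while vocab[seq[-1]] != "<end>":
    seq.append(next_token(seq[-1]))
print(" ".join(vocab[i] for i in seq))  # e.g. "take antibiotics daily <end>"
```

Whether the output happens to read as sensible advice or dangerous nonsense, the procedure is the same frequency lookup either way; there is no step where meaning or consequences enter into it.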

@emilymbender
Indeed, the idea of people consulting chatbots for medical advice is very concerning. Meta’s Galactica invented medications, clinical trials and symptoms for nonexistent diseases. Most problematically, it smoothly embedded disinformation into correct statements, all in the writing style of reliable sources. For example, for survival rates of colorectal cancer subtypes it “cited” appropriate academic publications but gave completely wrong numbers, which is difficult for a layperson to check.

@johannes_lehmann @emilymbender

I played around with ChatGPT's code generation. I asked it to generate code to find duplicate files. At first it did the wrong thing by identifying files with the same name. Then it did it in the most inefficient way possible. It took several attempts, but I was eventually able to get it to generate a reasonable program. This is problematic, because it takes an expert to actually use it. At this point, these things are experts-only tools.
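
The post doesn't show the final program, but for context, a content-based duplicate finder, which is presumably roughly what "a reasonable program" means here, groups files by size and then hashes their contents rather than comparing names. A minimal sketch, with the root path and chunk size chosen arbitrarily:

```python
# Minimal sketch of a content-based duplicate finder; an assumption about what
# a "reasonable program" for this task looks like, since the post doesn't show it.
# Duplicates are files with identical bytes, not files with identical names.
import hashlib
import os
import sys
from collections import defaultdict

def find_duplicates(root):
    # Group by size first so we only hash files that could possibly match.
    by_size = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                by_size[os.path.getsize(path)].append(path)
            except OSError:
                continue  # unreadable or vanished file; skip it
    # Within each size group, hash file contents to confirm real duplicates.
    by_hash = defaultdict(list)
    for size, paths in by_size.items():
        if len(paths) < 2:
            continue
        for path in paths:
            try:
                digest = hashlib.sha256()
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        digest.update(chunk)
            except OSError:
                continue
            by_hash[(size, digest.hexdigest())].append(path)
    return [group for group in by_hash.values() if len(group) > 1]

if __name__ == "__main__":
    for group in find_duplicates(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(group)
```

Knowing that comparing names is wrong, and that hashing everything without a size prefilter is wastefully slow, is exactly the kind of judgment the post says you still need in order to steer the tool; that is the "experts-only" problem.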

@chidi_anagonye @johannes_lehmann @emilymbender

"It took several attempts, but I was eventually able to get it to <<X>>"

Sentences of this form make me want to scream.

The actual meaning is that YOU are doing <<X>>.

The only thing ChatGPT is doing is spitting out random garbage that you're trying to edit into something useful.

Maybe it'll be less work than starting from scratch; maybe not.

STOP ANTHROPOMORPHIZING.

@wrog @chidi_anagonye @johannes_lehmann @emilymbender
> STOP ANTHROPOMORPHIZING

You’re gonna hurt its feelings.

… don’t make the magic future basilisk mad!

@emilymbender that’s frustrating, I’m glad your voice was included!
@emilymbender As a reader I would walk away trusting that Bender professor over the robot.

@emilymbender

Oh no. I also just read the Fortune magazine article about 'hallucinations,' and just kept thinking, "This article should have been Emily Bender quotes, and nothing else."

'Hallucinations' are not a thing!

#ChatGPT #GenerativeLanguageModels

@emilymbender There is an old “CNN leaves it there” segment from the Daily Show that captures the press’s “find two people who say the opposite of each other and let the audience figure it out” mentality. Endlessly frustrating.
@emilymbender
Between the glassy-eyed repetition of grandiose lies about the viability of AI for all the things these billionaire Tech Bros say it's viable for, and the thinly veiled animosity toward projects like Mastodon in contrast with cash cows like Twitter, the fact that for-profit mainstream media is a major source of harmful misinformation in the modernized world has been on my mind more and more lately...