Ugh -- agreed to speak to a journalist about whether people (actually women in particular, because it's a women's magazine type publication) should use chatbots for health information. I said in no uncertain terms that they should not, but they found someone else to quote, too, saying it could be beneficial, blah blah blah. And then the article ends with a quote from me saying "Don't do this" followed by a "quote" from ChatGPT -- i.e. more synthetic text published as news. Grr.

@emilymbender

Indeed, the idea of people consulting chatbots for medical advice is very concerning. Meta's Galactica invented medications, clinical trials, and symptoms for nonexistent diseases. Most problematically, it smoothly embedded disinformation into correct statements, all in the writing style of reliable sources. For example, when asked about survival rates for colorectal cancer subtypes, it "cited" appropriate academic publications but gave completely wrong numbers, which are very difficult for a layperson to check.

@johannes_lehmann @emilymbender

I played around with ChatGPT's code generation. I asked it to generate code to find duplicate files. At first it did the wrong thing, identifying duplicates by filename rather than by content. Then it did it in the most inefficient way possible. It took several attempts, but I was eventually able to get it to generate a reasonable program (a sketch of the kind of thing I mean is below). This is problematic, because it takes an expert to actually use it. At this point, these things are expert-only tools.
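
For reference, here is a minimal sketch of what a "reasonable program" for this task looks like. This is my own illustration, not ChatGPT's output; the function names and the size-then-hash strategy are my choices. It compares files by content rather than by name, and it avoids hashing everything by first grouping candidates by size:

```python
#!/usr/bin/env python3
"""Find duplicate files by content, not by name.

Strategy: group files by size first (a cheap metadata lookup),
then hash only the files whose sizes collide (the expensive step).
"""
import hashlib
import os
import sys
from collections import defaultdict


def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in 1 MiB chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def find_duplicates(root):
    """Yield groups (lists) of paths whose contents are identical."""
    by_size = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                by_size[os.path.getsize(path)].append(path)
            except OSError:
                pass  # unreadable or vanished file: skip it
    for paths in by_size.values():
        if len(paths) < 2:
            continue  # a unique size can't have a content duplicate
        by_hash = defaultdict(list)
        for path in paths:
            try:
                by_hash[sha256_of(path)].append(path)
            except OSError:
                pass
        for group in by_hash.values():
            if len(group) > 1:
                yield group


if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for group in find_duplicates(root):
        print("\n".join(group))
        print()
```

The size-grouping step is the design choice that matters: most files have unique sizes, so the expensive hashing only runs on genuine collision candidates. Knowing to ask for that is exactly the kind of expertise the tool assumes you already have.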

@chidi_anagonye @johannes_lehmann @emilymbender

"It took several attempts, but I was eventually able to get it to <<X>>"

Sentences of this form make me want to scream.

The actual meaning is that YOU are doing <<X>>.

The only thing ChatGPT is doing is spitting out random garbage that you're trying to edit into something useful.

Maybe it'll be less work than starting from scratch; maybe not.

STOP ANTHROPOMORPHIZING.

@wrog @chidi_anagonye @johannes_lehmann @emilymbender
> STOP ANTHROPOMORPHIZING

You’re gonna hurt its feelings.

… don’t make the magic future basilisk mad!