I've been seeing chatter about someone using ChatGPT to cure their dog's cancer, so decided to go find out what that was actually about.

Short 🧵>>

The short version is that there are some really exciting developments here about mRNA vaccines, based on genomic sequencing of tumor cells, that seem to be having a beneficial effect on the dog in question.
>>

https://www.theaustralian.com.au/business/technology/tech-boss-uses-ai-and-chatgpt-to-create-cancer-vaccine-for-his-dying-dog/news-story/292a21bcbe93efa17810bfcfcdfadbf7

But the meme version of this story is super misleading. The actual work of creating the treatment was done by people, using various tools, and to the extent that machine learning was involved it was in things like AlphaFold.

Also, the dog isn't cured -- even the dog's person acknowledges that.
>>

As for how ChatGPT was relevant to this? Apparently, the desperate dog owner was using it as an information source, and landed on the idea of immunotherapy:
>>

But despite being pretty clear in the body of the article that the "cure" (not a cure) came about through the work of scientists (and this dogged dog-dad), The Australian promotes the story like this.

Shame on them.
>>

As a general media literacy tip: If the claim is that someone used "AI" or "ChatGPT" to do something, the real story is probably something else.
@emilymbender Except maybe when it's something very bad, like mistakenly erasing all of their emails. 😐
@jor @emilymbender nah. Still works: "this poster was not meant to be antisemitic. It was AI."
Narrator's voice: it was not AI.
@jor @emilymbender That's a pretty mild example of the bad that can come from these things.

@emilymbender Right at the top of the story (italics mine): "Riddled with cancer, Rosie the rescue dog had only months to live, until her dogged owner collared a chatbot to collaborate with elite medical scientists in the quest for a cure."

Thanks for unpacking this. We know how many people don't read past headlines. sigh

@emilymbender This is AUS not Oz but even here when AI/magic tech actually “works” there is often a human behind the curtain.
@Pineywoozle I'd say there is 'always' a human behind the curtain. And there should always be a human to check the LLM's results. They are proven to be wrong, or to hallucinate whole sequences. @emilymbender
@Tooden Yep. They do a few things well on their own, but not much, and definitely not the way it’s hyped.

@emilymbender

Some (I for one) think that calling an #LLM "#ArtificialIntelligence" is a misnomer. More marketing hype than anything "intelligent".

As you said above... using #AI pattern matching software for molecular engineering is one thing. Using an LLM to produce #AIslop #microslop #clickbait is another.

@emilymbender i mean maybe chatgpt wrote the article
@emilymbender I'm seeing a common misconception from my non-tech friends that generative AI is capable of innovation. I try to explain that it can't do that, but it doesn't seem to sink in. Headlines like this aren't helping.

@emilymbender

Somewhat like "the pedestrian was killed by a car" news reports.