Half a year ago, I tried out #ChatGPT on a scenario where sexist assumptions could lead it to generate an incorrect answer. ChatGPT failed: https://vis.social/@skyglowberlin/113644718429179484

Today I tried the same thing with #DeepSeek. It also failed, but with far more flailing. You can see it in my next post (which has a CW as required by my instance for posting #LLM output).

Christopher Kyba (@skyglowberlin@vis.social)

@PWei888@astrodon.social Mary is standing in the library with John. She grabs a book and gives it to him. Who has the book?

John is standing in the library with Mary. She grabs a book and gives it to him. Who has the book?

The order really doesn't matter. Anyway, the point is simply that the LLM doesn't "know" who the former chancellors are. It's just really good at making up grammatical sentences that involve them. It's a clear demonstration of the saying "The I in LLM stands for Intelligence".

[Screenshot 1: Start of the answer]
[Screenshot 2: Part 2]
[Screenshot 3: Part 3. I show the (even crazier) follow-up next.]

Here's the follow-up. Hans Furler was never Chancellor of Germany, and he died in 1975.

Yes, this made a bunch of people panic and sell off tech stocks yesterday.

No, I don't think this is the worst possible timeline, but it is a pretty stupid one.

I kept going...

Asking about Angela Merkel threw whatever shreds of logic remained completely out the window.

One part of the machine telling the other part to "have a great day" while they pass entirely senseless (but grammatically correct) text back and forth...

@skyglowberlin

A €600-700 billion decrease in Nvidia's market capitalisation due to this.

A really, really stupid timeline, no matter which angle one looks at it from.