quack!

@feliks

As I've said before, the difference between an LLM and a rubber duck is that the duck is smart enough to shut up when it has nothing useful to say.

@david_chisnall @feliks @stilgherrian So, are you saying that LLMs arenโ€™t resistant to rubber-duck cryptanalysis?!

@david_chisnall @feliks That's because LLMs aren't intelligent. They're billed as "Artificial Intelligence" but what they actually are is word prediction algorithms.

It's why they suck at math and produce hallucinations. They don't actually know anything. Sure, they have datasets they can draw from, but the LLM doesn't actually know the difference between "banana" and "iron", only that "I eat a banana" makes sense while "I eat iron" doesn't.

@stilgherrian It doesn't even 'know' which one 'makes sense'; it's just that the training data is more likely to have references to eating bananas than iron. And if there does happen to be a reference to iron-eating in there somewhere, it might just pop that out at you anyway!
@dj2mn @stilgherrian Also, whether "eat iron" is plausible depends on context, and this is exactly the scale difference between autocorrect (predicting the next most-likely keystrokes, to form words, given the last few words) and an LLM (predicting the next words, to form sentences and paragraphs, based on a few pages' worth of text). So if "anemia" appeared in previous paragraphs, the probability of picking "iron" (and "supplements") increases, because that sequence of words is plausible in a medical context.
...
@dj2mn @stilgherrian ...
and this is what increases the illusion of "knowing" that tricks our human brains, optimized for seeing Jesus' face in a piece of toast, into imagining there could be some intelligence under an overly complex statistical parlor trick.
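The scale difference described above can be sketched with a toy example. This is not how a real LLM works (no neural network, no tokenizer); all counts below are invented purely to illustrate how a wider context can shift next-word probabilities toward "iron" once something like "anemia" has appeared upstream.

```python
# Toy sketch only: hand-made conditional word counts, NOT a real language model.
from collections import Counter

# Hypothetical counts of what follows "I eat ..." in some imagined corpus,
# with no topic signal (autocorrect-scale context: just the last few words).
base_counts = Counter({"a banana": 90, "bread": 40, "iron": 2})

# Hypothetical counts for the same continuation when pages of prior context
# mention "anemia" (LLM-scale context: medical text discusses dietary iron).
anemia_counts = Counter({"iron": 55, "iron supplements": 30, "a banana": 10})

def next_word_probs(counts):
    """Turn raw counts into a probability distribution over next words."""
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# Without topic context, "iron" is a rare continuation...
print(next_word_probs(base_counts))
# ...but with "anemia" somewhere in the preceding pages, it dominates.
print(next_word_probs(anemia_counts))
```

The point of the sketch: neither distribution involves any understanding of bananas or iron; the "medical plausibility" is just a different slice of counted word sequences.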
@david_chisnall @feliks in my experience, the rubber duck method can lead you to think outside the box; the LLM will happily devise more ways to keep you in the box forever.

@michelv @david_chisnall @feliks I use the rubber duck method for things other than coding; it's a great way to get insight into how well you convey a message in a presentation, or what information is missing from documentation you're writing.

LLMs can't help with this and never will.

@david_chisnall @feliks

My opinion of ducks (rubber and living) increases every day. XD

Pretty sure I've had a more enjoyable conversation with an actual living quacker than with an LLM, particularly an LLM stan. XD

@feliks I approve this message
@feliks my favorite "ducky"
@feliks it's almost as if we are an intelligent species. Fascinating.

@SholemAlejchem @feliks I wouldn't go that far ...
Surely we have our moments, but 'intelligent'?

I'll have to press X to doubt that one.

@feliks Exactly. This is primarily what it is to me. If you can write a decent prompt, you'll probably have thought it through.