@Tattie You just reminded me of a movie-special-effects guy I met a long time ago.
He had a pitch for an educational TV show. The idea was that a group of schoolchildren would be brought in, and a scientist(-looking) person would explain something to them.
The twist was that only about the first half of the presentation would be strictly factual. After that, they'd slip in subtle errors, move on to less-subtle factual errors, and build up to completely OTTWTFBBQ fantasy ravings by the end.
The thrust of the setup was to see how long it would take the kids to twig that something ain't right, and to then speak up and start questioning it. The aim, of course, was to teach them (and the audience) to listen critically and question anything that smelled off, no matter how well-credentialled the speaker might be.
Sadly (and predictably) it never got optioned, let alone actually implemented.
I feel like we could really use that kind of thing... about 20 years ago, actually. Or maybe even earlier.
@Tattie Irrelevant but amusing aside: his company made TV adverts for Bundaberg rum, featuring a life-sized polar bear man-in-suit. You know, just the sort of thing you'd expect to find in tropical Queensland.
Said bear was idolised by many chest-beaty manly-men as the epitome of grunt-snort manliness.
The team were endlessly amused by the thought of how these guys would react if they found out that the man in the suit was, in fact, a) very, very gay, and b) had the physical appearance of about five and a half feet of knotted string.
@Tattie exactly.
I came across a text from 2023 recently that used that term. And I think it is the best term to describe LLMs in their current form, as they are pushed into every product.
But here is the source:
https://ryanleetaylor.com/blog/mansplaining-as-a-service
@BrKloeckner @Tattie I remember when the DVD DRM was cracked and someone worked out where in pi its digits appeared, as a kind of 'steganography' for communicating it legally.
It made me think that we could encode anything that way -- binaries, images, etc. Of course, you do have to *find* it in there...
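To put rough numbers on the "you do have to *find* it in there" part, here's a minimal sketch of the idea in Python (assuming mpmath for arbitrary-precision pi; the three-decimal-digits-per-byte encoding is just my own convention for the example, not how anyone actually did it):

```python
# Sketch: "store" data as an offset into the decimal expansion of pi.
# Each payload byte is encoded as three zero-padded decimal digits, so
# every extra byte multiplies the expected search length by ~1000.
from mpmath import mp


def pi_digits(n: int) -> str:
    """Return the first n decimal digits of pi after the decimal point."""
    mp.dps = n + 10                      # a little headroom for rounding
    return str(mp.pi)[2:n + 2]           # drop the leading "3."


def find_in_pi(payload: bytes, n_digits: int = 100_000) -> int | None:
    """Return the 0-based offset where the payload appears in pi, or None."""
    needle = "".join(f"{b:03d}" for b in payload)
    pos = pi_digits(n_digits).find(needle)
    return None if pos == -1 else pos


if __name__ == "__main__":
    # A single byte (3-digit needle) usually turns up within a few
    # thousand digits; a 5-byte key means a 15-digit needle, so you'd
    # expect to need on the order of 10**15 digits before you find it.
    print(find_in_pi(b"\x2a"))
```

So the "filename" is just (offset, length), and the catch is exactly the one above: the search cost explodes long before you get to anything the size of a binary or an image.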
Spamsplaining.
@daniel_bohrer I work with a few of those people. They'll plop an AI answer into the chat (sometimes one that's only vaguely, tangentially related to the actual question), and the number of times they come back a few minutes later with "oops, it lied" is astounding.
> LLMs are mansplaining as a service
@Tattie Let me explain to you why you're right…