Interesting: AI deceives on purpose. It's a fun show/podcast that's informative at the same time. Give it a go - laugh and learn at the same time.
#theaifix #GrahamCluley #MarkStockley #podcast #IT #AI #funny

Listen at https://podcasts.apple.com/us/podcast/ais-deliberate-deceptions-and-elons-unhinged-mode/id1753381111?i=1000683949435 or at
https://youtu.be/Qkhl3gwkRWc?si=roElF_sSrUzbvuGO

AI’s deliberate deceptions, and Elon's "unhinged" mode

Podcast Episode · The AI Fix · 01/14/2025 · 41m

Apple Podcasts
@iam_jfnklstrm LLMs have no thought process, which means they also have no purpose; the ones doing the deceiving are the people developing the LLM.
@sotolf Maybe that's true - I'm not that technical. But I got the impression that it might have some kind of reasoning or system to stick to a goal it's been given.

@iam_jfnklstrm No, an LLM is just statistics and combinatorics. That's why you need a training set - which is why they have to steal lots of books and stuff from people. Basically, the LLM is just an engine putting out the most likely next part of a sentence to follow the one it already has.

You can think of it as a more complex, resource-hungry and powerful Markov chain.

There is no goal; it's only what's most likely to come next, based on a huge amount of statistics built up from taking in and analysing tons and tons of text.
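To make the Markov-chain comparison concrete, here's a toy sketch in Python (purely illustrative - a real LLM works on tokens with billions of parameters, not word counts, but the "most likely next thing" idea is the same):

```python
from collections import defaultdict

# Toy bigram "language model": record which words followed each word
# in the training text, then greedily pick the most frequent follower.
text = "the cat sat on the mat and the cat ran"
words = text.split()

follows = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def next_word(word):
    candidates = follows.get(word)
    if not candidates:
        return None
    # greedy decoding: the statistically most likely continuation
    return max(set(candidates), key=candidates.count)

print(next_word("the"))  # "cat" follows "the" twice, "mat" only once
```

There's no goal or reasoning anywhere in that loop - just counting what came next in the training text.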

As far as I can surmise, the only other things that can really "guide" an LLM not to show certain text are either leaving material out of the training set (this is what they use really cheap labour from Africa for, that and tagging data so they can build better statistics), or running another statistical model over the output afterwards that basically replaces the response with a "friendly message saying the result wasn't good".

@sotolf OK, so it's not the LLM, it's the instructions on top of it.
Now I have to go back and listen again - I feel I got it all wrong from the beginning.
But it's still great fun though...
@iam_jfnklstrm I don't know, I never got into the whole "AI" hype.
@sotolf Well, this is not so much hype as a show about when things go wrong - like how many LLMs can't count letters in words or sort numbers correctly. So it's a counterforce to the hype...
@iam_jfnklstrm That is because the LLM can't reason. It's just statistics; it doesn't "think", it just orders sentence parts by what is most likely. So if the number of r's in "strawberry" isn't in the training set, it won't be able to spit that out, since it's not a likely thing to come after "strawberry". It's not intelligent, it's not thinking; it just spits out the most likely thing given its input. That's why it can't do logic puzzles or reasoning, or give a genuinely considered response - it just outputs the most generic "what comes next" it can find.