#Business #Misconceptions
Stop asking AI about how it works · “The model is just confidently hallucinating its own 'reasoning.’” https://ilo.im/16932p
_____
#AI #Reasoning #Hallucinations #Design #ProductDesign #UxDesign #UiDesign #WebDesign #Development #WebDev

LLMs Hallucinate: Why AI Explanations Are Often Made Up | Britney Muller on LinkedIn
Stop asking ChatGPT about how it works (it's making stuff up).

We've all seen this: someone gets a weird answer from an LLM and asks, "Why did you say that?" The LLM replies, "I mentioned X because of Y..." and the user thinks they've uncovered some underlying LLM logic. The model is just confidently hallucinating its own 'reasoning'.

LLMs are not decision trees. They are not SQL databases you can query. They are not logging anything internally to read back to you. They are not truth engines. They are probability machines.

When you ask a model "Why did you answer that way?", it isn't looking back at its own architecture or code. It's just predicting what a helpful AI assistant would say in this situation. This is called post-hoc rationalization: inventing a story or reason that fits or justifies the answer it just gave you.

On top of that, LLMs are tuned (via RLHF) to be helpful and agreeable. Sycophants by design, telling you what they think you want to hear.

Best analogy I can come up with for these sycophantic tendencies: ever asked a kid with chocolate all over their face, "Did you have a cookie?" (when they weren't supposed to)? They'll come up with the answer most likely to make you happy (and keep them out of trouble): "The dog ate one!" The big difference? Kids know the truth. LLMs don't; they just invent one.

Note: I want to applaud everyone testing these tools; it's the most powerful path to learning. It's one thing to read about this stuff and an entirely different thing to experience it. Be kind to yourself, be kind to others, and stay skeptical!
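To make the mechanics concrete, here is a minimal Python sketch. The `complete` function is a hypothetical stand-in for any chat-model call, not a real API; the point is that a "why did you say that?" follow-up is just more text appended to the conversation, which the model continues with a plausible-sounding prediction. It has no internal trace to consult.

```python
def complete(conversation: str) -> str:
    """Hypothetical stand-in for any LLM call: text goes in, plausible text
    comes out. There is no internal log or decision record to consult."""
    # Canned reply for this sketch; a real model would sample likely tokens here.
    return "Because quince is a well-known fruit that starts with Q."


history = (
    "User: Name a fruit that starts with Q.\n"
    "Assistant: Quince.\n"
)

# Asking "why" is handled exactly like any other prompt: the question is
# appended to the conversation and a new continuation is predicted from it.
history += "User: Why did you pick that one?\nAssistant: "
explanation = complete(history)

# `explanation` is a fresh prediction conditioned only on the text above,
# i.e. a post-hoc rationalization, not a readout of how "Quince" was produced.
print(explanation)
```

Real chat APIs behave the same way at this level: the "why" turn is simply another message in the prompt, so the "explanation" is generated by the same next-token prediction that produced the original answer.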