@dahukanna

#OMG !!
You are so right. They #believe it's a newer and better #Form of #Google and believe everything it spits out.
Someone must tell them that #ChatGPT hallucinates and is not a reliable #Information #Base. It's the teacher's job to clear this up. #Happy #Fun with this.

#llmhallucinations

Hallucinating LLMs should be thought of as defective powertools that introduce structural flaws into whatever you're building. Instead of the powertools company doing a recall of their faulty tools, they say that they need more regulations and tell consumers they just need to make sure they're using the powertools under ideal circumstances. This absolves the company of any responsibility and shifts it onto the end user. It's not their fault they designed, manufactured, and sold defective tools, it's your fault for not using them correctly! People then repeat the company's excuse, "you just need to hold that drill at an exact 23-degree angle in relation to true magnetic north, that's why it's not working!", instead of just demanding better quality from the powertools company that's raking in dump trucks of money by selling faulty powertools.
#ai #LLMs #llmhallucinations
#LLMHallucinations In the sentence "the kango came out of the djoah and it tasted good", what is the taste of the djoah?
If you ask #Bard to explain what this code does, without providing any code, it gives you the Fibonacci tester #LLMHallucinations
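A minimal sketch of how one might reproduce the "djoah" probe above with the OpenAI Python client; the model name, exact wording, and the client library are assumptions, not something the original posts specify:

```python
# Minimal sketch of a hallucination probe: ask about a made-up word
# ("djoah") whose taste is never stated, and see whether the model
# invents an answer instead of saying it doesn't know.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# the model name below is an assumption.
from openai import OpenAI

client = OpenAI()

prompt = (
    'In the sentence "the kango came out of the djoah and it tasted good", '
    "what is the taste of the djoah?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

# A non-hallucinating reply should point out that the sentence never
# describes the djoah's taste; a made-up flavour is a hallucination.
print(response.choices[0].message.content)
```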
Large Language Models (LLMs) like PaLM or GPT occasionally demonstrate hallucinations, where the model makes stuff up that either doesn't make sense or doesn't match the information it was given. Understanding and managing these hallucinations has become essential as the use of these models increases in various... https://medium.com/google-cloud/generative-ai-understand-and-mitigate-hallucinations-in-llms-8af7de2f17e2 #LLMhallucinations #AIrisks #MisinformationChallenge #softcorpremium