#Palantir Demos Show How the #Military Could Use #AI #Chatbots to Generate #War Plans
Software demos and #Pentagon records detail how chatbots like #Anthropic ’s Claude could help the Pentagon analyze intelligence and suggest next steps.
#ai #artificialintelligence
> just what we need. A #hallucinating AI planning a war battle.
#Hallucinations: What Causes Them?
There are some surprising reasons for #hallucinating, and some are even normal.
https://www.psychologytoday.com/us/blog/the-mind-doctor/202509/hallucinations-what-causes-them
According to researchers at University of #Glasgow, #LLMs are not #hallucinating. The technical term is #bullshitting.
"Calling their mistakes ‘hallucinations’ lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived. This is the wrong metaphor.... they are not trying to convey information at all. They are bullshitting."
https://link.springer.com/article/10.1007/s10676-024-09775-5

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
Classic … #ChatGPT #hallucinating again. 🙃
A working solution is using ANSI escape codes.
\e[7m inverted text here \e[27m
Join Mark Russinovich & Scott Hanselman to explore the landscape of generative AI security, focusing on large language models. They cover the three primary risks in LLMs: Hallucination, indirect prompt injection and jailbreaks (or direct prompt injection). We'll explore each of these three key risks in depth, examining their origins, potential impacts, and strategies for mitigation and how to work towards harnessing the immense potential of LLMs while responsibly managing their inherent risks.
"Google Gemini misidentified a poisonous mushroom, saying it was a common button mushroom."—Emily Dreibelbis Forlini >
https://www.pcmag.com/news/dogs-playing-in-the-nba-googles-ai-overviews-are-already-spewing-nonsense
#AI #Google Gemini #hallucinating #misinformation #AIdangers
https://techround.co.uk/news/oxfords-computer-science-ai-hallucinates/
An interesting identification has been made at #Oxford University. They have developed an #algorithm that lets you identify when an AI is "hallucinating".
Dr. Sebastian Farquhar, a co-author of the study, explains, “We’re essentially asking the #AI the same question multiple times and observing the #consistency of the answers. A high #variation suggests the AI might be #hallucinating.”
This method is focusing on what they call ‘semantic entropy’.