#MissKittyWide #Mind #technology. If #Infinity, then space is your home. And since the body disappears and most people believe that they continue, then you are space. #Space is #Spirit. Ultimate #consciousness. You're not away. You're #hallucinating. Kids playing in a sandbox. I'm not dissociating.

#Palantir Demos Show How the #Military Could Use #AI #Chatbots to Generate #War Plans

Software demos and #Pentagon records detail how chatbots like #Anthropic ’s Claude could help the Pentagon analyze intelligence and suggest next steps.
#ai #artificialintelligence

> just what we need. A #hallucinating AI planning a war battle.

https://www.wired.com/story/palantir-demos-show-how-the-military-can-use-ai-chatbots-to-generate-war-plans/

Palantir Demos Show How the Military Could Use AI Chatbots to Generate War Plans

Software demos and Pentagon records detail how chatbots like Anthropic’s Claude could help the Pentagon analyze intelligence and suggest next steps.

WIRED

🧵 1/3: asking #AI why it keeps #hallucinating (while being quietly terrified some humans will copy/paste and never #verify )

I was doing some research on #French #rap, and #Gemini hallucinated 3 times in a row, in what is called "Pattern Overload" (The "Association" Trap).

The Mushroom That Makes the Brain See Little People

A common edible mushroom in parts of China is drawing scientific interest after being linked to consistent hallucinations of miniature human like figures.

Unexplained.ie

#Hallucinations: What Causes Them?
There are some surprising reasons for #hallucinating, and some are even normal.

https://www.psychologytoday.com/us/blog/the-mind-doctor/202509/hallucinations-what-causes-them

Hallucinations: What Causes Them?

Hallucinations are often frightening and are caused by mental and general medical illnesses. What are those illnesses? Are hallucinations ever normal?

Psychology Today

According to researchers at University of #Glasgow, #LLMs are not #hallucinating. The technical term is #bullshitting.

"Calling their mistakes ‘hallucinations’ lends itself to the confusion that the machines are in some way misperceiving but are nonetheless trying to convey something that they believe or have perceived. This is the wrong metaphor.... they are not trying to convey information at all. They are bullshitting."

https://link.springer.com/article/10.1007/s10676-024-09775-5

#LLM #ChatGPT #OpenAI #Claude #LLama #AI

ChatGPT is bullshit - Ethics and Information Technology

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

SpringerLink

Classic … #ChatGPT #hallucinating again. 🙃

A working solution is using ANSI escape codes.

\e[7m inverted text here \e[27m
@shanselman and Mark Russinovich learn responsible #AI. If there is one recorded #MSIgnite session you want to see, it is this one. Learn about limitations and threats through live demos like #jailbraking, #promptinjection, #reasoning, #hallucinating, #kindness, and how to prepare for them. https://ignite.microsoft.com/en-US/sessions/BRK329
Scott and Mark learn responsible AI

Join Mark Russinovich & Scott Hanselman to explore the landscape of generative AI security, focusing on large language models. They cover the three primary risks in LLMs: Hallucination, indirect prompt injection and jailbreaks (or direct prompt injection). We'll explore each of these three key risks in depth, examining their origins, potential impacts, and strategies for mitigation and how to work towards harnessing the immense potential of LLMs while responsibly managing their inherent risks.

"Google Gemini misidentified a poisonous mushroom, saying it was a common button mushroom."—Emily Dreibelbis Forlini >

https://www.pcmag.com/news/dogs-playing-in-the-nba-googles-ai-overviews-are-already-spewing-nonsense

#AI #Google Gemini #hallucinating #misinformation #AIdangers

https://techround.co.uk/news/oxfords-computer-science-ai-hallucinates/
An interesting identification has been made at #Oxford University. They have developed an #algorithm that lets you identify when an AI is "hallucinating".
Dr. Sebastian Farquhar, a co-author of the study, explains, “We’re essentially asking the #AI the same question multiple times and observing the #consistency of the answers. A high #variation suggests the AI might be #hallucinating.”

This method is focusing on what they call ‘semantic entropy’.

Oxford’s Computer Science Researchers Can Now Tell When AI Hallucinates

An interesting identification has been made at Oxford University. They have developed an algorithm that lets you identify when an AI is hallucinating. An AI hallucination is when AI models such as Gemini and ChatGPT

TechRound