ICYMI 👀

If you run a lot of #LLM workflows and are annoyed with #AI #hallucination, prompting your model with a few additional instructions for dealing with uncertainty might just clear the errors out of your workflow (a toy prompt sketch follows the preview below).

Give it a read! 👇

https://timthepost.com/posts/avoiding-model-hallucinations-through-structured-uncertainty-handling/

Squelch LLM Hallucination Via Structured Uncertainty Handling

A short essay on how uncertainty drives model hallucinations, plus a brief guide to curbing it.

Tim's Press
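
If you want to try the idea before reading, here's a minimal sketch in Python. The instruction wording, the model name, and the sample question are my own illustration, not taken from the article; any OpenAI-compatible chat client will do.

# Toy example: prepend uncertainty-handling instructions as a system prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

UNCERTAINTY_RULES = (
    "If you are not confident in an answer, say so explicitly. "
    "Separate facts you can verify from inferences you are making. "
    "If you cannot answer reliably, reply 'I don't know' instead of guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": UNCERTAINTY_RULES},
        {"role": "user", "content": "Does Debian's losetup have a --discard option?"},
    ],
)
print(response.choices[0].message.content)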

Happy 20th Anniversary, “Truthiness”

It’s probably even more relevant to this decade’s technology than last decade’s politics

#colbert #truthiness #ai #hallucination

I think I’ve maxed out on What Can Possibly Go Wrong, and I’m proceeding straight to Oh Fucking God.

#fungi #mushrooms #mycology #chatgpt #ai #artificialintelligence #fieldidentification #danger #genai #hallucination #whatcanpossiblygowrong

Are you the asshole? Of course not!—quantifying LLMs’ sycophancy problem https://arstechni.ca/LPkp #hallucination #AIsycophancy #sycophancy #madeup #facts #AI
Are you the asshole? Of course not!—quantifying LLMs’ sycophancy problem

In new research, AI models show a troubling tendency to agree with whatever the user says.

Ars Technica
Today’s lecture was on disjunctivist accounts of #hallucination. The students looked somewhat perplexed, but I’m betting that they understood more than they realised. They just don’t see yet how it constitutes an explanation of the (seeming) character of hallucination—and they are probably not alone in that! #PhilPerception #disjunctivism

Lisa's Quietude (2024-25)

New works from: #HalluciNation

If you deal with any kind of #LLM that produces any kind of #hallucination, please take fifteen minutes to read this now so you might not have to deal with it much longer.

LLM uncertainty is the primary driver of model hallucination, and I discuss ways to address it at every level, from inference all the way down to prompts you can use right now (a toy sketch of the inference side follows the preview below).

#AI is never going to be 'perfect' - let's drive the conversation toward what's practical instead.

https://timthepost.com/posts/avoiding-model-hallucinations-through-structured-uncertainty-handling/

Squelch LLM Hallucination Via Structured Uncertainty Handling

A short essay on how uncertainty drives model hallucinations, plus a brief guide to curbing it.

Tim's Press
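
To make the inference end concrete, here's a minimal sketch (my illustration, not code from the essay) that flags low-confidence answers using per-token log-probabilities. The model name and the 0.8 cutoff are arbitrary assumptions.

# Toy example: use token logprobs as a crude confidence signal.
import math
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "When was losetup added to util-linux?"}],
    logprobs=True,  # ask the API to return per-token log-probabilities
)

choice = resp.choices[0]
# Average per-token probability, a rough proxy for the model's confidence.
probs = [math.exp(t.logprob) for t in choice.logprobs.content]
confidence = sum(probs) / len(probs)

if confidence < 0.8:  # arbitrary threshold; tune for your workflow
    print(f"[low confidence: {confidence:.2f}] treat this answer as unverified:")
print(choice.message.content)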

Here is my morning trying to convince Gemini that, no, the --discard option in Debian's losetup is pure hallucination.

#ai #ml #hallucination #linux #debian #opensource #google #gemini

"Đángucci LLMs là nguồn tin 100% song? Aragon có thôibeat điểmfantomhallucination, datalimitation, và ranoutinfor. Nhưng mỗi năm, connghióai, AI jций một l זהuείς hơn expert Création. Vàyet, chpasstilbird vẫn nhiều thay đổi cần đ统一. Nhướng ý bạn? #AI #LLM #TinCậy #Hallucination #TechFuture"

https://www.reddit.com/r/singularity/comments/1oaxlx6/will_llms_become_a_reliable_source_of_information/