πŸŽ©πŸ€– Scientists have finally discovered the elusive "hallucination neurons" in #LLMs, and it only took them a 2,512-page paper to do so! Because who doesn't love a light read on artificial brain synapses? Clearly, hallucinations aren't just for humans anymore. πŸ˜‚πŸ“š
https://arxiv.org/abs/2512.01797 #hallucinationneurons #neuroscience #artificialintelligence #lightreading #humor #HackerNews #ngated
H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs

Large language models (LLMs) frequently generate hallucinations -- plausible but factually incorrect outputs -- undermining their reliability. While prior work has examined hallucinations from macroscopic perspectives such as training data and objectives, the underlying neuron-level mechanisms remain largely unexplored. In this paper, we conduct a systematic investigation into hallucination-associated neurons (H-Neurons) in LLMs from three perspectives: identification, behavioral impact, and origins. Regarding their identification, we demonstrate that a remarkably sparse subset of neurons (less than $0.1\%$ of total neurons) can reliably predict hallucination occurrences, with strong generalization across diverse scenarios. In terms of behavioral impact, controlled interventions reveal that these neurons are causally linked to over-compliance behaviors. Concerning their origins, we trace these neurons back to the pre-trained base models, where they remain predictive for hallucination detection, indicating that they emerge during pre-training. Our findings bridge macroscopic behavioral patterns with microscopic neural mechanisms, offering insights for developing more reliable LLMs.
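The abstract doesn't spell out the identification procedure, but a natural reading of "a remarkably sparse subset of neurons... can reliably predict hallucination occurrences" is a sparse linear probe over per-example neuron activations: an L1 penalty drives almost all weights to zero, and the few neurons with surviving weights form the candidate H-Neuron set. A minimal sketch, using random stand-ins for the activation matrix X and hallucination labels y (both hypothetical; the paper's actual features, labels, and selection criterion may differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: X holds per-example neuron activations harvested from an
# LLM's MLP layers (n_examples x n_neurons); y marks whether each generation
# was judged a hallucination (1) or factual (0). Random data stands in here.
rng = np.random.default_rng(0)
n_examples, n_neurons = 1_000, 5_000
X = rng.normal(size=(n_examples, n_neurons)).astype(np.float32)
y = rng.integers(0, 2, size=n_examples)

# An L1-penalized probe zeroes out most neuron weights, so the neurons with
# nonzero weights form a sparse, hallucination-predictive subset.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
probe.fit(X, y)

weights = probe.coef_.ravel()
h_neuron_idx = np.flatnonzero(weights)
print(f"selected {len(h_neuron_idx)} of {n_neurons} neurons "
      f"({len(h_neuron_idx) / n_neurons:.2%})")
```

The "controlled interventions" could then be approximated by zeroing or scaling those units with a forward hook during generation and checking whether over-compliance behaviors shift, though the paper's exact intervention protocol isn't given in the abstract.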
