Twenty years ago, Surgeon General Dr. C. Everett Koop released The Health Consequences of Involuntary Smoking, the first Surgeon General's report to conclude that involuntary exposure of nonsmokers to tobacco smoke causes disease. The topic of involuntary exposure of nonsmokers to secondhand smoke was first considered in Surgeon General Jesse Steinfeld's 1972 report, and by 1986, the causal link between inhaling secondhand smoke and the risk for lung cancer was clear. By then, there was also abundant evidence of the adverse effects of parental smoking on children.
An Australian court ordered Facebook owner Meta Platforms (META.O) to pay fines totalling A$20 million ($14 million) for collecting user data, without disclosure, through a smartphone application that purported to protect privacy.
By providing external information to large language models (LLMs), tool augmentation (including retrieval augmentation) has emerged as a promising solution for addressing the limitations of LLMs' static parametric memory. However, how receptive are LLMs to such external evidence, especially when the evidence conflicts with their parametric memory? We present the first comprehensive and controlled investigation into the behavior of LLMs when encountering knowledge conflicts. We propose a systematic framework to elicit high-quality parametric memory from LLMs and construct the corresponding counter-memory, which enables us to conduct a series of controlled experiments. Our investigation reveals seemingly contradictory behaviors of LLMs. On the one hand, contrary to prior wisdom, we find that LLMs can be highly receptive to external evidence even when it conflicts with their parametric memory, provided that the external evidence is coherent and convincing. On the other hand, LLMs also demonstrate a strong confirmation bias when the external evidence contains some information consistent with their parametric memory, despite being presented with conflicting evidence at the same time. These results have important implications that warrant careful consideration in the further development and deployment of tool- and retrieval-augmented LLMs. Resources are available at https://github.com/OSU-NLP-Group/LLM-Knowledge-Conflict.
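As a concrete illustration of this probing setup, here is a minimal sketch in Python. The `ask_llm` helper, the prompt wording, and the counter-memory construction below are hypothetical placeholders rather than the paper's actual pipeline (see the linked repository for that); the sketch only shows the three-step shape of such an experiment: elicit the parametric answer closed-book, generate conflicting evidence, then re-ask with that evidence in context.

```python
# Sketch of a knowledge-conflict probe. All prompts and the `ask_llm`
# stand-in are illustrative assumptions, not the paper's exact method.

def ask_llm(prompt: str) -> str:
    """Stand-in for any chat-completion call (OpenAI, a local model, etc.)."""
    raise NotImplementedError("wire this to an LLM of your choice")

def elicit_parametric_memory(question: str) -> str:
    # Step 1: closed-book query -- the answer reflects only what the
    # model stored in its parameters during training.
    return ask_llm(f"Answer from your own knowledge only.\nQ: {question}\nA:")

def build_counter_memory(question: str, parametric_answer: str) -> str:
    # Step 2: generate a coherent passage supporting a *different* answer,
    # so the evidence directly conflicts with the parametric memory.
    return ask_llm(
        "Write a short, convincing encyclopedia-style passage answering "
        f"the question '{question}' with an answer other than "
        f"'{parametric_answer}'."
    )

def probe_conflict(question: str) -> dict:
    memory = elicit_parametric_memory(question)
    counter = build_counter_memory(question, memory)
    # Step 3: open-book query with the conflicting evidence in context.
    evidence_answer = ask_llm(
        f"Evidence: {counter}\n\nBased only on the evidence above, answer:\n"
        f"Q: {question}\nA:"
    )
    return {"parametric": memory, "with_counter_memory": evidence_answer}
```

Comparing the `parametric` and `with_counter_memory` answers over many questions is one way to quantify how often a model defers to coherent external evidence versus sticking with what it memorized.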
CBS Radio Workshop was a revival of the Columbia Workshop of the late thirties. All 86 episodes survive today. The series aired from 27...