@ct_bergstrom @emilymbender
Carl, I really appreciate your portion of the article. The following quotes from you and your co-authors pinpoint my concerns: such misconstructions being projected across the net, and quoted with no context around how they were generated, seem to me likely to lead to a chaotic dumbing down of our collective understanding.
From the article above (https://www.pnas.org/doi/10.1073/pnas.2401227121):
"LLMs are simply models of word form distributions extracted from text—not models of the information that people might get from reading that text (45). "
"And most importantly, when someone uses an LLM to generate a literature review, the claims generated are not directly derived from the manuscripts cited. Rather, the machine creates textual claims, and then predicts the citations that might be associated with similar text. Obviously, this practice violates all norms of scholarly citation. At best, LLMs gesticulate toward the shoulders of giants. "
"Automatically generating something that looks like a manuscript is very different from the iterative process of actually writing a manuscript. Yet the output can be difficult to distinguish, particularly in a cursory read or by inexpert readers. "
"This false dichotomy between communication and investigation reflects a fundamental misunderstanding of the nature of science (56) that devalues the communicative aspects of science and ignores the role of writing in the process of formulating, organizing, and refining ideas."
This last point captures, in my view, one of the most critical dangers that misuse of AI poses to new discovery and creative thought.
Thank you for clearly pointing this out.