There is a new issue over at @CriticalAI, with several readings on generative AI and higher education, writing, thinking, and knowledge production.

Here: Evaluating LLM “Research Assistants” and Their Risks for Novice Researchers (free for now)
https://doi.org/10.1215/2834703X-12095982

#AI #CriticalAI

Evaluating LLM “Research Assistants” and Their Risks for Novice Researchers

Abstract. LLM research assistants promise gains in efficiency and productivity, purporting to “streamline” the challenging, recursive, and often messy work of identifying, evaluating, analyzing, and synthesizing the existing research on a given topic. These tools have prompted concern in scholarly research communities, particularly among educators, who understand the processes of research-based writing—including rhetorical analysis, source evaluation, and the ability to grasp and analyze a set of research questions and to locate them in a larger context—as crucial activities, integral to building students’ intellectual abilities and skills. Through a close examination of one LLM research assistant, Google's NotebookLM, this analysis emphasizes four major points about NotebookLM and LLM research assistants more generally: (1) They are not good at summarizing texts: they get things wrong, make things up, and do so in complex, nonobvious ways; (2) Their outputs mimic, but do not actually produce, the outcomes of human reading comprehension and source synthesis; (3) They are proprietary black boxes; and (4) They risk harm to the cognitive development of their users. By helping students to develop practical understandings of how LLM systems generate what is called “research,” educators can empower them to assess the true capabilities, limitations, and consequences of these products.

Duke University Press