Katie Conrad

@kconrad
23 Followers
30 Following
14 Posts
Professor of English @ University of Kansas. Exploring science, technology, education, literature & culture. Particularly interested in generative AI.
Pandora's Bot [Substack]
https://kconrad.substack.com
KU English profile
https://english.ku.edu/people/kathryn-conrad
LLMs select a handful of words from billions of possibilities. The words are selected one by one until they arrive at a destination that aligns with what we ask of them. But at no point do they understand the words they are writing, in the same way the moth Macrocilix maia doesn't have any conception of looking like bird shit or how that benefits it. It simply happens, much like LLM-generated sentences! No awareness needed.
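The word-by-word selection described above can be sketched in a few lines. This is a toy illustration only: the lookup table stands in for a real model's learned probability distribution, and all names and numbers here are made up for the example.

```python
import random

# Toy next-word "model": maps a context string to a probability
# distribution over candidate next words. A real LLM computes this
# distribution with a neural network over billions of parameters;
# this hand-written table is a stand-in for illustration only.
toy_model = {
    "the cat": {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    "the cat sat": {"on": 0.8, "quietly": 0.2},
    "the cat sat on": {"the": 0.9, "a": 0.1},
}

def generate(context, steps, rng=None):
    """Pick words one at a time by weighted random choice from the
    model's next-word distribution -- no understanding involved,
    just repeated sampling until the sequence ends."""
    rng = rng or random.Random(0)
    for _ in range(steps):
        dist = toy_model.get(context)
        if dist is None:  # no continuation known: stop generating
            break
        words, probs = zip(*dist.items())
        context += " " + rng.choices(words, weights=probs)[0]
    return context

print(generate("the cat", 3))
```

The point of the sketch is that each step is only a weighted draw from a distribution; nothing in the loop models meaning, intent, or truth.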

Hallucination is Inevitable: An Innate Limitation of Large Language Models

https://arxiv.org/abs/2401.11817

"In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. [...] By employing results from learning theory, we show that LLMs cannot learn all of the computable functions and will therefore always hallucinate."

@garymarcus

Hallucination is Inevitable: An Innate Limitation of Large Language Models

Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all the computable functions and will therefore inevitably hallucinate if used as general problem solvers. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs. Furthermore, for real world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications on the safe deployment of LLMs.

arXiv.org
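The paper's formal definition can be sketched concretely: hallucination is any input on which a computable model disagrees with the computable ground-truth function. The functions below are hypothetical toys invented for illustration, not taken from the paper.

```python
# Toy illustration of the formal definition (hypothetical functions,
# not the paper's): hallucination = inconsistency between a model
# and the ground truth on some input.

def ground_truth(x: int) -> int:
    """The 'world': the correct answer for every input."""
    return x * x

def llm(x: int) -> int:
    """A model that learned small cases correctly but
    extrapolates wrongly outside its training range."""
    return x * x if x < 100 else x * 100

def hallucinates_on(xs):
    """Inputs where the model is inconsistent with ground truth."""
    return [x for x in xs if llm(x) != ground_truth(x)]

print(hallucinates_on(range(98, 103)))  # wrong on 101 and 102
```

The paper's learning-theory result is that for any computable model there will always be some such inputs; no finite amount of patching drives the disagreement set to empty.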

AI is how those with power get paid a billion times for the unpaid labour of the powerless.

Remember that AI is not some magical means of generating value from thin air, but rather an exploitative means of extracting it from the soils of labour and the land.

Keep tapping the sign, people.

Will guardrails save OpenAI from its copyright infringement problem? Gary Marcus and I discuss.
https://open.substack.com/pub/garymarcus/p/dall-es-new-guardrails-fast-furious?r=97c7a&utm_campaign=post&utm_medium=web
DALL-E’s New Guardrails: Fast, Furious, and Far from Airtight

If we had to guess, the line that's going to live in infamy from yesterday's OpenAI announcement, a reply to the New York Times lawsuit, is probably the one that says "'Regurgitation' is a rare bug that we are working to drive to zero." Our view? Good luck with that.

Marcus on AI
Thanks to Reid Southen on Twitter, I did a little prompting of my own on Midjourney. Left, original IP; right, Midjourney. Yeah, Disney / Lucasfilm / Marvel / DC can absorb the hit, but what about the regular artists whose work has been scraped to train these systems? Who have lost work, been lowballed, been fired even as their work is stolen to benefit Big Tech? To paraphrase Jane Rosenzweig, to what problem is this the solution?

Amazed by how appealing the AI Pedagogy Project from the metaLAB at Harvard's Berkman Klein Center is.

It's not overwhelming! (Most things AI are)

The activities are engaging, critical, useful, and playful, and the visuals are beautiful. Honored to be included and involved in the advisory group!
http://www.aipedagogy.org

The AI Pedagogy Project – metaLAB (at) Harvard

“Critical AI Literacy in a Time of Chatbots:
A Public Symposium for
Educators, Writers, and Citizens”—registration open! Free and open to the public. Please share widely!
https://sites.rutgers.edu/critical-ai/event-details/
Events – Critical AI

“Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’”

More accurately, AI researchers have always said that this isn't fixable, but y'all were too obsessed with listening to con artists to pay attention. Now the con is wearing thin. https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/

Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’

Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

Fortune
We'll be holding an MLA/CCCC-sponsored webinar on July 26 (11 AM Pacific/2 PM Eastern). https://webinars.mla.org/webinar/what-ai-means-for-teaching/ Our focus will be on the working paper, and we hope to learn more from participants about their needs and identify future priorities. The task force hopes to engage in a recursive process, learning more about the needs our two organizations can respond to as LLMs/ChatGPT influence writing and literature classrooms.
What AI Means for Teaching - MLA Webinars

Join us for this free webinar about the risks and benefits of AI and some recommendations for navigating AI in the college classroom.

MLA Webinars
Excited to share my Blueprint for an AI Bill of Rights for Education; thanks to @CriticalAI for the platform. Hoping this can help guide policy in our schools to protect ourselves, our students, and our educational mission. https://criticalai.org/2023/07/17/a-blueprint-for-an-ai-bill-of-rights-for-education-kathryn-conrad/?amp=1
SNEAK PREVIEW: A Blueprint for an AI Bill of Rights for Education  

By: Kathryn Conrad [Critical AI 2.1 is a special issue, co-edited by Lauren M.E. Goodlad and Matthew Stone, collecting interdisciplinary essays and think pieces on a wide range of topics invol…

Critical AI