From Emily Tucker's "An Open Letter to Georgetown Students, In Response to Recent Announcements by the University about 'Generative AI'":

[…] That’s why they are doing everything they can to convince you that you actually do not have the ability to think those thoughts, and that none of the ideas you might have about your own future are ideas that can actually be realized. It’s a big win for them, in their quest to persuade you of your powerlessness, that they have gotten your university to adapt their marketing language for its official statements, to shape its academic programming around the presumption of their indefinite economic primacy, and to pay for you to have free access to technologies that will make it harder — the more you use them — to know yourself to be a free intellectual, creative and moral agent.


A Response to Emily Tucker's Open Letter to Georgetown Students

Emily Tucker's letter to Georgetown students deserves a serious answer. Not because it is wrong about everything — it isn't — but because it is wrong in a way that matters: it teaches students that outrage and analysis are the same thing, and they are not.

Let me be clear about what Tucker gets right. The AI industry has a genuine accountability problem. The labor practices behind the systems marketed as "intelligent" — the data labelers in the Global South, the content moderators absorbing psychological trauma at scale — are largely invisible and genuinely troubling. The consolidation of data and infrastructure into the hands of a few corporations raises real questions about market power and democratic governance. The environmental costs of large-scale model training are not trivial and are routinely understated in industry communications. These are serious issues, and anyone working honestly in this field has to reckon with them.

But Tucker does not stop there. She slides, almost without noticing, from "the industry has done real harm" to "chatbots are instruments of your intellectual subjugation" to "the tech billionaires fear you" — a rhetorical escalation that ultimately undermines the legitimate concerns she raises.

Consider what she is actually arguing. She suggests that students who use AI tools will lose the capacity to be "free intellectual, creative and moral agents." This is a strong empirical claim, and she offers no evidence for it. The history of technology is full of tools that critics warned would degrade human capability — the calculator, the word processor, the search engine — and the record is considerably more complicated than the warnings suggested. That doesn't mean the concern is automatically wrong. It means it deserves scrutiny, not assertion.

She also frames the industry's messaging about AI adoption as uniquely sinister propaganda, while her own letter is a masterclass in the same technique she is criticizing: urgent language, existential stakes, an implied community of the enlightened versus the captured. "They need us to be so in the habit of denying our own capacity to resist that we actually do lose the capacity to resist." This is not analysis. This is the structure of a threat.

The most frustrating thing about Tucker's letter is that it forecloses the conversation it claims to be opening. If Georgetown's adoption of Gemini is simply "shameful capitulation," then there is nothing to discuss — no version of thoughtful institutional engagement with these technologies that could be legitimate. But that is not how most serious people working on AI governance actually think about it. The question is not whether to engage with these tools but how — what policies govern their use, what transparency is required of vendors, what student data protections are in place, what pedagogical frameworks help people use them critically rather than passively.

Those are hard, important questions. They require exactly the kind of careful, evidence-based reasoning that a great university education is supposed to produce. They will not be answered by a blanket refusal to engage, any more than questions about pharmaceutical industry corruption are answered by refusing to take medicine.

I work in this industry. I have my own significant concerns about where it is headed and how it is governed. But I have found that the most useful thing I can do with those concerns is to be precise about them — to distinguish between the harms that are well-documented and the harms that are speculative, between the practices that are genuinely new and the ones that rhyme with older forms of corporate overreach we already have tools to address. Precision is not the same as complacency. It is the precondition for effective action.

Tucker ends her letter by reminding students that the future is not known. That is true, and it is a genuinely generous thing to say. I would add only this: an unknown future is not well served by a map drawn entirely in ink.