Don't Let AI Write For You

> When I send somebody a document that whiffs of LLM, I’m only demonstrating that the LLM produced something approximating what others want to hear. I’m not showing that I contended with the ideas.

This eloquently states the problem with sending LLM content to other people: As soon as they catch on that you're giving them LLM writing, it changes the dynamic of the relationship entirely. Now you're not asking them to review your ideas or code, you're asking them to review some output you got from an LLM.

The worst LLM offenders in the workplace are the people who take tickets, have Claude do the ticket, push the PR, and then go idle while they expect other people to review the work. I've had to have a few uncomfortable conversations where I explain to people that it's their job to review their own submissions before submitting them. It's something that should be obvious, but the magic of seeing an LLM produce code that passes tests or writing that looks like it agrees with the prompt you wrote does something to some people's brains.

The title of this article is Don't Let AI Write For You, when its point seems to be closer to Don't Let AI Think For You (see "Thinking").

This distinction is important, because (1) writing is not the only way to facilitate thinking, and (2) writing is not necessarily even the best way to facilitate thinking. It's definitely not the best way (a) for everyone, (b) in every situation.

Audio can be a great way to capture ideas and thought processes. Rod Serling wrote predominantly through dictation. Mark Twain wrote most of his autobiography by dictation. Mark Duplass on The Talking Draft Method (1m): https://www.youtube.com/watch?v=UsV-3wel7k4

This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say.

From there, you can (and should) leverage AI for transcripts, light transcript cleanups, grammar checks, etc.

I would count direct dictation (e.g., someone writes down what you say, and that is the final text) as writing, in the context of producing a document (book, etc.) that you intend others to read.

It's not the same thing as talking to someone (or a group) about something.

I'm finding AI great to have a conversation with to flesh out ideas, with the added benefit it can summarize everything at the end.

You're being steered without being aware of it.

Worse. You're being steered along a circle.

I do this a lot. Start by telling the AI to just listen and only provide feedback when asked. Lay out your current line of thinking conversationally. Periodically ask the AI to summarize/organize your thoughts "so far". Tactically ask for research into a decision or topic you aren't sure about and then make a decision inline.

Then once I feel like I have addressed all the areas, I ask for a "critical" review, which usually pokes holes in something that I need to fix. Finally have the AI draft up a document (though you generally have to tell it to be as concise and clear as possible).

Writing is, however, a uniquely distinct and well-studied way to facilitate thinking.

I've definitely lost something since migrating my Artist's Way morning pages to the netbook. (Worth it, though, to enable grep, and now RAG.)

Yeah, this is my problem. I can come up with ideas, but in writing my ideas never come out well. AI has helped me to express my ideas better. People who write well or are successful at writing sometimes fail to understand how uncommon it is to actually be good at writing. Shit is hard.

> Audio can be a great way to capture ideas and thought processes ... This can work especially well for people who are distracted by form and "writing correctly" too early in the process, for people who are intimidated by blank pages, for non-neurotypical people, etc. Self-recording is a great way to set all of those artifacts of the medium aside and capture what you want to say.

Yes, this is my process:

Record yourself rambling out loud, and import the audio in NotebookLM.

Then use this system prompt in NotebookLM chat:

> Write in my style, with my voice, in first person. Answer questions in my own words, using quotes from my recordings. You can combine multiple quotes. Edit the quotes for length and clarity. Fix speech disfluencies and remove filler words. Do not put quotation marks around the quotes. Do not use an ellipsis to indicate omitted words in quotes.

Then chat with "yourself." The replies will match your style and will be source-grounded. In fact, the replies automatically get footnotes pointing to specific quotes in your raw transcripts.

This workflow may not save me time, but it helps me get started, or get unstuck. It helps me stop procrastinating and manage my emotions. I consider it assistive technology for ADHD.

I've long considered writing to be the "last step in thinking". I can't tell you how many times an idea that was crystal clear in my mind fell apart the moment I started writing, and I realized there were major contradictions I needed to resolve. Likewise, there have been numerous times where writing about something loosely and casually revealed something that fundamentally changed how I viewed a topic and really consolidated my thinking.

However, there is a lot of writing that is basically just an old-school form of context engineering. While I would love to think that a PRD is a place to think through ideas, I think many of us have encountered situations, pre-AI, where PRDs were basically context dumps without any real planning or thought.

For these cases, I think we should just drop the premise altogether that you're writing. If you need to write a proposal for something as a matter of ritual, give it to AI. If you're documenting a feature to remember context only (and not really explain the larger abstract principles driving it), it's better created as context for an LLM to consume.

Not long ago my engineering team was trying to enforce writing release notes so people could be aware of breaking changes, but then people groaned at the idea of having to read them. The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.

I think it's going to be a while before the full impact of AI really works its way through how we work. In the meantime we'll continue to have AI-written content fed back into AI and then sent back to someone else (when this could all be a more optimized, closed loop).

For your context, I'm an AI hater, so understand my assumptions as such.

> The obvious best solution is to have your agent write release notes for your agent in the future to have context. No more tedious writing or reading, but also no missing context.

Why is more AI the "obvious" best solution here? If nobody wants to read your release notes, then why write them? And if they're going to slim them down with their AI anyway, then why not leave them terse?

It sounds like you're just handwaving at a problem and saying "that's where the AI would go" when really that problem is much better solved without AI if you put a little more thought into it.

This is kind of a fundamental issue with release notes. They are broadcasting lots of information, and only a small amount of information is relevant to any particular user (at least in my experience).

If I had a technically capable human assistant, I would have them filter through release notes from a vendor and only give me the relevant information for APIs I use. Having them take care of the boring, menial task so I can focus on more important things seems like a no brainer. So it seems reasonable to me to have an AI do that for me as well.

> writing about something loosely and casually revealed to me something that fundamentally changed how I viewed a topic and really consolidated my thinking

You see the same thing in teaching, perhaps even more because of the interactive element. But the dynamic in any case is the same. Ideas exist as a kind of continuous structure in our minds. When you try to distill that into something discrete you're forced to confront lingering incoherence or gaps.

> agent write release notes for your agent in the future...

I have been going back to verbose, expansive inline comments. If you put the "history" inline, it is context; if you stuff it off in some other system, it's an artifact. I can't tell you how many times I have worked in an old codebase that references a "bug number" in a long-dead tracking system.

But how do you deal with communicating that some library you maintain has a behavior change? People already need to know to look at your code in order to read your comments.

> For these cases, I think we should just drop the premise altogether that you're writing.

Sure.

> If you need to write a proposal for something as a matter of ritual, give it AI. If you're documenting a feature to remember context only (and not really explain the larger abstract principles driving it), it's better created as context for an LLM to consume.

No, no, no. You don't need to take that step. Whatever bullet-point list you're feeding in as the prompt is the relevant artifact you should be producing and adding to the bug, or sharing as an e-mail, or whatever.

I agree with most of this, but my one qualm is the notion that LLMs "are particularly good at generating ideas."

It's fair enough that you can discard any bad ideas they generate. But by design, the recommendations will be average, bland, mainstream, and mostly devoid of nuance. I wouldn't encourage anyone to use LLMs to generate ideas if you're trying to create interesting or novel ideas.

I have found one of the better use cases of LLMs to be a rubber duck.

Explaining a design, problem, etc., and trying to find solutions is extremely useful.

I can bring novelty, what I often want from the LLM is a better understanding of the edge cases that I may run into, and possible solutions.

I'm torn.

I sometimes use them when I'm stuck on something, trying to brainstorm. The ideas are always garbage, but sometimes there is a hint of something in one of them that gets me started in a good direction.

Sometimes, though, I feel MORE stuck after seeing a wall of bad ideas. I don't know how to weigh this. I wasn't making progress to begin with, so does "more stuck" even make sense?

I guess I must feel it's slightly useful overall as I still do it.

Mainstream ideas are often good. That's why they're mainstream. Being different for being different isn't a virtue.

That being said I don't think LLMs are idea generators either. They're common sense spitters, which many people desperately need.

I think it's just a confusing use of the term "generating." It's thinking of the LLM as a thesaurus. You actually generate the real idea -- and formulate the problem -- it's good at enumerating potential solutions that might inspire you.

> by design, the recommendations will be average

This couldn't be more wrong. The simplest refutation is just to point out that there are temperature and top-k settings which, by design, generate tokens (and by extension, ideas) that are less probable given the inputs.
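To make the mechanism concrete, here is a toy sketch of how temperature and top-k sampling interact at each decoding step. This is an illustrative simplification (real decoders work over tens of thousands of logits and often combine this with top-p and other tricks), and the function name is my own, but the math is the standard temperature-scaled softmax over the k highest-scoring tokens:

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=5):
    """Sample one token index from raw logits.

    Temperature < 1 sharpens the distribution toward the most
    probable token; temperature > 1 flattens it, making less
    probable tokens more likely. top_k restricts sampling to
    the k highest-scoring tokens.
    """
    # Keep only the top-k highest-scoring (index, score) pairs.
    ranked = sorted(enumerate(logits), key=lambda p: p[1], reverse=True)[:top_k]
    # Temperature-scaled softmax over the survivors (max-subtracted
    # for numerical stability).
    scaled = [score / temperature for _, score in ranked]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Draw one surviving index according to those probabilities.
    return random.choices([idx for idx, _ in ranked], weights=probs, k=1)[0]

# Near-zero temperature is effectively greedy decoding:
logits = [2.0, 1.0, 0.5, -1.0]
assert sample_token(logits, temperature=0.01, top_k=4) == 0
```

The point the comment makes falls out directly: raising `temperature` (or widening `top_k`) deliberately shifts probability mass onto tokens the model considers less likely, so the output is not condemned to be the single most "average" continuation.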

I feel like LLMs are just forcing me to realize what writing actually is. For me, writing is basically a mental cache clear. I write things down so I can process them fully and then safely forget them.

If I let an LLM generate the text, that cognitive resolution never happens. I can't offload a thought I haven't actually formed, so I have trouble safely forgetting about it.

Using AI for that is like hiring someone to lift weights for you and expecting to get stronger (I remember Slavoj Žižek equating it to mechanical lovemaking in a recent talk somewhere).

The real trap isn't that we/writers will be replaced; it's that we'll read the eloquent output of a model and quietly trick ourselves into believing we possess the deep comprehension it just spit out.

It reminds me of the shift from painting to photography. We thought the point of painting was to perfectly replicate reality, right up until the camera automated it. That stripped away the illusion and revealed what the art was actually for.

If the goal is just to pump out boilerplate, sure, let AI do it. But if the goal is to figure out what I actually think, I still have to do the tedious, frustrating work of writing it out myself.