"When we write documentation, we often assume someone will read it top to bottom. Even when we skim, we start at the top, absorb context, build a mental model. And we infer stuff, like if you’re reading design system docs, you probably already know what a design system is.
AI agents don’t work like this. They retrieve the most relevant chunk based on semantic similarity and produce a response from that slice. If the definition is three paragraphs in and the agent retrieves paragraph one, it fills in the gaps.
That’s where hallucination creeps in. Not because the model is careless, but because much of our documentation is structured for narrative flow, not retrieval. It was always fragile; humans were just good at compensating.
Writing for AI agents accidentally makes documentation more accessible. A screen reader user navigating by headings needs the same explicitness an AI agent needs. A new team member needs definitions that don’t assume prior knowledge. A developer working in a second language needs sentences that say exactly what they mean. Explicitness helps anyone who can’t rely on context to fill gaps.
Look at well-documented APIs. The ones that specify exactly what parameters do, what they return, what breaks. They’re used more, trusted more, cause fewer support tickets. Explicitness scales."
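The retrieval failure the excerpt describes can be sketched in a few lines. This is a toy illustration, not how any particular agent works: bag-of-words cosine similarity stands in for real embedding similarity, and the three doc chunks are invented examples of design system documentation.

```python
# Toy similarity-based chunk retrieval: word-overlap cosine similarity
# stands in for real embeddings; the "docs" are invented chunks.
from collections import Counter
import math

def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# The definition lives in the third chunk; the first is narrative lead-in.
chunks = [
    "Our design system started as a button library in 2019.",
    "Contributions go through the tokens repo and a review step.",
    "A design system is a set of reusable components and standards "
    "that teams use to build consistent interfaces.",
]

def retrieve(query: str, chunks: list[str]) -> str:
    # Only the single most similar chunk reaches the model;
    # everything else in the docs is invisible to it.
    return max(chunks, key=lambda c: cosine(query, c))

print(retrieve("tell me about our design system", chunks))
```

Here the narrative opener outscores the chunk that actually defines the term, so the model answers from history and fills the definitional gap itself — exactly the hallucination path described above.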
https://gerireid.com/blog/ai-is-accidently-making-documentation-accessible/
#TechnicalWriting #Accessibility #TechnicalCommunication #AI #SoftwareDocumentation #AIAgents #APIDocumentation #Markdown


