The great thing that Claude Code (or OpenAI Codex) brings to technical writers is the ability to assess the accuracy of any piece of documentation that relies on software code by analyzing the relevant code base(s).

This is extremely helpful for fact-checking your docs against the source code: you can see whether the Subject Matter Experts (SMEs) were bullshitting you or whether the docs simply became outdated due to the cadence of new releases.

Another advantage is that even if you work in an organization with its own QA team, you can help them catch bugs at an earlier stage of the Software Development Life Cycle (SDLC). For example, yesterday I found an inconsistency between how a certain behavior was coded in the backend and how that same behavior was interpreted by the frontend.

And the best thing is that, since Claude Code does not rely entirely on neural networks but also uses deterministic tools such as regular-expression searches, the results have a higher degree of determinism than those offered by plain LLMs. It's not perfect, and you always have to tell it where to direct its attention (meaning the name of the most relevant repository), but having an "AI-based sniffer" for code is terrific.

For all these reasons, I believe that every technical writer who doesn't use Claude Code or a similar tool in their regular workflows will be immensely disadvantaged.

#TechnicalWriting #AI #GenerativeAI #SoftwareDocumentation #Claude #LLMs #ClaudeCode #SoftwareDevelopment #QA

This article is great in the sense that it describes most of what I'm doing nowadays as a technical writer. I even have different LLMs reviewing each other's drafts, which is a lot of fun. That's why, personally, I can't be as pessimistic as others currently are. LLMs are just a new technology that you need to incorporate into your workflows. Of course, some skills will probably become atrophied. At the same time, a new set of skills is emerging. If you don't see that, you will be completely left behind. You just need to use these tools with critical thinking.

"After deliberation for a few months, I reached a conclusion about what I wanted to say: the model that’s emerging is a cyborg model of technical writing, a humans + AI combination. This is in contrast to the many articles, which now seem to come at an even faster pace, saying that AI will replace human labor. I realize there’s a lot of opinion on this debate, but my argument for why the humans + AI (cyborgs) model is the winning one, rather than replacement, is because of this observation: almost no tech writers at my work have automated complex processes using AI. And in my own use of AI over the past few years, the model that’s emerged is a close intertwining of machine and human interaction to produce content. I’m talking with AI all day. It’s not doing much on its own without my constant steering, direction, and feedback."

https://idratherbewriting.com/blog/cyborg-model-emerging-talk

#AI #GenerativeAI #LLMs #Chatbots #TechnicalWriting #TechnicalDocumentation #SoftwareDevelopment #SoftwareDocumentation

The Emerging Picture of a Changed Profession: Cyborg Technical Writers — Augmented, Not Replaced, by AI

I recently gave a presentation to students and faculty in person at Louisiana Tech University on March 30, 2026, focusing on what I call the cyborg model of technical writing. The idea is that the emerging model for tech writing isn’t one in which AI replaces tech writers but rather one in which AI augments tech writers. Tech writers interact with AI in a continuous back-and-forth, conversational, iterative manner. This post contains the recording, slides, transcript, summary, notes, and more from my presentation.

I’d Rather Be Writing Blog and API doc course

"With AI, the writer’s role moves to what I call context ownership. This is not a soft concept. A context owner is the person in your organization who governs what your AI tools know, how your content is structured, whether the output meets your quality and accuracy standards, and how your documentation systems connect to your product and engineering workflows.

In practice, context ownership looks like this:

A context owner defines and maintains the templates, standards, and structural rules that AI tools follow. Without these, AI produces content that is internally consistent within a single document but inconsistent across your documentation as a whole. Your customers notice, even if you don’t.

A context owner reviews and validates AI-generated drafts against product reality. AI tools do not know what your product actually does in edge cases. They do not know what changed in the last release that hasn’t been documented yet. They do not know that the API endpoint described in the engineering spec was modified during implementation. The context owner does.

A context owner manages the documentation pipeline. In a modern documentation operation, this means version control, docs-as-code workflows, API-driven publishing, and automated quality checks. These are technical systems that require technical management. AI can operate within these systems, but it cannot design, maintain, or troubleshoot them.

A context owner bridges engineering and customer-facing content. This is the function that has never been automated in any transition, and AI has not changed that. Someone has to understand what engineering built, determine what customers need to know about it, and make sure the documentation connects those two realities accurately.
(...)
This is not a diminished version of the writer’s role. It is a more senior, more technical role than “writer” has traditionally implied"
https://greenmtndocs.com/2026-03-25-ive-seen-this-before/
#AI #LLMs #TechnicalWriting #SoftwareDocumentation #ContextEngineering

I've Seen This Before: What Five Technology Transitions Tell Decision-Makers About AI and Documentation | Green Mountain Docs

The pattern is clear. The question is whether you'll repeat the expensive mistake.

Green Mountain Docs

"To begin with, everything you document has to be in a format that's as structured and machine-readable as possible. The key here is to disambiguate as much as you can, even if you have to repeat yourself. So, don't bother with the formatting of your documentation or the look and feel of your API portal. Instead, focus on using well-known API definition standards based on machine-readable formats. Use OpenAPI for documenting REST APIs, AsyncAPI for asynchronous APIs, Protocol Buffers for gRPC, and the GraphQL Schema Definition Language. Whenever possible, store the API definitions in several formats, such as JSON and YAML, for easy interpretation by AI agents.

But that's not enough. If you don't have all your operations clearly defined, AI agents will have a hard time understanding what they can do. Make sure you clearly define all operation parameters. Specify what the input types are so there are no misunderstandings. So, instead of saying that everything is a "string," identify each individual input format."
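The advice above about typed inputs can be sketched in a few lines. The endpoint, parameter names, and bounds below are hypothetical; the point is only that each parameter carries an explicit type and format instead of a bare "string":

```python
import json

# Minimal sketch of a hypothetical OpenAPI operation whose parameters
# declare explicit types, formats, and bounds an AI agent can rely on.
openapi_fragment = {
    "openapi": "3.1.0",
    "info": {"title": "Orders API (example)", "version": "1.0.0"},
    "paths": {
        "/orders": {
            "get": {
                "operationId": "listOrders",
                "parameters": [
                    {
                        "name": "createdAfter",
                        "in": "query",
                        "required": False,
                        # Not just "string": a date-time format, so an
                        # agent knows exactly what to send.
                        "schema": {"type": "string", "format": "date-time"},
                    },
                    {
                        "name": "limit",
                        "in": "query",
                        "required": False,
                        # An integer with bounds, not a free-form value.
                        "schema": {"type": "integer", "minimum": 1, "maximum": 100},
                    },
                ],
            }
        }
    },
}

# Store the definition as JSON; a YAML copy could be emitted with a YAML
# library for easier interpretation by agents that prefer that format.
print(json.dumps(openapi_fragment, indent=2))
```

Because the fragment is plain data, serializing the same definition to both JSON and YAML, as the post suggests, is a one-line conversion.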

https://apichangelog.substack.com/p/api-documentation-for-machines

#APIs #APIDocumentation #AI #AIAgents #LLMs #OpenAPI #TechnicalWriting #SoftwareDocumentation #Programming

API Documentation for Machines

What are the elements that make API documentation easily consumable by a machine?

The API Changelog

The problem is that most companies with the resources to properly implement role fluidity only want to hire "unicorns." Having worked in hybrid roles at smaller companies before and after the widespread adoption of LLMs, I must say that it's a recipe for burnout. This is not only because it's difficult to assess the quality of your work, but also because, in practice, companies don't care much about documentation. In reality, you'd mostly be a software developer doing some documentation in your "free time."

Another problem with this model of a fluid software documentation team is that it assumes there are or will be software companies willing to prioritize documentation as a sector that deserves its own department. However, technical writers are often placed under the product umbrella, which isn't necessarily bad. In fact, it's much better than being placed under "marketing." Unfortunately, if role fluidity ever becomes the norm, I'm afraid it will most likely start with engineering.

https://passo.uno/docs-team-of-the-future/

#TechnicalWriting #SoftwareDocumentation #Programming #SoftwareDevelopment #AI #LLMs

In the team of the future, roles are verbs, not nouns

If someone asked me to set up a team in charge of software documentation, I would not hire for specific roles or cookie-cut job descriptions. Professions tied to knowledge buckets are bound to shrink or disappear. Instead, I would hire people that could move freely between four quadrants, each defined by the proximity to a focus pole and its skills. The poles in this team setup would be the following: Product Vision, Knowledge Design, Engineering Depth, and Delivery Strategy.

For the foreseeable future, AI tools will continue to generate such incomplete and sometimes hallucinated outputs that there will be a continuing need for a "human-in-the-loop," not only to use several LLMs to review each other's output but also to fact-check the final output. Using one LLM alone results in mediocre quality. Using two LLMs results in (sometimes very) good quality. Use three LLMs with human verification for great/outstanding results.

"1,131 people across the documentation industry responded to the 2026 State of Docs survey — more than 2.5x the number of respondents last year. But the size of the sample matters less than what it represents: a genuine cross-section of the people who create, manage, evaluate, and depend on documentation.

Documentation’s role in purchase decisions is stable and strong, and the case that docs drive business value is well established. The shift this year is in what documentation is being asked to do, and who — and what — is consuming it.

AI has crossed the mainstream threshold for documentation, both in how docs get written and how they get consumed. Users are arriving through AI-powered search tools, coding assistants, and MCP servers. Documentation is becoming the data layer that feeds AI products, onboarding wizards, and developer tools. The teams investing in this shift are treating documentation as context infrastructure, not just a collection of pages.

But adoption has outrun governance, and the gap matters. Most teams are using AI without guidelines in place, and documentation carries a higher accuracy bar than most content. After all, one wrong instruction can break a user’s implementation and erode trust in the product.
(...)
Writers are spending less time drafting and more time fact-checking, validating, and building the context systems that make AI output worth refining."

https://www.stateofdocs.com/2026/introduction-and-demographics

#TechnicalWriting #TechnicalCommunication #SoftwareDocumentation #DocsAsProduct #AI #GenerativeAI

The State of Docs Report 2026 – Introduction and Demographics

The State of Documentation Report by GitBook

"Start small:

Pick one repeatable task that an agent currently handles without explicit guidance. Document it as a skill with entry criteria, steps, and exit criteria.

Validate it. Install skill-validator and run skill-validator check against your skill. Fix what it finds.

Test it with the agent. Invoke the skill explicitly and observe whether the agent follows it as written. Where it deviates, the skill is probably ambiguous.

Add validation to CI. Once you have a few skills, the CI integration keeps them from degrading as the project evolves.

Perhaps unsurprisingly, this is the same pattern I described for project descriptions: start with one file, observe how agents respond, iterate. The difference is that skills demand more precision because they're more prescriptive. That higher quality bar makes deterministic validation tooling valuable; you get feedback on skill quality before the agent runs, not after."
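A skill documented the way the post recommends, with entry criteria, steps, and exit criteria, might look like the following sketch. The file name, frontmatter fields, and task are illustrative assumptions, not a format prescribed by the post:

```markdown
---
name: update-changelog
description: Add a release entry to CHANGELOG.md
---

## Entry criteria
- A release tag exists that has no corresponding entry in CHANGELOG.md.

## Steps
1. List the commits since the previous release tag.
2. Group them under Added / Changed / Fixed headings.
3. Insert the new section at the top of CHANGELOG.md, dated today.

## Exit criteria
- CHANGELOG.md contains exactly one section for the new tag.
- No existing entries were modified.
```

The explicit entry and exit criteria are what make deviations observable: if the agent edits an old entry, the exit criteria catch it.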

https://instructionmanuel.com/writing-skills-agents-can-execute

#AI #AIAgents #GenerativeAI #Skills #LLMs #TechnicalWriting #Documentation #SoftwareDocumentation

Writing Skills That Agents Can Actually Execute | Instruction Manuel


"In my post The Emerging Picture of a Changed Profession: Cyborg Technical Writers — Augmented, Not Replaced, by AI, I mentioned an upcoming presentation I'm giving to students and faculty. I argue that the future of the profession is the cyborg model, where machines augment our capabilities rather than replace us. In this post, I share notes about what skills a tech writer would need to learn to thrive in this world of augmentation.

If you have feedback about these skills, let me know. My intent here is to demonstrate what actual skills should be emphasized for those entering the profession, or for those currently in the profession who want to get ahead with AI. Note that the following sections are mostly bullet points, in the form of notes."

https://idratherbewriting.com/blog/10-principles-of-cyborg-technical-writer

#TechnicalWriting #TechnicalCommunication #SoftwareDocumentation #Documentation #AI #GenerativeAI #LLMs

10 principles of the cyborg technical writer – brief notes and bullet points on how to use AI to augment your role

In my post The Emerging Picture of a Changed Profession: Cyborg Technical Writers — Augmented, Not Replaced, by AI, I mentioned an upcoming presentation I’m giving to students and faculty. I argue that the future of the profession is the cyborg model, where machines augment our capabilities rather than replace us. In this post, I share notes about what skills a tech writer would need to learn to thrive in this world of augmentation.

I’d Rather Be Writing Blog and API doc course

"I ask AI to explain things all the time. If I observe it do something that I want to learn more about, I ask it. I look at its outputs and ask it to explain decisions it made or how it implemented something. I ask it to help me brainstorm about things, help me think through edge cases or performance considerations, you name it. If the thing that it is explaining has some implication I need to verify, I ask it to find me a link that backs up what it is saying. And then I look at the link to make sure the content is real, comes from a reasonable source, and actually backs up what the AI says. And probably also ask it questions about the surface area around the thing, until I’m sure I understand it.

If you approach the AI upskill process as a collaborative learning process, where you can interrogate the tool you’re learning about its capabilities, how and why it’s choosing to do the things it’s doing, and to explain anything you don’t understand along the way - you’re unlocking a super power.

AND you have the comfort of knowing you’re asking all your questions of a talking box that won’t remember what you asked the next time it chats with you. So even if you do think it’s judging you, it has amnesia and that judgement won’t last beyond closing the session!"

https://dacharycarey.com/2026/02/23/upskilling-in-ai-age/

#TechnicalWriting #AI #LLMs #AIAgents #Chatbots #SoftwareDocumentation

Upskilling in the AI Age | Dachary Carey

In which I answer someone who asked me how to get started with AI.

Indeed, we can't allow autopilot to head into a whirlwind...

"We may be doing docs-as-code, but docs are not code. Docs run on people, and people are a messy tangle of goals, skills, and emotions. When docs hit the brain, they meet varying expectations, knowledge levels, reading abilities, and needs. None of this can be reproduced or simplified to a single pattern, but good docs use structure and words wisely to produce the best possible linguistic shape that can land safely on most people’s heads. Only humans can decide whether that message is getting across in the right way.

Getting there is a balancing act between business needs, user needs, and your own. That’s the diplomatic tension that forces all good tech writers to slow down and consider all points of view in the room as if they were in the middle of a spaghetti Western standoff. Slowing down is a deliberate, necessary act in all crafts, and tech writing is no exception. No matter how fast LLMs can churn out drafts, they don’t understand the tension in tech writing, to which we’re adding AI itself as an additional consumer of docs.
(...)
The quality of the docs I produce is still high, I was saying. That’s because I’m not letting LLMs take the steering wheel, and because I’m building new habits around them: setting up guardrails, automating what can be automated, and keeping my hands on the decisions that matter. I can do that because I know what good docs look like, and because I’ve been doing this long enough to feel when something’s off. That intuition came from years of wrestling with products and watching users struggle with pages I thought were clear. AI can help me write faster. It cannot replace the slow accumulation of judgment that tells me when to stop."

https://passo.uno/real-cost-of-documentation/

#TechnicalWriting #SoftwareDocumentation #AI #DocsAsCode #GenerativeAI #LLMs #SoftwareDevelopment #AISlop #Programming #TechnicalCommunication #Documentation

The writing was always the cheap part

Last December, quite unrealistically, I took a solemn oath: I would not write again about AI for at least another year. I was growing tired of the incessant noise, the lack of stability, and the self-imposed stress of keeping up with all the attention we must spend on factoids such as how well an LLM can draw a pelican riding a bike, which bury the important aspects of our craft as if they were mere prompt flourish. With my passionate epistle, I thought I’d said all I had to say. I was wrong.