I'm streaming KDE docs:

I'm using Owncast where you can use emojis freely and join chat anonymously.

Be sure to join and ask any questions related to KDE and I'll try my best to answer them.

Every single stream I do is an Ask Me Anything KDE Edition ™️

#KDE #Linux #Documentation #TechnicalWriting #FurryStreamer #FurryVTuber #VTuber #Owncast

Bnnuycast

I'm a hare pretending I'm a human pretending I know C++ and CMake 🐰

Bnnuycast

I'm streaming KDE docs:

I'm trying Owncast today; if it doesn't work well, I might fall back to Twitch.

Be sure to join and ask any questions related to KDE and I'll try my best to answer them.

Every single stream I do is an Ask Me Anything KDE Edition ™️

#KDE #Linux #Documentation #TechnicalWriting #FurryStreamer #FurryVTuber #VTuber #Owncast

Bnnuycast

I'm a hare pretending I'm a human pretending I know C++ and CMake 🐰

Bnnuycast

"With AI, the writer’s role moves to what I call context ownership. This is not a soft concept. A context owner is the person in your organization who governs what your AI tools know, how your content is structured, whether the output meets your quality and accuracy standards, and how your documentation systems connect to your product and engineering workflows.

In practice, context ownership looks like this:

A context owner defines and maintains the templates, standards, and structural rules that AI tools follow. Without these, AI produces content that is internally consistent within a single document but inconsistent across your documentation as a whole. Your customers notice, even if you don’t.

A context owner reviews and validates AI-generated drafts against product reality. AI tools do not know what your product actually does in edge cases. They do not know what changed in the last release that hasn’t been documented yet. They do not know that the API endpoint described in the engineering spec was modified during implementation. The context owner does.

A context owner manages the documentation pipeline. In a modern documentation operation, this means version control, docs-as-code workflows, API-driven publishing, and automated quality checks. These are technical systems that require technical management. AI can operate within these systems, but it cannot design, maintain, or troubleshoot them.

A context owner bridges engineering and customer-facing content. This is the function that has never been automated in any transition, and AI has not changed that. Someone has to understand what engineering built, determine what customers need to know about it, and make sure the documentation connects those two realities accurately.
(...)
This is not a diminished version of the writer’s role. It is a more senior, more technical role than “writer” has traditionally implied."
https://greenmtndocs.com/2026-03-25-ive-seen-this-before/
#AI #LLMs #TechnicalWriting #SoftwareDocumentation #ContextEngineering

I've Seen This Before: What Five Technology Transitions Tell Decision-Makers About AI and Documentation | Green Mountain Docs

The pattern is clear. The question is whether you'll repeat the expensive mistake.

Green Mountain Docs

I've been testing #zensical since I will have to migrate off of #mkdocs and #mkdocsmaterial.

Across 3 sites, I only had to change one line in the yaml files and run `zensical` instead of `mkdocs`. Total potential migration time for 3 sites: 10 minutes.

Why potential and not complete? Pending blog and OpenAPI spec support. Once those are released, I can finish testing and migrate.

@squidfunk looking good so far. Thanks!

#technology #tech #indieweb #blog #technicalwriting #docs #docsascode

My social feed has divided mostly into two camps—those who can now only talk about how excited they are about AI, and those who are refusing to use it at all.

I’m somewhat bemused by both of these positions. I see LLMs as a useful tool, in the way that I see spreadsheets as a useful tool. I also think that the people who are advocating the use of AI for everything are wrong, in the same way they would be if they told me I should use a spreadsheet for everything. The spreadsheet people do exist; they just aren’t on every screen I look at, and all the software I use hasn’t morphed into a spreadsheet. I don’t think we can or should ignore AI, but overuse of this technology is incredibly wasteful. My (perhaps overly optimistic) hope is that we can get past the hype and into a place where we understand when, and when not, to use these tools.

In my work there are a couple of classes of things I want to use an LLM for. They typically involve things that are very difficult to automate in other ways due to the unstructured nature of the source material. I’ve had a lot of success, for example, in using AI to identify where documentation has drifted from the product. When you work on a web browser, just keeping track of what has changed where each month is hard.

The first class of things are tasks that would be good to do, but we don’t have people to put on them and they aren’t urgent. A lot of content health work falls into this: minor updates, identifying screenshots that need changing, small bugfixes for typos, and so on. If an LLM can accurately identify and fix even 50% of these things, and I can put safeguards in place to avoid submitting LLM errors, we’re making an improvement that would not happen otherwise.

The second class of things are those that are really high priority and need high accuracy, but where there’s a lot of work needed to get the data into shape. You can put a load of people on that work, but they will also miss things and make mistakes, and it’s tedious work that’s seen as low impact. In this scenario you can get an LLM to help you with the first pass over that material, by providing it with a Skill that’s essentially the instructions you would give a person doing the task. It will absolutely make mistakes, which is why this is a first pass. Human reviewers can then take and check that output, using it as a starting point and no more. In this case you need a robust system to ensure the second part happens, so that people don’t simply rely on the AI output after seeing some level of accuracy.

I have an inkling that the most valuable people over the next few years will be those with enough experience to discern what to use when, and those with the ability to put into place processes that safeguard codebases, datasets, and people from the potential downsides of these tools.

https://rachelandrew.co.uk/archives/2026/03/24/do-you-need-ai-for-that/

#technicalWriting

Do you need AI for that? – Rachel Andrew

"To begin with, everything you document has to be in a format that's as structured and machine-readable as possible. The key here is to disambiguate as much as you can, even if you have to repeat yourself. So, don't bother with the formatting of your documentation or the look and feel of your API portal. Instead, focus on using well-known API definition standards based on machine-readable formats. Use OpenAPI for documenting REST APIs, AsyncAPI for asynchronous APIs, Protocol Buffers for gRPC, and the GraphQL Schema Definition Language. Whenever possible, store the API definitions in several formats, such as JSON and YAML, for easy interpretation by AI agents.

But that's not enough. If you don't have all your operations clearly defined, AI agents will have a hard time understanding what they can do. Make sure you clearly define all operation parameters. Specify what the input types are so there are no misunderstandings. So, instead of saying that everything is a "string," identify each individual input format."
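As an illustration of that advice, here is what typed, disambiguated parameters look like in an OpenAPI 3 fragment. The endpoint and field names are made up for the example; the point is that each input gets a machine-checkable type instead of a bare "string":

```yaml
# Hypothetical OpenAPI 3 fragment: every query parameter is
# explicitly typed and constrained, so an AI agent doesn't guess.
paths:
  /orders:
    get:
      summary: List orders
      parameters:
        - name: status
          in: query
          required: false
          schema:
            type: string
            enum: [pending, shipped, cancelled]  # closed set, not free text
        - name: created_after
          in: query
          required: false
          schema:
            type: string
            format: date-time  # RFC 3339 timestamp, not an arbitrary string
        - name: limit
          in: query
          required: false
          schema:
            type: integer
            minimum: 1
            maximum: 100
```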

https://apichangelog.substack.com/p/api-documentation-for-machines

#APIs #APIDocumentation #AI #AIAgents #LLMs #OpenAPI #TechnicalWriting #SoftwareDocumentation #Programming

API Documentation for Machines

What are the elements that make API documentation easily consumable by a machine?

The API Changelog

The problem is that most companies with the resources to properly implement role fluidity only want to hire "unicorns." Having worked in hybrid roles at smaller companies before and after the widespread adoption of LLMs, I must say that it's a recipe for burnout. This is not only because it's difficult to assess the quality of your work, but also because, in practice, companies don't care much about documentation. In reality, you'd mostly be a software developer doing some documentation in your "free time."

Another problem with this model of a fluid software documentation team is that it assumes there are or will be software companies willing to prioritize documentation as a sector that deserves its own department. However, technical writers are often placed under the product umbrella, which isn't necessarily bad. In fact, it's much better than being placed under "marketing." Unfortunately, if role fluidity ever becomes the norm, I'm afraid it will most likely start with engineering.

https://passo.uno/docs-team-of-the-future/

#TechnicalWriting #SoftwareDocumentation #Programming #SoftwareDevelopment #AI #LLMs

In the team of the future, roles are verbs, not nouns

If someone asked me to set up a team in charge of software documentation, I would not hire for specific roles or cookie-cutter job descriptions. Professions tied to knowledge buckets are bound to shrink or disappear. Instead, I would hire people that could move freely between four quadrants, each defined by the proximity to a focus pole and its skills. The poles in this team setup would be the following: Product Vision, Knowledge Design, Engineering Depth, and Delivery Strategy.

For the foreseeable future, AI tools will continue to generate such incomplete and sometimes hallucinated output that there will be a continuing need for a "human-in-the-loop", not only to use several LLMs to review each other's output but also to fact-check the final output. Using one LLM alone results in mediocre quality. Using two LLMs results in (sometimes very) good quality. Using three LLMs with human verification yields great, even outstanding, results.
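The review loop described above can be sketched in a few lines. This is only an outline under stated assumptions: `call_model` is a hypothetical stand-in for whatever LLM API you actually use, and the stub here just returns a tagged string so the structure is runnable:

```python
# Sketch of a multi-LLM cross-review pipeline with a human in the loop.
# call_model is a hypothetical placeholder for a real LLM API client.
def call_model(model: str, prompt: str) -> str:
    return f"[{model}] {prompt[:40]}"  # stub: a real client returns generated text

def cross_review(task: str, models: list[str]) -> dict:
    """First model drafts; every other model critiques the draft.
    The result still goes to a human fact-checker before publication."""
    draft = call_model(models[0], f"Draft documentation for: {task}")
    critiques = {
        m: call_model(m, f"Review this draft for factual errors: {draft}")
        for m in models[1:]
    }
    return {"draft": draft, "critiques": critiques, "needs_human_review": True}

result = cross_review("pagination parameters", ["model-a", "model-b", "model-c"])
```

The `needs_human_review` flag is the important design choice: the pipeline never marks its own output as final, matching the claim that the human fact-check is what lifts quality from good to great.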

"1,131 people across the documentation industry responded to the 2026 State of Docs survey — more than 2.5x the number of respondents last year. But the size of the sample matters less than what it represents: a genuine cross-section of the people who create, manage, evaluate, and depend on documentation.

Documentation’s role in purchase decisions is stable and strong, and the case that docs drive business value is well established. The shift this year is in what documentation is being asked to do, and who — and what — is consuming it.

AI has crossed the mainstream threshold for documentation, both in how docs get written and how they get consumed. Users are arriving through AI-powered search tools, coding assistants, and MCP servers. Documentation is becoming the data layer that feeds AI products, onboarding wizards, and developer tools. The teams investing in this shift are treating documentation as context infrastructure, not just a collection of pages.

But adoption has outrun governance, and the gap matters. Most teams are using AI without guidelines in place, and documentation carries a higher accuracy bar than most content. After all, one wrong instruction can break a user’s implementation and erode trust in the product.
(...)
Writers are spending less time drafting and more time fact-checking, validating, and building the context systems that make AI output worth refining."

https://www.stateofdocs.com/2026/introduction-and-demographics

#TechnicalWriting #TechnicalCommunication #SoftwareDocumentation #DocsAsProduct #AI #GenerativeAI

The State of Docs Report 2026 – Introduction and Demographics

The State of Documentation Report by GitBook

I'm streaming KDE docs:

Be sure to join and ask any questions related to KDE and I'll try my best to answer them.

Every single stream I do is an Ask Me Anything KDE Edition ™️

Today I'll take a look at the KDE Frameworks tutorials.

#KDE #Linux #Documentation #TechnicalWriting #FurryStreamer #FurryVTuber #VTuber #Twitch

Herzenschein - Twitch

KDE bnnuy doing bnnuy stuff 🐰

Twitch