Those of you who work in an organisation that makes heavy use of GenAI coding assistants: how did you educate your developers that if they automate sweeping changes to user-facing code, they MUST also automate the corresponding sweeping changes to user-facing documentation?

Or have you identified a strong correlation between embracing GenAI coding assistants and not giving a flying fuck about user-facing docs?

@xahteiwi “not giving a flying fuck”

I think this is the baseline we’re working from in general, sadly

@xahteiwi It still goes through QA, review and everything, so all of that still happens.

Edit: partly because GenAI ingests the user-facing docs, so if you want your product to work somewhat better in an "AI-assisted" world of ops and agents, those docs had better be close enough to right.

@xahteiwi We use docs to tell them to update the user-facing docs (agent.md or development.md). The one thing GenAI agents do better than the average human developer is reading markdown docs and following them.
@hikhvar Okay, this approach works when we're talking about docs that (a) live in the same repo as the code being modified, and (b) use Markdown because that's the least error-prone format for the LLM to ingest. Fair summary?

@xahteiwi It works better for markdown, but from what I see, the agent I mostly work with (Claude) is also capable of editing other docs formats, as long as they aren't Office formats like DOCX. If your docs live in a wiki, e.g. Confluence or MediaWiki, the agent could do it given a sufficiently capable MCP server. Also, Claude at least is able to work with multiple git repositories.

The funny thing is, the better documented and accessible the processes are, the better the agents perform.

@hikhvar Okay. How well is that working for you with GUIs and screenshots?
@xahteiwi Ok, I have not done this in practice. In theory the agent could ask the human in the loop for an updated screenshot.

@hikhvar Consider that screenshots are typically cropped, or include highlights, or blur parts to hide information that is private or uninteresting, etc. So, automating such screenshot updates is an "interesting" task.

Consider further that you may have hundreds of screenshots that are all affected by a change of one line of CSS, rendering your "let's ask a human" approach impractical.

And sadly, not thinking about this sort of thing is exactly what I mean by not giving a fuck about docs.

@xahteiwi @hikhvar As far as I know, our doc team updates those before release.

(How much of that is also automated already, I don't quite know. Probably at least in progress.)

@xahteiwi Was that different when people created the code? If changing CSS is a problem in the process, you must instruct everyone not to change it. In the best case you have verified this via a CI test in your pipeline. If so, the agent will see this signal much like a new hire would learn about it.
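(For illustration: the CI signal mentioned above could be as simple as a gate that rejects change sets touching CSS without touching docs. A minimal sketch, assuming `docs/` and `.css` paths purely as examples — nothing in the thread specifies this policy:)

```python
def docs_gate(changed_files):
    """Pass only if any .css change is accompanied by a docs/ change.

    changed_files: repo-relative paths, e.g. the output of
    `git diff --name-only origin/main...HEAD` split into lines.
    The .css / docs/ policy here is a made-up example.
    """
    css_changed = any(f.endswith(".css") for f in changed_files)
    docs_changed = any(f.startswith("docs/") for f in changed_files)
    # A CSS change with no docs update fails the gate; everything else passes.
    return (not css_changed) or docs_changed
```

Run in CI, a failing gate is exactly the signal an agent (or a new hire) can see and react to.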
@hikhvar Well of course it was the same when people created the code, but they typically wouldn't make nearly as sweeping and impactful changes.

@xahteiwi Well, that depends on the humans you are working with. I have seen both experienced ("I'm an expert, I know what I'm doing") and inexperienced ("how bad can a small CSS change be?") developers make sweeping and impactful changes. I have been both.

With coding agents you need better documentation or testing to guide what changes are acceptable and what changes are unacceptable. The irony is that this will help humans as well.

@xahteiwi TBH this is one of the areas in which AI has improved things the most:

a /document skill.md not only instructs Claude to do a diff and update docs accordingly

@cdf1982 -EPARSE
@cdf1982 I fail to parse your reply. I do not understand what you meant to say.

@xahteiwi Happy to help you parse it!

Your original question was about how teams handle the relationship between large, automated changes to user-facing code produced with/by GenAI and the corresponding need to keep user-facing docs in sync.

My answer was that, in my experience, this is actually one of the areas where AI helps the most.

Specifically, we instructed Claude explicitly to:
1) analyze the diff of the changes made to the code… (1/2)

2) identify which parts of the documentation are impacted
3) update those parts accordingly

In our case, a simple instruction such as “/document skill.md” can guide the model to:
- read the modified code
- compare it with the existing documentation
- apply consistent updates where needed

So rather than increasing the gap between code and docs, these tools can reduce it, provided they are used intentionally. We've really never had better, more up-to-date docs than we do now. (2/2)
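(For illustration: the three-step flow described above could be captured in a skill file. A hypothetical sketch — the file name, wording, and screenshot caveat are illustrative, not the actual skill from this thread:)

```markdown
# /document — keep user-facing docs in sync with code

When invoked after a change:
1. Run `git diff` and read the modified code.
2. Identify which parts of the documentation describe the changed behaviour.
3. Update those parts to match the code; flag any screenshots you
   cannot regenerate and ask a human to update them.
```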