My social feed has divided mostly into two camps—those who can now only talk about how excited they are about AI, and those who are refusing to use it at all.

I’m somewhat bemused by both of these positions. I see LLMs as a useful tool, in the way that I see spreadsheets as a useful tool. I also think that the people advocating the use of AI for everything are wrong in the same way they would be if they told me I should use a spreadsheet for everything. The spreadsheet people do exist; they just aren’t on every screen I look at, and all the software I use hasn’t morphed into a spreadsheet. I don’t think we can or should ignore AI, but overuse of this technology is incredibly wasteful. My (perhaps overly optimistic) hope is that we can get past the hype and into a place where we understand when, and when not, to use these tools.

In my work there are a couple of classes of things I want to use an LLM for. They typically involve things that are very difficult to automate in other ways due to the unstructured nature of the source material. I’ve had a lot of success, for example, in using AI to identify where documentation has drifted from the product. When you work on a web browser, just keeping track of what has changed where each month is hard.

The first class of things are tasks that would be good to do, but that aren’t urgent and that we don’t have people to put on. A lot of content health work falls into this: minor updates, identifying screenshots that need changing, small bugfixes for typos, and so on. If an LLM can accurately identify and fix even 50% of these things, and I can put safeguards in place to avoid submitting LLM errors, we’re making an improvement that would not happen otherwise.

The second class of things are those that are really high priority and need high accuracy, but where there’s a lot of work needed to get the data into shape. You can put a load of people on that work, but they will also miss things and make mistakes, and it’s tedious work that’s seen as low impact. In this scenario you can get an LLM to help you with the first pass over that material, by providing it with a Skill that’s essentially the instructions you would give a person doing the task. It will absolutely make mistakes, which is why this is a first pass. Human reviewers can then take and check that output, using it as a starting point and no more. In this case you need a robust system to ensure the second part happens: that people don’t simply start relying on the AI output after seeing some level of accuracy.

I have an inkling that the most valuable people over the next few years will be those with enough experience to discern what to use when, and those with the ability to put into place processes that safeguard codebases, datasets, and people from the potential downsides of these tools.

https://rachelandrew.co.uk/archives/2026/03/24/do-you-need-ai-for-that/

#technicalWriting
