I've seen people claiming - with a straight face - that mechanical refactoring is a good use-case for LLM-based tools. Well, sed was developed in 1974 and - according to Wikipedia - first shipped in UNIX Version 7 in 1979. On modern machines it can process files at speeds of several GB/s and will not randomly introduce errors while processing them. It doesn't cost billions to build, require a subscription, or need internet access. It's there on your machine, fully documented. What are we even talking about?
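A minimal sketch of the kind of mechanical refactor sed handles trivially - the file names and identifiers here are made up for illustration, and the in-place `-i` syntax assumes GNU sed:

```shell
# Rename old_name to new_name in every .c file under src/.
# \b word boundaries avoid clobbering longer identifiers like old_name_ext;
# grep -rl lists only the files that actually contain a match.
grep -rl --include='*.c' 'old_name' src/ \
  | xargs sed -i 's/\bold_name\b/new_name/g'
```

On BSD/macOS sed the in-place flag takes an argument (`sed -i ''`), but the substitution itself is the same.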

@gabrielesvelto I was presented with a case where the changes were quite trivial across many repos, but making them still required taking context into account. The LLM was helpful.
But...
that same presentation showed logs of the tool admitting to doing a force push after being specifically instructed, from the start, not to do force pushes.

Feels like we need sandboxed dev environments where these tools cannot do dangerous things, since they're bad at avoiding them on their own.

@aurisc4 I've used a mix of grep and sed a lot of times to take context into account. If more sophisticated refactoring is needed, there are tools that understand the syntax of practically any language in existence and can be used to manipulate the AST directly. Every problem whose input is machine-readable can be solved faster, more cheaply and more reliably with tools that process the data directly rather than passing it through a (very large) neural network.
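One way the grep-plus-sed combination can take context into account: sed's address patterns restrict a substitution to lines matching a guard. A hedged sketch - the file name, identifiers, and the `legacy` marker are all hypothetical, and `-i` again assumes GNU sed:

```shell
# First, eyeball every match with two lines of context around it.
grep -rn -B2 -A2 'old_api(' src/

# Then rewrite only on lines that also carry the vetted marker:
# the /legacy/ address limits the s/// to matching lines.
sed -i '/legacy/ s/old_api(/new_api(/g' src/module.c
```

The same address mechanism takes line ranges (`/start/,/end/`) when the relevant context spans more than one line.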