One of the interesting consequences of LLMs/agents is that the short-term costs of boilerplate in code have plummeted. An interesting question is whether those costs are simply shunted to the future or whether they are permanently reduced.

@ltratt

The question is how well an LLM can rewrite old boilerplate into new boilerplate when the underlying API changes. Boilerplate is usually bad because it's inflexible: you're making every user of an API write almost the same thing, rather than giving them an abstraction, which means you're forcing them to provide all of the defaults. When something new requires a default, or when the right choice for a default changes, you require every single consumer of your API to change their code.
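To make the contrast concrete, here's a minimal sketch with a hypothetical `fetch` API (the function names and parameters are invented for illustration): in the boilerplate style every caller spells out the defaults, so a new setting forces edits at every call site; with the defaults owned by the abstraction, no consumer code needs to change.

```python
# Hypothetical API, purely for illustration.

# Boilerplate style: every caller repeats the same defaults.
def fetch_v1(url, timeout, retries):
    return (url, timeout, retries)

page = fetch_v1("https://example.com", timeout=30, retries=3)
# If the API later grows a new required knob (say, a TLS setting),
# every one of these call sites has to change.

# Abstraction style: the library owns the defaults, so adding a new
# default-bearing parameter (verify_tls) touches no consumer code.
def fetch_v2(url, *, timeout=30, retries=3, verify_tls=True):
    return (url, timeout, retries, verify_tls)

page2 = fetch_v2("https://example.com")  # unchanged when defaults evolve
```

The keyword-only defaults in `fetch_v2` are exactly the flexibility that boilerplate denies its consumers.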

From what I've seen of LLMs, they'll be very good at this kind of change in 90+% of the cases and catastrophically (and non-obviously) wrong in the rest.

@david_chisnall Indeed, that's one of the things that might just be shunted to the future. Or their understanding may improve to the point that they catch all (or nearly all) the hard cases. Some people are implicitly making big bets on the good case happening, and they may well end up being right, but I think only time will tell!