Agreed with everything @kevinriggle wrote here. Another angle on this to try with people who simply do not understand what software engineering •is•: “What’s the impact on the other 7/8?”

AI can generate code fast. Often it’s correct. Often it’s not, but close. Often it’s totally wrong. Often it’s •close• to correct and even appears to work, but has subtle errors that must be corrected and may be hard to detect.

1/ https://ioc.exchange/@kevinriggle/113641234199724146

Kevin Riggle (@kevinriggle@ioc.exchange)

What I’m taking from this is that software engineers spend most of our time on engineering software, and writing code is (as expected) a relatively small portion of that work. Imagine this for other engineering disciplines. “Wow structural engineers seem to spend most of their time on meetings and CAD and relatively little time physically building bridges with their hands! This is something AI can and should fix. I am very smart” https://fortune.com/2024/12/05/amazon-developers-spend-hour-per-day-coding/


All the above is also true (though perhaps in different proportions) of humans writing code! But here’s the big difference:

When humans write the code, those humans are •thinking• about the problem the whole time: understanding where those flaws might be hiding, playing out the implications of business assumptions, studying the problem up close.

When AI writes the code, none of that happens. It’s a tradeoff: faster code generation at the cost of reduced understanding.

2/

The effect of AI is to reduce the cost of •generating code• by a factor of X while increasing the cost of •thinking about the problem• by a factor of Y.

And yes, Y>1. A thing non-developers do not understand about code is that coding a solution is a deep way of understanding a problem — and conversely, using code that’s dropped in your lap greatly increases the amount of problem that must be understood.

3/

@inthehands @RuthMalan every dev wants a greenfield project. LLMs shade even greenfield projects brown.

But then it's not the devs that are asking for this* so much as a managerial class looking for the sort of silver bullet that brings down both pay and the amount of time dealing with a type of worker they find difficult.

*not the ones who are any good, anyway

@rgarner @RuthMalan
Yup. All that.

And Brooks’s maxim that there is no silver bullet still stands undefeated.

@inthehands @RuthMalan and that thing about adding people to projects. An LLM isn't a person quite so much as the average of some.
@rgarner @inthehands @RuthMalan Oh, I like this one. Even if it were an actual person, it’s a person who read your code but none of the design docs and hasn’t participated in any of your team’s discussions. They’d have a good chance of coming up with something reasonable, but could also totally bodge it without realizing it.

@jrose @rgarner @inthehands @RuthMalan I’m totally on your side in this fight, but very soon the AI will have been in your team’s discussions. Or at least the other AI, the one that’s always listening to Slack and Zoom, will have written a summary of those discussions that’s in the coding AI’s context. Design docs too.

Fully-remote teams will have an advantage. At least until RTO includes wearing a lapel mic at all times…

@jamiemccarthy @jrose @inthehands @RuthMalan so far, my experience is: they may have seen the discussion, but they don't "remember" it, and they certainly have no idea which are the salient points. Sometimes even when you ram said points down their throats.

In short, I'm fine asking them to show me a depth-first search, but I would trust them with architecture and logical design decisions about as far as I could comfortably spit a rat.

@rgarner @jrose @inthehands @RuthMalan 100% agree with the overall thrust of what you’re saying

@jamiemccarthy @rgarner @jrose @RuthMalan
Yeah, I'm with Russell here: the whole “soon the AI will think” line simply isn’t justified by either theory or evidence. It’s akin to thinking that if you make cars go fast enough, eventually they’ll travel backwards in time.

Re summarization specifically…

@jamiemccarthy @rgarner @jrose @RuthMalan
…there was a recent paper (lost link, sorry) that systematically reviewed LLM-generated summaries. They found in the lab what people have observed anecdotally: LLMs suck at it because they don’t know what the point is. They’re great at reducing word count in a grammatically well-formed way! But they often miss the key finding, highlight the wrong thing, etc.