This is good (from @shriramk): https://mastodon.social/@shriramk/110040524796761802

The skill of recognizing and diagnosing broken code only becomes •more• important in the face of LLM code generators.

Any experienced programmer worth their salt will tell you that •producing• code — learning syntax, finding examples, combining them, adding behaviors, adding complexity — is the •easy• part of programming.

The hard part: “How can it break? How will it surprise us? How will it change? Does it •really• accomplish our goal? What •is• our goal? Are we all even imagining the same goal? Do we understand each other? Will the next person to work on this understand it? Should we even build this?”

A thought exercise:

Which of the problems in the post above does AI code generation make easier? faster?

Which does it not help?

Which might it exacerbate?

@inthehands Great questions. The cargo cult is in full swing, so we should see the results within a year or two. For now, we lack the data to answer. I did read one limited study suggesting Copilot is only marginally helpful.

Reserving judgment, but my prediction? It'll have a huge impact on software engineers generally, and on basic web dev. But engineers working on complex problems will find less utility unless we build our own tooling. If it turns out to be helpful, serious people will take it seriously.