This is good (from @shriramk): https://mastodon.social/@shriramk/110040524796761802
The skill of recognizing and diagnosing broken code only becomes •more• important in the face of LLM code generators.
Any experienced programmer worth their salt will tell you that •producing• code — learning syntax, finding examples, combining them, adding behaviors, adding complexity — is the •easy• part of programming.
The hard part: “How can it break? How will it surprise us? How will it change? Does it •really• accomplish our goal? What •is• our goal? Are we all even imagining the same goal? Do we understand each other? Will the next person to work on this understand it? Should we even build this?”
A thought exercise:
Which of the problems in the post above does AI code generation make easier or faster?
Which does it not help with?
Which might it exacerbate?
@inthehands you’ve got the wrong end of the stick there. Copilot doesn’t take out the easy parts *or* the hard parts, but the *boring parts*. The “write this line again but for the next button instead of the previous one” or “shit which arg does what in splice again, guess I need to look it up”. The little speed bumps of joyless flow-interruptions.
Shock absorbers don’t drive, navigate, or pick the destination, but they make more destinations possible, desirable, and enjoyable, and open them to more drivers.