This is good (from @shriramk): https://mastodon.social/@shriramk/110040524796761802
The skill of recognizing and diagnosing broken code only becomes •more• important in the face of LLM code generators.
Any experienced programmer worth their salt will tell you that •producing• code — learning syntax, finding examples, combining them, adding behaviors, adding complexity — is the •easy• part of programming.
The hard part: “How can it break? How will it surprise us? How will it change? Does it •really• accomplish our goal? What •is• our goal? Are we all even imagining the same goal? Do we understand each other? Will the next person to work on this understand it? Should we even build this?”
A thought exercise:
Which of the problems in the post above does AI code generation make easier? Faster?
Which does it not help with?
Which might it exacerbate?
My “hard part” list ended only because of the post size limit; it goes on, of course.
From @h_albermann: “Are we solving the right problem?” And would solving a slightly different problem simplify things? Reduce risk? Open doors? How will we measure, reflect on, reassess these answers as we build?
From @awwaiid: “How can I get rid of this?” How can I split it? Abstract it? How can we prepare for future change? But not over-prepare? What kind of flexibility should we invest in? Leave room for?
How does this software impact people? Its users? Its stakeholders? Its maintainers, current and future? Society? Especially the most vulnerable?
Yes, coders have a part in all of the questions above! We are often the first to see crucial details, often the first to have a sense of the •reality• of the whole system (as opposed to our wishful imaginings of it). There is no such thing as “just coding”; all actions have consequences.