This is good (from @shriramk): https://mastodon.social/@shriramk/110040524796761802

The skill of recognizing and diagnosing broken code only becomes •more• important in the face of LLM code generators.

Any experienced programmer worth their salt will tell you that •producing• code — learning syntax, finding examples, combining them, adding behaviors, adding complexity — is the •easy• part of programming.

The hard part: “How can it break? How will it surprise us? How will it change? Does it •really• accomplish our goal? What •is• our goal? Are we all even imagining the same goal? Do we understand each other? Will the next person to work on this understand it? Should we even build this?”

@inthehands but I don't mind automating the easy part. It can still take time. I can see doing TDD with me writing the tests and AI writing the code, for instance.
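A toy sketch of that workflow: the human writes the test first to pin down intent, then asks the model for code that passes it. The `slugify` function and its spec here are hypothetical examples for illustration, not something from this thread.

```python
import re

def test_slugify():
    # Human-written test: this is the part that encodes the actual goal.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"
    assert slugify("already-slugged") == "already-slugged"

def slugify(text: str) -> str:
    # A plausible machine-generated implementation that satisfies the test:
    # lowercase, collapse runs of non-alphanumerics into single hyphens,
    # strip leading/trailing hyphens.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

test_slugify()
```

The point of the division of labor is that the human-authored test is where the hard questions ("does it really accomplish our goal?") get answered; the generated body is the easy part being automated.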

The thing with this ChatGPT hype is that people seem to think it's either worthless or going to replace humans.

I think it might give a 10-20% speedup to senior developers, which would still be revolutionary.

@mark I can see that possibility. I can also easily imagine it being a net negative: the generated code has flaws (either technical or goal-related) that are more costly to fix post hoc than to have thought through carefully from the outset. It’s going to take a lot of work to figure out how to use these tools well.
@inthehands I suspect that, like any new technology, it will be massively overused before we learn when it is and isn't appropriate. Like meta-programming in Ruby.