This is good (from @shriramk): https://mastodon.social/@shriramk/110040524796761802
The skill of recognizing and diagnosing broken code only becomes •more• important in the face of LLM code generators.
Any experienced programmer worth their salt will tell you that •producing• code — learning syntax, finding examples, combining them, adding behaviors, adding complexity — is the •easy• part of programming.
The hard part: “How can it break? How will it surprise us? How will it change? Does it •really• accomplish our goal? What •is• our goal? Are we all even imagining the same goal? Do we understand each other? Will the next person to work on this understand it? Should we even build this?”
A thought exercise:
Which of the problems in the post above does AI code generation make easier? faster?
Which does it not help?
Which might it exacerbate?
My “hard part” list ended only because of the post size limit; it goes on, of course.
From @h_albermann: “Are we solving the right problem?” And would solving a slightly different problem simplify things? Reduce risk? Open doors? How will we measure, reflect on, reassess these answers as we build?
From @awwaiid: “How can I get rid of this?” How can I split it? Abstract it? How can we prepare for future change? But not over-prepare? What kind of flexibility should we invest in? Leave room for?
How does this software impact people? Its users? Its stakeholders? Its maintainers, current and future? Society? Especially the most vulnerable?
Yes, coders have a part in all of these questions above! We are often the first to see crucial details, often the first to have a sense of the •reality• of the whole system (as opposed to our wishful imaginings of it). There is no such thing as “just coding;” all actions have consequences.
The hard bit of software engineering has always been "deeply understanding a problem well enough to implement a useful solution to it". The scary bit of people trying to AI away the programming part is that most of us rely on writing the software in order to understand the problem domain -- remove that part and you truly do have YOLO-driven development, with badly thought-out questions "generating" bad and kinda broken software.
@mapcar @adamhill @endocrimes
Ha, well, •this• pianist doesn’t spend hours practicing scales — which probably explains my middling technique!
Regardless, I think you’re on the money with the last part: anything AI can automate by turning the web into boilerplate probably ought to be a new language or library abstraction. I have longer thoughts on this I’m going to post at some point.
@inthehands Thinking about it, I suppose one thing that bothers me about foregoing the opportunity for someone to write the initial code is – with code written by a competent person, you have a chance to divine the coder's *intent*.
Example: If I've used command pattern but there are only two commands, it's probably because I think there might be more commands added to the code in future.
With LLM-generated code, of course, there is no intent. Just code.
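The command-pattern point above can be sketched in code. This is a minimal, illustrative Python example (the class and method names here are hypothetical, not from the original post): with only two concrete commands, the abstraction itself is the signal that the author expects more to come.

```python
# Hedged sketch of the example above: two commands behind a command-pattern
# abstraction. The indirection -- rather than a simple if/else -- is what
# communicates the author's intent that more commands may be added later.
from abc import ABC, abstractmethod


class Command(ABC):
    """Base class whose very existence hints at anticipated growth."""

    @abstractmethod
    def execute(self) -> str: ...


class SaveCommand(Command):
    def execute(self) -> str:
        return "saved"


class OpenCommand(Command):
    def execute(self) -> str:
        return "opened"


def run(command: Command) -> str:
    # Callers dispatch through the abstraction, so adding a third command
    # requires no changes here.
    return command.execute()
```

A human reading this can infer the plan; an LLM emitting the same shape may just be echoing boilerplate it was trained on.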
@fishidwardrobe Yes! Or if there is intent, the intent comes from the code the LLM is plagiarizing. If you’re lucky, that might happen to line up with the problem in front of you. If you’re lucky.
Regardless, all LLM-generated code is legacy code of suspect quality from the moment it’s created. If we’re going to let that into our dev processes, we need to think about the implications.
@inthehands Great questions. The cargo cult is in full swing, so we should see the results within a year or two. For now, we lack the data to answer. I did read one limited study suggesting Copilot is barely helpful.
Reserving judgment but my prediction? It'll have a huge impact on SEs. And basic web dev. But engineers working on complex problems will find less utility unless we make our own tooling. If it turns out to be helpful, serious people will take it seriously.
@inthehands you’ve got the wrong end of the stick there. Copilot doesn’t take out the easy parts *or* the hard parts, but the *boring parts*. The “write this line again but for the next button instead of the previous one” or “shit which arg does what in splice again, guess I need to look it up”. The little speed bumps of joyless flow-interruptions.
Shock absorbers don’t drive, navigate, or pick the destination, but they make more destinations possible, desirable, and enjoyable, for more drivers.
@inthehands :o I see the same thing in library science.
Well, any organization will have problems communicating decisions across spans of time, but libraries (and any other field that works with taxonomies and classifications) need to convey information and knowledge to people outside the "institution".
Many times a librarian will describe collections with their own understanding of things in mind, not the collective understanding and needs of the institution's users.
@inthehands seeing these things generate boilerplate JS and other easy-to-check stuff - that's kinda handy, though pretty energy-intensive for what it does
asking them to do *anything in C*, or other contexts where it's easy to be wrong and important not to be - horrifying. Almost every output has serious problems, and if you're not personally expert-level you will probably not catch them all
@inthehands but I don't mind automating the easy part. It can still take time. I can see doing TDD with me writing the tests and AI writing the code for instance.
The thing with this ChatGPT hype is that people seem to either think that it's worthless or that it's going to replace humans.
I think it might give a 10-20% speed up to senior developers, which still is very revolutionary
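The TDD split described above can be sketched concretely. This is a hypothetical example (the `slugify` function is mine, not from the thread): the human writes the test as the specification, and the implementation, whether typed by a person or generated by a tool, only counts as done when the test passes.

```python
# Hedged sketch of "human writes the tests, AI writes the code":
# the test is the human-authored contract; the function below stands in
# for the generated half, which is acceptable only if the test passes.
import re


def test_slugify():
    # Human-authored spec: lowercase, hyphen-separated, punctuation dropped.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  many   spaces ") == "many-spaces"


def slugify(text: str) -> str:
    # The "generated" half: extract alphanumeric runs, join with hyphens.
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)
```

The hard judgment calls (what the spec should say, which edge cases matter) stay with the human; only the mechanical fill-in is delegated.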
@inthehands I wholeheartedly agree!
Here is a related recent thread on that topic, if you are interested:
https://floss.social/@janriemer/110034737129225638
Especially see the post by @yuki2501
"#AI is going to make software engineers obsolete" is the most ridiculous thing I've heard in my career as a software engineer. People who say stuff like this have absolutely no idea what software engineering is about:
- Communication with people to specify requirements
- Considering _A LOT OF CONTEXT_
- 95% reading existing code, 5% writing new code
- Reuse existing code
- Find patterns and relations
- ...
#ArtificialIntelligence #SoftwareEngineering
@inthehands you forgot the most difficult part, even: maintenance.
Will it keep running in ten years? Can you maintain the development pace over years? How can you build software that keeps running through decades? How do you architect teams and software so your team keeps shipping as fast in ten years as it does today?
@inthehands Love this. More software engineers need to ask themselves questions like "Should we even build this?".
Or my personal favorite: "What happens if it's wrong?", which I feel like is the key question that people building these AI chatbots *aren't* asking themselves.
@inthehands back in my day, writing #software was still split into two or three roles: The Systems Analyst and Designer, then the much lower ranked programmer, and lower down again, maybe a data entry person too.
Systems Analysts didn't need to be able to write code at all, they dealt with all the higher level stuff you mentioned.
I suppose in a way these days it's almost gone back full circle with product teams acting as the Systems Analysts and handing the coding to an engineering pool.
@inthehands any HR worth their salt will tell you that "programmer" is a word used to cover multiple professions and even roles.
That said, I'm happy when software engineers ask themselves the latter two questions you've listed. But they don't have to, in principle.
If your co-op or company doesn't have processes that concern discoverability and understandability, then it's not software engineers' responsibility to eyeball those, nor will it end well for anyone, because they're not professional technical writers / technical HR.
@inthehands I've seen a fairly large team of devs burn multiple years building stuff because nobody would admit that the exec driving the project hadn't actually articulated something specific to build.
I am sure GPT could have generated 10x as much code in that time, for just as much accomplishment.
Yes and yes!