So, what was up last week, when all of tech suddenly decided that writing software isn't good or useful or interesting?
Cuz, I gotta say, that sucks, and you're all wrong.
Like, I really don't get why so many of you are so eager to have statistical models write code for you.
I've been arguing for literally my whole career that the actual writing isn't the hard part of software development. But wow, did everyone take that in the wrong direction recently.
Understanding the system is the hard and valuable part. And I genuinely don't know how you think you're going to do that if you never get to do any of the safe and easy interactions with the system.
@dr2chase @ljrk @matzipan @jenniferplusplus @scottjenson It's pretty important to avoid noisy comments, IMO. The vast, VAST majority of useful comments I've ever written or encountered (only some of which I wrote myself) have been about WHY the code is there and written the way it is. Even for abstruse "clever" circumlocutions where you write down what that impenetrable snarl does, you're stepping all over your own hair if you don't also explain why you had to create the Gordian knot in the first place.
I mean, write down what whole blocks or functions do, sure -- by all means, document your API properly -- but comments are much less impactful and much easier to skim past if there's too much spurious chaff about what the code does tanking your SNR.
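To illustrate the WHY-vs-WHAT distinction, here's a made-up sketch (the retry policy and its rationale are hypothetical, purely for contrast):

```python
import time

def fetch_with_backoff(fetch, max_retries=5):
    """Retry a flaky call; the comments inside show the contrast."""
    for attempt in range(max_retries):
        try:
            return fetch()
        except ConnectionError:
            # WHY comment (useful): the upstream service rate-limits
            # bursts, so we back off exponentially instead of hammering
            # it again immediately. (Service and policy are hypothetical.)
            time.sleep((2 ** attempt) * 0.001)
            # A WHAT comment like "sleep for 2^attempt milliseconds"
            # would only restate the line above, and would silently rot
            # the moment someone tunes the delay.
    raise ConnectionError("gave up after retries")
```

The WHAT comment can be deleted with zero information loss; the WHY comment records a constraint the code itself cannot express.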
@Llammissar @dr2chase @matzipan @jenniferplusplus @scottjenson Yup. If the code is hard to read, a comment is not a proper fix, fix it in code instead.
Code should be as expressive as possible; only things that cannot be expressed in code belong in comments. Otherwise you easily end up with wrong comments, either wrong from the start or made wrong by change anomalies born of the redundancy: when a comment duplicates information that is in the code, a change to the code can forget to update the comment, leaving it misleading.
I do admit, this is maybe a bit tainted by my background in code audits/security. In my experience, most comments are just wrong. And in many cases dangerously wrong. But reading those comments could mislead the reviewer into thinking the code was right. At some point I started stripping comments from code pre-review, because most comments are just distractions from the real problem rather than design explanations, which is what they should be.
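A minimal sketch of "fix it in code instead" (the free-shipping rule and field names are invented for illustration):

```python
# Before: the comment exists because the condition is opaque, and it
# will drift out of sync the first time the rule changes.
def process_before(order):
    # check the order is eligible for free shipping
    if order["total"] >= 50 and order["country"] == "US" and not order["digital"]:
        order["shipping"] = 0
    return order

# After: the name carries the meaning, so there is no redundant
# comment left to become wrong.
def qualifies_for_free_shipping(order):
    return order["total"] >= 50 and order["country"] == "US" and not order["digital"]

def process_after(order):
    if qualifies_for_free_shipping(order):
        order["shipping"] = 0
    return order
```

Same behavior, but the second version expresses the intent in code, where the compiler and tests can see it.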
@cratermoon @ljrk @jenniferplusplus @scottjenson
No, I am not talking about refactoring in this case.
But anyway, abstraction of repeated code does not come for free. It always comes at the cost of code understandability. I think this article (https://overreacted.io/goodbye-clean-code/) had an example of exactly what I mean.
@matzipan @cratermoon @jenniferplusplus @scottjenson I know the article, but the problem there is a different one: the abstraction was bad. A good abstraction requires a deep understanding of the principles behind the algorithm, of why it was "repetitive", and of how that can be properly modeled. Simply pulling out functionality and splitting code because of length is bad. Maybe the data structure choice was wrong, maybe things weren't properly (de)coupled, ...
Let's put it this way: length of functions, duplicate code, etc., are indicators of bad code, but short functions without duplication do not imply good code.
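A contrived sketch of the trade-off (the fee rules are invented; the point is the shape of the coupling, not the domain):

```python
# Two rules that merely look alike. The duplication is honest: they
# are allowed to evolve independently.
def late_fee(days_overdue):
    return days_overdue * 0.50

def storage_fee(days_stored):
    return days_stored * 0.50

# A premature "deduplication" couples them. The first time one rule
# diverges, the shared helper sprouts flags and becomes harder to read
# than the duplicates ever were.
def per_day_fee(days, rate=0.50, is_storage=False, grace_days=0):
    billable = max(0, days - grace_days) if is_storage else days
    return billable * rate
```

The abstraction is only good if the two fees are the *same concept*; if they just happen to share a formula today, the helper is a liability.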
@jenniferplusplus Precisely. Writing code is the easy part. Verifying that it actually does what you want, plus what you think you want is actually what you really want... and engineering the whole design behind it, that's the difficult part.
Of course you can automate the easy part, but that's... a micro-optimization not worth the huge cost, especially since it comes with more debugging overhead.
@jenniferplusplus so far the only thing I've liked is the fancier predictive text I've seen in VS2022
Mainly in reducing some tedious typing. For example if I type
thing.x = other.x + logic
then when I start the next line with "thing", it'll usually predict
thing.y = other.y + logic
It still gets things wrong a little too often for my liking but reducing time spent on boring stuff is useful IMO.
@jenniferplusplus what we should be doing with this tool is automating the meaningless act of typing the code while our job as programmers should be the hard part: building a mental model of the program and transcribing it onto the machine.
Asking an LLM to write code to sort this dataset with a regex that selects for whatever seems like a good use to me. It's an annoying and menial thing to actually write code for, and I have better things to do!
That said, I refuse to use LLM code tools
@jenniferplusplus sorry, maybe I'm talking about something tangential. Maybe I'm imagining a much narrower use case.
I would use an LLM for the kinds of code I usually write a script to generate. Serializers, boilerplate, class definitions with dozens of overloads, giant switches, rewriting strings as char arrays. Meaningless trivial stuff that's really just text more than it is code.
I guess that's really what I mean: using an LLM to generate the text, not the code.
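That kind of script is tiny. For instance, a sketch of the "rewriting strings as char arrays" case (the function name and output format are just one plausible choice):

```python
def string_to_c_char_array(name, s):
    """Emit a C char-array definition for a string literal: the sort of
    mechanical text generation a ten-line script handles deterministically."""
    data = s.encode("utf-8")
    body = ", ".join(f"0x{b:02x}" for b in data)
    # +1 for the terminating NUL byte appended below.
    return f"static const char {name}[{len(data) + 1}] = {{{body}, 0x00}};"
```

Running `string_to_c_char_array("greeting", "hi")` yields `static const char greeting[3] = {0x68, 0x69, 0x00};`. It's text generation, not programming, which is exactly the point.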
@jenniferplusplus but I'm also assuming a skilled programmer who has already learned everything you can learn from packing a structure into a byte array. I could easily see new programmers getting wrong ideas from this.
I think it should be a tool like our IDEs generating boilerplate and nothing more
@alysbrooks I'm not sure why scaffolding boilerplate code would benefit from a multi-billion-parameter probability model built from stolen labor and paid for in climate catastrophes.
That seems like a job for a templating system
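For the sake of argument, a minimal template-based scaffold using only the standard library (the DTO shape and field names are hypothetical):

```python
from string import Template

# Deterministic, auditable, and runs on a potato. No GPUs involved.
DTO_TEMPLATE = Template('''\
class ${name}:
    def __init__(self, ${args}):
${assigns}
''')

def render_dto(name, fields):
    """Render a simple data-holder class definition from a field list."""
    args = ", ".join(fields)
    assigns = "\n".join(f"        self.{f} = {f}" for f in fields)
    return DTO_TEMPLATE.substitute(name=name, args=args, assigns=assigns)
```

`render_dto("Point", ["x", "y"])` produces a complete class definition, and you can diff-review the template itself once instead of reviewing every generated line.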
But now thanks to this technological advancement, managers can write firewall rules without needing security people!
https://github.com/eunomia-bpf/GPTtrace
What could go wrong?
(edit: above is only for tracing and the GPT is providing a good head-start, I'm half-trolling)
@jenniferplusplus @gizmomathboy The wheel of saṃsāra keeps turning. 5GLs, expert systems, CASE, rapid application development, UML, OOP, microservices, low-code and no-code, machine learning… everything has a season of silver-bullet hype and then contraction.
Just toss copies of Fred Brooks’ work at the hucksters and true believers while you pick the mix that gets the job done.
Fire all your junior programmers to save money, run out of senior engineers as they retire. It's a great way to turn your business into a permanent vassal state of Google, Microsoft, or Amazon.
Provided climate disaster hasn't completely destroyed civilization by then, in 20 years there may be huge opportunities for anyone who can figure out how to resuscitate old machines and do ASM on them when the entire tech industry is destroyed by its own self-enshittification.
This. I tried to have this conversation with coworkers lobbying to use an LLM to answer chat queries on our website, and absolutely failed to convey that somebody has to generate the data for the system to harvest, and then edit and monitor its replies.
& besides, who doesn't immediately type "Agent" when they start talking to a chatbot? ("If the answer to my question was on your website, I'da found it by now.")
@jenniferplusplus yeah, between LSP and copilot I'm really not sure what all the buzz is about. Neither of these tools save me from the part of my job that actually involves burning large amounts of time.
In the case of copilot even less so because it isn't building new tools to solve difficult problems: it's trying to build something akin to what it has seen before and usually that's not what I aim to do.
If you asked an LLM to envision what dynamic, OS-based instrumentation would look like 40 years ago, it would not magically invent DTrace.