I do not need to hear from people who can't code, or who at the very minimum seem to actually hate it, how I can supposedly code more efficiently
This is a post about large language models
I also do not need to hear from people who can't understand the cognitive load difference between writing code yourself and trying to understand code someone, or *something*, else wrote. Especially when the something else will be able to slip in little bugs that are easy to overlook and which no human coder, not even the most junior, would ever put in there
This is a post about that Ptacek article some people think has some good points for some baffling reason (it doesn't)
Also, and I can't believe I'm going to actually deconstruct his arguments further, fucking linters and unit tests? Really?? Putting aside the fact that your AI is writing said unit tests, so you have no idea what they're testing for, these are tools designed for catching the occasional human flub. They were *not* designed to hold back a tidal wave of sewage such as the one produced by LLMs
Like idk how to tell you this but you can easily introduce bugs that the linter and unit tests won't catch
Shocking, I know
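To make that concrete, here's a toy Go sketch (the function, the discount scenario, and all the numbers are made up by me for illustration) of the kind of bug that sails right past gofmt, go vet, and a happy-path unit test:

```go
package main

import "fmt"

// ApplyDiscount is a hypothetical example. It should clamp the rate
// to at most 1.0, but it doesn't, so rates above 100% quietly
// produce a negative price. The linter has no complaints, and a
// unit test that only checks the happy path passes just fine.
func ApplyDiscount(price, rate float64) float64 {
	return price * (1 - rate)
}

func main() {
	fmt.Println(ApplyDiscount(100, 0.2)) // 80 — the case the test covers
	fmt.Println(ApplyDiscount(100, 1.5)) // -50 — the case nobody tested
}
```

Nothing here is syntactically wrong, nothing is stylistically suspicious; the bug only exists relative to an intent that neither the linter nor the test knows about.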
Me sending a thousand AI-driven Mack trucks onto the road: it's fine, actually, if they do something stupid the guardrails will stop them
*watches a truck slam clean through a guardrail and plummet into a ravine leaving a large mushroom cloud*
Ah. Well. Nevertheless
Honestly I think there's a disconnect between LLM proponents and the rest of us when it comes to code. They see code as a purely mechanical thing, and so ripe for automation. To them, claims of artistry and craft are something to roll your eyes at, arrogance from senior engineers who think too highly of themselves
Meanwhile said senior engineers have the decades of experience to know how much of programming relies on artistry and craft, how much of it is fundamentally a creative endeavor
@jbqueru @eniko I doubt COBOL was intended to displace programmers; there weren't enough programmers in 1958 to worry about displacing and machine time was far more expensive than human time. Like FORTRAN, COBOL was aimed at subject matter experts already doing the work. Programming as a standalone generic profession didn't come about until the 60s or 70s.
The difference between those systems and AI is in how vendors treated errors. If COBOL or SQL gave wrong results for a properly constructed expression, that was an error to be fixed. If an AI tells you to put glue on a pizza or to drink gasoline, the vendor shrugs and is mildly embarrassed and goes on doing what they were doing. All failures and errors are on the user's part - for not phrasing their prompt 'correctly', for not verifying the machine's stochastic response. We've gone from GIGO to IGO with vendors basically saying that requirements and acceptance tests don't matter. All that matters is line go up.
We still use FORTRAN, COBOL, and SQL - they have proven value and are fit for purpose. They solve actual problems. They are easier to learn, and make it easier to produce error-free code, than their predecessors did. The only problem AI is intended to solve is that other people have money.
@jbqueru @eniko do NOT under ANY circumstances lump COBOL in there. Period. Full stop. This is outright wrong and demonstrates a complete ignorance of COBOL's origins and intent.
COBOL was not in ANY way meant to make programming 'easier.' It was designed specifically as a successor to FLOW-MATIC, to create a portable language which was self-documenting. NOT to be easy to write.
The whole point of leaning on English was to make it *portable* on machines of the time, a very rare thing.
@eniko I hadn't read the article until seeing your thread, but now I have. The first few paragraphs left me puzzled until I hit this line and then it all clicked:
> I work mostly in Go
Ah yes, that's why he needs to write repetitive code that requires extensive unit-testing. That's also why the kind of code generated by an LLM works (maybe). Good luck fixing a bug in that code five years from now.
@gabrielesvelto "My honeymoon with the Go language is *extremely* over."
strong start XD
@eniko @gabrielesvelto I think most people agree that Go has this verbose aspect to it: for example, functions return the error alongside the result value, which you immediately need to check with an if statement. This is just tedious to write, and I remember when I was trying out Copilot, it was kind of convenient to have this generated for you, even if it wasn’t 100% what you wanted.
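For anyone who hasn't written Go, this is roughly the pattern being described — a minimal made-up sketch (parseAndDouble is a hypothetical function, not from anyone's real codebase):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseAndDouble illustrates the (value, error) convention: every
// call that can fail returns an error, and each one gets its own
// if-statement check before you can move on.
func parseAndDouble(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("parsing %q: %w", s, err)
	}
	return n * 2, nil
}

func main() {
	v, err := parseAndDouble("21")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(v) // 42
}
```

Multiply that `if err != nil` block by every fallible call in a function and you get the repetitive texture that autocomplete-style tools are genuinely decent at filling in.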
The kinds of LLM tools he is talking about are a step beyond the Copilot that just autocompletes. I do understand what his point in the blog was even if I don’t like the vibe coding productivity hustle culture these people talk about.
It is for a specific kind of programming: corporate code that you write for a job for getting paid money. There is no passion or artistry here. You need to complete Jira tickets at the end of your sprint. You have to fulfill requirements and pass the acceptance criteria. PRs are mostly rubber stamps; nobody clones your branch and tests it, the best you can hope for is someone noticing a superficial mistake.
Yeah, I’m now over 40 and I also sometimes think it’s not worth my time to spend learning the intricacies of some microservices interacting with each other. I also tend to get caught in some procrastination traps that are not related to the assigned task I’m working on.
Our stuff at work is in C# so I don’t know how well it works with this AI shit, but another thing about Go is that it is very easy to work with code across a lot of different repos, because you can just import code directly without having to create, build, deploy and import packages the way we have to with NuGet.
I should also mention that at work we have a shit ton of legacy code, split into many different repos, where it's sometimes hard to know what parts are actually in use and what is outdated, or why some parts of the data are handled in the “wrong place” — was it just convenient at the moment, or does it make sense for the “business logic” to do it this way?
And then the teams get reorganized and you’re working with stuff you’ve never seen before and it takes quite a while to catch up. Not to mention the fact that each time someone that has 5+ years of experience leaves the company, the institutional knowledge is also lost.
Meanwhile the middle managers hired 3 months ago are asking for changes to the system without fully understanding how it all works together.
Meanwhile, the technical debt piles up and fossilizes, and gaps in observability from shifting between multiple platforms for cost optimization have turned even simple customer support issues into puzzles to be solved.
So yeah, I wouldn’t mind if there was some tool that helped me to automate some part of my work. And it’s probably going to involve the LLMs at some point in the future. I don’t have to like it, but it’s a job, and that comes with the territory.
@gabrielesvelto I don’t have a problem with mediocre code, and there is a lot of low-quality code out there too, but the over-engineered object-oriented stuff relying on metaprogramming constructs is the thing that makes code hard to understand and change, even though the idea was to “abstract away” the repetitive stuff, I suppose.
I’m only going on the appeal right now; I don’t actually know if these tools work for our code yet (Copilot was kind of underwhelming).
@gabrielesvelto @eniko
I came here to say something similar.
If you have to write extensive boilerplate code, I blame the programming language.
It's a bit like the AI email use case depicted. AI might work "well enough" here but it's solving the wrong problem!
This disconnect has always existed.
Way before LLMs I could bash out code that mostly worked but needed cleaning up and optimising and I'd get immediate pressure to put it into production and move onto the next thing.
It's the same for people like voice-over artists who have spent decades honing their craft and now accountants with no knowledge or taste for good voice acting are replacing them with AI because it's "good enough" and they can't appreciate the difference quality makes.
god did you see that recent fly io blog post where the author goes "uhh but actually if you're trying to be an artisan when coding you're literally doing it wrong and literally nobody will care so llms are good actually"
like damn sorry you don't actually enjoy programming i hope you get well soon
edit: actually went and read the rest of the thread and realized that was exactly the article you were talking about lmaooo
@manchmalscott lol yeah
but yeah that guy is insufferable
@eniko I mean, we don't even have fully-automated #IT, which is exactly the kind of thing any "#AI" could in theory handle, since it's ripe for that.
#Protip: Ask "AI" fans if they would unconditionally entrust an "AI" with their financial details in full.