One of the ways that LLM-authored code improves productivity is by merely SAYING it does things. It's way faster than the whole time-consuming process of actually doing things. This is real code someone sent to me for review.
This code base is a gift that keeps on giving. The irony is that I found this code because I'm troubleshooting an "Access Denied" error message!
@paco my code optimization LLM says that you can speed up your program by ditching lines 123 and 124. It should not have adverse side effects. 😬

@fabrice It's good that it put a comment in to tell me that it was going to print a message about verifying. I might not know what that line of code does, otherwise.
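For context, here is a minimal hypothetical sketch of the "placebo verification" pattern the thread is mocking; the function name and messages are invented for illustration, not the actual code under review:

```python
# Hypothetical reconstruction of "placebo code": it announces a
# verification step without ever performing one.

def verify_access(user: str) -> bool:
    # Print a message about verifying (the comment the thread jokes about)
    print(f"Verifying access for {user}...")
    print("Access verified successfully!")  # nothing was actually checked
    return True  # always "succeeds", so real failures surface later as "Access Denied"

verify_access("alice")
```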

You can look at my timeline here and see the days that I work on something AI-related. The days I snark on AI and the days I work with it are exactly correlated. 😃

@paco @fabrice
There's an old unix AI that would probably significantly improve that code.

Try `rm -rf *`

GPT-4: 'I wrote unit tests!' *Leaves out the part where it actually runs them*
@rgo Sounds like some consultants' deliverables I got to review. Their unit tests were: call some function, then assert true.
@compassDoesWhat
I mean, that at least does test that the function call didn't fail. It's not extremely useful, but it is a test.
@rgo
@nachof @rgo and it gathers test coverage… which was funny since they didn’t have a test coverage requirement on the contract.
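The "call some function, then assert true" anti-pattern described above can be sketched like this (function names invented for illustration):

```python
# A test that executes code (so it counts toward coverage) but verifies nothing.

def apply_discount(price: float, percent: float) -> float:
    return price * (1 - percent / 100)

def test_apply_discount():
    apply_discount(200.0, 50.0)  # result thrown away
    assert True                  # passes no matter what the function does

# A meaningful test pins down the behaviour instead:
def test_apply_discount_properly():
    assert apply_discount(200.0, 50.0) == 100.0
```

Both tests report identical coverage; only the second one can ever fail.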
@paco incredible

@jamesoff @paco

if there is a tech opportunity, I am ready to join

@paco This is bad - you're supposed to have a random sleep if you're going to make some placebo code
@silverwizard @paco not just a sleep, better make sure those fans spin up for a moment
@barometz @paco ah true, find a GPU pi calculation tool and calculate a million digits

@paco

Fascinating! All problems of the world solved with two lines of vibe coding.

@paco This is maybe the greatest four lines of code I have ever read
@paco No wonder productivity is up by 30% at big tech. Stuff like this is driving those numbers.
@paco Does this have consequences for the colleague? In my opinion, it would be a legit case for consequences. It's a clear breach of several common values and of respect.
@clemensprill Wouldn't that be nice? This person has done exactly what the business has asked them to do. By telling them that LLM-generated code is bad, I am doing what the company does not want done. I'm the bad guy here.

@paco To be honest, if your company complains about you rejecting this, it's a company worth leaving anyway 😬. And not even being able to do a basic check of your own code should be a clear violation of standards in any company. But yep, I see that trash everywhere as well.

It's a waste of everyone's time. Same as those company news posts written by colleagues with the help of AI. Short information becomes bloated, so hundreds of people spend longer reading it than the colleague saved by using an LLM.

@clemensprill @paco unfortunately when the standard is "AI ALL THE THINGS", then there is no violation here. Especially when the company has invested ten kajillion dollars in AI, no one wants to hear how stupid it is.
@steggy @paco Sad but true 😬.

@paco @clemensprill

- KIDS, DINNER IS READY!!! Make sure you WASH YOUR HANDS FIRST!
* echo from upstairs "Washing our hands right now."
* echo from upstairs "☑️ Hands washed successfully!"

@clemensprill
A company where this has consequences wouldn't hire the person in the first place.
@paco
@j_j @paco In general, more senior colleagues use AI without much awareness of how it works, its limitations, and what to watch out for. Same with younger colleagues. And of course there are colleagues who never reviewed their work before, either. So I wouldn't say those people wouldn't be hired, but it depends on how they develop with new skills and tools. If it stays like this, it's a major issue.

@clemensprill
Not sure if my personal anecdote is relevant, but I know a 'senior engineer' who does not know how to write code. They spent their career copy/pasting, some trial & error, and now vibe coding. They couldn't write FizzBuzz. Their career was successful in a place that's copy-paste heavy and has somewhat repetitive tasks, mostly because the code base is a copy/paste graveyard; the same bugs appear everywhere and the same fix helps.
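For anyone unfamiliar, FizzBuzz is the classic screening exercise referenced here; a standard solution fits in a few lines:

```python
def fizzbuzz(n: int) -> str:
    # Multiples of both 3 and 5 first, then each alone, else the number itself.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

print(" ".join(fizzbuzz(i) for i in range(1, 16)))
```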

@paco

Writing this is my therapy. I tried to teach this person. Their background is in a different field and they are nice. However, it's impossible to have a normal conversation with, let's be clear, an imposter. I tried breaking down problems into small pieces, explaining what basic functions do and how to combine them. All my feedback was fed into Copilot. And yes, with my 'prompting', AI can produce working code.

@paco @clemensprill There's more than one such person, and the company encourages this in a subtle manner. Code reviews somehow skip known-broken code. Testers and salespeople work around known defects. Everyone is sort-of complicit in the scheme.

I no longer work there, but they are still successful and going strong.

@j_j
> they are still successful and going strong.

So it's one of those "programming is just a tool" companies. I will never know if that's a better, worse or simply different view from "programming is an art, it must be curated".

@paco haha, this makes me think the whole programming landscape is turning into the programming mess I had to clean up at the beginning of my career, the one that made me pivot away from coding. The (now fired) dev had decided to increment the completed count BEFORE actually doing the thing. So when it was handed to me, the manager was like "we are doing great, so much is already done", and I was like, uhh, it wasn't; his code just did nothing if processes failed for any reason and proceeded onward 😂
@paco did you instruct an LLM to reply to him?
@paco This is huge. LLM-assisted code has three main benefits.
1. The developer has to think about the requirements
2. The LLM is far more likely to add comments as plain text in the output so humans can reason about it
3. Comments make the codebase more digestible for the LLM, so it does better the next time.
@paco I'm currently fixing a former colleague's copiloted project and there are so many comments saying things like "removed thing for better performance" without actually having done so.
@paco I actually encountered such "happy path" code written by a human. But that was once in 25 years 😬.

@paco

Thanks to LLM-generated code, the success rate of lambda package build verification is up to a stunning 100%.
Marvellous!

@digital_bohemian @paco Do I have a serious gap of knowledge if I have no idea what a #lambdaPackage is, after working in IT for almost 30 years?
@paco The resulting code is also way faster. Gotta improve performance!
@paco To be fair I once took over a "almost finished" test suite from someone who had just left the company and discovered every test case was exactly like this. They had been "working on it" for months. This was over 20 years ago, well before we had LLMs to blame :D
@paco Yes, LLMs are, just like humans, lazy.
@fietsrz They are nothing like humans. Lazy is a state of mind. LLMs are just probabilistic token generators.

@paco Magnificent.

echo "Solving World Hunger ... done!"

@paco

if there is a tech opportunity, I am ready to join. Can anyone help?

@paco would you be interested in coming to speak at Prompt||GTFO?

@paco 😭 I ... but ...

Well, it executes fast I suppose.

@paco The real scary part is that pattern HAD to be in the training data enough times to be meaningful. 🥶
@gimulnautti @paco
It identified the common parts of code that echoed the same things but differed in the actual doing-stuff, then just included the common parts. Exactly what a stochastic parrot is supposed to do.
@paco the only language model I'll ever need is one that can type 'A' followed by a long series of h's at the click of a button, for whenever I need a response to whatever nonsense tech has come up with next
@paco Not just improved productivity. The run time is also blazing fast.
@paco reminded me of this: https://xkcd.com/221/
@paco Sounds like the person also skipped the time-consuming process of doing things.
@paco ha! didn't even set a delay between them!
@leyrer great example of "sit down, that's an F, and for heaven's sake stay seated!"…
@paco This is because humans change themselves after being told off...
@paco

What LLM specifically, do you know?

@paco

You can't make this stuff up. Oh, wait, you can.

#AI #Insanity

@SpaceLifeForm yeah but the NEAT THING is that these days, you don't have to - that's what LLMs are for!
@paco It's very efficient - after all, the quickest way to verify things is to not verify things at all.
@paco
This reminds me of the story about an early version of MS Word.
From https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-steps-to-better-code/
"one programmer, who had to write the code to calculate the height of a line of text, simply wrote "return 12;" and waited for the bug report to come in about how his function is not always correct."
@paco No wonder the fascists love this AI stuff so much. It's continuing the long running tradition of government by lying.
If I were on the org chart I'd have a three strikes policy for this sort of thing*: get caught submitting a PR like this 3 times, you're fired for rank incompetence. (and by "this sort of thing" I mean yeeting LLM datafarts into an editor instead of writing real code)