Is anyone else experiencing this thing where your fellow senior engineers seem to be lobotomised by AI?

I've had 4 different senior engineers in the last week come to me with absolutely insane changes or code that they were instructed to make by AI. Things that, if you used your brain for a few minutes, you'd realise just don't work.

They also can rarely explain why they made these changes or what the code actually does.

I feel like I'm absolutely going insane, and it also means I can't trust anyone's answers or analyses, because I /know/ there is a high chance they just asked AI and passed it off as their own.

I think the effect AI has had on our industry's knowledge is really significant, and it's honestly very scary.

@Purple I was about to toot on my timeline, but actually this is a good springboard.

I've reviewed like 3 or 4 community PRs this week, and the common thread is that you get very verbose explanations of what the PR does at first, but the follow-up is weak.

Also, these code changes always seem to contain a whole load of extra code that didn't need to exist, almost like the AI didn't quite understand what was going on...

@halfy

I think AI is incredibly weak at actually understanding problems with any moderate degree of complexity.

People forget it's a language model that is just very advanced at predicting what would be a likely, or seemingly sensible, answer to a prompt. It's kinda inherent to its design; they try to overcome it by having it write its own prompt ("thinking mode"), but even then it still just doesn't grasp the full idea

@Purple @halfy LLMs fundamentally do not "understand" things. they cannot access meaning. as such they also can't really get fundamentally better at this. all you can do is little tweaks, which can't resolve the fundamental problems because those problems come from how the models work.
@elexia @Purple @halfy @saraislet Yeah, I heard a presentation recently where someone said "until [product] understands" and boy am I glad I was on mute because my reaction was very 😬😬😬
@Purple @halfy Keep notes. One day when everything falls to pieces, you'll be able to explain why.
@anne_twain @halfy In all honesty I hope to be out of that place before that happens 😅
@Purple @anne_twain @halfy still worth doing for posterity anyway imo
@elexia @Purple @halfy Indeed. Historians love accounts written at the time.
@Purple I struggle to even ask my ex-colleagues about work, because they all seem to have bought into the company's all-in, two-feet-first dive into AI, and it feels like we're living in separate realities.

@swift I sometimes end up talking to them about it, and they seem to be aware AI is sometimes wrong... And yet they keep using it for literally everything.

It's almost like an addict who knows the addiction might be hurting them, but can't stop using.

I can see the damage it's doing to the long term maintainability of our environment and platform too :(

@Purple @swift It is an addiction. It's very similar to gambling.
@jackemled @Purple thing is, I'm not sure they're even using it personally all that much. They've actively bought into the idea of doing the work to integrate it into systems they build for clients because of The Possibilities, the New Capabilities it unlocks. And I just don't know how to reconcile people I know are otherwise smart, conscientious, environmentally considerate, etc. with what seems so clearly to me to be blatant snake oil.

@Purple @swift you've actually nailed it: it looks like an addiction because prompting is like a slot machine. each prompt is another pull on the lever, hoping for a good result, and maybe, just maybe, if they prompt just one more time, they'll get one

<insert rant about needing human psych as a mandatory class during education here>

@OctaviaConAmore @Purple @swift gods, fuck that makes so much sense. we've known for a long time that giving rewards (pseudo-) randomly is more effective at reinforcing a behavior than giving a reward every time. the bad results in that way might actually be helping to reinforce the use. except people's ability to scrutinize the results also seems to rapidly atrophy and they will just use whatever it spits out without actually understanding it.

@elexia @Purple @swift yup, that pseudo-random reward is exactly the mechanic I was thinking of  

as for people who use genAI's cognitive abilities declining, there's a concept called... cognitive offloading, I think?

I think one of the most societally common examples is where husbands will often not even try to remember dates of important things because their wife has them well tracked, increasing reliance on her and making the husband's ability to track important dates decline  

in cases where groups start having specialists, it's beneficial, but it does lead to a decreasing ability to do that thing for the people who no longer actively exercise that part of their brains 

@Purple working together with people who actually enjoy programming is a luxury :/
@karpour @Purple Yeah, it’s gotten to the point that I’m trying to find people to pair program with on side projects who want to avoid LLMs altogether. Turns out it’s tough!

@Purple One of the biggest things the AI stuff has taught me is that so many programmers just... don't actually want to program.

I never would have expected that, but it's like the whole game industry here (for example) just doesn't actually want to make games. They want to HAVE MADE games, but absolutely despise the process. And I just don't get it. I LOVE the process of making things!

@kitcat

To be honest the making process sometimes frustrates me, but I don't see how you could be proud of what you have made if it's just hastily thrown together by a chatbot!

I'm proud of the things I've made, because I've put my own brain to use to create something I had in mind. I thought it worked this way for others too, but like you say I think we might be in the minority here 😅

@Purple Oh yeah, it definitely has those frustrating moments, but honestly, solving something after that is just so great!

Then I see someone bragging about their 30k loc vibe coded chatbot output that they've never read and just... I can't comprehend what they're celebrating.

On the bright side, seeing that makes me all the more proud of doing the cool things myself, since I've seen the alternative :3

@kitcat @Purple Just kind of true of everyone using AI, really: all those 'artists' who want to 'have made' art but don't want to draw are the same.

I can understand the desire to have a finished product, and I can also understand the fact that people have to do something they don't enjoy for a wage, it's just so disappointing that people's answer seems to be, well, this.

@Purple Yeah, sounds familiar sadly. One day one of my colleagues can write perfect high quality code, and the next day I'm looking at a PR where the WTF/min is incredibly high and I know what happened... When confronted along the lines of "I'm not really sure why you wrote it like this, normally you have a very different style", they admit they used AI that day because they were lazy...

And it's often very obviously weird stuff... Like, for example, we always program Python with typing and suddenly there is no typing anywhere to be found. Recently I even got a PR where >everything< was done using getattr meta-programming instead of typed property access, and that >should< be something you almost never need, the kind of thing that makes you stop and wonder the moment you read it.
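For anyone who hasn't run into this pattern: here's a minimal hypothetical sketch (the `Invoice` class and field names are made up, not from the actual PR) of what that getattr style looks like next to plain typed property access:

```python
class Invoice:
    """Hypothetical example class; not from the real PR."""

    def __init__(self, amount: float, customer: str) -> None:
        self.amount = amount
        self.customer = customer


def total_via_getattr(invoices: list["Invoice"]) -> float:
    # The AI-flavoured version: attribute names as strings.
    # A typo like "ammount" only blows up at runtime, and
    # mypy / your IDE can't check or autocomplete any of it.
    return sum(getattr(inv, "amount") for inv in invoices)


def total_typed(invoices: list["Invoice"]) -> float:
    # Ordinary typed attribute access: statically checkable,
    # refactor-safe, and no less capable here.
    return sum(inv.amount for inv in invoices)
```

Both return the same number; the getattr version just throws away every guarantee the typing was there to provide.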

I really like my colleagues and normally they can make wonderful stuff, but when things like this happen... I don't know, have they just given up or something? 😅

@Purple I was lucky to be in a project where this effect was somewhat subdued.
Outside of that context:
I had just started scrolling through the docs to configure something specific, only for someone else to pose the question to a chatbot, which of course made something up.
A senior colleague was trying to get a camera drone to launch; I opened the manual online, he started talking into his phone (guess who got the right answer, though the LLM nonsense was tried first).
There’s more.
@Purple so yeah, you are not alone.
I used to think my (now previous) employer mostly had a staff that actually somewhat cared about the craft. I am not so sure anymore.
On the contrary, I more and more get questioned on or have to justify my decision to avoid these tools wherever I can (sure, being able to do so is a privilege, depending on current pressure at wherever one works to feed and house themselves).

I get that the _craft_ changes over time — I had colleagues who grew up on assembler and their C was sweet — but I’m pretty bewildered by how bad stuff is, and by why the _customers_ accept it. The customers, who give companies the _money_?

Then I deal with a hilarious-if-it-weren’t-real string of systemic errors from, oh, my cable company’s billing system. The only cable provider I have, and they’re trying to persuade regulators to let them increase their monopoly.

@Purple it's also happening to senior academics! 
@pounce We will definitely experience the consequences of this 5-10 years from now
@Purple
yes, and I got fired shortly after, probably because I was not "productive" enough compared to them
@arjan @Purple should have written a bunch of random garbage code that doesn't really work, clearly.

@Purple at large, i see this as capitalism digging its own grave, unfortunately at the expense of individuals.

what's a good argument against "we should all use ai to solve business problems faster", given that garbage throwaway code has silently been accepted across the board?

@river

The frustrating part is that, in order to convince someone, you need a deep understanding of the output the AI is producing.

To the lazy or unsuspecting eye its output looks superb and perfect. But only with a reasonably deep understanding of the engineering challenge you've asked it to solve will you see that it's more often garbage than not.

I don't see a way to win this battle, other than finding a company that doesn't fall for AI and keeps producing high quality services, even if they take a little longer to build than competitors'. Eventually the others will start to collapse under the technical debt.

@Purple yea i know a teamlead who just had to fire one guy like that, who wouldn't stop trying to push broken ai slop despite it being very obvious. it's legit a cult that breaks people's brains

@Purple As someone who basically refuses to use AI for anything, it's likely I'm unaware of just how much the use of it has pervaded people's working behaviors.

My understanding is that AI and LLMs are meant to be a tool. As with any tool, if used properly, it can be helpful and effective in completing a task. But there's a reason that people are educated on how to do things manually before they're allowed to use tools: there needs to be that sense of intuition that informs a person whether something seems correct or doesn't.

What you're describing seems to be an outsourcing of critical thinking to these tools. It's like putting someone in the driver's seat of a self-driving car and watching them panic when they have to take manual control.

@Purple using AI is just like managing a fairly incompetent junior. Welcome to management. This is the fear and dread and uncertainty that you feel for the first couple of years after somebody has put you in charge of a software team, where productivity can't be managed and the results are not really within your control. How long do you wait before firing your AI assistant and going looking for another?
@geichel @Purple
AI is not at all like managing a fairly incompetent junior, because a major goal of managing a fairly incompetent junior is to help them grow into a fairly competent senior

@Purple I think it’s more exposing than changing.

AI coding won’t make you able to do something you can’t do; it can only give you code that you must understand and verify.

It can’t replace someone with knowledge, but it can, if used sensibly, augment some parts. Like boring refactoring and generating some simple boring stuff.

@yon @Purple A recent study found that AI used in the coding workplace created feelings of efficiency, but actually caused a slowdown in production.
@TheMNWolf @yon @Purple yeah, i suspect many teams understand their code less and less as the time goes.
@river @TheMNWolf @Purple Code and *business rules*, the much more difficult part vs most code written today.

@TheMNWolf @Purple The amount of verification needed goes up steeply because you can’t trust the AI. You can’t trust a human to write perfect code either (hence tests), but an AI can just go wild and do anything.

So if you don’t understand the business rules needed nor can develop at the level the AI is trying to do, it won’t work.

But if you let it into production the good old “it will happen and your options don’t matter” kicks in.

It’s like having me review papers about quantum mechanics. I can find some spelling errors, but I will have no clue if something is plausible sounding gibberish or the real deal.

Well I guess the upside is that qualified and trained developers will become more and more scarce. Yay future we didn’t want. :/

@Purple same here, and I hate it so much, it's beyond words

@Purple I haven't seen it at these extremes in AI yet (with seniors).

What I have seen is that someone pushed a huge upgrade out in record time, one that would normally require careful consideration and planning.

Turned out they instructed an LLM to do the upgrade and not much else, because I later spotted some pitfalls mentioned in the upgrade guide (and not a long one, either).

To me, LLMs are tools; you can use them, but they do not absolve you of your responsibility to at least try to do your due diligence as a programmer (which goes beyond writing code and shipping it).

@Purple it is not just you. My former employer threw fits when I spelled out very clearly that it did not work, would not work, and would cost the company hundreds of thousands of dollars they insisted we didn't have. And not only that: every one of the problems from the junior came about because they didn't listen, just plugged shit into ChatGPT, and the others were too lazy to actually review it.

@Purple I’ve been feeling like the whole industry has been going crazy for years, now, and the AI step is just the latest of many bananapants steps towards oblivion. But it’s certainly a big one.

Like, for ages the software industry has been high on its own farts about self-importance and trying to justify itself as the source of its self-made problems and a sinkhole of outsized valuation. And all for what?

@Purple oh yea this has definitely happened to me. i feel like it's most prominent in people who also overestimate their own abilities, which is surprisingly common in this industry
@Purple me, an unhired watching all the experienced people melt their brains: 
@kirakira Not sure if jealous... Maybe a little bit if we ignore the financials of it 
@Purple no no the financials ruin everything else, i'm 32 and i've never had autonomy over my own living situation, i can remember every time i've been on an airplane, and i can't even go to local cons anymore unless people help pay for it. i think a moderate tech salary is my yearly income every 2-3 months, there's a reason i'm still trying to get in 
@Purple to a much lesser degree, it happens to me too

@Purple One of my coworkers (very much not a senior though) is in a particularly dangerous spot for how he uses it, as he frequently doesn't have the foundational knowledge to understand how the output is only convincing-sounding bullshit, and he's leaning on it so hard I'm not sure if he's actually learning those foundations...

Meanwhile I'm the resident stick-in-the-mud who knows how the damned things actually work under the hood, trying to temper the marketing-wank everyone's blindly repeating... My boss is appreciative though, as he understands that it's coming more from simple pragmatism about the environment the tech exists in than from "fear of change": even if it did actually work (which it usually doesn't), VCs are currently shoveling money into a furnace to subsidize it, and when that money dries up it's going to skyrocket in price to the point where it's likely not cost-effective.

@Purple I was asked today why I don't want to teach juniors and beginners how to use AI to write code. This. This is why
@Archivist @Purple i mean, that would be a short lesson: Don't.