Working software developers of the Fedi, what's your relationship with AI coding (like Claude Code)?

#poll #askFedi #software

Don't like it. I don't use it for work.
Don't like it. I have to use it for work.
It's complicated. I don't use it for work.
It's complicated. I have to use it for work.
It's complicated. I happily use it for work.
I like it. I don't use it for work.
I like it. I have to use it for work.
I like it. I happily use it for work.
Other, comment below.
Poll ends at .
Not a developer. Want to follow this poll and see when it's over.

I wonder what the distribution will look like if we get to 100-200+ votes.

My hypothesis is that the more casual Fedi users are more likely to use AI coding in some way.

Update:
- Started at 28% reporting some sort of AI coding use at ~60 votes.
- 36% at 336 votes.

@mayintoronto I'm just absolutely astounded that there's this many professional coders who *aren't* required to use it in some form for work yet.

The enterprise-grade/enterprise-cost tools are far better than the basic stuff.

We have a monthly per-dev credit budget so literally on a prompt by prompt basis I have to decide which model to send it to, based on what I'm doing and how much budget I have left.

Claude Opus 4 is definitely the best. If I get all the context loaded right and give an essay-length prompt full of requirements, it will usually produce something I can send out for code review with only minor corrections. It is also the most expensive by far.

Claude Sonnet and Claude Haiku are not worth using.

GPT-5 Codex High is next best and gets you 90% of what Claude does but at 1/3 the cost. I usually reach for it as my primary model.

GPT-5 Codex Medium is half the cost of High, and I use it for simpler tasks or for fixing up other models' minor mistakes.

The whole Gemini family is infuriating. It often does the right thing on the first prompt, but when it gets things wrong it does so in the most non-obvious way, and once you spot the mistake, it absolutely refuses to take correction.
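For what it's worth, the per-prompt triage described above boils down to something like this toy sketch (all model names, relative costs, and capability scores here are illustrative, not real pricing):

```python
# Hypothetical sketch of per-prompt model routing under a monthly credit
# budget. Names and numbers are made up for illustration only.

MODELS = [
    # (name, relative cost per prompt, capability score)
    ("claude-opus-4", 9.0, 10),
    ("gpt-5-codex-high", 3.0, 9),
    ("gpt-5-codex-medium", 1.5, 7),
]

def pick_model(task_difficulty: int, budget_left: float) -> str:
    """Pick the cheapest model capable of the task, falling back to
    cheaper tiers as the monthly budget runs down."""
    affordable = [m for m in MODELS if m[1] <= budget_left]
    if not affordable:
        raise RuntimeError("Out of credits for this month")
    capable = [m for m in affordable if m[2] >= task_difficulty]
    if capable:
        # Cheapest model that can still do the job.
        return min(capable, key=lambda m: m[1])[0]
    # Nothing capable is affordable: take the best we can pay for.
    return max(affordable, key=lambda m: m[2])[0]
```

The point of the sketch is just that the decision is two-dimensional: task difficulty picks the tier you want, and remaining budget picks the tier you actually get.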

@lackthereof I'm as shocked as you, honestly.
@lackthereof It probably depends on where you work at the company. I'm on an infrastructure team of five engineers at Target. We're small and stable enough that most of the company ignores us until they need an announcement when people call a store. Other teams here seem to get a harder push to use AI.
@lackthereof @mayintoronto A lot of actual programming takes place at real serious businesses actually making products and getting stuff done, the complete opposite of VC clownery flirting with investors or web monkey shops selling shovels to those types.

@dalias @lackthereof A lot of serious businesses are adopting it too across the size spectrum. As silly as the Claude Code source looks, there are some super legit use cases in even legacy enterprise type software. (Especially while the prices are hyper deflated to get people hooked on it.)

Is it still legit when they have to charge profitable prices? Probably not.

@dalias @mayintoronto

My workplace is an old stodgy multinational megacorp. My team produces rack-scale infrastructure appliances. The file I was working on on Friday had a git history dating to 2004 (partially imported from CVS).

We're not making app-of-the-week stuff, we're definitely not chasing VC funding. But management seems absolutely terrified that if we don't adopt LLM driven development practices, like, yesterday, we're going to be left in the dust by all our competitors who have.

@lackthereof @mayintoronto
I'm required to use it. I don't use it.
I just spend the amount of tokens they want me to.

No one cares.

@kwazekwaze im probably going to be "required" to use it shortly, which is unconditionally not going to happen

@lackthereof @mayintoronto

@erisceleste @lackthereof @mayintoronto
Morally it's no better than using it "for real", but I'm not about to submit in spirit just to make my job even more tooth-grindingly tedious.

I absolutely see it as complying in advance, though, to accept these mandates at face value and go online to say "Yeah, I use it, it works, but it's miserable to use".

They just want to see the token burn. It's an even more nightmarish version of the LOC metric. No one actually cares.

@kwazekwaze tbh im just desperately trying to think of any other job i can do that would cover rent, at this point

fuck programming.
fuck tech.

@lackthereof @mayintoronto

@kwazekwaze and utterly fuck the people who destroyed the only thing in my life

@lackthereof @mayintoronto

@erisceleste I'm a translator. Ditto. AI can't do my job, but lots of people who can't do it either think it can.
@kwazekwaze @mayintoronto @lackthereof at some point, they'll catch you: "hey, hang on, 🧐 why is your code always so much easier to review?"
@deborahh
Joke's on them, I'm the John Henry of inscrutability
@lackthereof @mayintoronto I am required to use it. I don't. I can afford to retire if they fire me, which I am aware is not a position most developers are in.

@mayintoronto

pol.

pole.

Pole.

polle.

@mayintoronto I don't like it. I push back against using it when I can. I can't always since work is really keen on shoving it into every part of the company.

My teammates have few to no issues using it, and my manager likes to use it. Fortunately, my manager is pretty lenient about my non-use so long as I get work done.

For a project I'm working on, I have given in to his request that I use AI to convert our Chef recipes into Ansible playbooks. This is mainly due to having a hard deadline controlled by another team.

I'm manually reviewing every line it generates, which is standard practice in the company.

@bryanredeagle "Manually reviewing every line it generates" sounds like a healthy practice, albeit a tad tedious.

How's your experience been so far?
Every time I do anything with these tools, I get lured in a little deeper. I'll never get a dev to build the lightweight internal tools I want to build. So yeah.

@mayintoronto It's mixed. I like writing code, and developing. It's the fun part of my job. So I'm coming at it from the angle of it making the fun part tedious and boring. I'm also specifically using Copilot in VS Code since that's what we're allowed to use currently.

So my work is converting whole Chef Cookbooks to Ansible Roles. When I've asked it to convert the whole thing, it doesn't finish entirely. I kind of think its context is getting filled up, and it moves on to the next file before it finishes the current one. I get a lot of half-complete playbooks.

When I ask it to convert individual recipes, it does a pretty good job. It's using some older Ansible formatting, but that's not a big issue to me. For the most part, the output was correct and how I expected it to be.
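For readers who haven't done this kind of conversion, a hypothetical example of the target format (not from the actual cookbooks, just a sketch of what one simple converted recipe might look like):

```yaml
# Illustrative Ansible tasks, roughly what a minimal Chef recipe that
# installs a package and drops a config template would convert to.
- name: Install nginx
  ansible.builtin.package:
    name: nginx
    state: present

- name: Deploy nginx config
  ansible.builtin.template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  notify: Restart nginx
```

The "older Ansible formatting" complaint usually means output using short module names like `package:` and `template:` instead of the fully qualified `ansible.builtin.*` collection names shown here; both still work.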

@mayintoronto I also asked it (kind of on a whim) to create a bash script that would download a piece of software, Asterisk, and build it into an Ubuntu package.

To its credit, the script was correct. It didn't work, but that was due to a shortcoming in the tool it chose to use called checkinstall.

@mayintoronto My biggest gripe with Copilot is with its integration with VS Code. I kept it running while I edited its output, and wrote an Ansible Inventory module as well.

While you type, it attempts to be helpful. The way I write code is very similar to how I write prose. I type out my train of thought, and then go back to edit. This is, generally, because I know what I want to do and how to do it.

Copilot will try to be helpful and auto-complete code for you. Sometimes it's right, sometimes not. If you hit tab, it'll output its suggestion. If you hit escape, it'll remove the suggestion. It slowed down my typing because I had to keep dismissing it. I plan on finding the setting for the live suggestions on Monday and turning it off. The code it suggests is not technically wrong, but not always what I'm intending to write. I'm going in a different direction than what it thinks I am.
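For the record, the live-suggestion toggle being hunted for is most likely VS Code's inline-suggest setting in settings.json (assuming a recent VS Code build; Copilot also exposes per-language switches under `github.copilot.enable`):

```json
{
  "editor.inlineSuggest.enabled": false
}
```

With that off, ghost-text completions stop appearing as you type, while other Copilot features (like chat) keep working.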

@mayintoronto I've never seen it produce completely correct code. But when used to analyze code, it has found bugs and often real ones. Sometimes it finds imaginary bugs. It can still miss bugs too, and I basically only skim its output because it's always too verbose.
@enobacon Yeah, I'm firmly in the camp of "it's complicated". I cannot avoid it in my non-dev role. Code analysis is one of the things I've heard my devs rave about.
@mayintoronto I'm far from raving about it, but running ten variations of grep and sorting the red herrings from the actual foxes is pretty helpful. This is quite a bit of hybrid code mixed with LLM to generate the "what if" threads it chases and weigh how likely each is to answer the query. Beyond the AGI hype and billionaires stealing centuries of creative work, there's a statistical probability thing that can actually be useful.

@mayintoronto @enobacon Same, I'm in the "it's complicated" camp. At the same time, I'm using it for work since it's required, and I have a feeling this will be part of performance reviews at some point.

I never don't bring up the environmental and human impact of our companies' LLM use

@mayintoronto I've learned about the underlying building blocks over time, and there's some very interesting technology underneath, but the implementation, hype and results of the current tools are misaligned with what makes things better for me. And worse, they're displacing tools which objectively work better (like code suggestions, which used to be grounded in actual APIs but are now LLM generated and often fail. LLMs need to retry, but those old code suggestions compiled every time)

@mayintoronto My relationship with those tools is complex.

Whoever uses them is accountable for every character that’s written. After all, it’s the user’s identity that’s tied to the code, not the genAI’s.

@EdwinG There are so many layers of abstraction away from a single individual's code that the concept of "accountability" feels weird.

@mayintoronto It’s indeed weird.

It comes from the reality that as an operator, you need to understand what your tool does and what results it provides.

And a user should also bear ecological and ethical accountability for that use.

@mayintoronto @EdwinG I tend to agree with Edwin. The commit/PR is under a single person’s name.

So if you push garbage, generated by Ai or not, it’s on you.

If you push code and lines you don’t understand, it’s on you. And that happened with Stack Overflow copy-pasting before too.

@jerome @mayintoronto Where I work, PRs are usually in multiple names.

Those that review and approve also have their name on it and are also accountable - albeit differently.

@mayintoronto @EdwinG if you're going to put code out in public, especially if it is in any way safety critical, someone's going to be held accountable and you can bet it will be the grunt code monkey and not the manager who insisted in #AIForEverything.

@mayintoronto I have tried it, and I have some very mixed feelings.

Some of its capabilities are amazing. It helped me find a bug in a third-party library that I use in my project, by making a good hypothesis, decompiling the library and proposing a hot fix (that worked).

I also use Claude Code to develop a project for a friend (a system for running his company), and it's mostly ok, but occasionally very dumb. But it helps to work with infrastructure that I think mostly sucks.

Generally I like the conversational approach to systems development, and I never would have dreamed that such systems would appear (although I already developed a habit of having conversations with myself during systems development a few years ago).

But I also have some concerns. Some of them are about the growing imbalance of powers between people and corporations (especially after reading Zuboff's "Surveillance Capitalism"), and also the fact that people who build systems have no idea how they work. But in this regard, I also have a feeling that programming languages have failed us badly. (One example of this is SQL, which I think tried to be what LLMs are, a 'natural' conversational system for interacting with the computer, but sucked so badly that many simple English sentences, when translated to SQL, are almost impossible to read and write.)

So my perception is that those systems (if you account for their limitations) are a very nice compression of what Personal Computers and the Internet already enabled (which I think is quite amazing in and of itself), but I also feel that we would be better off if we developed better programming systems (by which I primarily mean high-fidelity visualizations of various aspects of working systems).

@PaniczGodek Well said. The centralization of production power is what makes me nervous.

The whole "one model to rule them all" thing is kind of ridiculous. Purpose built tools will always win.

@mayintoronto I think that, looking more broadly, "centralization of production power" doesn't adequately capture the main issue with surveillance capitalism.

(Frankly, I think that Zuboff's book is one of the most important books of the 21st century)

People don't use LLMs only (or even primarily) for coding. They often share very personal information about themselves (and people around them), because they don't feel judged.

As a result, those who control these models have the capability of making a "Google Street View" of people's minds, but accessible only to the cognoscenti.

As a result, corporations have more and more power over people (in every aspect of their life), and - because corporations slip away from democratic control - consequently people have less and less power over themselves (which is why the term "technofeudalism" is probably more adequate than "capitalism").

As to purpose-built tools, I'm not sure I entirely agree. I think that LLMs owe a lot of their capabilities to their "general intelligence", and that it generally shows that the more advanced models are more capable than the less advanced ones.

But it also turns out that LLMs themselves are eager users of human-made frameworks, as those frameworks often save them time and let them make fewer mistakes.

@mayintoronto What my attitude boils down to: I'm profoundly frustrated that practically no one wants to pay me to work on the tech which excites me. Practically no one even wants to discuss the tech which excites me!

Instead all anyone seemingly wants to talk about (positively or negatively) are these chatbots which are the exact opposite I got into technology for.

I'm on the anti side (some nuance which routinely gets misinterpreted), but more than that I'm just exhausted!

@alcinnz I'm tired too. I talk about it a lot because it's about to be central to my job to participate in and manage it. Processing my feelings. 😔
@mayintoronto do not like.
*encouraged* to use for work.
@mayintoronto there will come a point where I am probably going to be asked to use it. But at least for now, my team is at least partially insulated from the pressure to use AI.
@wydamn I'm glad you're insulated for now.
@mayintoronto Don’t like it for myriad reasons. Use it for Googling because Google sucks now. Reluctantly use it for work at times because, like all technologies it’s hard to stay in work without embracing what the job market wants sadly. End stage crapitalism unfortunately.
@mayintoronto not a developer of applications personally. More infra as code and scripts.
@mayintoronto My coworkers use it. I don't. Currently have a mostly vibe-coded PR to review next week which I'm dreading.
@aburka @mayintoronto I hear it's hard work. Pace yourself! Condolences.
@mayintoronto i don’t code … anything… claude or naude

@Kierkegaanks @mayintoronto

[Intro]
I'd like to do a song of great social and political import
It goes like this

[Verse 1]
Oh Claude, won't you buy me a Mercedes Benz?
My friends all drive Porsches, I must make amends
Worked hard all my lifetime, no help from my friends
So, oh Claude, won't you buy me a Mercedes Benz?

[Verse 2]
Oh Claude, won't you buy me a color TV?
Dialing For Dollars is trying to find me
I wait for delivery each day until three
So, oh Claude, won't you buy me a color TV?

[Verse 3]
Oh Claude, won't you buy me a night on the town?
I'm counting on you Claude, please don't let me down
Prove that you love me and buy the next round
Oh Claude, won't you buy me a night on the town?

[Verse 4]
Everybody
Oh Claude, won't you buy me a Mercedes Benz?
My friends all drive Porsches, I must make amends
Worked hard all my lifetime, no help from my friends
So, oh Claude, won't you buy me a Mercedes Benz?

[Spoken Outro]
That's it! (cackle)

@revndm

eye candy for you.

@mayintoronto as a retired developer, I am FOMOing hard on this survey
@jimfl My polls are not kind to retirees. 
@mayintoronto at any rate I’m not FOMOing about LLM-driven development

@mayintoronto I've only tinkered a little bit and I'm not a fan. As soon as I've asked AI to do anything slightly complicated, it's wrong in weird ways.

I don't think it's any better or faster than copy-pasting stuff from Stack Overflow together to make a prototype, but it's essentially killed Stack Overflow and will keep all new information obfuscated behind the paywall.

I also do a bunch of terraform, and you do NOT want subtle hallucinations in that. It has hallucinated IAM policy on me.

@mayintoronto other=don't like it, but I'm out of work and will do what is needed to get paid.