Working software developers of the Fedi, what's your relationship with AI coding (like Claude Code)?

#poll #askFedi #software

Don't like it. I don't use it for work.
Don't like it. I have to use it for work.
It's complicated. I don't use it for work.
It's complicated. I have to use it for work.
It's complicated. I happily use it for work.
I like it. I don't use it for work.
I like it. I have to use it for work.
I like it. I happily use it for work.
Other, comment below.
Poll ends at .
Not a developer. Want to follow this poll and see when it's over.

I wonder what the distribution will look like if we get to 100-200+ votes.

My hypothesis is that the more casual Fedi users are more likely to use AI coding in some way.

@mayintoronto I'm just absolutely astounded that there's this many professional coders who *aren't* required to use it in some form for work yet.

The enterprise-grade/enterprise-cost tools are far better than the basic stuff.

We have a monthly per-dev credit budget, so literally on a prompt-by-prompt basis I have to decide which model to send it to, based on what I'm doing and how much budget I have left.

Claude Opus 4 is definitely the best. If I get all the context loaded right and give it an essay-length prompt full of requirements, it will usually produce something I can send out for code review with only minor corrections. It is also by far the most expensive.

Claude Sonnet and Claude Haiku are not worth using.

GPT-5 Codex High is next best and gets you 90% of what Claude does but at 1/3 the cost. I usually reach for it as my primary model.

GPT-5 Codex Medium is half the cost of High, and I use it for simpler tasks or for fixing up other models' minor mistakes.

The whole Gemini family is infuriating. It often does the right thing on the first prompt, but when it gets things wrong, it does so in the most non-obvious way, and once you see it, it absolutely refuses to take correction.

@lackthereof I'm as shocked as you, honestly.
@lackthereof It probably depends on where you work at the company. I'm on an infrastructure team of five engineers at Target. We're small and stable enough that most of the company ignores us until they need a new announcement for when people call a store. Other teams here seem to get a harder push to use AI.

@mayintoronto

pol.

pole.

Pole.

polle.

@mayintoronto I don't like it. I push back against using it when I can. I can't always since work is really keen on shoving it into every part of the company.

My teammates have few to no issues using it, and my manager likes to use it. Fortunately, my manager is pretty lenient about my non-use as long as I get my work done.

For a project I'm working on, I have given in to his request that I use AI to convert our Chef recipes into Ansible playbooks. This is mainly due to having a hard deadline controlled by another team.

I'm manually reviewing every line it generates, which is standard practice in the company.

@bryanredeagle "Manually reviewing every line it generates" sounds like a healthy practice, albeit a tad tedious.

How's your experience been so far?
Every time I do anything with these tools, I get lured in a little deeper. I'll never get a dev to build the lightweight internal tools I want to build. So yeah.

@mayintoronto It's mixed. I like writing code, and developing. It's the fun part of my job. So I'm coming at it from the angle of it making the fun part tedious and boring. I'm also specifically using Copilot in VS Code since that's what we're allowed to use currently.

So my work is converting whole Chef Cookbooks to Ansible Roles. When I've asked it to convert the whole thing, it doesn't finish entirely. I suspect its context is getting filled up and it moves on to the next file before it finishes the current one. I get a lot of half-complete playbooks.

When I ask it to convert individual recipes, it does a pretty good job. It's using some older Ansible formatting, but that's not a big issue to me. For the most part, the output was correct and how I expected it to be.
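To give a sense of what that recipe-by-recipe conversion looks like (a hypothetical sketch, not from the actual cookbooks; nginx is just a stand-in package):

```yaml
# Chef recipe being converted (hypothetical):
#   package 'nginx'
#   service 'nginx' do
#     action [:enable, :start]
#   end
#
# Rough Ansible equivalent:
- name: Install nginx
  ansible.builtin.package:
    name: nginx
    state: present

- name: Enable and start nginx
  ansible.builtin.service:
    name: nginx
    state: started
    enabled: true
```

The fully-qualified `ansible.builtin.*` names are the newer convention; the "older Ansible formatting" I mentioned is things like bare `package:` / `service:` module names, which still work.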

@mayintoronto I also asked it (kind of on a whim) to create a bash script that would download and build a piece of software, Asterisk, into an Ubuntu package.

To its credit, the script was correct. It didn't work, but that was due to a shortcoming in the tool it chose to use called checkinstall.

@mayintoronto My biggest gripe with Copilot is with its integration with VS Code. I kept it running while I edited its output, and wrote an Ansible Inventory module as well.

While you type, it attempts to be helpful. The way I write code is very similar to how I write prose. I type out my train of thought, and then go back to edit. This is, generally, because I know what I want to do and how to do it.

Copilot will try to be helpful and auto-complete code for you. Sometimes it's right, sometimes not. If you hit tab, it'll output its suggestion. If you hit escape, it'll remove the suggestion. It slowed down my typing because I had to keep dismissing it. I plan on finding the setting for the live suggestions on Monday and turning it off. The code it suggests is not technically wrong, but not always what I'm intending to write. I'm going in a different direction than what it thinks I am.
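Assuming the setting names haven't changed, I think the knobs I'm after look something like this in settings.json (my best guess, untested):

```json
{
  // turn off the ghost-text inline suggestions entirely
  "editor.inlineSuggest.enabled": false,

  // or disable Copilot only for certain languages, keeping it elsewhere
  "github.copilot.enable": {
    "*": true,
    "yaml": false
  }
}
```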

@mayintoronto I've never seen it produce completely correct code. But when used to analyze code, it has found bugs, often real ones. Sometimes it finds imaginary bugs, and it can still miss bugs too. I basically only skim its output because it's always too verbose.
@enobacon Yeah, I'm firmly in the camp of "it's complicated". I cannot avoid it in my non-dev role. Code analysis is one of the things I've heard my devs rave about.
@mayintoronto I'm far from raving about it, but running ten variations of grep and sorting the red herrings from the actual foxes is pretty helpful. This is quite a bit of hybrid code mixed with LLM to generate the "what if" threads it chases and weigh how likely each is to answer the query. Beyond the AGI hype and billionaires stealing centuries of creative work, there's a statistical probability thing that can actually be useful.

@mayintoronto @enobacon Same, I'm in the "it's complicated" camp. At the same time, I'm using it for work since it's required, and I have a feeling this will be part of the performance review at some point.

I never don't bring up the environmental and human impact of our companies' LLM use

@mayintoronto I've learned about the underlying building blocks over time, and there's some very interesting technology underneath, but the implementation, hype, and results of the current tools are misaligned with what makes things better for me. Worse, they're displacing tools which objectively work better (like code suggestions, which used to be grounded in actual APIs but are now LLM-generated and often fail; LLMs need to retry, but those old code suggestions compiled every time).

@mayintoronto My relationship with those tools is complex.

Whoever uses them is accountable for every character that’s written. After all, it’s the user’s identity that’s tied to the code, not the genAI’s.

@EdwinG There are so many layers of abstraction away from a single individual's code that the concept of "accountability" feels weird.

@mayintoronto It’s indeed weird.

It comes from the reality that as an operator, you need to understand what your tool does and what results it provides.

And a user should also bear ecological and ethical accountability for their use.

@mayintoronto @EdwinG I tend to agree with Edwin. The commit/PR is under a single person’s name.

So if you push garbage, generated by AI or not, it’s on you.

If you push code and lines you don’t understand, it’s on you. And that happened with Stack Overflow copy-pasting before, too.

@jerome @mayintoronto Where I work, PRs are usually in multiple names.

Those that review and approve also have their name on it and are also accountable - albeit differently.

@mayintoronto I have tried it, and I have some very mixed feelings.

Some of its capabilities are amazing. It helped me find a bug in a third-party library that I use in my project by making a good hypothesis, decompiling the library, and proposing a hotfix (that worked).

I also use Claude Code to develop a project for a friend (a system for running his company), and it's mostly ok, but occasionally very dumb. But it helps to work with infrastructure that I think mostly sucks.

Generally I like the conversational approach to systems development, and I never would have dreamed that such systems would appear (although I had already developed a habit of having conversations with myself during systems development a few years ago).

But I also have some concerns. Some of them are about the growing imbalance of power between people and corporations (especially after reading Zuboff's "Surveillance Capitalism"), and also the fact that people who build systems have no idea how they work. But in this regard, I also have a feeling that programming languages have failed us badly. (One example of this is SQL, which I think tried to be what LLMs are, a 'natural' conversational system for interacting with the computer, but sucked so bad that many simple English sentences when translated to SQL are almost impossible to read and write.)

So my perception is that those systems (if you account for their limitations) are a very nice compression of what Personal Computers and the Internet already enabled (which I think is quite amazing in and of itself), but I also feel that we would be better off if we developed better programming systems (by which I primarily mean high fidelity visualizations of various aspects of working systems)

@PaniczGodek Well said. The centralization of production power is what makes me nervous.

The whole "one model to rule them all" thing is kind of ridiculous. Purpose built tools will always win.

@mayintoronto What my attitude boils down to: I'm profoundly frustrated that practically no one wants to pay me to work on the tech which excites me. Practically no one even wants to discuss the tech which excites me!

Instead, all anyone seemingly wants to talk about (positively or negatively) are these chatbots, which are the exact opposite of what I got into technology for.

I'm on the anti side (some nuance which routinely gets misinterpreted), but more than that I'm just exhausted!

@alcinnz I'm tired too. I talk about it a lot because it's about to be central to my job to participate in and manage it. Processing my feelings. 😔
@mayintoronto do not like.
*encouraged* to use for work.
@mayintoronto there will come a point where I am probably going to be asked to use it. But at least for now, my team is at least partially insulated from the pressure to use AI.
@wydamn I'm glad you're insulated for now.
@mayintoronto Don’t like it, for myriad reasons. Use it for Googling because Google sucks now. Reluctantly use it for work at times because, like all technologies, it’s hard to stay in work without embracing what the job market wants, sadly. End-stage crapitalism, unfortunately.
@mayintoronto not a developer of applications personally. More infra as code and scripts.
@mayintoronto My coworkers use it. I don't. Currently have a mostly vibe-coded PR to review next week which I'm dreading.
@aburka @mayintoronto I hear it's hard work. Pace yourself! Condolences.
@mayintoronto i don’t code … anything… claude or naude
@mayintoronto as a retired developer, I am FOMOing hard on this survey
@jimfl My polls are not kind to retirees. 
@mayintoronto at any rate I’m not FOMOing about LLM-driven development

@mayintoronto I've only tinkered a little bit and I'm not a fan. As soon as I've asked AI to do anything slightly complicated, it's wrong in weird ways.

I don't think it's any better or faster than copy-pasting stuff together from Stack Overflow to make a prototype, but it's essentially killed Stack Overflow and will keep all new information obfuscated behind a paywall.

I also do a bunch of Terraform, and you do NOT want subtle hallucinations in that. It has hallucinated IAM policy on me.