Working software developers of the Fedi, what's your relationship with AI coding (like Claude Code)?

#poll #askFedi #software

Don't like it. I don't use it for work.
53.4%
Don't like it. I have to use it for work.
12.4%
It's complicated. I don't use it for work.
5.3%
It's complicated. I have to use it for work.
10.7%
It's complicated. I happily use it for work.
7.6%
I like it. I don't use it for work.
1%
I like it. I have to use it for work.
0.3%
I like it. I happily use it for work.
6.6%
Other, comment below.
2.6%
Poll ended.
Not a developer. Want to follow this poll and see when it's over.
Poll!
76.5%
poll.
23.5%
Poll ended.

I wonder what the distribution will look like if we get to 100-200+ votes.

My hypothesis is that the more casual Fedi users are more likely to use AI coding in some way.

Update:
- Started at ~28% reporting some sort of AI coding use at ~60 votes.
- 36% at 336 votes.

@mayintoronto I'm just absolutely astounded that there's this many professional coders who *aren't* required to use it in some form for work yet.

The enterprise-grade/enterprise-cost tools are far better than the basic stuff.

We have a monthly per-dev credit budget, so on a prompt-by-prompt basis I literally have to decide which model to send it to, based on what I'm doing and how much budget I have left.

Claude Opus 4 is definitely the best. If I get all the context loaded right and give an essay-length prompt full of requirements, it will usually produce something I can send out for code review with only minor corrections. It is also the most expensive by far.

Claude Sonnet and Claude Haiku are not worth using.

GPT-5 Codex High is next best and gets you 90% of what Claude does but at 1/3 the cost. I usually reach for it as my primary model.

GPT-5 Codex Medium is half the cost of High; I use it for simpler tasks or for fixing up other models' minor mistakes.

The whole Gemini family is infuriating. It often does the right thing on the first prompt, but when it gets things wrong it does so in the most infuriating, non-obvious way, and once you see the mistake, it absolutely refuses to take correction.

@lackthereof @mayintoronto A lot of actual programming takes place at real, serious businesses actually making products and getting stuff done, the complete opposite of VC clownery flirting with investors or web-monkey shops selling shovels to those types.

@dalias @lackthereof A lot of serious businesses are adopting it too, across the size spectrum. As silly as the Claude Code source looks, there are some super legit use cases in even legacy enterprise-type software. (Especially while the prices are hyper-deflated to get people hooked on it.)

Is it still legit when they have to charge profitable prices? Probably not.

@mayintoronto @dalias @lackthereof - "...legit use cases." And I'm done here. Bye.
@mayintoronto @dalias @lackthereof Making sure that enterprise software becomes even more enterprisey

@mayintoronto @lackthereof I'm highly skeptical of the claim that there are "legit use cases".

If you're talking about using the models to find patterns correlated with bugs/vulns, that indeed is a good use for statistical models. But having a chatbot interface that gives randomly perturbed answers, rather than a deterministic grep-for-bugs built on the statistical model, is just gratuitous badness aimed at exploiting cognitive weaknesses in the user to sell your product, not making best use of the tech.

If you're talking about anything generative, even boilerplate, I don't buy that it's legit. Even in boilerplate, you have to *check* that the slop it vomited is actually correct. You could write scripts to do that, but you could just as easily write the scripts to generate the boilerplate, and have it be deterministic and reproducible and non-planet-burning.

@dalias @lackthereof No, it's doing a rough pass scanning legacy code, answering questions about the 300 custom packages you've inherited when no one left has the institutional knowledge to know how they're all connected.

Doesn't need to be perfect. There are so many cases where you don't necessarily need the right answer, but it gets you on the right path to building the right things.

@mayintoronto @lackthereof Ah, so you're thinking about it from a standpoint of "summarization", but where you're not trusting the summary but using it as a basis to start trying to understand something that's otherwise overwhelming to look at?
@dalias @lackthereof Yeah, among other things. Fully aware that it's one hell of a slippery slope.

@mayintoronto @lackthereof Yes, it very much is. The summaries you'll be reading are exactly that form of cognitohazard.

For things like this I *really* wish there were a mode on these things to *not* have them pretend to be human, but to output very mechanical looking documents to remind the reader that they are not conversing with a thinking being.