Working software developers of the Fedi, what's your relationship with AI coding (like Claude Code)?

#poll #askFedi #software

Don't like it. I don't use it for work.
Don't like it. I have to use it for work.
It's complicated. I don't use it for work.
It's complicated. I have to use it for work.
It's complicated. I happily use it for work.
I like it. I don't use it for work.
I like it. I have to use it for work.
I like it. I happily use it for work.
Other, comment below.

@mayintoronto I have tried it, and I have some very mixed feelings.

Some of its capabilities are amazing. It helped me find a bug in a third-party library that I use in my project by forming a good hypothesis, decompiling the library, and proposing a hotfix (which worked).

I also use Claude Code to develop a project for a friend (a system for running his company), and it's mostly OK, though occasionally very dumb. Still, it helps when working with infrastructure that I think mostly sucks.

Generally I like the conversational approach to systems development; I would never have dreamed that such systems would appear (although I had already developed a habit of having conversations with myself during systems development a few years ago).

But I also have some concerns. Some of them are about the growing imbalance of power between people and corporations (especially after reading Zuboff's "Surveillance Capitalism"), and about the fact that people who build systems have no idea how they work. But in this regard, I also have a feeling that programming languages have failed us badly. (One example is SQL, which I think tried to be what LLMs now are, a 'natural' conversational system for interacting with the computer, but sucked so badly that many simple English sentences, when translated to SQL, are almost impossible to read and write.)
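To make the SQL point concrete, here's a small sketch (the table and data are invented for the example). A request that takes one plain English sentence, "for each customer, show their second-most-recent order", needs a window function wrapped in a subquery:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, placed_at TEXT, item TEXT);
    INSERT INTO orders VALUES
        ('alice', '2024-01-05', 'keyboard'),
        ('alice', '2024-02-10', 'mouse'),
        ('alice', '2024-03-01', 'monitor'),
        ('bob',   '2024-01-20', 'laptop'),
        ('bob',   '2024-02-15', 'dock');
""")

# "For each customer, their second-most-recent order" in English;
# in SQL it becomes: rank each customer's orders by date descending,
# then keep only rank 2 in an outer query.
rows = conn.execute("""
    SELECT customer, item FROM (
        SELECT customer, item,
               ROW_NUMBER() OVER (
                   PARTITION BY customer ORDER BY placed_at DESC
               ) AS rn
        FROM orders
    )
    WHERE rn = 2
    ORDER BY customer
""").fetchall()

print(rows)  # → [('alice', 'mouse'), ('bob', 'laptop')]
```

The query is far from unreadable to a practitioner, but it illustrates the gap: the English sentence states the intent directly, while the SQL has to be assembled inside-out.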

So my perception is that these systems (if you account for their limitations) are a very nice compression of what personal computers and the Internet already enabled (which I think is quite amazing in and of itself), but I also feel that we would be better off if we developed better programming systems (by which I primarily mean high-fidelity visualizations of various aspects of working systems).

@PaniczGodek Well said. The centralization of production power is what makes me nervous.

The whole "one model to rule them all" thing is kind of ridiculous. Purpose-built tools will always win.

@mayintoronto I think that, looking more broadly, "centralization of production power" doesn't adequately capture the main issue with surveillance capitalism.

(Frankly, I think that Zuboff's book is one of the most important books of the 21st century.)

People don't use LLMs only (or even primarily) for coding. They often share very personal information about themselves (and people around them), because they don't feel judged.

As a result, those who control these models can make a "Google Street View" of people's minds, accessible only to the "cognoscenti".

In turn, corporations have more and more power over people (in every aspect of their lives), and - because corporations slip away from democratic control - people consequently have less and less power over themselves (which is why the term "technofeudalism" is probably more adequate than "capitalism").

As to purpose-built tools, I'm not sure I entirely agree. I think LLMs owe a lot of their capabilities to their "general intelligence", and it generally shows: more advanced models are more capable than less advanced ones.

But it also turns out that LLMs themselves are eager users of human-made frameworks, as those frameworks often save them time and help them make fewer mistakes.