"Marx identified four dimensions of alienated labor: separation from the product of one's work, from the act of working itself, from other people, and from one's own human capacities. In the context of LLM coding assistants, the second of these is doing most of the work." https://writings.hongminhee.org/2026/03/craft-alienation-llm/

I'm not sure I agree. Or, at least, the loss seems multi-pronged, and it's difficult to assess which of the four dimensions is most involved.

For example, vibecoding seems to shift a lot of the emphasis in software production to code review. The LLM agent produces code, and a human programmer is meant to be extra diligent in checking that code to ensure that it's minimally garbage. Code review has always been a part of collaborative software design, but previously, you were reviewing the work of a human programmer. That's a form of social connection — you see how that person addresses problems, bits of their personality in how they comment or arrange code. At points in the process, you likely discuss the project with them — direct human-to-human contact. If AI is handling the initial production of code, then much of that connection is lost. Not just alienation from the act of working, but also from other people.

"The tension between craft and efficiency doesn't disappear if you remove capitalism from the picture. LLM coding assistants produce faster results whether anyone is being paid or not, and any community, however it's organized, will eventually have to reckon with what to do with that speed difference."

That tension might remain in non-capitalist societies, but I'm skeptical that a non-capitalist society would ever create something like the multi-purpose LLMs that are causing these disruptions. The sheer amount of capital outlay required to build a monstrosity like OpenAI is so enormous that even our extensive capitalist system has required massive deformation to keep it operative.

"This is close to what Marx imagined machinery could do in conditions other than capitalism: relieve people of repetitive labor so they could do something more fully human with the time."

That may well be what Marx imagined, but there's a value judgment in there about what counts as "human," and we'd do well to question it. By analogy, exercise is repetitive, so why not rely on Ozempic and steroids to address physical fitness so that we have more time for creative and deliberative pursuits? But maybe repetitive labor in itself isn't actually less human. Maybe it can be, in the right conditions, an expression of distinctly human concerns, like the parent who changes a thousand diapers in the act of caring for their child. And maybe some of the methods we strike upon to reduce such purportedly less human labor do more to alienate us than the time we gain does to make us more fully human.

Another way to look at the role that repetition plays in human life is by paying attention to practice. Music is presumably one of the "more fully human" activities Marx and Minhee have in mind when they talk about replacing repetitive labor with machinery, but repetitive labor is how people become capable musicians. They practice scales, chords, technique, etc. It's repetitive, often tedious, and can go on for years, decades, a lifetime. And, of course, techbros say we should just… stop learning to make music, because now AI can do all of the difficult, repetitive parts for us. Ditto for learning to draw, or write, or play games, and so on, ad nauseam. Will taking all of that away finally make us more fully human?
Anyway, I think the interesting part of that blog post is not its Marxist analysis of LLM use, but rather that it uses Marxist analysis to carve out a rationale for using LLMs. Here's this huge, overtly capitalist project for concentrating capital and alienating labor, but if we rotate Marx's Capital just so, it's possible to extricate the tool from the circumstances of its production and deployment and see it as a pathway to a more fully human existence. Right? RIGHT?!
If it's not clear from the rest of this thread, I'm deeply suspicious of the idea of "fully automated luxury communism."
@lrhodes I believe in partially automated reasonably high quality communism
@burnoutqueen @lrhodes except that automation always feeds capital and devalues labor. The luddites understood that bargain. The problem is structural, not necessarily the technology itself. AI seems like a technology that requires an exceptionally high level of restraint — currently nowhere on the horizon!

@janinevigus @lrhodes

The whole point of communism is to abolish capital. The fact that you don't understand this perfectly explains why you glorify the luddites.

@lrhodes The problem with highly elaborate philosophies -- including capitalism -- is that the right person, given enough time, navel-gazing, and a thesaurus, can get them to say pretty much whatever they want them to.

@lrhodes I'm skeptical of all of these speed boost claims. I have yet to see that actually happen in the wild, and studies I've read have demonstrated the opposite.

What I see in my own workplace is folks using AI taking 8–12 hours to do what should be a twenty-minute task.

@lrhodes Kinda. Compare what's happening with the Chinese models — not a non-capitalist example, but a very differently shaped one.

But socialist economies have often invested in management intelligence mechanisms. Look at the Chilean cybernetics experiments. There's a fair bit of precedent here for the attitudes that would engender it.

That said, I think a _sensible_ ecosystem would be investing massively in solar right now to fuel AI growth.