New post: "We mourn our craft" https://nolanlawson.com/2026/02/07/we-mourn-our-craft/

No comment on this one.

We mourn our craft

I didn’t ask for this and neither did you. I didn’t ask for a robot to consume every blog post and piece of code I ever wrote and parrot it back so that some hack could make money off o…

Read the Tea Leaves
@nolan I think I picked a good time to retire.
@thereisnocat In my darker moments I am not unhappy to be near the tail end of my career.
@nolan @thereisnocat o fortunatos nimium sua si bona norint. ("O how exceedingly fortunate, if only they knew their own blessings.")

@nolan sounds like a serious case of AI inevitabilism!

the AI revolution has been "six months" away for like two years now lol

regardless, we have a choice

@db Maybe I'm wrong! I would love to be wrong on this. 🙂 But I'm pretty sure I'm calling it at the right time.
@nolan thank you so much for this. I’m mourning with you.

@nolan I share much of your discouragement but anticipate a happier continuation of coding than you seem to fear

Your post mentions oil painting, a hobby still enjoyed by millions of people. Last week my wife and I tried a pottery wheel class; it was fun.

I think the fact coding can be immensely fun will prevent it from becoming an archeological curiosity. Whether the *profession* of coding remains fun is unclear, but I'm pretty sure I'll enjoy the *craft* of coding for the rest of my life

@JanMiksovsky Sure yes, I believe a new craft will replace the current one, and that the two will share a lot of similarities. I am not a total doomer, but this post was for me to grapple with my sadness, so I allowed it to be a bit melancholy. I have plenty of colleagues who are totally gung-ho about the new craft and have no sentimentalism for the old craft, but that's just not me. 😅

@nolan I honestly hope you're wrong, that we'll be known as "the people that got scared by AI for a bit," but I very much fear that won't be the case. Though I do not share all your beliefs/thoughts regarding how good AI is or will (ever?) be.

My deepest fear is the "craft" side. My hobby, my dearest love. My GitHub profile was up there showing my work, without any doubt, but now? Now it might be AI. Who knows?
When I wanted to write something and reached an area or a feature that was unconventional, I would dive deep. I would learn everything around it and understand it deeply, to its core. Sleep was optional.

Is that gone? Is that a phase? Am I getting old and tired (even though I've just reached 30)? What will become of my hobby? Of my code? Of my love for the craft...

@dzervas This is exactly where I'm at. I covered this a bit in a recent post (https://nolanlawson.com/2026/01/31/building-a-browser-api-in-one-shot/) where I marveled at how, with one prompt, I could build something that previously would have been a proud open source project for me, with stars and issues and PRs and all the rest, but now… I just don't care. It's disposable. Anybody can create another one on-demand. I don't even know what to do with my open source code anymore.
Building a browser API in one shot

TL;DR: With one prompt, I built an implementation of IndexedDB using Claude Code and a Ralph loop, passing 95% of a targeted subset of the Web Platform Tests, and 77.4% of a more rigorous subset of…

Read the Tea Leaves
@nolan Here's what I don't get. You talk about being able to produce 10x the code using an agent. But in an earlier Lobsters thread (https://lobste.rs/c/bnujxj), you said that it's actually a modest improvement. Maybe it feels more dramatic than it is because, as you said in that same post, you're swimming in the culture.
@nolan I also have junior colleagues who are using Claude Code. But it's not clear to me that it makes them dramatically more productive. Mainly what I notice is that their diffs are sometimes larger than I'd like. It makes me want to be all contrarian and produce precise, surgical code changes that do exactly what's needed with a minimum of actual code.
@nolan I think what I'm really hoping for is that someday we *will* look back on how silly it was that we typed JavaScript syntax with our fingers, but because we chose a different fork in the road and started writing Paul Graham-style super-dense Lisp or somesuch instead. (Edit: Or maybe we'll start writing and using domain-specific code generators like the late, great Pieter Hintjens.) That is, we chose to keep writing precise, unambiguous code, but found a better way to kill boilerplate.
@nolan I realize that most programmers, working at a company at the bottom of the org chart, probably don't have the freedom to choose one of these alternate paths I've mentioned. In my current position, I might. Still, I might freak out my non-technical cofounder if I choose to use some esoteric programming language rather than TypeScript, Rust, or Elixir. But those latter two do have practical, built-in options for metaprogramming.
@matt as @nolan also says in the post, just wait six months; there is plenty of grieving yet to be experienced by people who haven't read the writing on the wall
@numist @matt Yep exactly, this is what I was going to say. I'm fractionally more productive today, but I can already see that my biggest bottleneck is trying to run more than 3 damn instances of Claude Code in my terminal when my web app uses one port and the codebase is shared and my laptop only has so much CPU/RAM. These are all dumb and easily solvable problems with more agents running in containers, and every company is madly scrambling to try to build that.

@nolan @matt Even today your mileage depends on what you're using them for. I don't expect today's tools to beat me at navigating hairy technical/political problems at work, but there's lots of my day to day that's annoyingly mechanical—but just out of reach of a shell script—that I've already been able to completely automate away in the past six months.

Consider where we were in 2014 (and what that implies for 2038): https://xkcd.com/1425/

@numist @nolan @matt that doesn't say much, though. It's hard to tell whether development is following an exponential curve or a sigmoidal curve. Sigmoidal curves are far more common, and if we are on a sigmoidal curve, we don't know where on the curve we are.

We'll only know longer term, perhaps 5-10 years.

For now I'm very happy that an LLM writes a lot of the boring parts of a project (easily 10x). However, for more complex parts it either needs a lot of guidance or I need to step in (slight improvement or slower).
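[Editor's note: the exponential-vs-sigmoid point above can be made concrete with a small sketch. This is illustrative only, not from the thread; the function names and the carrying-capacity value `cap` are made up. Early on, a logistic (sigmoidal) curve is nearly indistinguishable from pure exponential growth, which is exactly why early data can't tell the two apart.]

```python
import math

def exponential(t, r=1.0):
    """Pure exponential growth."""
    return math.exp(r * t)

def logistic(t, cap=1000.0, r=1.0):
    """Logistic (sigmoidal) growth with carrying capacity `cap`.

    For small t this closely tracks the exponential above; later it
    saturates at `cap` instead of growing without bound.
    """
    return cap / (1.0 + (cap - 1.0) * math.exp(-r * t))

# Early values look almost identical; divergence only shows up later.
for t in (0, 1, 2, 3, 10, 30):
    print(f"t={t:>2}: exp={exponential(t):12.2f}  logistic={logistic(t):10.2f}")
```

At t=1 the two curves differ by less than one percent; by t=10 the logistic curve is already bending toward its ceiling while the exponential one keeps climbing, which is the commenter's point about only knowing in 5-10 years.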

@numist @matt @nolan what will change in six months? Like specifically?

@nolan

I dunno. I kinda agree, but I don't see LLMs churning out anything particularly good. They're just text search; they can only *reproduce*. And a lot of the 'comp sci' of the last 30 years is pure hooey, and it's all there in the training data.

When things like Rust appear, and actually run at the speeds that things *should* run at on modern hardware, it reminds me not everybody out there is an idiot.

At the point where software can innovate, we'll be debating consciousness. Again.

@megatronicthronbanks Just the other day a coworker sent me a Rust-based C compiler that Anthropic built, which is apparently capable of compiling the Linux kernel: https://github.com/anthropics/claudes-c-compiler

I understand where you're coming from, but it really feels like game over to me at this point.

anthropics/claudes-c-compiler

Claude Opus 4.6 wrote a dependency-free C compiler in Rust, with backends targeting x86 (64- and 32-bit), ARM, and RISC-V, capable of compiling a booting Linux kernel. - anthropics/claudes-c-compiler

GitHub
@nolan @megatronicthronbanks I don't think this is the slam dunk Anthropic seems to think it is. This isn't vibe coding; it's a carefully constructed workflow (made by humans), then passed through an ocean-boiling amount of tokens to produce a subpar result that's basically unmaintainable. The fact that it kinda works is impressive (I guess?), if you value magic tricks more than maintainable software that works reliably.
@nolan @megatronicthronbanks Also, the ocean-boiling bit bothers me.

@nolan @megatronicthronbanks the problem is that there are a lot of C compilers out there; it's not surprising that Claude, having ingested many of them in its training data, is able to transliterate a compiler into Rust.

It's more interesting to see how good Claude is on novel problems or how well it can come up with novel compiler optimizations, etc.

@danieldk @nolan As far as I can tell, novel problems for LLMs and GANs = no.
@megatronicthronbanks @danieldk That is why I don't share the resignation of @nolan. I don't see LLMs coming up with things that require my level of experience and expertise. However, I'll happily let one create the umpteenth model, view, or controller for me. And even then, I regularly have to intervene in a senior-dev role, telling it to look at that thing again and come up with a cleaner solution.
Which makes me think that it's not the veterans who'll suffer the most.
@nolan @dgregor79 Wonderful piece, Nolan. 🙏🏼

@nolan
1) there's no way that the people using the mockup-generating machines actually understand their mockup codebases to anywhere near the degree that people who actually spent time thinking about the problems and their solutions do. The mockups are therefore unmaintainable.

2) if we could ban asbestos, then we can ban these horrible destructive machines. We can organize, and we can have them all dismantled, and their DRAM & CPUs can be put to less-destructive use.

@nolan
3) we have no convincing reason to tolerate defeatism anymore. With examples like the Mamdani administration, we can all see that there is no excuse.

https://sfba.social/@vij/116014712128853121

We *can* switch off the orphan-shredding machine, and we must.

@nolan i've never made this request to anyone before in my life, but... in this moment, defeatism like that amounts to advocating compliance in advance. And it may actually convince some people to give up before they even think about the necessity of resistance. And that could result in a weaker resistance. For this reason, i sincerely but respectfully urge you to consider deleting that blog post.
@JamesWidman I get where you're coming from. To be clear: this post is not really a plea to anyone in my audience to stop resisting, become a "collaborator," etc. It's really a conversation between me and myself from a ~year ago. You can make of it what you will.
@nolan @JamesWidman Yourself from a year ago no longer exists. All that matters now is what effect the posts have on other people today. And I think these posts do have the effect of discouraging people from resisting, whether you meant it that way or not. You've said that you didn't ask for this future, and you're not celebrating it. Wouldn't it be better, then, to join your efforts with the people that are fighting it, or at least not harm that fight by proclaiming inevitability?
@matt @nolan Yeah; i mean, the use of the language of grief, from the title to the last sentence, pretty strongly implies that the reader should eventually reach the final stage in the stages of grief.

@matt @nolan ...
But the pieces of hardware that enable gen-a.i. (particularly the accelerator chips, or at least, data centers with huge numbers of them) are *not* the same thing as the grim reaper. They are *physical* things. They can be switched off, they can be scrapped, and they can be regulated out of existence. Previous generations did that for other physical things, and we can do it too.

anyway,
https://mastodon.social/@JamesWidman/116032953161658413

@JamesWidman @matt Matt, I cannot credibly cheer on the resisters because I'm no longer one of them. Believe me, I was one of the most annoying anti-AI voices inside of Salesforce (anyone who worked with me will attest to this), but I just don't have the fight in me anymore. I see it as a lost cause.

I admire people who fight for what they believe in, though, so I think it's the job of the anti-AI crowd to persuade the rest of us, push to regulate LLMs à la James above, etc.

@nolan I'm sorry if I pushed too hard. The truth is, I'm not sure if I have the fight in *me* either. A screed like the one I linked yesterday morning can make me feel like I should, but we'll see how long it lasts.
@matt Hah, no worries! You're my friend, I don't mind you pushing back. Maybe someday I'll be embarrassed by what I wrote; it wouldn't be the first time. 😛
@nolan As a translator, I feel the pain...

@nolan reading the comments on your blog, and to a lesser extent here, I’m struck (again) by how many experienced engineers keep their heads in the sand about how good this technology is, and how fast it’s improving.

“It’s not a better coder than me.” “We’ll always have to review their code.” Today - maybe. Tomorrow? Not a chance.

@gregr @nolan I had exactly the same thought. People don’t believe it. Or they have a bad experience and assume the rest of us are making it up.

It does depend on what tools you use. What language you code in. Your skill in prompting.

Using a sewing machine is still a skill.

@richard5mith @gregr I understand it only because I was there with them until a year ago. At this point, I can't deny the evidence of my eyes. That doesn't make it any less painful.

@nolan @richard5mith @gregr Right with you; tool use a year ago changed everything. Obviously more since then.

If this technology had launched with the 2025 versions of the product instead of the 2023 ones, there would be a lot fewer very smart people being willfully ignorant.

I don't say this as a wholehearted blind endorsement, but nevertheless I believe it to be true.

It is painful, for many reasons.

@dotsie @richard5mith @gregr Yep, exactly, although in retrospect the 2023 version was already quite capable. I refused to believe it until it became clear that you just needed to chain enough of these dumb things together to do what I do. To their credit, I had colleagues who saw this much earlier than me.
@nolan I tried to leave a comment on this, but WordPress seems to eat my comments, so I'll send it here:

I still use Pinafore as my main fediverse client. Not Semaphore, not Enaphore, and not any other fork that was made since. Admittedly, part of why I still use it is because I don't want to go through the admittedly easy process of logging into my accounts again. But the other reason is sentimentality.

Pinafore, in my opinion, is the best fediverse client. I think that because it is the product of being very mindful about what should or shouldn't be used in the code, what features should or shouldn't be implemented, and even just the simple fact that the person who worked on it cared. You showed that you cared by maintaining it. By adding accessibility features and performance improvements. I think you showed care through your labor by making Pinafore run well on KaiOS. I still remember that; I think that was cool as hell.

I still remember the first of just two small commits that I contributed to Pinafore. It allowed you to switch instances more quickly by adding a "star" button to each item in the instance list. You took the time to review it, even fixing my code after merging, and made sure I was credited in the release notes. I've contributed to a number of FOSS projects, usually drive-by commits, and very few of them have been as welcoming as how you managed Pinafore. That sticks with a person.

Sure, maybe in six months I could vibe-code my own Pinafore. Maybe I could do that now. But I would never have met you, and you would never have met me. The craft gave us the opportunity to work together, however briefly, on something shared. I think that matters.

I got to see what kind of developer I want to be in you. I want to value performance and accessibility. I want my open source projects to be welcoming and enriching to whoever contributes. I want to learn from others what their wants and needs are for software, and to work with them to make that shared vision real.

I just don't see how we'll ever be able to achieve this if all we're doing is toiling, quietly and alone, in front of an LLM's text prompt. I don't know how we can ever learn what is valuable, or what we should even type into that text prompt, if we haven't already done some of that menial labour ourselves. I don't know how we could ever learn what matters if we don't talk to each other, work with each other, raise each other up, etc., etc.

All I really want to say to you is this: don't lose sight of what code actually is. It is a model of reality. It's a representation of our wants and needs. You can't make a model of reality if you don't know what reality is, and we only have two eyes and two ears, and there's much more to reality than we can ever observe or experience ourselves. We have to learn from each other, learn how to listen, learn how to distill all those wants down into code, into those sets of instructions for the silicon to run.

That is the craft. And you can't do it alone.
@suricrasia I replied on my blog, and I really appreciated your kind words! Our humanity will endure, and I will try not to lose sight of it. Thank you for the lovely way you expressed it.

@nolan

In a word: no.

I refuse to give up when the research is coming back saying that use of these tools degrades the quality of the work, that they cause brain damage which may be permanent, that they rely on theft and disregard for provenance.

If that's because I'm also a 40-something, so be it.

@soph @nolan
I'm not yet a 40-something, and I agree with every word.

@nolan My experience with this is it depends on how the developer viewed what they did.

For some it was a craft as you describe it. Finding the perfect solution.

For others, it was about delivering a product vision. Solving customer problems.

Those in the latter camp (like me) are less bothered by this future we now live in. Because it was never about the code. It was about the result.

I used to type in listings from magazines. I didn’t enjoy it. I just wanted to play the game.

@nolan I don't know, man. "Wait six months" has been a common refrain on HN and it has historically not aged well.
@varx Honestly, you don't even need 6 months. The future is here; it's just not evenly distributed.

@nolan I can't speak to what I haven't seen, and I can't take people's word for this stuff because there's a *massive* amount of hype.

Just endless waves of dubious benchmarks, demos that turn out to be fake or broken, reporting that isn't actually fact-based.

So, I can only speak to what I've seen.

And what I've seen ain't good.

@nolan Importantly, this was also the situation a year ago, and a year ago people also said "just wait six months". And I did, and it's fundamentally the same situation.

The agents can produce more code, larger projects. But that's actually worse because that's even harder to fix and maintain.

@varx I get the skepticism; there is a lot of junk and bunk out there. My experience comes from working at a small startup where people are already pushing the boundaries of multi-agent orchestrations and whatnot.

I tried to cover this in a recent post; I think my experiment is pretty conclusive. Honestly you could try the experiment yourself with newer models or more loops and probably make the number shoot up: https://nolanlawson.com/2026/01/31/building-a-browser-api-in-one-shot/

@nolan I read that post (I follow the RSS feed) but there's a really important point that you don't seem to cover:

Is that code usable?

It passes a lot of tests. Is it good enough to use in a real browser? (Functionality, performance, security.) Is it easy enough to work with that you could get it into good enough shape to use? Is it maintainable? *How do you know?*

@nolan A lot of my work has been in security. One of the things a lot of people don't appreciate is that security is largely about what "features" *don't* exist. For example, the feature that lets an attacker read your email. 😃 You have to try to prove that negative.

This is important because a lot of people evaluate software by taking it for a test drive and seeing that the happy path works. But that can never work for security.

The way you write secure software is by having a secure development process; by developing and communicating threat models; by recognizing dangerous patterns and guiding the software around them.

LLMs are notoriously bad at all of this. I don't think this will be better in six months.

@nolan The strongest steelman position I can make for the use of LLMs is that a senior developer can use them for fast feedback and maybe brainstorming. (As long as they're happy to accept a list of serious downsides and externalities.)

When I see junior devs use them, the LLMs lead the dev down the garden path, creating more and more complicated workarounds where a senior dev would back up and take a fundamentally different approach.

And when I see senior devs treat them as a team of junior devs that can independently produce a body of work, well... that's not a good way to work with actual junior devs! You have to carefully review their work, do mentoring, etc. There are somewhat analogous things you can do with agents but I don't have the sense that this is what people are really doing.

@varx The Web Platform Tests are a pretty high bar of quality. If you read through them, most of them are about bizarre edge cases that, yes, include security, e.g. https://github.com/w3c/IndexedDB/issues/476

The code is probably awful when it comes to maintenance, reusability, etc., but I'm starting to wonder if any of those values matter anymore.

There are of course exceptions, e.g. a common joke in W3C circles is about the "hit testing spec" that doesn't exist, but WPTs are otherwise pretty exhaustive.

Transactions should be marked as inactive during key serialization · Issue #476 · w3c/IndexedDB

Chromium (bug), Gecko (bug), and WebKit (bug) all make transactions inactive during structured serialization of object values. The spec addresses this in the "clone a value" algorithm (see also str...

GitHub
@nolan @varx
How could maintainability not matter?
On any long-term project that evolves, ease of change is IMO one of the most important factors.
That ease depends on code quality and complexity.
Unless genAI keeps improving faster than the complexity of the code grows, indefinitely, you hit a wall at some point, left with terrible code that no one is willing to touch.
I cannot imagine how this could stop being important.