New post: "We mourn our craft" https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
No comment on this one.
@nolan sounds like a serious case of AI inevitabilism!
the AI revolution has been "six months" away for like two years now lol
regardless, we have a choice
@nolan I share much of your discouragement but anticipate a happier continuation of coding than you seem to fear
Your post mentions oil painting, a hobby still enjoyed by millions of people. Last week my wife and I tried a pottery wheel class; it was fun.
I think the fact coding can be immensely fun will prevent it from becoming an archeological curiosity. Whether the *profession* of coding remains fun is unclear, but I'm pretty sure I'll enjoy the *craft* of coding for the rest of my life
@nolan I honestly hope you're wrong, that we'll be known as "the people who got scared by AI for a bit", but I very much fear that won't be the case. That said, I don't share all your beliefs/thoughts about how good AI is or will (ever?) be.
My deepest fear is the "craft" side. My hobby, my dearest love. My GitHub profile was up there showing my work, without any doubt, but now? Now it might be AI. Who knows?
When I wanted to write something and reached an area or a feature that was unconventional, I would dive deep. I would learn everything around and understand it deeply, to its core. Sleep was optional
Is that gone? Is that a phase? Am I getting old and tired (even though I've just reached 30)? What will become of my hobby? Of my code? Of my love for the craft...
@nolan @matt Even today your mileage depends on what you're using them for. I don't expect today's tools to beat me at navigating hairy technical/political problems at work, but there's lots of my day to day that's annoyingly mechanical—but just out of reach of a shell script—that I've already been able to completely automate away in the past six months.
Consider where we were in 2014 (and what that implies for 2038): https://xkcd.com/1425/
@numist @nolan @matt that doesn't say much, though. It's hard to tell whether development is following an exponential curve or a sigmoidal one. Sigmoidal curves are far more common, and if we are on one, it's hard to say where on the curve we are.
We'll only know longer term, perhaps 5-10 years.
For now I'm very happy that an LLM writes a lot of the boring parts of a project (easily 10x). However, for the more complex parts it either needs a lot of guidance or I need to step in (a slight improvement, or even slower).
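A quick sketch of the exponential-vs-sigmoidal point (hypothetical numbers; the growth rate `r` and carrying capacity `K` are assumed values, not from the thread): early on, the two curves are nearly indistinguishable, which is why it can take years to tell which one you're on.

```python
import math

def exponential(t, r=1.0):
    # Pure exponential growth: exp(r*t).
    return math.exp(r * t)

def logistic(t, r=1.0, K=1000.0):
    # Logistic (sigmoidal) growth with carrying capacity K, starting at 1;
    # while far below K it closely approximates exp(r*t), then flattens.
    return K / (1 + (K - 1) * math.exp(-r * t))

for t in [0, 1, 2, 3, 6, 9]:
    print(f"t={t}: exp={exponential(t):8.1f}  logistic={logistic(t):7.1f}")
```

Through t≈3 the two curves agree to within a couple of percent; by t=9 the exponential has passed 8000 while the logistic has flattened out below its capacity of 1000.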
I dunno. I kinda agree, but I don't see LLMs churning out anything particularly good. They're just text search, they can only *reproduce*, and a lot of the 'comp sci' of the last 30 years is pure hooey, and it's all there in the training data.
When things like Rust appear, and actually run at the speeds that things *should* run at on modern hardware, it reminds me not everybody out there is an idiot.
At the point where software can innovate, we'll be debating consciousness. Again.
@megatronicthronbanks Just the other day a coworker sent me a Rust-based C compiler that Anthropic built, which is apparently capable of compiling the Linux kernel: https://github.com/anthropics/claudes-c-compiler
I understand where you're coming from, but it really feels like game over to me at this point.
@nolan @megatronicthronbanks the problem is that there are a lot of C compilers out there, it's not surprising that Claude, having ingested many of them in the training data, is able to transliterate a compiler to Rust.
It's more interesting to see how good Claude is on novel problems or how well it can come up with novel compiler optimizations, etc.
@nolan
1) there's no way that the people using the mockup-generating machines understand their mockup codebases anywhere near as well as people who actually spent time thinking about the problems and their solutions. The mockups are therefore unmaintainable.
2) if we could ban asbestos, then we can ban these horrible destructive machines. We can organize, and we can have them all dismantled, and their DRAM & CPUs can be put to less-destructive use.
@nolan
3) we have no convincing reason to tolerate defeatism anymore. With examples like the Mamdani administration, we can all see that there is no excuse.
https://sfba.social/@vij/116014712128853121
We *can* switch off the orphan-shredding machine, and we must.
@matt @nolan ...
But the pieces of hardware that enable gen-a.i. (particularly the accelerator chips, or at least, data centers with huge numbers of them) are *not* the same thing as the grim reaper. They are *physical* things. They can be switched off, they can be scrapped, and they can be regulated out of existence. Previous generations did that for other physical things, and we can do it too.
anyway,
https://mastodon.social/@JamesWidman/116032953161658413
@JamesWidman @matt Matt, I cannot credibly cheer on the resisters because I'm no longer one of them. Believe me, I was one of the most annoying anti-AI voices inside of Salesforce (anyone who worked with me will attest to this), but I just don't have the fight in me anymore. I see it as a lost cause.
I admire people who fight for what they believe in, though, so I think it's the job of the anti-AI crowd to persuade the rest of us, push to regulate LLMs à la James above, etc.
@nolan reading the comments on your blog, and to a lesser extent here, I’m struck (again) by how many experienced engineers keep their heads in the sand about how good this technology is, and how fast it’s improving.
“It’s not a better coder than me.” “We’ll always have to review their code.” Today - maybe. Tomorrow? Not a chance.
@nolan @richard5mith @gregr Right there with you; tool use a year ago changed everything. Obviously even more has changed since then.
If this technology had launched with the 2025 versions of the product instead of the 2023 ones, there would be far fewer very smart people being willfully ignorant.
I don't say this as a wholehearted, blind endorsement, but I nevertheless believe it to be true.
It is painful, for many reasons.
In a word: no.
I refuse to give up when the research is coming back saying that use of these tools degrades the quality of the work, that they cause brain damage which may be permanent, that they rely on theft and disregard for provenance.
If that's because I'm also a 40-something, so be it.
@nolan My experience with this is it depends on how the developer viewed what they did.
For some it was a craft as you describe it. Finding the perfect solution.
For others, it was about delivering a product vision. Solving customer problems.
Those in the latter camp (like me) are less bothered by this future we now live in. Because it was never about the code. It was about the result.
I used to type in listings from magazines. I didn’t enjoy it. I just wanted to play the game.
@nolan I can't speak to what I haven't seen, and I can't take people's word for this stuff because there's a *massive* amount of hype.
Just endless waves of dubious benchmarks, demos that turn out to be fake or broken, reporting that isn't actually fact-based.
So, I can only speak to what I've seen.
And what I've seen ain't good.
@nolan Importantly, this was also the situation a year ago, and a year ago people also said "just wait six months". And I did, and it's fundamentally the same situation.
The agents can produce more code, larger projects. But that's actually worse because that's even harder to fix and maintain.
@varx I get the skepticism; there is a lot of junk and bunk out there. My experience comes from working at a small startup where people are already pushing the boundaries of multi-agent orchestrations and whatnot.
I tried to cover this in a recent post; I think my experiment is pretty conclusive. Honestly you could try the experiment yourself with newer models or more loops and probably make the number shoot up: https://nolanlawson.com/2026/01/31/building-a-browser-api-in-one-shot/
@nolan I read that post (I follow the RSS feed) but there's a really important point that you don't seem to cover:
Is that code usable?
It passes a lot of tests. Is it good enough to use in a real browser? (Functionality, performance, security.) Is it easy enough to work with that you could get it into good enough shape to use? Is it maintainable? *How do you know?*
@nolan A lot of my work has been in security. One of the things a lot of people don't appreciate is that security is largely about what "features" *don't* exist. For example, the feature that lets an attacker read your email. 😃 You have to try to prove that negative.
This is important because a lot of people evaluate software by taking it for a test drive and seeing that the happy path works. But that can never work for security.
The way you write secure software is by having a secure development process; by developing and communicating threat models; by recognizing dangerous patterns and guiding the software around that.
LLMs are notoriously bad at all of this. I don't think this will be better in six months.
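The happy-path point above can be sketched with a toy example (all names here are hypothetical, not from any real codebase): a file handler whose happy path works perfectly on a test drive, while the dangerous "feature" is the one nobody tried.

```python
import os

BASE = "/srv/static"  # hypothetical document root

def serve(path: str) -> str:
    # Naive handler: the happy path, serve("index.html"), works fine,
    # so a casual test drive looks good...
    with open(os.path.join(BASE, path)) as f:
        return f.read()

# ...but the security bug is what ISN'T prevented:
# serve("../../etc/passwd") escapes BASE entirely.

def serve_safe(path: str) -> str:
    # Reject any request whose resolved path falls outside BASE.
    # This catches both "../" traversal and absolute paths, which
    # os.path.join silently honors.
    full = os.path.realpath(os.path.join(BASE, path))
    if not full.startswith(os.path.realpath(BASE) + os.sep):
        raise PermissionError("path escapes base directory")
    with open(full) as f:
        return f.read()
```

No amount of driving the happy path surfaces the traversal bug; you have to reason about which inputs must be rejected, which is exactly the "prove a negative" work described above.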
@nolan The strongest steelman position I can make for the use of LLMs is that a senior developer can use them for fast feedback and maybe brainstorming. (As long as they're happy to accept a list of serious downsides and externalities.)
When I see junior devs use them, the LLMs lead the dev down the garden path, creating more and more complicated workarounds where a senior dev would back up and take a fundamentally different approach.
And when I see senior devs treat them as a team of junior devs that can independently produce a body of work, well... that's not a good way to work with actual junior devs! You have to carefully review their work, do mentoring, etc. There are somewhat analogous things you can do with agents but I don't have the sense that this is what people are really doing.
@varx The Web Platform Tests are a pretty high bar of quality. If you read through them, most of them are about bizarre edge cases that, yes, include security, e.g. https://github.com/w3c/IndexedDB/issues/476
The code is probably awful when it comes to maintenance, reusability, etc., but I'm starting to wonder if any of those values matter anymore.
There are of course exceptions, e.g. a common joke in W3C circles is about the "hit testing spec" that doesn't exist, but WPTs are otherwise pretty exhaustive.