i do not want to get into the business of posting LLM takes but very briefly:

It feels clear to me that some people* are getting value out of using LLMs for programming. Basically see https://simonwillison.net/'s whole blog. If I think about it purely on the basis of "in a vacuum, can this help me write programs", it seems like an exciting technology.

BUT...

(1/?)

(* it also feels clear that some people are NOT getting value out of LLMs, hoping to avoid flamewars about that please)

(continued from ^)

Google search doesn't work as well anymore because the results are full of LLM-generated articles? I hear about CEOs putting pressure on their teams to produce more faster because they've been told that AI will increase productivity?

it feels sad. even though I find LLMs useful sometimes, with all of the societal impacts it often feels like it isn't actually improving my life.

(2/?)

@b0rk About searching getting increasingly worse, there's another side of this I've thought was interesting.

A non-negligible number of people no longer ask their technical questions on public forums; they ask their favorite chatbot instead. These questions, and their answers, are not publicly visible for other people with similar struggles to find.

@karl @b0rk That's rather brilliant. I often use LLMs to find things for me, in subject areas where I can tell if they're lying. I used to use Google, before it degraded. Occasionally (like weekly) I would have a public discussion about a question.
I'm not doing those now, so I'm not contributing.
@karl @b0rk I'm seeing this more and more at work and it makes me sad.
@karl @b0rk Stack Overflow seems to suffer the most from this.
@enno @karl @b0rk And anyone trying to teach courses.
@karl @b0rk yes, excellent point. Although knowledge was already becoming volatile with everyone switching from forums to Discord over the last few years.

@boxofsnoo Well, for what it's worth, I see Discord as an IRC replacement, which already enabled similarly volatile discussions.

That being said, Discord is hugely more popular, so perhaps it amplified the issue.

@karl @b0rk one of the particular benefits that Stack Overflow had was not the initial answers, but the commentary, refinement, and background references added to the question over time, including "back in 2014, this was the best answer. In 2019, <new tool> generally replaces this solution with.... Support for this started with $version".

(e.g. ECMA modules and general browser advances, packaging tools in other libraries, standards advancement, library evolution, etc)

@karl @b0rk maybe worse, maybe just also bad: the chatbots (even the same one) will tell them different things for the same question. different wrong things. uncommon misconceptions.
@karl @b0rk Long, long ago, companies supported their products and were the first port of call for support. Then they realised that they could just dump customers/users and get them to support each other for free. Then customers got chastised more and more by arrogant forum ‘sentinels’ for not giving enough detail and not knowing enough background stuff generally. Contempt as a service. I’ve not used a chatbot but suspect they don’t do that.
@karl @b0rk see also discord before AI... No preservation of knowledge
@mikebabcock @b0rk I see Discord as having a similar effect to IRC, perhaps worse as it's become popular but it was there before.
@b0rk I still think the asbestos metaphor is useful here - it's possible (in fact, not uncommon) for technologies to be both useful in some ways and damaging in others... and it can be the case that the downsides are impossible to suppress or are very long-lasting.
@b0rk I think Google’s problems have more to do with its ad business than the slop at the top, which is a term I think I just coined for when the top of the search results is LLM output.

@nick @b0rk Agreed. I have NUMEROUS issues with LLMs*, but Google had been getting worse for years before ChatGPT. It has focused on general information over specific, to the point where it ignores the actual terms you search for in favor of more popular ones, presumably because more popular results equate to more ad revenue.

"Slop at the top" is a great phrase.

* Anti LLM rant implied rather than given out of respect for b0rk's request.

@nick @b0rk Both issues (Google being driven by the ad side, and results being full of slop) are really compounding each other.

@b0rk I think that the way we perceive the Internet is still in a state of flux, and 20 years from now we'll look at today the way we now look at the early-2000s Web 2.0 era, the way we then looked at the early-'90s GeoCities era, the way people once looked askance at the Usenet era, and so on: a rather quaint but ultimately doomed-to-be-obsoleted way to use this technology. In any case, there's a strong generational component; see the origins of the "Eternal September" meme.

At this point I feel it's all inescapably converging towards the amorphous yet omnipresent hose of semi-worthless data that's the backdrop of most cyberpunk media. But who knows?

@b0rk use "Web" on the far right. It's hidden. Use Kagi. Include swear words.
@b0rk To me, 'sad' is the right word. We (as the software industry) have been promised so many wildly different productivity boosts, one (read: I) would assume we'd have become somewhat immune. But no, after 4th generation languages, RUP, CASE tools, Lo/No Code, Blockchain, Microservice for everything, we still fall for such promises.
@b0rk Oh nooooo… Now I sound like a grumpy old person. Maybe—just maybe—because I turned into one. 🤪😱😳

@b0rk Anecdata from a freelance dev here: currently on a project for a ~$10 billion revenue company. Not a week goes by without a mail from some layer of management encouraging people to "use more AI". Entire teams are forced to come up with "user facing AI features", with no regard to what those teams are doing. There are mandatory AI sessions on the regular. New hires spend more time talking to copilot than to the rest of my team.

Things are changing very rapidly...

@b0rk see also the struggles LWN are having. Click-through rates are down, their original material ranks lower in Google search than LLM-plagiarised versions of their own articles, and on top of that they are being effectively DDoSed by scrapers.
It's outright warfare on actual human-authored websites.
@b0rk I don't think that's entirely true. Although I'm very LLM-skeptical, Google search enshittified before LLMs came about. They only sped up a process that was already going on.

@b0rk
As someone who recently graduated and has been looking for a job, I've recently tried out Claude Code and such, because it seems there's an expectation now that I should get good at that.

When I do, I find the experience to be impressive and almost overwhelming with what it can do & how quickly. But then I get a sinking feeling that... I just don't want to be doing this. If I knew in university this is what I'd be doing I'd have kept programming as a hobby and gone into something else.

@b0rk It's a huge bummer. The technology IS cool and does present a huge advancement in our ability to interface with natural language.

I think it's important to frame it not as "can it do X?" but as "sure, it can or might eventually be able to do X, but at what cost?" LLMs can and should exist and be accessible, and small open-source models you can run on commodity hardware DO exist.

The problem is the eschatological venture capital death cult that's made LLMs their hobby horse, as always.

@b0rk one thought that came to me when Cursor started seeming good enough that a lot of engineers were using it: LLM-assisted coding makes some sense, since it puts the burden of using that tech on coders. AI-based summaries shove it in the face of the general public, who don't have as much of a handle on what its limitations are, and they just seem to be the latest step in the general enshittification of search engines. Social media is especially vulnerable to AI bots.

@b0rk TIL my hosting provider is raising prices because the companies pushing AI have so much money that they're buying up enough disk, RAM, and chips to drive up prices for everyone.

I am one of those pressured to use it to stay employed, and if I'm being honest it's destroying my health, because even when it works for me, I know it is built on a pile of harms, including increased carbon emissions, which makes my child's future worse. :(

@b0rk the bit about people being pressured to be more productive is true. I've been told that explicitly by my manager, and all the way up the chain to the CEO
@b0rk I've had to switch to Kagi for search. It's a noticeable improvement over Google.

@b0rk It’s almost heartening to see history repeat (or rhyme), and I can report that it does:

Automation — and this is one, boy is it one — follows a classic Gartner hype cycle, and like most industrialisation-Luddite* tensions, it ends up with more work / more jobs, not less / fewer.

*per the clichƩ, not the historical reality?

But first you have to discover if the automation is:

A. replacing a task that it can’t replace
B. doing an existing task, but faster
C. making the impossible possible, a whole new task

Bosses (been there, 2000s) will spend a couple of years getting you to bang away at type A, until it proves impossible

They’ll want some good news out of that, and hopefully you’ll have a type C to show them. But first they’ll have to figure out how to fit it in, which they will, if they’re good.

And type B will be a bit of a minority — the weird truth is though, that the type A stuff will stop getting done anyway because the world turns, and priorities shift … enpoopifies maybe

Real professionalism comes when irreplaceable practices are institutionalised and valued. Think plane safety, but only for jets. Train safety is higher at the top end too.

But you CAN automate: When trains go driverless, they convert DECADES after the hype and predictions were made. And then they embed all the process and procedure into the automation.

We gotta be more like that in IT.

@b0rk my workplace provides my department with licenses for Copilot and ChatGPT. We're encouraged to use them as much as possible. (If the business data in my presentation is falsified by the LLM, though, who gets the blame, me or OpenAI?)

Amusingly, we had to take training classes on using it. It's a conversational chatbot. Why provide training on talking to a robot when they never provided training on talking to my human coworkers? Lol

@b0rk (me clicking the refresh button to read your ideas on those issues! side note: it seems to me that Simon and a lot of LLM coders are experienced programmers and are working mostly alone)

@b0rk I have a feeling that when all the hype dies down we will be left with LLM coding tools. It’s just such a boon for coding.

I think it’s because most coding doesn’t require creativity. There are usually only a few "correct" solutions and, given the right feedback loops (test suites, performance benchmarks), LLMs are good at figuring them out.

@b0rk many postings from people who say they get so much value. But a bunch turn out to be microslop
@b0rk Even the enthusiasts seem to be getting the LLM blues https://simonwillison.net/2026/Feb/15/deep-blue/

@b0rk The origin of my post https://dustycloud.org/blog/a-letter-from-2016-to-2026/ is my bitterness that a decade ago, we heard a lot of promises that "don't worry, we'll automate away the boring stuff, you can focus on being creative!" and now people seem resigned to "well, all that creative stuff, I don't do it anymore"

Honestly, for me, not doing the creative stuff is giving up on the things that bring me the most happiness in life. And we know that what LLMs are bad at right now is anything that is genuinely new... they're very good at doing things that have been done before.

So, celebrate those who continue to be creative, I think. Because ultimately, even the vibecoders / vibeartists rely on their work to advance things.

But it's depressing to me to see the promises of what life would be like vs what it's now like.


On that note @b0rk, I'd include your work in the category of creative stuff worth celebrating. I hope you keep at it!

@cwebber aw thank you! definitely when I think "what am I doing about AI" it's "idk keep writing stuff"

like i added some examples to the dig man page recently, and that is a small thing but not nothing

@cwebber @b0rk YOU're depressed? I worked in the 90s (in the days of Carl Malamud's Internet Travelogue) w subversives to get agricultural scientists in developing countries on the Internet via leased lines to US (paid for w cheaper phone calls). Thesis: Can't solve hunger w inferior access to information than enjoyed by children in the developed world. In retrospect, despite knowing about the dark web etc, we were naive & hopelessly optimistic.

You're today's subversives. H/t. Thanks šŸ’Ŗ

@b0rk Even if I don't always trust Google's motives, I remind myself that a lot of the problems we, and they, face are because "counterparties" are working against both them and us.

If Google had done nothing but try to preserve PageRank through the AI onslaught, could that even have worked?

I'm not sure. PageRank worked best with honest pages. As soon as someone wanted to convey the same information that was already out there, but with a higher rank, it started to go downhill.

@b0rk I think there is a jagged edge - on one side are tasks that benefit from LLMs (new standalone codebases, particularly in dynamic languages, creating written drafts, planning), and a group of people for whom they are useful (people working alone, the very senior who know exactly how to evaluate outputs), and on the other side are places they fall apart, and we (the industry as a whole) don’t spend nearly enough time examining the differences because of the hype
@vicki this was the case in 2025, but now it seems they can chew through huge legacy codebases in compiled languages and come up with detailed answers in minutes that would have taken me hours to figure out. This is mostly what I have used them for recently, not much code writing itself.
@vicki @b0rk yep! and it's people in those aforementioned categories who are the most vocal about blogging, giving talks, writing books, etc, on it [i think they are sincere about their enthusiasm, but def a selective sample]

@b0rk

No flamewars, but this is really not about an individual "gets value / does not get value" decision. Except in the sense that a murderer gets value or does not get value out of murdering someone.

@b0rk I look at it this way: these tools are profoundly changing our work and society.

I can sit on the sidelines and ignore it, or I can help shape the conversation in a small way.

I focus on helping find value: https://talk.macpowerusers.com/t/using-claude-as-strategic-thought-partner/44493 and the weaknesses https://agilepainrelief.com/blog/genai-code-quality-fundamental-flaws-and-how-bluffing-makes-it-worse/ and https://agilepainrelief.com/blog/is-ai-making-your-organization-fragile-or-more-resilient/

If I could turn back the clock on this technology, I would.


@b0rk Of interest: Chris #Lattner ("Mr.#LLVM") has carefully reviewed the code of #CCC (Claude C Compiler), which is a complete C compiler capable of compiling the entire Linux kernel (although not yet at high quality), produced in Rust in 2 weeks almost fully automatically by the Claude LLM. Lattner has written an insightful blog post about it:

https://www.modular.com/blog/the-claude-c-compiler-what-it-reveals-about-the-future-of-software
