re: this article that has been making the rounds https://www.techdirt.com/2026/03/25/ai-might-be-our-best-shot-at-taking-back-the-open-web/ i'm always struck by sentences like "the technical barrier went up" that don't attribute what happened to any cause in particular. technical barriers are not agents and they do not go up on their own (nor, for that matter, are "technical barriers" one monolithic thing that moves in a single direction). if you're going to make a plan of action, you have to figure out *who and what* changed (the perception of) "technical barriers"
[link preview: "AI Might Be Our Best Shot At Taking Back The Open Web" — Techdirt]
i think you could make a good case that the "technical barriers went up" in web dev in particular because the web became commercialized: when you're worrying about click-throughs and seo and conversion rates, and moving at capital pace, you make code and use frameworks that sacrifice legibility for extraction and dev velocity. view source is useless nowadays because of the buildup of cruft related to those goals (at least partially, imo)
i can still teach someone how to write html and css and use sftp to upload a website in an afternoon (and honestly css makes this learning process MORE accessible, not less!). but that process is pretty divorced from the main thing people want to use the web for today (make money and run scams)
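(for the record, the entire afternoon curriculum really does fit in one file plus one upload command. a minimal sketch — the server name, username, and remote path below are placeholders, not anyone's real setup:)

```shell
# The whole site: one HTML file with inline CSS. "View source" shows everything.
cat > index.html <<'EOF'
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>my first page</title>
    <style>
      body { max-width: 40rem; margin: 2rem auto; font-family: sans-serif; }
    </style>
  </head>
  <body>
    <h1>hello, web</h1>
    <p>this file is the entire site.</p>
  </body>
</html>
EOF

# Upload step (placeholder host/path -- substitute your own hosting account):
# sftp user@example.com <<< 'put index.html public_html/index.html'
```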
also! while i'm here! the article says that vibe coding is okay when you're making "tools where you’re the only user, where the stakes are 'my task list doesn’t sync properly' rather than 'customer data got leaked.'" and i think it sucks to downplay the stakes of personal software like this! having a synced task list can be *extremely important* in specific circumstances. and settling for a mode of software dev (ie agentic ai) in which you shrug and say "sometimes it doesn't work" really sucks
also also. the claim that generative AI is trending toward "decentralization"—using the availability of "open source" local models as evidence—seems preposterous to me. of the models mentioned, two are owned/majority funded by Alibaba Group (qwen and kimi), and another is funded by the usual silicon valley tescreal suspects (mistral). the web sites for these companies barely mention their open weight models (if at all), and instead funnel you to their apps or per-token APIs
unless i'm missing something, only the model weights are "open"—the code to train the models isn't—not that it matters, since you kinda *need* tescreal cult cash to train one of these things, and the hardware to do so is increasingly difficult for anyone but the biggest players to buy. so even if you're using it locally, you're still reliant on the big corps to train and distribute those models. hardly seems "decentralized." imo the open models are just PR stunts
regardless, we *already have* the ability to create powerful software in a decentralized fashion—it is called the "personal computer." that's the status quo you need to be comparing your "open models" with imo

@aparrish not disagreeing with you, but the important point about the 'open' models is that, unlike online services, they provide a minimum baseline of capability.

Apart from all of the other terrible things, there's an absolutely horrendous risk involved in getting locked in to an AI service behind an API which can be arbitrarily changed or removed.

@LyallMorrison i get that, but the article seems to understand and advocate for local models as a product that gets updates (eg "six months behind the latest models" implies that the open weight models are still fundamentally in the race). if you're depending on the open weight model vendors to release updates so your workflow can "keep up," you're just as locked in as a per-token api user imo
@aparrish oh, sure. No argument from me! My view on it is mostly around the risk of what we'll lose when the inevitable cash squeeze arrives.
@aparrish I'm aware of one exception, the Allen Institute for AI publishes their methodology and training data. I don't know how their models compare to Qwen or Kimi.
@aparrish Yeah, this claim I keep hearing about local models is basically a bunch of nonsense.
@GeoffWozniak @aparrish also, “you can run the corporate torment nexus on your own device” is not the pitch they think it is.
@emenel @aparrish I'm still very unsure about the notion of these models being "ethical" in any way. I can't say it isn't possible, though.
@GeoffWozniak @aparrish i’m fairly confident it isn’t possible. we’ve had different kinds of ml models in our software for a very long time before this… this kind of model complexity can only exist with massive externalities.
@aparrish Are Mistral tescrealists? (All I know about them is that they're French and that they used to work in Facebook's AI lab.)
@aparrish Making software development contingent on the whims of large companies who stole data and are now handing it back to you for a hefty toll (whether that be tokens or GPU costs) seems like the opposite of decentralised
@aparrish "i don't need to actually maintain it" - i assure you, if you're writing it in the first place, you do
@jplebreton @aparrish or, conversely, if you don't need to maintain it, you don't actually need or want it.
@aparrish i'm partial to the idea of "if you're only hurting yourself you can do it however much you like"; otherwise i'd have to get up in arms about many more weird workflows people use than those including AI
@whitequark @aparrish "don't come crying to me when..." in advance can be doing someone a favour though!
@flippac @whitequark @aparrish my buddy has a policy of issuing one warning, telling people the right or safe way to do a thing, then letting them go. It has merits.
@flippac @whitequark @aparrish I’m an idiot, and keep telling people they’re making a mistake until they start quoting my spiel back to me…
@flippac @whitequark @aparrish there are so many of the world's problems we can bury ourselves under the weight of, we don't really find our colleagues' tooling choices to be high-priority for that purpose
@ireneista @flippac @whitequark i mean, this guy can use llms to write his software until he passes out from pleasure, whatever. what i take issue with is this piece he wrote that *advocates* for this particular workflow using outrageous and baffling arguments

@aparrish @flippac @whitequark (the environmental damage and the theft are externalities that do still bother us, personally, we think that goes beyond personal choice. we're just leaving that aside for the sake of focusing on something else right this moment)

that makes sense, yeah

@ireneista @whitequark @aparrish I mean, sometimes it's me I'm doing a favour when I say "don't come crying to me..."
@ireneista @flippac @aparrish (I do, in the sense that some of them ought to stop being colleagues)
@whitequark @ireneista @flippac @aparrish Fully agreed, and also other people's tooling choices can carry externalities that affect me. At that point I start caring quite a bit.
@xgranade @whitequark @flippac @aparrish yeah we care about externalities, we just aren't going to fight about text editors. we think a person's computer is kind of like their underwear - there are situations in which we might end up borrowing it but only with a very good reason, and we have no right to complain that it doesn't fit us
@ireneista @whitequark @flippac @aparrish Yeah, agreed as well. Telling the difference is wisdom...

@aparrish They probably meant that you don't hurt anyone's profits if something goes wrong

@aparrish I don't understand this logic at all. If I'm making a tool where I'm the only user: why shouldn't I make it the best it can be? Why shouldn't I do that for myself?
@lunarloony @aparrish “I’ll just make myself a piece of crap that I don’t understand and that doesn’t work when I want it to” is not an enticing way to spend my time and sort of defeats the point of building a tool in the first place.

@aparrish Yeah, I think there's a lot of danger in trusting these tools... and probably more so the more you trust them, the less you understand the piles of slop they're cranking out, and the less capable you are of debugging or fixing them if need be...

I guess if one's baseline for software is "it can break or go away at any time, taking all your data with it unrecoverably" (because it's a mysterious black box), maybe that's okay... but I sure don't think it is.