re: this, which has been making the rounds https://www.techdirt.com/2026/03/25/ai-might-be-our-best-shot-at-taking-back-the-open-web/ i'm always struck by sentences like "the technical barrier went up" that don't attribute what happened to any cause in particular. technical barriers are not agents and they do not go up on their own (nor, for that matter, are "technical barriers" one monolithic thing that moves in a single direction). if you're going to make a plan of action, you have to figure out *who and what* changed (the perception of) "technical barriers"
AI Might Be Our Best Shot At Taking Back The Open Web (Techdirt)
i think you could make a good case that the "technical barriers went up" in web dev in particular because the web became commercialized: when you're worrying about click-throughs and seo and conversion rates, and moving at capital pace, you make code and use frameworks that sacrifice legibility for extraction and dev velocity. view source is useless nowadays at least partly (imo) because of the cruft that builds up in service of those goals
i can still teach someone how to write html and css and use sftp to upload a website in an afternoon (and honestly css makes this learning process MORE accessible, not less!). but that process is pretty divorced from the main thing people want to use the web for today (make money and run scams)
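for concreteness, the "afternoon" workflow i mean is roughly this sketch (the hostname, username, and remote directory are placeholders, not a real server):

```shell
# one html file, one css file, uploaded over sftp. that's the whole stack.
mkdir -p mysite

cat > mysite/index.html <<'EOF'
<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <title>my first page</title>
    <link rel="stylesheet" href="style.css">
  </head>
  <body>
    <h1>hello, web</h1>
    <p>view source still works here.</p>
  </body>
</html>
EOF

cat > mysite/style.css <<'EOF'
body { max-width: 40em; margin: 2em auto; font-family: sans-serif; }
EOF

# then upload both files in one sftp batch. "you@example.com" and
# "public_html" are stand-ins for whatever your host actually uses:
#
#   sftp you@example.com <<'UPLOAD'
#   cd public_html
#   put mysite/index.html
#   put mysite/style.css
#   UPLOAD
```

the whole page is legible in view source, which is exactly the property the commercial web gave up.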
also! while i'm here! the article says that vibe coding is okay when you're making "tools where you’re the only user, where the stakes are 'my task list doesn’t sync properly' rather than 'customer data got leaked.'" and i think it sucks to downplay the stakes of personal software like this! having a synced task list can be *extremely important* in specific circumstances. and settling for a mode of software dev (ie agentic ai) in which you shrug and say "sometimes it doesn't work" really sucks
also also. the claim that generative AI is trending toward "decentralization"—using the availability of "open source" local models as evidence—seems preposterous to me. of the models mentioned, two are owned/majority funded by Alibaba Group (qwen and kimi), and another is funded by the usual silicon valley tescreal suspects (mistral). these companies' websites barely mention their open weight models (if at all), and instead funnel you to their apps or per-token APIs
unless i'm missing something, only the model weights are "open"—the code to train the models isn't—not that it matters, since you kinda *need* tescreal cult cash to train one of these things, and the hardware to do so is increasingly difficult for anyone but the biggest players to buy. so even if you're using it locally, you're still reliant on the big corps to train and distribute those models. hardly seems "decentralized." imo the open models are just PR stunts
regardless, we *already have* the ability to create powerful software in a decentralized fashion—it is called the "personal computer." that's the status quo you need to be comparing your "open models" with imo

@aparrish not disagreeing with you, but the important point about the 'open' models is that unlike online services, the open/local models provide a minimum baseline for capability.

Apart from all of the other terrible things, there's an absolutely horrendous risk involved in getting locked in to an AI service behind an API which can be arbitrarily changed or removed.

@LyallMorrison i get that, but the article seems to understand and advocate for local models as a product that gets updates (eg "six months behind the latest models" implies that the open weight models are still fundamentally in the race). if you're depending on the open weight model vendors to release updates so your workflow can "keep up," you're just as locked in as a per-token api user imo
@aparrish oh, sure. No argument from me! My view on it is mostly around the risk of what we'll lose when the inevitable cash squeeze arrives.
@aparrish I'm aware of one exception, the Allen Institute for AI publishes their methodology and training data. I don't know how their models compare to Qwen or Kimi.
@aparrish Yeah, this claim I keep hearing about local models is basically a bunch of nonsense.
@GeoffWozniak @aparrish also, “you can run the corporate torment nexus on your own device” is not the pitch they think it is.
@emenel @aparrish I'm still very unsure about the notion of these models being "ethical" in any way. I can't say it isn't possible, though.
@GeoffWozniak @aparrish i’m fairly confident it isn’t possible. we’ve had different kinds of ml models in our software for a very long time before this… this kind of model complexity can only exist with massive externalities.
@aparrish Are Mistral tescrealists? (All I know about them is that they're French and that they used to work in Facebook's AI lab.)
@aparrish Making software development contingent on the whims of large companies who stole data and are now handing it back to you for a hefty toll (whether that be tokens or GPU costs) seems like the opposite of decentralised