re: this that has been making the rounds https://www.techdirt.com/2026/03/25/ai-might-be-our-best-shot-at-taking-back-the-open-web/ i'm always struck by sentences like "the technical barrier went up" that don't attribute what happened to any particular cause. technical barriers are not agents and they do not go up on their own (nor, for that matter, are "technical barriers" one monolithic thing that moves in a single direction). if you're going to make a plan of action, you have to figure out *who and what* changed (the perception of) "technical barriers"
i think you could make a good case that the "technical barriers went up" in web dev in particular because the web became commercialized: when you're worrying about click-throughs and seo and conversion rates, and moving at capital pace, you make code and use frameworks that sacrifice legibility for extraction and dev velocity. view source is useless nowadays at least partially because of the buildup of cruft related to those goals (imo)
i can still teach someone how to write html and css and use sftp to upload a website in an afternoon (and honestly css makes this learning process MORE accessible, not less!). but that process is pretty divorced from the main thing people want to use the web for today (make money and run scams)
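for the curious, that whole afternoon lesson really does fit in a couple of commands — a sketch, assuming you have sftp access to some host (the hostname and remote path here are placeholders, not real ones):

```shell
# write a minimal page: one html file with its css inline, no build step
cat > index.html <<'EOF'
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>my first page</title>
  <style>
    body { max-width: 40em; margin: 2em auto; font-family: sans-serif; }
    h1   { color: rebeccapurple; }
  </style>
</head>
<body>
  <h1>hello, web</h1>
  <p>view source shows exactly this file, nothing more.</p>
</body>
</html>
EOF

# then upload it -- uncomment and substitute your own host and directory.
# sftp runs over ssh; "put" copies the local file to the server.
# sftp user@example.com <<'SFTP'
# cd public_html
# put index.html
# SFTP
```

that's the entire deploy pipeline: one file, one `put`.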
also! while i'm here! the article says that vibe coding is okay when you're making "tools where you’re the only user, where the stakes are 'my task list doesn’t sync properly' rather than 'customer data got leaked.'" and i think it sucks to downplay the stakes of personal software like this! having a synced task list can be *extremely important* in specific circumstances. and settling for a mode of software dev (ie agentic ai) in which you shrug and say "sometimes it doesn't work" really sucks
also also. the claim that generative AI is trending toward "decentralization"—using the availability of "open source" local models as evidence—seems preposterous to me. of the models mentioned, two are owned/majority funded by Alibaba Group (qwen and kimi), and another is funded by the usual silicon valley tescreal suspects (mistral). the websites for these companies barely mention their open weight models (if at all), and instead funnel you to their apps or per-token APIs
unless i'm missing something, only the model weights are "open"—the code to train the models isn't—not that it matters, since you kinda *need* tescreal cult cash to train one of these things, and the hardware to do so is increasingly difficult for anyone but the biggest players to buy. so even if you're using it locally, you're still reliant on the big corps to train and distribute those models. hardly seems "decentralized." imo the open models are just PR stunts

@aparrish not disagreeing with you, but the important point about the 'open' models is that unlike online services, the open/local models provide a minimum baseline for capability.

Apart from all of the other terrible things, there's an absolutely horrendous risk involved in getting locked in to an AI service behind an API which can be arbitrarily changed or removed.

@LyallMorrison i get that, but the article seems to understand and advocate for local models as a product that gets updates (eg "six months behind the latest models" implies that the open weight models are still fundamentally in the race). if you're depending on the open weight model vendors to release updates so your workflow can "keep up," you're just as locked in as a per-token api user imo
@aparrish oh, sure. No argument from me! My view on it is mostly around the risk of what we'll lose when the inevitable cash squeeze arrives.