re: this that has been making the rounds https://www.techdirt.com/2026/03/25/ai-might-be-our-best-shot-at-taking-back-the-open-web/ i'm always struck by sentences like "the technical barrier went up" that don't attribute what happened to any cause in particular. technical barriers are not agents and they do not go up on their own (nor, for that matter, are "technical barriers" one monolithic thing that moves in a single direction). if you're going to make a plan of action, you have to figure out *who and what* changed (the perception of) "technical barriers"
AI Might Be Our Best Shot At Taking Back The Open Web (Techdirt)
i think you could make a good case that the "technical barriers went up" in web dev in particular due to the web becoming commercialized: when you're worrying about click throughs and seo and conversion rates, and moving at capital pace, you make code and use frameworks that sacrifice legibility for extraction and dev velocity. view source is useless nowadays because of the buildup of cruft related to those goals (at least partially, imo)
i can still teach someone how to write html and css and use sftp to upload a website in an afternoon (and honestly css makes this learning process MORE accessible, not less!). but that process is pretty divorced from the main thing people want to use the web for today (make money and run scams)
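(for concreteness: the whole afternoon lesson fits in roughly a page of hand-written markup. a minimal sketch — the filename, hostname, and styling here are placeholders, not a prescribed curriculum:)

```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>my first page</title>
    <style>
      /* the css part of the lesson: readable defaults in three lines */
      body { max-width: 40em; margin: 2em auto; font-family: sans-serif; }
    </style>
  </head>
  <body>
    <h1>hello, web</h1>
    <p>a page whose view source is the whole page.</p>
  </body>
</html>
```

after that, `sftp you@example.com` followed by `put index.html` is the entire deployment pipeline.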
also! while i'm here! the article says that vibe coding is okay when you're making "tools where you’re the only user, where the stakes are 'my task list doesn’t sync properly' rather than 'customer data got leaked.'" and i think it sucks to downplay the stakes of personal software like this! having a synced task list can be *extremely important* in specific circumstances. and settling for a mode of software dev (ie agentic ai) in which you shrug and say "sometimes it doesn't work" really sucks
also also. the claim that generative AI is trending toward "decentralization"—using the availability of "open source" local models as evidence—seems preposterous to me. of the models mentioned, two are owned/majority funded by Alibaba Group (qwen and kimi), and another is funded by the usual silicon valley tescreal suspects (mistral). the web sites for these companies barely mention their open weight models (if at all), and instead funnel you to their apps or per-token APIs
unless i'm missing something, only the model weights are "open"—the code to train the models isn't—not that it matters, since you kinda *need* tescreal cult cash to train one of these things, and the hardware to do so is increasingly difficult for anyone but the biggest players to buy. so even if you're using it locally, you're still reliant on the big corps to train and distribute those models. hardly seems "decentralized." imo the open models are just PR stunts
regardless, we *already have* the ability to create powerful software in a decentralized fashion—it is called the "personal computer." that's the status quo you need to be comparing your "open models" with imo

@aparrish not disagreeing with you, but the important point about the 'open' models is that unlike online services, the open/local models provide a minimum baseline for capability.

Apart from all of the other terrible things, there's an absolutely horrendous risk involved in getting locked in to an AI service behind an API which can be arbitrarily changed or removed.

@LyallMorrison i get that, but the article seems to understand and advocate for local models as a product that gets updates (eg "six months behind the latest models" implies that the open weight models are still fundamentally in the race). if you're depending on the open weight model vendors to release updates so your workflow can "keep up," you're just as locked in as a per-token api user imo
@aparrish oh, sure. No argument from me! My view on it is mostly around the risk of what we'll lose when the inevitable cash squeeze arrives.
@aparrish I'm aware of one exception, the Allen Institute for AI publishes their methodology and training data. I don't know how their models compare to Qwen or Kimi.
@aparrish Yeah, this claim I keep hearing about local models is basically a bunch of nonsense.
@GeoffWozniak @aparrish also, “you can run the corporate torment nexus on your own device” is not the pitch they think it is.
@emenel @aparrish I'm still very unsure about the notion of these models being "ethical" in any way. I can't say it isn't possible, though.
@GeoffWozniak @aparrish i’m fairly confident it isn’t possible. we’ve had different kinds of ml models in our software for a very long time before this… this kind of model complexity can only exist with massive externalities.
@aparrish Are Mistral tescrealists? (All I know about them is that they're French and that they used to work in Facebook's AI lab.)
@aparrish Making software development contingent on the whims of large companies who stole data and are now handing it back to you for a hefty toll (whether that be tokens or GPU costs) seems like the opposite of decentralised
@aparrish "i don't need to actually maintain it" - i assure you, if you're writing it in the first place, you do
@jplebreton @aparrish or, conversely, if you don't need to maintain it, you don't actually need or want it.
@aparrish i'm partial to the idea of "if you're only hurting yourself you can do it however much you like"; otherwise i'd have to get up in arms about many more weird workflows people use than those including AI
@whitequark @aparrish "don't come crying to me when..." in advance can be doing someone a favour though!
@flippac @whitequark @aparrish my buddy has a policy of issuing one warning, telling people the right or safe way to do a thing, then letting them go. It has merits.
@flippac @whitequark @aparrish I’m an idiot, and keep telling people they’re making a mistake until they start quoting my spiel back to me…
@flippac @whitequark @aparrish there are so many of the world's problems we can bury ourselves under the weight of, we don't really find our colleagues' tooling choices to be high-priority for that purpose
@ireneista @flippac @whitequark i mean, this guy can use llms to write his software until he passes out from pleasure, whatever. what i take issue with is this piece he wrote that *advocates* for this particular workflow using outrageous and baffling arguments

@aparrish @flippac @whitequark (the environmental damage and the theft are externalities that do still bother us, personally, we think that goes beyond personal choice. we're just leaving that aside for the sake of focusing on something else right this moment)

that makes sense, yeah

@ireneista @whitequark @aparrish I mean, sometimes it's me I'm doing a favour when I say "don't come crying to me..."
@ireneista @flippac @aparrish (I do, in the sense that some of them ought to stop being colleagues)
@whitequark @ireneista @flippac @aparrish Fully agreed, and also other people's tooling choices can carry externalities that affect me. At that point I start caring quite a bit.
@xgranade @whitequark @flippac @aparrish yeah we care about externalities, we just aren't going to fight about text editors. we think a person's computer is kind of like their underwear - there are situations in which we might end up borrowing it but only with a very good reason, and we have no right to complain that it doesn't fit us
@ireneista @whitequark @flippac @aparrish Yeah, agreed as well. Telling the difference is wisdom...

@aparrish

They probably meant that you don't hurt anyone's profits if something goes wrong

@aparrish I don't understand this logic at all. If I'm making a tool where I'm the only user: why shouldn't I make it the best it can be? Why shouldn't I do that for myself?
@lunarloony @aparrish "I'll just make myself a piece of crap that I don't understand and that doesn't work when I want it to" is not an enticing way to spend my time and sort of defeats the point of building a tool in the first place.

@aparrish Yeah, I think there's a lot of danger in trusting these tools... probably more so the more you trust them, the less you understand the piles of slop they're cranking out, and the less capable you are of debugging or fixing them if need be...

I guess if one's baseline for software is "it can break or go away at any time, taking all your data with it unrecoverably" (because it's a mysterious black box), maybe that's okay... but I sure don't think it is.

@aparrish yea 100%. none of it has become less accessible or harder! if anything everything is easier to build, easier to learn, more affordable, and full of options. but the average goalpost has been moved to mars

@alice @aparrish I write documentation and knowledge base articles using HTML, which is ultimately what it was originally built for: creating documents at CERN that were easy enough for physics researchers, who were not primarily IT people, to write. It's still perfectly well suited to this task. I'm pretty sure I could teach my 9yo to write a basic article using HTML, and back in the day lots of people of varying skill levels made MySpace and GeoCities pages with just HTML.

I happen to think that Markdown-style is even easier for knowledge management, and I've never written Gopher to compare against, but Markdown doesn't have the features to make webapps, and HTML/CSS/JS has pretty much completely replaced native platform toolkits for GUI app development.

I also develop some basic webapps, not React-style SPAs because as you say the goalposts are on Mars. Basic forms are still doable, especially with all the things modern HTML5 input elements provide that don't require any JS to use, like the pattern attribute for client-side validation. Most things which are actually useful for a computer to do don't require the latest technology; personal computers were essentially feature-complete by the end of the 2000s. They just need someone who cares to learn, and to not get hung up trying to compete with million-dollar commercial development teams. CGI still exists and is viable if you don't need to care about monopolistic scalability.
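As a minimal sketch of that no-JS validation (the field name, label text, and the form's action URL here are made up for illustration):

```html
<!-- the browser refuses to submit until the input matches the regex
     in the pattern attribute; no javascript involved -->
<form method="post" action="/cgi-bin/signup.cgi">
  <label>
    username (letters and digits, 3 to 16 characters):
    <input name="username" required pattern="[A-Za-z0-9]{3,16}">
  </label>
  <button>sign up</button>
</form>
```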

@aparrish this juxtaposition is going to live in my head for a very long time 🙃
@aparrish i always think about what you were saying about knitting or crocheting being just as difficult as programming

@aparrish around the time web 2.0 was getting into full swing, I read a thinkpiece by some Very Smart Web Guy who said "HTML is a terrible language for making web pages" and I'm still mad about it

I'm also still mad at David Siegel for unleashing his "creating killer web sites" book upon the world, because I feel like it was almost single-handedly responsible for the mindset that a web site is supposed to be a beautiful shiny brochure above all

@aparrish I like the juxtaposition of reading your thread alongside https://tarakiyee.com/on-the-enshittification-of-audre-lorde-the-masters-tools-in-tech-discourse/, on the importance of understanding what structural forces led us here and which ones are being reproduced by the systems we've built
On The Enshittification of Audre Lorde: "The Master's Tools" in Tech Discourse — Do Flamingos Know They're Pink

@aparrish I also think this attitude (of the web being way too complex) is weird given the improvements to HTML, CSS, and JavaScript, the new web APIs, the "evergreen" browsers being much more homogeneous, etc.

I firmly believe that the solution to something becoming hard to understand because of giant piles of middleware and libraries and frameworks is not *more* middleware and autogenerated piles of slop on top of it... but rather the opposite.

@aparrish I wonder if there’s also a deliberate deskilling aspect to the promotion of those frameworks

@texttheater @aparrish

Encourage junior software engineers to learn how to vibe instead of how to think, make their employers dependent on your LLM, and when things go wrong, they can consult with your small staff of SW engineers who learned how to think — for a premium…

@texttheater @aparrish
“What do you mean we can’t write our software without paying for Claude? Can’t we just hire some SW engineers?”
“The last non-captive one died in 2046. My neighbor’s kid has written a little code on his own, though; it’s kind of a hobby…”
@texttheater @aparrish
.. but, this has probably happened before, with other technologies…

@Red_Shirt_no2 @texttheater @aparrish well yeah. nobody designs car engines for fun today

and please note that there was a time when people did

@Red_Shirt_no2 @texttheater @aparrish (the word "car" is load-bearing there. model engineers absolutely design toy steam engines.)
@texttheater @aparrish This. Seems the idea is to hire generic junior engineers or people fresh from a framework bootcamp, and then they don’t have to teach them about browsers and users and progressive enhancement and all that jazz.

@aparrish That was very much my reaction too. The difficulty of building a 90s-era website is very much the same now as it was back then - it's still basically the same HTML and CSS under the hood (except, perhaps, for the dreaded marquee).

In some ways it's easier because there's a bunch of free hosting services and tutorials; in some ways it's harder because it's hard to wade through the slop to find good tutorials (although notably, this is very much the fault of AI).

What has changed is the expectations. If you want your website to look like a "modern" commercial website, then yeah, doing that with simple HTML is hard, and agentic AI incidentally appears to be good at that style (to the point where, when I see a poorly-functioning site in that style, I suspect it's slop). But if you think that's the only way to go about things, then, well... let's just say Neocities wants to have a word with you.

@brib @aparrish i think you're basically right about "expectations", but imo it's not (just?) about the look

the "large company commercial website" tooling _really_ did make *some* things easier (in exchange for usually making something else harder)

i remember how much _excitement_ there was (whether artificially amplified or not) as language-specific package managers (e.g. npm, although there was bower for a bit) and javascript bundlers (most prominently browserify back then, probably webpack nowadays) first took off, and made *that specific style* of code reuse way easier. people were doing things like "i can import an entire 3d graphics framework (three.js) and physics library (ammo.js?) and make something with it"

documentation certainly hasn't kept up, and i'm still generally looking for ways out of the current mess

@aparrish I suspect these technical barriers will always, in an induced-demand fashion, rise to meet the current capabilities of well-funded corporations – because they are always incentivized to set "the bar" above what everybody else can do.