Submitted without commentary: "AI Might Be Our Best Shot At Taking Back The Open Web" by Mike Masnick https://www.techdirt.com/2026/03/25/ai-might-be-our-best-shot-at-taking-back-the-open-web/

@cwebber I don't understand people claiming they can self-host an independent "AI agent". It doesn't square with how I understand they work. It doesn't square with the immense data centers being built.

And while at first I'm sympathetic to his complaint about losing the open Web, he goes off the rails. You can still create web pages. It's just text. HTML tags are optional. He's fantasizing about singlehandedly competing with Microsoft or something.

@foolishowl I disagree with much of his analysis, but he mentions Ollama for running local models. I’ve found it easy to set up as well, though any models you can run locally currently aren’t as “capable” as the cloud-hosted models. But the gap is narrowing for sure

@basetwojesus Honestly, this puzzles me.

If Ollama can be run entirely locally, and is almost as "capable" as the commercial services, then, what's with the colossal scale of the commercial services?

Most of the last five years I've had jobs in data centers that were building out HPC systems. We weren't allowed to know what the systems were doing, but our understanding was it was all about "AI" research. The compute and data storage capacity of a small data center dwarfs what a personal computer can do.

So if that kind of scale isn't actually necessary for "AI" (and I'm opposed to data center construction and "AI" anyway for many reasons), then what's all that capacity for?

@foolishowl I wouldn't say local models are almost as capable as the cloud models when comparing their upper limits. But many use cases don't really need all that juice. For example, if you have Ollama installed you can use a command like this to check a blog post for grammar:

`ollama run lfm2 "Check the following text for grammar and spelling, leaving the style and substance intact: $(cat post.md)"`

For code assistants, etc., though, cloud models are probably still necessary

@foolishowl @cwebber His description of CSS makes it sound like it is inscrutable, when browser dev tools like Inspect Element (the successor to "view source") have never been better...
@foolishowl @cwebber the more i think about this article the angrier i get. when i was a kid doing "view source" there was no integrated web IDE that would help you understand the DOM and even edit it in real time. what exists today is light years ahead of that. it's never been easier to learn about how websites work!

@vv @foolishowl @cwebber one of my favorite js packages is this, which brings inspection to mobile too!

https://eruda.liriliri.io/

One silly touch: appending debug=true to any query string will pop it up
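That query-string trick can be sketched roughly like this. Note the flag name is an assumption: the post mentions `debug=true`, while Eruda's own docs use `eruda=true`, so this sketch accepts either; the jsDelivr CDN URL is the one Eruda's README suggests.

```javascript
// Sketch: load the Eruda mobile console only when a debug flag
// appears in the query string. Flag names are assumptions
// (`debug=true` per the post, `eruda=true` per Eruda's docs).
function shouldLoadEruda(href) {
  return /[?&](eruda|debug)=true/.test(href);
}

// Browser-only part: inject the Eruda script tag and initialize it.
if (typeof document !== 'undefined' && shouldLoadEruda(location.href)) {
  const s = document.createElement('script');
  s.src = 'https://cdn.jsdelivr.net/npm/eruda'; // CDN path from Eruda's README
  s.onload = () => eruda.init(); // opens the on-page console overlay
  document.body.appendChild(s);
}
```

Gating the load this way keeps the console out of normal page views while letting anyone on a phone opt in just by editing the URL.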


@vv @cwebber Yes.

Like, there are real difficulties with the open web. But, it's not that hard to self-host a website. An SBC or a used computer is less expensive than a computer capable of web hosting was in the 90s.

But, he's the Techdirt guy. He already has a website. So what's he complaining about? Not being able to create new, more complicated applications, apparently.

He's complaining about the loss of an open web the way some USonian writers complained about the closing of the frontier in the early 1900s. It's probably easier to go horseback riding in Wyoming now than it was then. But that's not what he's missing.

@foolishowl @cwebber
1) Inference is cheap compared to training; that's the industry jargon for the two phases. Basically, building a massive neural network is hard and compute-intensive, but walking a trained network from a prompt to the next-word prediction is relatively cheap. Obviously, smaller networks are cheaper to walk.

2) Most of the small models designed for running locally are very large models that have been pruned down so they don't have as many parameters. There's a bunch of ways to do that (pruning, distillation, quantization). But it means that people advocating for small local models either don't understand them, or are fine with the destructive large models being created.