The Web Scraping Consent Model Was Always Broken. AI Just Made It Obvious.

https://lemmy.world/post/44273489

I completely disagree. The road to hell is paved with good intentions. What he proposes is just another flavor of technofascism: control disguised as ethics. These so-called humane scraping barriers will end up blocking humans, not machines. We have seen this before with CAPTCHAs, reCAPTCHAs, and all those “human verification” gimmicks, always bypassed by bots and always annoying to real people. Personally, I have nothing against crawlers; they should do their job and keep the web interconnected. Trying to wall yourself off is just meh. And those fancy licenses or “ethical use” terms won’t change anything either. The web is not the United States, and nobody really cares about someone’s imaginary social contracts. Maybe it’s time to accept a simple fact: once something goes on the web, it becomes public territory, and no one can pretend to control the flow of information any longer.

I agree with you on some points here. The problem is that these crawlers are hostile to the point of DDoSing sites.

So the problem is not that someone archives your publicly accessible data; the problem is that doing so either breaks your site or makes you pay for the excess traffic.
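The usual defense against that kind of overload is server-side rate limiting rather than trying to block crawlers outright. A minimal sketch of a per-client sliding-window throttle (the class name, limits, and IPs here are illustrative, not anything from the thread):

```python
import time
from collections import defaultdict, deque


class RateLimiter:
    """Allow at most `max_requests` per `window` seconds for each client."""

    def __init__(self, max_requests=10, window=1.0):
        self.max_requests = max_requests
        self.window = window
        self.hits = defaultdict(deque)  # client -> timestamps of recent requests

    def allow(self, client_ip, now=None):
        """Return True if this request is within the client's budget."""
        now = time.monotonic() if now is None else now
        q = self.hits[client_ip]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) < self.max_requests:
            q.append(now)
            return True
        return False
```

A crawler that ignores the limit just gets refusals instead of taking the site down, and the host stops paying for the excess traffic; real deployments would do the same thing at the reverse proxy (e.g. nginx's request-limiting module) rather than in application code.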

I think the web is now broken beyond repair. Commercialisation killed it, and the tech monopolies are all that’s left.

So I think small, invite-only, fully encrypted enclaves are all that is left, until someone comes up with a “new Internet” that can resist the “techbros”, but for now I don’t see that.

Also, I don’t see the Fediverse as a solution; it’s just under the radar for now, but if it gets bigger it will be co-opted and sunk.