I get the distrust of AI bots, but I think proposals to sabotage crawled data go too far and risk making a mess of the open web. There has never been a system like AI before, and old assumptions about what counts as fair use don’t really fit. But robots.txt still works! No need to burn everything down yet.
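As a sketch of the opt-out being described, a robots.txt that asks AI crawlers to stay away might look like the following. The user-agent tokens shown are ones publicly documented by OpenAI, Anthropic, Google (for AI training opt-out), and Common Crawl, but names vary by vendor and change over time, so verify against each company's crawler documentation:

```
# Ask documented AI crawlers not to fetch anything on this site.
# These tokens are assumptions based on vendor docs; check before relying on them.
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that robots.txt is purely advisory: compliance is voluntary, and a crawler can simply ignore it.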
The machine stops · Self-hosted sabotage as a form of collective action

@manton And yet in many cases robots.txt and other valid attempts to block AI bots are ignored. AI companies are not playing fair, and they are a clear and present danger to the open web.
Neatnik Notes · Gotta block ’em all

@paulrobertlloyd I see it a little differently. I'd like to see more effort by AI companies to credit sources, so that the balance between crawling and publishing is closer to what we have with search engines. AI companies are not going to go away, but we need to push them in the right directions.