AI companies are violating a basic social contract of the web and ignoring robots.txt
Put something in robots.txt that isn't supposed to be hit and that non-robots are unlikely to hit by accident. Log and ban every IP that hits it.
Imperfect, but can't think of a better solution.
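For example, the trap entry in robots.txt can be as small as this (the path here is made up; anything you never link to publicly works):

    User-agent: *
    Disallow: /crawler-trap/

Since the path appears nowhere else, any client requesting it has almost certainly read robots.txt and ignored the Disallow.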
If it doesn’t get queried, that’s the webscraper’s fault. You don’t need JS built into the robots.txt file either. Just add some line like:

    Disallow: /here-there-be-dragons.html

Any client that hits that page (and maybe doesn’t pass a captcha check) gets banned. Or even better, they get a long stream of nonsense:
server {
    server_name herebedragons.example.com;
    root /dev/random;
}
Better to use /dev/urandom though, as that one is non-blocking.
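To make the stream-of-nonsense idea concrete outside nginx, here’s a rough sketch in Python; the port is arbitrary and the handler is made up, but the trap path matches the example above:

    # Sketch: feed endless pseudo-random bytes to anything that requests
    # the trap page. Port and path are illustrative assumptions.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    TRAP_PATH = "/here-there-be-dragons.html"

    class TrapHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != TRAP_PATH:
                self.send_error(404)
                return
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            # /dev/urandom never blocks, unlike /dev/random
            with open("/dev/urandom", "rb") as noise:
                try:
                    while True:
                        self.wfile.write(noise.read(4096))
                except (BrokenPipeError, ConnectionResetError):
                    pass  # client gave up

    if __name__ == "__main__":
        HTTPServer(("", 8080), TrapHandler).serve_forever()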
People not intending to follow it is the real reason not to bother, but it’s trivial to track who downloaded the file and then hit something it asked them not to.
Like, ten minutes of work to do right. You don’t need JS to do it at all.
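A rough sketch of that tracking in Python, assuming an access log in common log format (the log location and trap path are assumptions):

    # Sketch: list IPs that downloaded robots.txt and still requested
    # the disallowed trap path. File paths here are assumptions.
    import re

    LOG_FILE = "/var/log/nginx/access.log"
    TRAP_PATH = "/here-there-be-dragons.html"

    # Common log format: client IP is the first field; the request path
    # is the second token inside the quoted request line.
    line_re = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|HEAD) (\S+)')

    fetched_robots, hit_trap = set(), set()
    with open(LOG_FILE) as log:
        for line in log:
            m = line_re.match(line)
            if not m:
                continue
            ip, path = m.groups()
            if path == "/robots.txt":
                fetched_robots.add(ip)
            elif path == TRAP_PATH:
                hit_trap.add(ip)

    # Clients that read the rules and broke them anyway: ban candidates.
    # (Request order is ignored here for simplicity.)
    for ip in sorted(fetched_robots & hit_trap):
        print(ip)

Feed the output to whatever firewall or blocklist you already use.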