FOSS infrastructure is under attack by AI companies
Great write-up by Niccolò.
I actually agree with the commenter on that post: the lack of quoting and the use of images is pretty bad, especially for screen readers (which I use), and not directly linking sources (though they are made clear regardless) is a bit of a pain.
There are two prongs to this:
Caching is an optimization strategy used by legitimate software engineers. AI dorks are anything but.
Crippling information sources outside of the service means information is more easily “found” inside the service.
So if it was ever a bug, it’s now a feature.
They’re absolutely not crawling it every time they need to access the data. That’s an incredible waste of processing power on their end as well.
In the case of code, though, that does change somewhat often. At the bare minimum, they’d still need to check whether the code has been updated.
If you’re wondering if it’s really that bad, have this quote:
GNOME sysadmin, Bart Piotrowski, kindly shared some numbers to let people fully understand the scope of the problem. According to him, in around two and a half hours they received 81k total requests, and out of those only 3% passed Anubis’s proof of work, hinting at 97% of the traffic being bots
And this is just one quote. The article is full of them, with people all over reporting that they can’t focus on their work because either the infra they rely on is constantly down, or because they’re the ones fighting to keep it functional.
This shit is unsustainable. Fuck all of these AI companies.
Sad there’s no mention of running an Onion Service. That has built-in PoW for DoS protection, so you don’t have to be an asshole and block all of Brazil or China or Edge users.
Just use Tor, silly sysadmins
Proof of work is what those modern captchas tend to do, I believe. Not useful for stopping account creation and such, but very effective at stopping crawlers.
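For anyone curious what that actually looks like, here’s a toy sketch of a hash-based proof-of-work challenge in Python (my own illustration, not how Anubis or any particular captcha vendor implements it): the server hands out a random nonce and a difficulty, the client grinds SHA-256 hashes until it finds one with enough leading zero bits, and the server verifies the answer with a single hash.

```python
import hashlib
import itertools
import os

# Hypothetical difficulty: 18 leading zero bits means ~260k hashes on average per request.
DIFFICULTY_BITS = 18

def issue_challenge() -> str:
    """Server side: hand the client a random nonce to work on."""
    return os.urandom(16).hex()

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: str, difficulty: int = DIFFICULTY_BITS) -> int:
    """Client side: brute-force a counter until the hash is 'hard' enough."""
    for counter in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return counter

def verify(challenge: str, counter: int, difficulty: int = DIFFICULTY_BITS) -> bool:
    """Server side: a single hash to check, no matter how long the client ground away."""
    digest = hashlib.sha256(f"{challenge}:{counter}".encode()).digest()
    return leading_zero_bits(digest) >= difficulty

if __name__ == "__main__":
    challenge = issue_challenge()
    answer = solve(challenge)
    print("solved with counter", answer, "verified:", verify(challenge, answer))
```

A real person’s browser pays that cost once and never notices; a crawler hammering tens of thousands of pages pays it on every request, which is the whole point.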
Have the same problem at work, and Cloudflare does jack shit about it. Half that traffic uses user agents that have no chance of even supporting TLS 1.3: I see some IE5, IE6, Opera with their old Presto engine, I’ve even seen Netscape. Complete and utter bullshit. At this point, if you’re not on an allow list of known common user agents or logged in, you get a PoW captcha.
If I was a bot author intent on causing misery I’d just use the user agent from the latest version of Firefox/Chrome/Edge that legitimate users would use.
It’s just a string controlled by the client at the end of the day, and I’m surprised the GPT and OpenAI bots announce themselves in it. Associating meaning with it on the server side is always going to be problematic if the client controls the value.
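To make that concrete, here’s how trivially the header can be faked with nothing but the Python standard library (the Firefox version string below is just an example value I made up, not anything a real bot is known to send):

```python
from urllib.request import Request, urlopen

# Any client can claim to be a current desktop Firefox; the server only ever
# sees this string, not what is actually making the request.
SPOOFED_UA = "Mozilla/5.0 (X11; Linux x86_64; rv:125.0) Gecko/20100101 Firefox/125.0"

req = Request("https://example.org/", headers={"User-Agent": SPOOFED_UA})
with urlopen(req) as resp:
    print(resp.status, len(resp.read()), "bytes fetched while pretending to be Firefox")
```

So user-agent filtering only catches the bots that choose to identify themselves; anything that actually wants to hide will blend right in with legitimate browsers.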
In a blog post called “AI crawlers need to be more respectful”, they claim that blocking all AI crawlers immediately decreased their traffic by 75%, going from 800GB/day to 200GB/day. This saved the project around $1,500 a month.
“AI” companies are a plague on humanity. From now on, I’m mentally designating them as terrorists.
One of my sites was close to being DoS’d by OpenAI’s crawler, along with a couple of other crawlers. Blocking them made the site much faster.
I’ll admit the software design didn’t exactly help (this is FOSS software used for hundreds of sites, and the issue likely applies to similar sites), but their sheer rate of requests turned this from pointless queries into a negligent security threat.
IP based blocking is complicated once you are big enough or providing service to users is critical.
For example, if you are providing some critical service such as health care, you cannot have a situation where a user can’t access health care info unless you have hard proof that they are causing an issue and that you did your best not to block them.
Let’s say you have a household of 5 people with 20 devices on the LAN; one can be infected and running some bot, and you do not want to block all 5 people and 20 devices.
Another example: double NAT, where you could have literally hundreds or even thousands of people behind one IP.
Let’s say you have a household of 5 people with 20 devices on the LAN; one can be infected and running some bot, and you do not want to block all 5 people and 20 devices.
Why not, though? If a home network is misbehaving, whoever is maintaining that network needs to 1) be aware that there’s something wrong, and 2) fix it on their end. Most homes don’t have a Network Operations Center to contact, but throwing an error code in a web browser is often effective since someone in the household will notice. Unlike institutional users, home devices are not totally SOL when blocked, as they can be moved to use cellular networks or other WiFi networks.
At the root of the problem, NAT deprives the users behind it of agency: they’re all in the same barrel, and the maxim about bad apples will apply. You’re right that it gets even worse for CGNAT, but that’s more a reason to refuse all types of NAT and prefer end-to-end IPv6.
IP based blocking is complicated once you are big enough
It’s literally as simple as importing an ipset into iptables and refreshing it from time to time. There are even predefined tools for that.
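For illustration, a refresh along those lines could look something like this (the blocklist URL and set name are placeholders I made up; it needs root and assumes the `ipset` and `iptables` binaries are installed):

```python
import subprocess
import urllib.request

# Hypothetical plain-text blocklist, one CIDR per line.
BLOCKLIST_URL = "https://example.org/ai-crawler-cidrs.txt"
SET_NAME = "ai_crawlers"

def run(*cmd: str) -> None:
    subprocess.run(cmd, check=True)

def refresh_blocklist() -> None:
    with urllib.request.urlopen(BLOCKLIST_URL) as resp:
        lines = resp.read().decode().splitlines()
    cidrs = [line.strip() for line in lines
             if line.strip() and not line.strip().startswith("#")]

    # Load the fresh list into a staging set, then atomically swap it with the
    # live one so the iptables rule never sees a half-loaded blocklist.
    run("ipset", "create", SET_NAME, "hash:net", "-exist")
    run("ipset", "create", f"{SET_NAME}_new", "hash:net", "-exist")
    run("ipset", "flush", f"{SET_NAME}_new")
    for cidr in cidrs:
        run("ipset", "add", f"{SET_NAME}_new", cidr, "-exist")
    run("ipset", "swap", f"{SET_NAME}_new", SET_NAME)
    run("ipset", "destroy", f"{SET_NAME}_new")

if __name__ == "__main__":
    refresh_blocklist()
    # One-time iptables rule consulting the set, e.g.:
    #   iptables -I INPUT -m set --match-set ai_crawlers src -j DROP
```

Run it from cron every few hours and the firewall keeps up with whatever list you trust, with no application changes needed.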
LLM scraping is a parasite on the internet. In the actual ecological definition of a parasite: they place a burden on other unwitting organisms (computer systems, in this case), making it harder for the host to survive or carry out its own necessary processes, solely for the parasite’s own benefit while giving nothing to the host in return.
I know there’s an ongoing debate (both in the courts and on social media) about whether AI companies should have to pay royalties for their training data under copyright law. But I think they should at the very least be paying for the infrastructure they use while collecting the data, even free data, given that it costs the organisation hosting said data real money and resources to be scraped, and it’s orders of magnitude more money and resources than serving that data to individual people.
The case can certainly be made that copying is not theft, but copying is by no means free either, especially when done at the scales LLMs do.