Some updates on my website maintenance woes:
Starting last July, I built a new wiki for my translations of German folk tales. Soon after, it began to experience frequent, hours-long outages. I researched possible causes and eventually concluded that the primary culprit was the sheer volume of requests from anonymous #scraper bot networks, desperate for new scraps of data to feed into their #LLM models - so many that the wiki simply couldn't cope. Even upgrading my hosting plan _twice_ last September only made the outages less common - it didn't stop them.
In March, I drastically reduced the amount of work I did on the wiki, as it was functionally complete - I had added more than 700 folk tales by that stage. Sure, there are always more tales to add - I haven't stopped translating them, after all. But I am now adding 10-20 tales per month, not 100+.
And funnily enough, I haven't noticed any major outages over the past month - or even minor ones. I guess the scraper bot networks noticed that I no longer have much new data to steal, and have largely moved on to fresh prey.
So, what can we conclude from this?
If you are maintaining a website that produces lots of new content on a regular basis, you _will_ get hammered by these scrapers. robots.txt will do nothing - these bots simply ignore it - and blocking by IP won't help either, since they use anonymous, ever-changing IP addresses. Maybe you can thwart them with #Cloudflare or similar technologies, which I haven't tried out (I am a rank beginner when it comes to website administration, to be frank).
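For the curious, here is a minimal sketch (in Python, purely illustrative - this is not how my wiki is configured, and the names and numbers are made up) of the kind of per-IP token-bucket throttling that reverse proxies and CDN services apply, and of why ever-changing IP addresses defeat it:

```python
import time
from collections import defaultdict

# Hypothetical rate-limit settings - not taken from any real setup.
RATE = 1.0    # tokens refilled per second, per IP
BURST = 10.0  # maximum bucket size per IP

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_ip: str) -> bool:
    """Return True if this IP may make another request right now."""
    bucket = _buckets[client_ip]
    now = time.monotonic()
    # Refill the bucket in proportion to the time elapsed since the last request.
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False  # over the limit: the server would answer with HTTP 429

# The weakness my wiki ran into: a scraper network that rotates through
# thousands of addresses gets a fresh, full bucket for every new IP,
# so per-IP limits barely slow it down.
```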
Otherwise, you will either have to slow down the publication of new content, pay lots of money for an oversized hosting plan, or live with periodic outages until the #AIBubble bursts and there is no longer a trillion-dollar business case for scraping every website a thousand times a month.
https://wiki.sunkencastles.com/wiki/Main_Page