Working on some poison-as-a-service (PaaS). Looking to launch in the next few days.

#AI #enjoythinking

Also working on a zip bomb, to randomly scatter in among the links.

Thanks to @anaiscrosby I came across this excellent method, using LZ77:

https://natechoe.dev/blog/2025-08-04.html

TBH I was just going to `dd if=/dev/urandom` my way to a titanic, RAM-flooding *.gz, but I'm getting great results with the above, with bonus site-data honey inside to keep bots on the chase.
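For anyone wanting the quick-and-dirty version first: a minimal sketch of the `dd` route (filename and sizes are placeholders). One catch with urandom is that random bytes barely compress, so the on-disk file would be as titanic as the decompressed one; the leverage comes from feeding gzip something repetitive like /dev/zero instead:

```shell
# Sketch only: /dev/zero compresses at close to gzip's ~1030:1 ceiling,
# so ~100 KiB on disk inflates to 100 MiB in the client. Scale count up
# for a real bomb.
dd if=/dev/zero bs=1M count=100 status=none | gzip -9 > bomb.gz
gzip -l bomb.gz   # compressed ~100 KiB, uncompressed 104857600
```

Serve the file pre-compressed with a `Content-Encoding: gzip` response header so the scraper's HTTP client inflates it on arrival. If I read the linked post right, it goes much further by hand-crafting the LZ77/DEFLATE stream, which is how it blows past gzip's ratio ceiling.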

natechoe.dev - A googol byte zip bomb that's also valid HTML

@anaiscrosby After seeing ChatGPTBot blow 123 seconds on my drip-feed poison tarpit and then never come back, I got to reading about how modern LLM scrapers might employ mechanisms to detect tarpits and blacklist them.

While researching, I came across this tarpit-evading scraper, which offers some interesting insight into how modern LLM scrapers might do it.

https://github.com/Draconiator/Ipema

This gives me pause and has me looking at other solutions for counter-detection.

The GeoCities CSS is going nowhere.

GitHub - Draconiator/Ipema: A script designed to counter the Nepenthes tarpit - designed with the help of A.I. itself.

@anaiscrosby Running a non-Markov tarpit for half an hour on one public link, and already have Claude lost in my swamp. Waiting to see if it runs into my ZIP bomb

---
216.73.216.124 - - [07/Apr/2026:03:28:49 +0200] "GET /tarpit/until/same/drive/harmattan_leftmost_intranscalency_few_ministries_few_between HTTP/2.0" 200 10132 "-" "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ClaudeBot/1.0; [email protected])" "-"
---
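Roughly the idea behind those URL paths and pages, for the curious (the wordlist, paths, and markup here are placeholders; the real generator is meatier):

```shell
# Hypothetical babble-page sketch. Each page is random words plus links
# to further randomly named /tarpit/ URLs, so the maze is effectively
# infinite and non-repeating.
W="harmattan leftmost intranscalency ministries drive same until between few"
pick() { echo "$W" | tr ' ' '\n' | shuf -r -n "$1"; }  # -r allows repeats
body=$(pick 40 | tr '\n' ' ')
link=$(pick 4 | paste -sd_ -)
printf '<html><body><p>%s</p><a href="/tarpit/%s">next</a></body></html>\n' \
  "$body" "$link" > page.html
```

Because the paths are generated rather than enumerated from anything, a legitimate visitor would never land on one by accident - which is also what makes the later stray hits interesting.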

@anaiscrosby It hit it, but I guess it decompressed in a thread. It's a 127M archive that decompresses to 128GB. The bot kept scraping for a bit and then dropped off. Hard to tell whether it was discouraged.

The strange thing is that soon after, other IPs were reaching statistically non-guessable, randomly generated URL paths, without touching the webroot or any other tarpit URL first. They all had iOS UA strings (readily forged).

It is quite wild how persistent Claude is, and an eerie feeling watching it just roam ever deeper into the endless rhizome of generated linked pages. It's been like this for a couple of hours now, and is not touching any other pages on the server, solely those in the tarpit. So that PoC does seem to check out.

CPU spikes are worrying, so I'll need to work the threading a bit and provision a couple more cores.

It has a rhythm: ~10-15s of gorging, then a pause of 20-30s, and then at it again.

Claude is still going. There is now a robots.txt with a clear `User-agent: ClaudeBot [...] Disallow: /`, and it is being ignored.
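The file, give or take (entries trimmed; the real one has more):

```
# illustrative robots.txt; a blanket Disallow for the named agent
User-agent: ClaudeBot
Disallow: /
```

Per the robots.txt convention this should stop ClaudeBot at the door entirely, which makes the continued crawling notable.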

I will say there's a contradiction in setting up a tarpit like this. Sure, these crawlers are DoSing anyway - they're uninvited, ultra-demanding company - but when you run an infinite maze it feels like volunteering for an exhaustion contest.

My end is CO2e neutral, or at least on traceable renewables. But the other end, who knows. That dimension of it cannot be avoided.

ClaudeBot crashed my tarpit. Working on some rate limiting at the reverse proxy to buy me time to improve the threading.
Rate limit in place. Seems stable, and a little less like siege warfare now. ClaudeBot, at least, is still very much captive.
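The rate limit is standard reverse-proxy fare; a hedged sketch if you want the same (nginx assumed here, and the zone name, rate, and upstream are invented):

```nginx
# Throttle per client IP across the tarpit paths.
limit_req_zone $binary_remote_addr zone=tarpit:10m rate=30r/m;

server {
    location /tarpit/ {
        # small burst keeps the drip-feed character: the bot still gets
        # pages, just never fast enough to crash anything
        limit_req zone=tarpit burst=10 nodelay;
        proxy_pass http://127.0.0.1:8080;
    }
}
```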

Still solely ClaudeBot, a page every 2 seconds, but from a new src IP of 216.73.216.37. The crawler at that addr has been at it all night: a ceaseless zombie walk through infinitely-hyperlinked, randomly generated babble.

My non-Markov text seems far stickier to ClaudeBot.

In fact, ClaudeBot has now stopped reading; it switched to Anthropic's Claude 'searchbot' during the night. It seems it either gave up or decided to respect the robots.txt.

I misread the logs a few hours ago: the above address is in fact that of "[email protected]", not "[email protected]".

An interesting development.

I moved the project to a giant of a server, getting ready for launch.

Within 15 mins (no exaggeration) of bringing up the reverse proxy, with one link placed in a wiki, OpenAI's GPTBot had found the link and hooked into the maze. Very aggressive, so there's still some rate limiting and threading massage to do before it's good to go.

GPTBot is still at it, throughout the night. It has a different pattern to ClaudeBot: it pack-feeds from 2 or 3 different endpoints and is slower, as though processing content in some way in the course of each page scrape. Its rhythm changes too, though I've not yet looked for correlation between payload size or complexity and the pattern. ClaudeBot is far faster, with an even step as it moves through the babble; it seems solely concerned with collection.
@JulianOliver I noticed both bots are blind to JavaScript files; they only hoover up HTML and follow links from it (ignoring meta robots nofollow tags, etc.)