Pleased to share a page and explainer for the AI tarpit project Science is Poetry, with legal statement, rationale(s), and a few deployment notes:

https://julianoliver.com/projects/science-is-poetry/

The page may grow a bit. Just wanted to get it out the door.

#AI #bigtech

If you're interested in learning more about implementations of resistance in this era of unchecked Big AI, direct action strategies and the techno-politics therein, be sure to check out ASRG's site (https://algorithmic-sabotage.gitlab.io/asrg/) and give them a follow here on Mastodon (@[email protected]).

They've put a lot of heartbeats and neurons - human stuff - into this area.

A newcomer frantically lost in the Caves of Babble.

----
3.215.221.125 - - [16/Apr/2026:06:14:25 +0200] "GET /noodles/images/primigenous/orchiepididymitis/Lord/havent.png HTTP/1.1" 200 111459 "-" "Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Amazonbot/0.1; +https://developer.amazon.com/support/amazonbot) Chrome/119.0.6045.214 Safari/537.36" "-"
----

Do you have an unused domain that you would be happy to donate to a counter-offensive against unchecked & unregulated AI crawlers that scrape human-made content to simulate & deceive for profit?

If so, please reply to this post. Your domain would become an entrypoint to the AI tarpit & Poison-as-a-Service project below, allowing the concerned public to choose to use it on their sites and helping make the project more resilient to blacklisting.

https://julianoliver.com/projects/science-is-poetry/

#ai #bigtech #tacticalmedia

A bit over half a million page reads a day by crawlers right now. Just to say the server is doing some good work.
Thanks all for the fine domains! I've decided to spin up a new VM and do all the site configs and TLS chains for them at once - more efficient, less prone to error. I will get onto that tomorrow (my time) and report back here.

I have only linked them here and on the landing page, and already it's gone nuts.

These are *solely* the new domains you've donated, all in one log. These do not pertain to the project domain.

I've started to harvest a list of AI crawler endpoint addresses for your blacklisting pleasure.

I'll try to keep it updated. I've been fastidious about ensuring I'm only pulling addresses tied to the known user agents, so as not to have any false positives.

https://scienceispoetry.net/files/parasites.txt

It is at the same path for all contributed domains.

For instance:

https://carrot.mro1.de/files/parasites.txt
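
If you'd rather block at the web server than the firewall, one way to consume the list could look like the sketch below. This is an assumption about how you might wire it up, not part of the project itself; the output path and reload step are illustrative.

---
#!/bin/bash
# Sketch: rewrite the published list as nginx "deny" directives.
# Paths are examples; run from cron if you want it kept fresh.
curl -s https://scienceispoetry.net/files/parasites.txt \
  | grep -E '^[0-9A-Fa-f.:]+$' \
  | sed 's/^/deny /; s/$/;/' > /etc/nginx/conf.d/parasites.conf

# only reload if the generated config parses
nginx -t && nginx -s reload
---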

It's approaching DoS at this point. This is just one of the VMs, and just OpenAI's parasite.

Threading's holding up, but the rate limits and burst settings need some more tuning. Sending 429s now to ask them to play nice.
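
For anyone tuning something similar, and assuming an nginx front end, per-client rate limiting with 429s looks roughly like this. The zone name, rate, and burst values are illustrative, not the live config; limit_req_status is what makes nginx answer 429 instead of its default 503.

---
# http{} context: a shared-memory zone keyed on the client address
limit_req_zone $binary_remote_addr zone=crawlers:10m rate=5r/s;

server {
    # allow short bursts, then start refusing
    limit_req zone=crawlers burst=10 nodelay;
    # answer 429 Too Many Requests instead of the default 503
    limit_req_status 429;
}
---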

To think the www was built for people.

And here we are

Even faster now.

Again, these pages are randomly generated, and each line is a page request from a crawler.

To think of the energy expended at a global scale, the waste. All the money, water & minerals thrown at this. These AI companies are near DoS'ing the human web as they deep-sea trawl our content.

Computationally, infrastructurally, & culturally, it's an obscenity.

- Mum, if you made a chain out of all the endpoint addresses of AI crawlers, how far would it reach?

- All the way to the moon, darling. All the way to the moon.

https://scienceispoetry.net/files/parasites.txt

Here's a thing I did in a couple of minutes to ban all the IPs in parasites.txt server-side. You could of course REJECT rather than DROP to send a message.

---
#!/bin/bash

# Ban every address in parasites.txt at the firewall.
# A "." marks an IPv4 address, a ":" an IPv6 address.
while read -r parasite; do
    if [[ "$parasite" == *"."* ]]; then
        iptables -I INPUT -s "$parasite" -j DROP
    elif [[ "$parasite" == *":"* ]]; then
        ip6tables -I INPUT -s "$parasite" -j DROP
    fi
done < /path/to/parasites.txt
---
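
With a list this long, one firewall rule per address gets slow both to install and to match. A variant sketch using ipset, so the firewall only ever holds two rules; the set names are examples, not anything from the project:

---
#!/bin/bash
# Same idea as above, but collect addresses into ipsets and match each
# set with a single rule per address family.
ipset create -exist parasites4 hash:ip
ipset create -exist parasites6 hash:ip family inet6

while read -r parasite; do
    if [[ "$parasite" == *"."* ]]; then
        ipset add -exist parasites4 "$parasite"
    elif [[ "$parasite" == *":"* ]]; then
        ipset add -exist parasites6 "$parasite"
    fi
done < /path/to/parasites.txt

# one DROP rule per family; re-running the loop only updates the sets
iptables  -C INPUT -m set --match-set parasites4 src -j DROP 2>/dev/null \
  || iptables  -I INPUT -m set --match-set parasites4 src -j DROP
ip6tables -C INPUT -m set --match-set parasites6 src -j DROP 2>/dev/null \
  || ip6tables -I INPUT -m set --match-set parasites6 src -j DROP
---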

@JulianOliver Just gonna set up a fail2ban jail with that list as a log.
@ned Sweet, lock the bastards out.
@JulianOliver Oh I have. Though, I'm seeing a lot less bot traffic since I made the first (hidden) link on my site a link to your tarpit.
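
A minimal sketch of such a jail, assuming the list is periodically fetched to a local file. The filter name, paths, and permanent ban are illustrative choices, not ned's actual config; datepattern = {NONE} tells fail2ban the "log" lines carry no timestamps.

---
# /etc/fail2ban/filter.d/parasites.conf
[Definition]
# each line of the fetched list is a bare address
failregex   = ^<HOST>$
datepattern = {NONE}

# /etc/fail2ban/jail.d/parasites.local
[parasites]
enabled   = true
filter    = parasites
logpath   = /var/log/parasites.txt
banaction = iptables-allports
maxretry  = 1
bantime   = -1
---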
Hi @ned @JulianOliver,
thanks for the 🍝 noodle recipe, I'll do likewise with 🥕 carrots.

@JulianOliver

Are you still looking for domains?

Somehow www.qaz.red is pointing at 95.216.76.85. Should I add an AAAA record, too?

@elithebearded Oh hey thanks! I'll add it today. An AAAA would be great if you have a moment.

@JulianOliver

Done. Copied from tender.horse, if it matters.

@elithebearded You are live and listed here :)

https://scienceispoetry.net/

Hi @JulianOliver,
indeed an act of #hygiene, blocking #bots: https://doi.org/10.17487/RFC8890 "The Internet is for End Users"

@JulianOliver "parasites" is a great name for this

@netopwibby @JulianOliver just joking/inspired here, but "parasitoids" would be more telling when referring to the AI training companies:
while a parasite has a vested interest in the survival of its host, parasitoids just use the host/prey for one phase of their life cycle, killing the host and moving on.
As AI is embedded (with little or no possibility to opt out) in all digital interactions, the open web can be bot-swarm-scraped to death before they move on to the next stage of exploitation, with direct feeds from apps, wearables, and appliances.

https://en.wikipedia.org/wiki/Parasitoid

p.s. THANKS for fighting back, and THANKS for involving others in the fight!

I have a couple. Do you have an A and AAAA record for me to point them at?

.co.nz namespace

@JulianOliver

@futuresprog Great, and yes.. precisely!

Here you go:

A: 95.216.76.85
AAAA: 2a01:4f9:2b:c83::2

@JulianOliver @futuresprog ah, cool to know, thanks. can config a few from this ...
@vortex @futuresprog Thanks a lot Adam! Please let me know when you're done. By DM is fine too.
@JulianOliver I've got a few underutilised domains that I'd happily loan to such a cause...

@lightweight Fantastic, thanks Dave!

A: 95.216.76.85
AAAA: 2a01:4f9:2b:c83::2

@JulianOliver ok - will set that up for outgoing.nz and unbreak.nz both of which I acquired for possible initiatives I no longer remember.
@lightweight Thanks again Dave, I'm going to have some homework today adding all these beaut new domains.

This has reminded me that I've got a site I set up in DigitalOcean, then broke the database, half set up again in QEMU, but then decided to move to Proxmox.

I will get onto it, really! I will!

@lightweight

@JulianOliver @lightweight do you need to be told the domains for it to work? Or can I just define a record in my DNS and job done?
@JulianOliver do they have to be root domains? Could I set up a subdomain or two for you?
@narthur A subdomain is just fine, yes!

@JulianOliver ok. So what do I do? Just point them at those IPs?

If they all share the same IP, won’t that make them easy to block?

@JulianOliver poetry.rainskit.com and poetry.narthur.com. But the 'http' versions just give an nginx welcome page, and the https versions don't have valid SSL. Maybe you want CNAMEs instead? Or reverse proxies? And do you need some other page to link to those domains?

@narthur For now they would all share the same IP.

Both domain and IP can naturally be filtered at the crawler end, but as numerous sites can be hosted behind one IP, it is my belief that they will drop a domain first.

Further, it's my hope to have instances of the project running on other dedicated hosts down the road.

@JulianOliver @narthur It seems to me that it would be fairly easy for them to decide that the content is junk and to treat everything coming from the same IPs accordingly. They probably keep a log of where the content used to train the LLMs comes from (maybe in some kind of hashed / pseudonymous manner), and likely have ways to reject content from the same server IF they detect a problem across several domains linked to that server. IMO a bunch of reverse proxies / various IPs could help: they might be dumb and make it easy to pollute their dataset, but probably aren't.

@monochrome @narthur

I am looking to propagate the tarpit to other hosts, but for now the bots just keep chewing, and have been for days, at one endpoint.

I suspect there are so many crawlers spawned, and so many resources at hand for this scraping, that it is largely automated with little oversight.

@JulianOliver I'm down. What's the first step?

@lazzarello

Thanks for joining in!

If you could just add these two records for the domain you wish to gift, and then let me know the domain (either DM or public) once done 🙂

A: 95.216.76.85
AAAA: 2a01:4f9:2b:c83::2
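
In zone-file terms those two records are one line each. The "poetry" label below is just an example; any name you point at the tarpit will do:

---
; illustrative BIND-style records
poetry  IN  A     95.216.76.85
poetry  IN  AAAA  2a01:4f9:2b:c83::2
---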

@JulianOliver That is brilliant. Tie up those slop bots! 👍
@JulianOliver I like the term PaaS. Well done.