my ipv4-only client
-> ipv4-to-ipv6 sniproxy port 443
-> ipv6-only vm
-> haproxy to conditionally unwrap proxy protocol
-> prosody xmpp server
... experiment is working ✨🤩✨
calling it now: even though haproxy has lots of sharp edges, i like it (or at least its configuration mechanism) way more than caddy's
i think i'm going to be stuck using both for a while
migrating xmpp services from my old vps to my colocataires vm is the last remaining task before i can delete the old account (and stop paying for it)
(dreamcompute hasn't been bad, but i like colocataires way better)
now that i've proven to myself that it can work over port 443 and ipv6-only, it's time to configure it properly
next: see if i can move the service over without xmpp clients complaining
but first, sleep 😴
all the clients we usually use on android and linux are now connecting to my new xmpp server at colocataires, with no settings changes, on the https port so it looks like website traffic 😮
i don't have stun/turn turned back on yet, so voice and video calls probably won't work just yet
i don't have the conversations.js web client set back up yet but that's mostly for emergency use
this may be success enough to decommission the old servers! 🎉
https://conversejs.org/ is back up on my domain and working just fine 🎉
old dreamcompute vps is turned off, sitting there just in case for a bit, then i can delete my account 🎉
(i don't hate dreamcompute but i like https://colocataires.dev way better)
think i'm just gonna leave stun/turn not running for now. if both parties are on ipv6-capable networks, calls should work. let's see how often that's an issue
if i want to set up stun/turn, i should abandon my somewhat irrational ipv6-purist intentions and pay the loonie for an ipv4 address
if i'm gonna do that then maybe i can keep almost all of my vm ipv6-only, except for one container that runs coturn?
if i'm gonna do that then maybe i can figure out how to make that container, which only does whatismyipaddress duty and proxies video calls, shareable with my datacenter neighbors
my prosody xmpp setup on its new server mostly works great (assuming an ipv6-capable network) but somewhere i've introduced a timeout that closes the connection after a fixed number of seconds
pretty sure it has something to do with haproxy, though from a skim of the docs these timeouts are supposed to apply to the initial connection setup, not inactivity
also after a chat with someone more knowledgeable i think i'm resigned to eventually acquire that ipv4 address
i think i might have fixed haproxy closing the socket on my long-lived idle xmpp connections by setting timeout tunnel 1h
i'll check again in several hours to be sure
wish i knew more precisely why this fixes the issue. are clients and/or the server sending keepalives at some interval between 10m and 1h? is the tcp keepalive stuff not being used? someday perhaps but more likely i'll leave it unexamined as long as it works
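(for my own notes, the relevant shape of the config; a sketch, where only timeout tunnel 1h is the actual change and the section layout plus the other values are placeholders:)

```
# haproxy.cfg sketch
defaults
    mode tcp
    timeout connect 5s
    timeout client  10m
    timeout server  10m
    # applies once a bidirectional tcp stream (like xmpp c2s) is
    # established, overriding the client/server idle timeouts that
    # were presumably killing my idle connections
    timeout tunnel  1h
```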
recent impulses have been like
"this is too long for toots i should blog about it"
"but i don't want to put anything new or deploy a new website until i have something installed to block the scrapers like iocaine"
"so let's install it"
"ehh not enough brain rn, maybe next time"
so i set up some rules to block a good portion of bots (until they smarten up)
which frees me up to actually post some blog 🎉
i'll install iocaine properly after that
i want to set up a photo storage server
photoprism seems like a good browsing interface but what i'm more concerned about rn is the upload
so a client on each android phone that backs up photos to the server
but i want to be able to turn the server off for a while, as a normal/expected thing one does, and not have the clients moan about it. they should just retry occasionally until the server comes back online
anybody have a setup like this running already?
musing about how to do high(ish)-availability systems on the cheap, goblin style
...
but wait i have some tradeoffs that might enable some tricks:
so in this specific case i think i could do client-side js failover. maybe even a service worker?
wait, how often does the whole region disappear anyway?
that was never a concern multiple employers ago when i got to help out at the datacenter
they did redundant everything inside the rack, regular cable-yank failover tests and everything, but no geographical redundancy iirc
maybe i'll inquire about a vm on another host within the same rack when i get closer to dragging clients on board and just forget about higher availability than that for now
embarrassed to admit that i've today taken one halfhearted step toward learning wtf snmp is by way of (re)reading the rrdtool tutorial
no, not smtp the email sending thing. snmp the monitoring of hardware status thing
all because i want to put up some pretty charts of computer doing inscrutable computer thing
(accuracy? that's like number seven or twelve down the list of nice-to-haves)
well, actually,
my ipv4-only client
-> colocataires' ipv4-to-ipv6 sniproxy on port 443
-> my ipv6-only vm
-> haproxy to unwrap proxy protocol
-> prosody xmpp server
... experiment is not working ✨😞✨
so:
did it never work and i mistakenly thought that it did?
or
did it work at first but i broke it?
an easy fix would be to get an ipv4 address which obviates the need for sniproxy. but dammit before i do that i want answers: is this setup possible? if so, what'd i mess up?
(maniacal cackling)
i have finally got iocaine installed. wasn't even hard, just needed to sit down and do the steps and brain is real good at not that sometimes
hooked it up to the apt-installable anarchism faq for its markov corpus and the biggest canadian flavored apt-installable wordlist i could get
feels good. like the invulnerability you get from your favorite winter gloves and jacket before going out to play in the blizzard
now it's safe to blog again 🎉
i um only just now noticed that the apt-installable anarchism faq, in uncompressed markdown format, which i fed to iocaine for its markov corpus,
is twelve megabytes. of text.
almost 1.9 million words.
iocaine seems to be doing just fine so far
accidentally set caddy to syslog every request sent to iocaine 3 and oh gosh my website is pumping so much poison markov trash into chatgpt and claude rn 😂 😂
and it's using less cpu and memory than systemd-journald to do so
might need to look into setting bandwidth limiters on this thing
i'm still casting around for anti-cloud(flare) mechanisms of regional failover. like if the cable to the datacenter i use gets cut, or there's political upheaval, how to automatically shunt traffic to a different datacenter faster than a dns update would propagate through caches
i'm vaguely aware of this technology called anycast but i don't know much
https://grebedoc.dev/ uses https://rage4.com/ to do it
yeah eat it, ai scraper assholes
(gradually improving my monitoring, iocaine stats newly added to my collectd/rrdtool dashboard)
tiddlywiki doesn't come with a basic to-do feature, to make checkboxes and tick them off without having to tediously edit the page and type some [x]s
but it does have a plugin mechanism. found two plugins (both by the same author) that do checklists: Kara and Todolist
installation instructions made me nervous though, since i'm using tiddlyPWA, which is rather different on the backend...
i haven't put any rate limiters on here yet (i definitely will), but seems like claude and chatgpt limit themselves to 25 requests per second to my websites. i wonder how they picked that number, and if they'll ramp it up. and if i ratelimit, will they send more requests from other ip addresses. etc.
feels so good to know these assholes' language models are chugging down low-effort ungrammatical poison after ignoring my robots.txt
should i do traffic shaping using tc, haproxy, or shove yet another plugin into caddy?
should i slow the response down to a trickle for all the llm scrapers, or randomly drop their connections? 😈
despite it being part of linux since version 2.2, which is about as long as i've been daily-driving it, i hadn't heard of tc until this past month. that's "traffic control," a tool to control the kernel's network traffic limiting, smoothing, and prioritization
and for a command with such a tiny name wow it's a lot
i only want to restrict the bandwidth of one process so i think i'll look for easier mechanisms before i attempt to swallow this whole burrito
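for flavor, the kind of incantation tc wants just to cap one whole interface with a token bucket filter (device name and numbers are examples, not my config):

```
# cap all egress on eth0 to 5 mbit/s
tc qdisc add dev eth0 root tbf rate 5mbit burst 32kbit latency 400ms
# inspect, then undo
tc -s qdisc show dev eth0
tc qdisc del dev eth0 root
```

and that's interface-wide; pinning it to a single process means layering cgroup or fwmark classification on top, hence the appetite for something simpler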
til: trickle, a lightweight userspace bandwidth shaper
could i just wrap iocaine with this and be done?
... except trickle doesn't work on statically linked executables, like iocaine. womp womp
i guess i could do a trick like wrap socat with it, then talk to iocaine through that,
but that feels more complicated than just switching back to haproxy and using its builtin traffic shaping features
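roughly what that socat trick would look like (ports are hypothetical; trickle rates are in KB/s):

```
# trickle shapes socat, socat relays to the statically linked daemon
trickle -s -u 64 -d 64 \
  socat TCP-LISTEN:42070,fork,reuseaddr,bind=127.0.0.1 TCP:127.0.0.1:42069
# then point the reverse proxy at 127.0.0.1:42070 instead of iocaine
```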
what bits of haproxy, lighttpd, nginx, caddy, static-web-server should i string together?
requirements:
dang i gotta draw up a feature matrix or something
it's pretty weird that it took me this long to actually do but
tonight i have set up for the first time a program running on a computer inside my home, that people may access like a normal website, without learning my cable modem's ip address in the process, and if someone starts ddosing me i can just unplug and let the household continue watching videos unaware
(i'm having my @colocataires vps proxy traffic through a tailscale vpn to my closet fileserver)
safe(r) home-hosting by reverse proxy from a little computer in a datacenter is one of those things that seems like complex esoteric engineering from afar
but once you've experienced it, and then again when you've set it up yourself, all of a sudden it makes sense and is totally normal and a whole mess of possibilities for what you can cheaply and casually build on the internet blasts wide open
like the first time you experience nerd astral projection
llm scrapers ignoring my robots.txt and pounding on my small website 28 times per second, 24/7. 600kbps of my available bandwidth wasted just on markov trash
it's easy to imagine how they'll ddos any service that does a bit of compute on each request
it's not super exciting but if you're the kind of weirdo who wants to look at my vm's gauges, they are viewable here:
https://telemetry.orbital.rodeo/
i have been cobbling it together using collectd, rrdtool, and scripts instead of the far more reasonable and popular prometheus / grafana combo. because it might be more lightweight? haven't measured
for now it updates only when i run the command, so don't sit there wondering
no light mode or explanatory text (yet) soz
oof, it's somewhat heavy though. went from about 3% avg cpu use to about 6%
the throttling that haproxy does just gets buffered up by caddy in front of it, so the result is a long initial delay followed by a fast burst of data. basically just added latency
which could probably be implemented more simply with a sleep statement somewhere
i wonder what other strategies i can use to slow down crawlers. thinking random connection drops or http errors 429, 402, and 451
problem: once detected, how best to slap back at ai scrapers? return poison quickly? tarpit? throttled poison drip? drop their ip's packets at the firewall?
idea: drop packets during business hours to free up bandwidth for legit visitors; fast poison otherwise to collect ip addresses for next day's ip ban. "party all night sleep all day" strategy
lots of bot traffic hitting port 80 (http) on my vm just to get redirected to port 443 (https) where they get a "go away, bot" error
who am i keeping port 80 open for?
who types in "orbital.rodeo," lands on http, and doesn't know to or can't try https instead?
many people use hsts and abandon 80
caddy auto-magically puts a redirect on 80 for my sites but i'm increasingly annoyed by its magic. wanna go back to haproxy
think i'll shut it
oh, whoopsie
if an ipv4->ipv6 proxy is telling my webserver the clients' ipv4 addresses using proxy protocol, blocking those ipv4 addresses at the webserver firewall isn't going to do much 🤦
i have a v4 address now, i was just lazy about reconfiguring dns to send v4 web traffic direct to my vm instead of through the v4-v6 proxy. time to get on that
🤔 i wonder how many innocents i'll accidentally shut out if i adopt a policy of, "any /24 prefix with 3 or more scrapers within it dooms the lot"?
🤔 i could set up a "pls let me back in" automation. tell me my biceps are eleven out of ten in this web-form and you get added to an inclusion list that takes effect before the block list
i could implement both of those defense mechanisms
reduce bookkeeping on my part by being a bit overeager about blocking whole prefixes instead of individual ip addresses
definitely want to do something like @alex's butlerian jihad where i block all networks from any ASN abusing my sites
but also, have a cooldown that sends traffic from blocked prefixes to a "let me back in" form that allowlists individual addresses
haha oops i accidentally banned my own ip. fixed it but guessing i'll have to flush the ban lists and rebuild in case i caught any more i shouldn't have
one super nice thing i'm doing this time around is using a wireguard-based vpn for all my ssh'ing. so even when i blocked my own ip address my ssh session was unaffected and i could fix it. and zero log spam from vulnerability scanners constantly trying the door 😌
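the relevant bit is just making sshd unreachable except over the tunnel; one way, with an example wireguard address:

```
# /etc/ssh/sshd_config: listen only on the vpn interface address
ListenAddress 10.8.0.1
```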
i want to block any requests from google and facebook; also i want to block any isp who would tolerate scrapers
the database of ip range ("prefix") assignments is downloadable but it's big. 590 entries just for as32934 (facebook). too big to just dump into the firewall
but there's often nothing between multiple records for any given asn. maybe i could treat that as a single range, which would let me express the set of ranges to block more concisely 🤔
ooo python's builtin ipaddress library has collapse_addresses and address_exclude functions, and pyasn uses those. if i study those functions i think i should be able to come up with a "collapse_addresses" variant that absorbs unallocated gaps between allocated subnets for a more concise specification
https://github.com/python/cpython/blob/3.14/Lib/ipaddress.py#L304
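a sketch of what i mean, stdlib only (the function name and the max_gap knob are mine):

```python
import ipaddress

def collapse_with_gaps(nets, max_gap=256):
    """like ipaddress.collapse_addresses, but also merge networks
    separated by up to max_gap unallocated addresses, so an asn's
    hundreds of scattered prefixes compress into a few ranges"""
    nets = sorted(ipaddress.collapse_addresses(nets),
                  key=lambda n: int(n.network_address))
    ranges = []  # [first, last] address pairs as integers
    for net in nets:
        first, last = int(net.network_address), int(net.broadcast_address)
        if ranges and first - ranges[-1][1] - 1 <= max_gap:
            ranges[-1][1] = max(ranges[-1][1], last)
        else:
            ranges.append([first, last])
    out = []
    for first, last in ranges:
        out.extend(ipaddress.summarize_address_range(
            ipaddress.ip_address(first), ipaddress.ip_address(last)))
    return out
```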
haha while searching ASNs for "AWS" to block i learned of the existence of AS214513 EEPYPAWS and AS401962 CUDDLE-PAWS
i wonder what other fun autonomous system names are out there, and what they're doing
when it often feels like the internet is just six giant websites consuming everything, it's great to feel lost in a massive database of tiny organizations doing a niche, highly technical thing like registering an autonomous system for internet shenanigans
so, iiuc, i shouldn't be blocking crawl bots from google, facebook, etc. network ranges, because then i can't feed them poison urls, whereupon i won't be able to identify the more carefully-disguised requests from residential botnets masquerading as browsers
but i do want to very much limit the bandwidth they may consume
so instead of
ip saddr @miscreants drop
let's try
ip saddr @miscreants limit rate over 1/second drop
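in context, that rule would sit roughly like this (table and chain names are placeholders for whatever the real ruleset uses):

```
table inet filter {
    set miscreants {
        type ipv4_addr
        flags interval
    }
    chain input {
        type filter hook input priority filter; policy accept;
        # don't drop outright, just cap them to a trickle
        ip saddr @miscreants limit rate over 1/second drop
    }
}
```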
update: rate limiting packets at the firewall from networks controlled by my biggest bot offenders (facebook, microsoft, google, apple) has accomplished exactly what i wanted: i continue logging and feeding them poison, but their bandwidth is greatly reduced
my implementation might be causing their connections to close mid-request but i don't mind very much. lots of sockets (50) in SYN-RECV state compared to ESTAB (3) rn which could eventually be a problem
oh wait whoops. lots of sockets in SYN-RECV wasn't the fault of my inexpert ratelimiting. an asn outside my "big tech" filtered set was sending me SYN packets and not following up with--
oh my gosh, was i being syn-flooded? was someone angrily trying to deny service to the maybe 3 legit people that want to see my website??
anyway i added them to the limiter so now they're holding ~3 sockets in SYN-RECV state instead of ~40
graph: sockets in SYN-RECV state. if i understand correctly, this occurs when somebody says "hey let me connect" and my server says "ok you can connect" and then they just never reply
eventually it stops waiting but until then the socket is in use. so if some miscreant fires off a ton of "hey let me connect" without replying they can clog up the pipes
in previous experiments this line averaged either 40 or 0
love being able to just turn the firehose off
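(the counter behind that graph line is basically:)

```
# count tcp sockets stuck half-open
ss -tn state syn-recv | tail -n +2 | wc -l
```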
setting sysctl net.ipv4.tcp_synack_retries down from the default of 5 to 2 or even 1 seems to have significantly cut down on the sockets sitting around in SYN-RECV state.
5 means the server keeps retrying its SYN-ACK reply for about a full minute after a client sends a SYN and goes silent. 2 closes the half-open socket within a few seconds if the tcp handshake never completes
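for the record, the runtime change plus the usual way to persist it:

```
sysctl -w net.ipv4.tcp_synack_retries=2
# survive reboots
echo 'net.ipv4.tcp_synack_retries = 2' > /etc/sysctl.d/90-synack.conf
```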
i'm kind of playing on easy mode wrt the slopmachine scrapers: since i utterly despise microsoft, amazon, google, facebook, and (slightly less?) apple, easy step zero is to permanently block their corporate ip ranges at the firewall
their spiders were the majority of my bot traffic back in december and it's gone, no drawbacks
but if i had business brainworms such that i wanted to appear in google search results etc, i might have a harder time of it
short term project goal:
automate creation of incus containers in my vps to hold each new cursed project that lands on my list; make it easy to stand em up and knock em down on a whim
run one script, get subdomain, ipv6, debian container
no docker, no kubernetes
i think it's totally possible, maybe even easy, if only i were better versed in incus, networks, bridges, etc.
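the skeleton of that script might be something like this (image alias, column trick, and naming scheme are all guesses at my own future conventions):

```
#!/bin/sh
# usage: ./new-cursed.sh projectname
name="$1"
incus launch images:debian/12 "$name"
# read the container's ipv6 off the list output (column 6)
incus list "$name" -c 6 -f csv
# manual-for-now step: publish an AAAA record for $name.<mydomain>
```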
trying to calibrate my sense of how many requests/second is "a lot"
ooooh, i should add incus to the fork-drawer

seems from experimenting like the answer to this is "no"; you can't put a tls terminating reverse proxy in front of unencrypted xmpp and have it become xmpps. like you can with http and other protocols
closest i got was instructing my xmpp clients to use "legacy tls" mode: they could then successfully complete a tls handshake and connect, but wouldn't authenticate my user
but why. starttls is more complex and less secure, why is it so prevalent in xmpp
too big to just dump into the firewall
whoopsie, that was a wrong assumption on my part based on a bad time i had with way too many iptables firewall rules created by fail2ban many years ago
these days i'm using nftables and its set structure to hold ip addresses, which uses radix trees just like the routing tables do, and you can dump addresses in there all day long, it will manage merging them into ranges and auto expiring them if you want, works great
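the set definition that makes this painless looks about like so (assuming a reasonably recent kernel for combining interval with timeout):

```
set miscreants {
    type ipv4_addr
    flags interval, timeout
    auto-merge        # kernel merges adjacent/overlapping entries
}
# elements can carry their own expiry:
# nft add element inet filter miscreants '{ 192.0.2.0/24 timeout 48h }'
```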
so upthread i was surprised that you can just shovel truckloads of ip addresses into nftables' "set" structure, for blockin' purposes
but i want to do stuff like detect if several addresses within some autonomous system's range are coordinating for shenanigans, and block the whole damn asn
this example, on the nftables wiki itself, loads a whole ass maxmind geoip db into nftables' "map" structure and my first reaction was "surely not"
https://wiki.nftables.org/wiki-nftables/index.php/GeoIP_matching
woah cool i just learned about the nftables feature concatenations
i'm already ๐คฉ about nftables' very fast sets and maps but today i learned that you can store essentially tuples of data in them
which in some cases can let you test multiple conditions at once, replacing multiple rules with a fast set-membership check
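for example, a single set keyed on (source address, destination port); a sketch with placeholder elements:

```
set bad_pairs {
    type ipv4_addr . inet_service
    elements = { 192.0.2.7 . 80, 192.0.2.7 . 443 }
}
# one set-membership test instead of two rules:
# ip saddr . tcp dport @bad_pairs drop
```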
about 1.5 days after asking iocaine to not just poison but also block ai scrapers masquerading as browsers, i have about 36000 ip addresses blocked at the firewall
this is for a site that is not advertised anywhere, disliked by search engines, and contains maybe 10 blog posts that rarely change. AND which preemptively blocks several whole gafam corporate ASNs so not even counting them
so i expect more popular sites are seeing many multiples of this traffic
anyway, thinking again about how to analyze this ever growing set of blocked ai scraper addresses, most of which are probably "residential ips."
calculate for each asn the percentage of its ip range that i've blocked, and above a certain threshold block the whole range? (that would be more efficient than recording every single bad address)
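sketch of that calculation (threshold and data plumbing are hypothetical; the asn's announced prefixes would come from something like pyasn):

```python
import ipaddress

def asn_block_ratio(blocked_ips, asn_prefixes):
    """fraction of an asn's announced address space that shows up
    in my blocked set"""
    prefixes = [ipaddress.ip_network(p) for p in asn_prefixes]
    space = sum(n.num_addresses for n in prefixes)
    hits = sum(1 for ip in map(ipaddress.ip_address, blocked_ips)
               if any(ip in n for n in prefixes))
    return hits / space if space else 0.0

# hypothetical policy: one blocked address per 4096 announced is damning
# if asn_block_ratio(blocked, prefixes) > 1 / 4096: block the whole asn
```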
ideas contd.:
have an unblocked subdomain where a legit user of a blocked ip might fill out a form and click a "let me back in" button to get onto an allow-list
double extra forever-ban anybody that uses the "get me back in" button then starts snarfing down poison again
also, at some point, i want to bring iocaine to work. i'm on easy mode now because idgaf about my site's visibility to search engines
but what to do when the boss requests that, when customers ask their ai bullshit to order from our website on their behalf, i maybe shouldn't reply with an HTTP redirect into the fucking sun plus gigabytes of foul invective zip bomb for the response body
ideas contd.:
live-updated status page listing all the ip addresses i've blocked, in nice formats for easy import into firewalls, tools for consuming and contributing to said databases
serious looking landing page for blocked addresses "your ip address is sending malicious traffic to this domain and has been reported. check for compromise immediately."
live-updated ASN leaderboard naming and shaming those with the most ip addresses used by ai scrapers
ideas contd.:
undo my mild mitigation against syn flood, crank the synack retries back up, and collect the ip addresses guilty of doing it. for blocking
make caddy 'abort' the connection after one iocaine poison reply, which closes the socket and blocks ip addresses faster
ideas contd.:
i'm extremely doubtful that most isps will give any shits at all about complaints that llm bots are using their network to destroy websites
i was thinking upthread about an error message to show to legit users of residential ips who get blocked from services; showing them a scolding message like "your ip has been sending malicious traffic"
but maybe more effective will be to direct them, with contact info, to their own isp's abuse line
ideas contd.:
poison url generator that encodes the spider's address, so when the headless browsers on residential ips begin scraping them we know which big tech cos are buying access to residential ip address proxies to disguise themselves
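a sketch of the encoding (path scheme, secret, and names all invented):

```python
import hmac, hashlib, ipaddress

SECRET = b"change-me"  # hypothetical shared secret

def poison_url(spider_ip: str) -> str:
    """bake the harvesting spider's address into a poison link, with a
    short mac so tags can't be forged or enumerated"""
    addr = int(ipaddress.ip_address(spider_ip))
    tag = hmac.new(SECRET, spider_ip.encode(), hashlib.sha256).hexdigest()[:12]
    return f"/notes/{addr:x}-{tag}.html"

# when a residential ip later fetches /notes/<hex>-<tag>.html, recover the
# spider with ipaddress.ip_address(int(hex, 16)) after verifying the tag
```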
there are so many! 133k addresses in my firewall now. starting to wonder if maybe ip blocklists are untenable and i need a blocked-by-default-policy with a request-access mechanism instead
i've got iocaine set to block only new connections when an ip requests a poison page. that keeps their ip from returning, but doesn't kick them off my server immediately
i tried adding an abort to the end of iocaine's handle_response block in caddy, (and rebooted)
i think what i'm seeing now is scrapers successfully getting kicked out at the first request, but their sockets now get stuck in fin-wait-1 state until they time out
@pho4cexa Interesting. We only see 176 routes active from AS32934 on our edge router.
```
bird> show route protocol bgp_he_v4 where 32934 ~ bgp_path all count
176 of 1038011 routes for 1038011 networks in table master4
```
Also, it's a 1 litre Lenovo MiniPC and it can handle over 1 million IPv4 routes without a sweat -- have you considered creating null routes for all of the networks you don't like instead of firewall rules?
@insom the database i downloaded has lots of redundant entries for some reason; many of the records of networks assignments are subnets of other records
i hadn't considered null routes before! are they more efficient than firewall rules for the same purpose? i'd naively assume that a null route would make replies to malicious networks impossible, but would still allow requests from them to arrive; i guess that's not the case? i'll read up about it!
@pho4cexa Yup, the initial packet could arrive (SYN) but you'd never send a (SYN+ACK) so a session wouldn't be established.
The Linux kernel uses a radix tree to efficiently store the routing table / make routing decisions which is pretty compact and low CPU.
I suspect that every line in iptables would be iterated over for every single packet arriving and I don't think there's any structure more advanced than a linked list at work there.
It'd be fun to try!
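(for reference, a null route via iproute2 is a one-liner per prefix; addresses are examples:)

```
ip route add blackhole 192.0.2.0/24
ip -6 route add blackhole 2001:db8::/32
```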

@pho4cexa Yeah, I have an allowlist based on the follows and follow-requests of my account on the single-user fedi instance I run, for example. Haven't updated it in a while but the idea is I want to block Hetzner and OVH and all that without damaging my fedi experience.
Now that I think about it, checking my followers would make sense, too.
Anyway, the allow list must be based on something; MX records of the email addresses in your contacts would be a candidate, too. That kind of thing. I just haven't heard of anybody affected by it.
@alex i haven't bothered with expiring bans yet, facebook's ip range can get fucked forever 😈
but if i do decide i want them i plan to read more about how to use nft set element timeout and expiry. hoping that will give me all the tooling i need:
https://wiki.nftables.org/wiki-nftables/index.php/Element_timeouts
The nft table that fail2ban creates contains sets without flags interval, so prefixes weren't allowed. I added the answer to https://alexschroeder.ch/view/2025-12-23-santa-bots