grebedoc saw its highest share of garbage requests yet yesterday (a wave peaking at 150 req/sec)
these waves are getting bigger and bigger, which is somewhat concerning. it's nowhere near the hardware capacity yet, but i'm hitting some software bottlenecks that i never thought would be relevant
@whitequark like what bottlenecks?
@solonovamax i send an S3 request to Wasabi on every cache miss, including for domains grebedoc has never served. if i get, say, 100k requests in a row to 100k domains i've never seen, those requests really plug up the worker process. latencies are still good overall, but only on most of the waves now, not every single one
@whitequark @solonovamax yecch part of me wants to flail at it with a bloom filter but the rest resents [expansive gesture] externality
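(for anyone following along: the bloom filter idea is to keep a compact in-memory summary of the domains git-pages actually serves, so a request for a never-seen domain can be rejected without the S3 round-trip. this is a minimal sketch of that technique, not grebedoc's actual code — `BloomFilter`, the domain names, and the sizing are all made up for illustration)

```python
import hashlib

class BloomFilter:
    """tiny bloom filter: answers "definitely not present" or "maybe present"."""

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 7):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _indexes(self, item: str):
        # derive k bit positions via double hashing of one sha256 digest
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        for i in range(self.k):
            yield (h1 + i * h2) % self.size

    def add(self, item: str) -> None:
        for idx in self._indexes(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def maybe_contains(self, item: str) -> bool:
        # no false negatives; false positives possible at a tunable rate
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(item))

# populate from the set of domains the server knows about (hypothetical names)
known = BloomFilter()
known.add("example.grebedoc.dev")

# on a request for an unknown domain, skip the S3 round-trip entirely
unknown_hit = known.maybe_contains("garbage-0001.invalid")   # almost certainly False
served_hit = known.maybe_contains("example.grebedoc.dev")    # True: no false negatives
```

the catch (the "externality" being resented, presumably) is that a plain bloom filter can't be shrunk or deleted from, so it has to be rebuilt or updated whenever a new domain starts being served — which is extra machinery coupled to the storage layer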

@garthk @solonovamax i knew what i was getting into, that's why git-pages has so many layers of defense woven into it from the start

i just hadn't expected people to send millions of requests to domains that don't even resolve to grebedoc