@CorvusVolvens I don't really need much time, 'cos I wrote a blog post about it 
You can get pretty far with just nginx or caddy, without having to touch iocaine, and achieve ~90% of what iocaine does with a handful of lines in your reverse proxy config.
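To give a flavor of what "a handful of lines" might look like (this is my own minimal sketch, not taken from the blog post — the user-agent list and upstream address are illustrative assumptions):

```nginx
# Hypothetical example: match a few well-known AI-crawler user agents
# and refuse them at the proxy, before they ever reach the backend.
map $http_user_agent $ai_scraper {
    default 0;
    ~*(GPTBot|ClaudeBot|CCBot|Bytespider|Amazonbot) 1;
}

server {
    listen 80;
    server_name example.com;   # placeholder

    location / {
        if ($ai_scraper) {
            return 403;        # or serve decoy content instead
        }
        proxy_pass http://127.0.0.1:8080;  # placeholder upstream
    }
}
```

This only covers scrapers that announce themselves via user agent; tools like iocaine or Anubis go further against ones that don't.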
@jerry @tehstu Ah, that was the worst-case scenario I had in mind. I had a good experience with a rather vanilla system, so I thought I'd mention it.
For our company website, scrapers are approaching the 50% mark, and it's getting close to a critical point. So this is a topic I'm very interested in.
Will look into Anubis, thanks!
I was disappointed to read Cory Doctorow's post where he got weirdly defensive about his LLM use and started arguing with an imaginary foe.
@tante has a very thoughtful reply here:
https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
A few further comments, 🧵 >>

Life is complicated. Regardless of what your beliefs or politics or ethics are, the way that we set up our society and economy will often force you to act against them: You might not want to fly somewhere but your employer will not accept another mode of transportation, you want to eat vegan but are […]
Discord claims "most users" will never go through an age verification process because they're already monitoring your behavior.
For the majority of adult users, we will be able to confirm your age group using information we already have. We use age prediction to determine, with high confidence, when a user is an adult. This allows many adults to access age-appropriate features without completing an explicit age check.
Gotta say, constant behavior analysis is not the warm and fuzzy blanket they seem to think it is.
https://discord.com/safety/how-discord-is-building-safer-experiences-for-teens
I keep seeing stories about LLMs finding vulnerabilities. Finding vulnerabilities was never the hard part; the hard part is coordinating the disclosure.
It looks like LLMs can find vulnerabilities at an alarming pace. Humans aren't great at this sort of thing — wading through huge codebases is hard — though some people do have a real talent for vulnerability hunting.
This sort of reminds me of the early days of fuzzing. I remember fuzzing libraries and just giving up because the fuzzer found too many bugs to actually handle. Eventually things got better and fuzzing became a lot harder. That will probably happen here too, but it will take years.
What about this coordinating thing?
When you find a security vulnerability, you don't just open a bug and move on. You're expected to handle it differently. Even before you report it, you need, at a minimum, a good reproducer and an explanation of the problem. It's also polite to write a patch. These steps are difficult; maybe LLMs can help, we shall see.
Then you contact a project, every project will have a slightly different way they like to have security vulnerabilities reported. You present your evidence and see what happens. It's very common for some discussion to ensue and patch ideas to evolve. This can take days or even weeks. Per vulnerability.
So when you hear about some service finding hundreds of vulnerabilities with their shiny new AI security tool, that sounds impressive, but the truly impressive part is whether they are coordinating the disclosures. The tool probably took an hour or two; the coordination is going to take 10 to 100 times that long.