Lately I keep seeing posts about blocking AI, bots, and scrapers from your web pages, or feeding them garbage, etc. But that approach is wrong & misreads the root cause.
Because:
- it leads to abusing your browser by forcing it to run non-consensual, hostile JS
- toxic cookie trackers
- blocking wide shared IPv4 ranges full of users behind NAT, and geoblocking
- blocking small FLOSS non-bigtech CLI browsers (like w3m, emacs, ...)
and it reflects a fundamental misunderstanding that
- a browser is itself an indexer & a scraper; your website can be browsed & redistributed offline.
- not all A(g)I robots are equally bad; some of them behave better than some biological wrongposters.
The world wide web was designed to be open hypertext, but modern philosophical flaws are steering it towards a gatekept, closeted, segregated world.
What do #gopher #gemini #telnet #bbs folks think about this? I'm interested to hear from them, because the explicitly open philosophy of their protocols eliminates these issues at the root of the design.
I recently had to solve a "What is a square root of -1?" captcha. I had no idea. It took me a bit of time & research, but it was possible without JS.
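(For the curious: the answer such a captcha presumably expects is the imaginary unit i, since i² = -1; -1 has no real square root.)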




