@sethmlarson At a glance, it looks like the extension can pre-vouch. In other words, it doesn't verify that the vouchee has a human.json.
Could be worth filing an issue: https://codeberg.org/robida/human.json/issues
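A vouching tool could guard against pre-vouching with a quick pre-check that the vouchee actually serves a human.json. A minimal sketch, assuming the file lives at `/.well-known/human.json` or `/human.json` (both paths are my guesses, not taken from the spec):

```python
# Hypothetical pre-check before vouching for a site: does it serve a
# human.json at all? The candidate paths below are assumptions -- check
# the human.json specification for the real location.
import json
import urllib.request
from urllib.parse import urljoin

CANDIDATE_PATHS = ["/.well-known/human.json", "/human.json"]

def candidate_urls(site: str) -> list[str]:
    """Build the URLs where a site's human.json might live."""
    return [urljoin(site, path) for path in CANDIDATE_PATHS]

def has_human_json(site: str, timeout: float = 5.0) -> bool:
    """Return True if the site serves parseable JSON at a candidate path."""
    for url in candidate_urls(site):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                json.load(resp)  # must at least be valid JSON
                return True
        except (OSError, ValueError):
            continue
    return False
```

An extension could refuse to emit a vouch when `has_human_json(vouchee)` is false, which is roughly the check that seems to be missing.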

I have a lot of websites that I know are written by humans, but that don't yet support `human.json`. Is it okay to "vouch" for these websites, too? The specification doesn't disallow this (to my knowledge) but also doesn't explicitly say it's an okay thing to do. Would love some clarification, as...
@atna Good question. I don't know.
My first thought is to add additional blocking to the site itself, just as you would without human.json: robots.txt for compliant scrapers, and stronger defenses (proof-of-work, captchas, tarpits) for non-compliant ones.
Could be worth filing an issue, though I don't know if that's in scope for this project: https://codeberg.org/robida/human.json/issues
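For the compliant-scraper half of that, a robots.txt along these lines blocks some widely known AI crawlers. The user-agent names here are my additions, not from the thread; they're current as of writing but worth verifying against each crawler's documentation:

```
# Disallow some well-known AI crawlers (verify names before relying on them)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

This only helps against crawlers that honor robots.txt, which is why the non-compliant ones need the stronger defenses.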
@EvanHahn @atna just jumping in here to say that i imagined something similar to human.json but inverted: you vouch for clients in a web of trust, and web servers can use those vouches to decide whether to serve content or not (or rate-limit or not, or whatever)
one big downside is that it hurts anonymity, but i don't see an easy way to preserve anonymity within a web of trust that lets us block scrapers [1]. it would at least give web hosts more options
[1] proof-of-work systems are something of a cat-and-mouse game, and they also break the web for non-JS browsers and people with older computers
This is interesting, but it appears that only one ownership-asserting URL is allowed, and that the protocol (http vs. https) matters. Given that what I'm putting on the Web is mostly non-interactive documents, it doesn't make sense for me to require TLS.
Am I just misunderstanding something, or can I not use the one human.json file to assert ownership of the site whether it's served over http or https?