"human.json is a protocol for humans to assert authorship of their site content and vouch for the humanity of others. It uses URL ownership as identity, and trust propagates through a crawlable web of vouches between sites." I've added it to my website! https://codeberg.org/robida/human.json
human.json

A lightweight protocol for humans to assert authorship of their website content and vouch for the humanity of others.

Codeberg.org
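For anyone curious what the file itself might look like: here's a rough sketch of a site's `human.json`. The field names here are my guesses for illustration, not the actual schema — check the repo for the real spec:

```
{
  "name": "Example Person",
  "url": "https://example.com",
  "vouches": [
    "https://friend.example/human.json"
  ]
}
```

The idea being that a crawler following the `vouches` URLs from sites it already trusts traces out the web of trust.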
@EvanHahn how to best avoid making this a signal to scrapers to add an unauthorised copy of your words to their training data? i want to signal to humans, but not to the fascist bootlickers making plausible sounding lie generators

@atna Good question. I don't know.

My first thought is to add additional blocking to the site itself—just as you would without human.json. robots.txt for compliant scrapers, stronger defenses (proof-of-work, captchas, tarpits) for non-compliant ones.
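As a starting point, a `robots.txt` along these lines asks some of the known AI training crawlers to stay away — though it only helps against the compliant ones, which is the whole problem:

```
# Block some known AI training crawlers (honored only by compliant bots)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```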

Could be worth filing an issue, though I don't know if that's in scope for this project: https://codeberg.org/robida/human.json/issues

@EvanHahn @atna just jumping in here to say that i imagined something sort of similar to human.json but the inverse, where you vouch for clients in a web of trust and web servers can use those to decide whether to serve content or not (or rate-limit or not, or whatever)

one big downside is that it hurts anonymity but i don't see an easy way to preserve anonymity within a web of trust that lets us block scrapers [1]. it would at least give web hosts more options

[1] the proof of work systems represent sort of a cat and mouse game but also break the web for non-js browsers and people with older computers
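to make the inverse idea concrete, here's a minimal sketch of how a server might walk a client vouch graph before deciding to serve a request. everything in it (function name, graph shape, the depth cutoff) is invented for illustration — no such protocol exists yet:

```python
from collections import deque

def is_trusted(client, roots, vouches, max_depth=3):
    """Walk the vouch graph breadth-first from trusted roots.

    `vouches` maps an identity to the identities it vouches for.
    All names and structures here are hypothetical.
    """
    seen = set(roots)
    queue = deque((root, 0) for root in roots)
    while queue:
        identity, depth = queue.popleft()
        if identity == client:
            return True
        if depth >= max_depth:
            continue  # cap how far trust propagates
        for vouched in vouches.get(identity, ()):
            if vouched not in seen:
                seen.add(vouched)
                queue.append((vouched, depth + 1))
    return client in seen

# alice (a trusted root) vouches for bob, who vouches for carol
graph = {"alice": ["bob"], "bob": ["carol"]}
print(is_trusted("carol", ["alice"], graph))    # True
print(is_trusted("mallory", ["alice"], graph))  # False
```

the depth cap is one knob a host could turn: shorter chains mean stricter serving, at the cost of shutting out people far from any root — which is exactly the anonymity tension above.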