If you meet *all* the following criteria, please let me know here!

* I know you

* You have a blog, which is not on Substack

* You do not use AI in any way as part of creating your blog

* You are willing to be listed in a human.json file by me (https://codeberg.org/robida/human.json)

> human.json: A lightweight protocol for humans to assert authorship of their website content and vouch for the humanity of others. (Codeberg.org)
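For a sense of what such a file might contain, here is a hypothetical sketch. The field names below are illustrative assumptions, not the actual schema; the real format is defined in the linked Codeberg repository.

```json
{
  "_comment": "Hypothetical human.json sketch; field names are assumptions, see the repo for the real schema",
  "author": "Example Person",
  "site": "https://example.org",
  "statement": "All content on this site is written by a human without AI assistance.",
  "vouches": [
    {
      "name": "Another Human",
      "site": "https://another.example.net"
    }
  ]
}
```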
I'm sure it's well intentioned, but at this stage, do we really want to give the robots more guidance on what kind of performance to emulate and which identities to appropriate? I'd be inclined to fill such a file with bot references instead.
@neil

@osma @neil I agree. We should be approaching this pragmatically:

The most critical task is for the open internet to survive, and we can't do that if nobody can afford to run servers anymore, because they die under the staggering load of AI company crawlers.

We need community-run detection, databases and block lists first.

Big Tech stopped playing nice and embraced fascist techniques a long time ago.

#fediverse #internet #openaccess

@gimulnautti @osma

> We need community-run detection, databases and block lists first.

Great!

Does me having a human.json file on my website inhibit this? Or am I misunderstanding?

My concern is that the .json is pointing bots towards content not meant for the bots. That seems counterproductive. If blocking the bots were effective, I'd have less concern, but at this time, it is not.
@neil @gimulnautti

@osma @gimulnautti Yes, I understand your concern. It is not one which worries me too much, but I understand it.

That feels rather different, though, from saying that other tools should come "first", as if the two are somehow competing.

Sure. That wasn't my argument, so I won't defend it either.
@neil @gimulnautti