wrote this about a question i've been considering lately: now that we have free unlimited digital artifacts on tap thanks to LLMs, will it become harder to get published in literary magazines or contribute to open source?

https://ankursethi.com/blog/genai-gatekeeping/

Generative AI and the era of increased gatekeeping — Ankur Sethi's Internet Website


@s3thi the proof-of-work idea is interesting.

while we're brushing shoulders with crypto-bro terms, i wonder what proof-of-stake would look like, here. (being 100% serious.)

like, could you create an open format/protocol for a rating/reputation system that "trusts the writer": the author of a journal or the maintainer of an open source repo is the "writer," and the journal or repo is the "database." requesting submission is a form of mutation, before you even get to submit your article or patch.

the writer could evaluate you any way they want. you pair program with them for a day on a patch you want to submit, proving you understand their creation. you point them to non-slop articles you've written in other magazines. the proof of stake is your reputation within the reputation system, with these examples as citations for your "stake."
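to make the idea concrete, here's a toy data model for what a "submission request as mutation" might carry. this is purely my own sketch, not a real protocol; every name here (`Citation`, `SubmissionRequest`, `has_stake`) is invented for illustration:

```python
from dataclasses import dataclass, field

# hypothetical sketch: a submission request is a "mutation" against the
# writer's "database" (journal or repo), carrying the submitter's stake
# as citations. none of these names come from any real system.

@dataclass
class Citation:
    url: str          # e.g. a published article or a merged patch
    description: str  # what this citation is meant to demonstrate

@dataclass
class SubmissionRequest:
    submitter: str
    target: str                                  # the journal or repo
    stake: list[Citation] = field(default_factory=list)

    def has_stake(self) -> bool:
        # the writer decides what actually counts as stake;
        # this only checks that *something* was offered
        return len(self.stake) > 0


req = SubmissionRequest(
    submitter="alice",
    target="some-lit-mag",
    stake=[Citation("https://example.com/essay", "non-slop essay elsewhere")],
)
print(req.has_stake())  # True
```

the point of the shape is that evaluation stays entirely on the writer's side: the protocol only standardizes how stake is presented, not how it's judged.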

dumb idea, maybe. but hopefully there's at least *some* way to ring the doorbell on the gate.

@deobald i'd be very very afraid of any kind of computer-based reputation system. those things will immediately dehumanize and lock out the most vulnerable people, as they always do.

some journals and lit mags already require a submission fee, which is sort of like proof of stake already? but this is not common practice because, once again, it locks out people who can't afford to pay $50 to five different publications every month.

maybe something like the Lobsters invite system would be a good idea? https://alexjacobs08.github.io/lobsters-graph/. when you invite somebody to Lobsters, your profile gets linked to theirs. the idea is that this prevents people from inviting bad actors into the community, because they wouldn't want to be associated with them. it also makes it easy to prune entire bad branches of the tree if a group of bad actors is found out.

but Lobsters is a tiny community. not sure how something like this would work at scale.


@s3thi i do like invite trees, though it's worth calling out that this is just a reputation system based on a binary variable that's exposed up the tree:

bad_actor = true

...affects those they are connected to, damaging their reputation score. maybe it's the least-worst form of reputation scoring, though?
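the mechanics above can be sketched in a few lines. this is my own toy model of a Lobsters-style invite tree, not the real Lobsters implementation: each member links to their inviter, marking someone bad prunes their whole branch, and a penalty propagates up the chain of people who vouched for them:

```python
# toy invite-tree reputation model (illustrative only):
# - invite(a, b) links b's profile to a's
# - mark_bad(m) bans m's entire subtree and dings every ancestor's score

class InviteTree:
    def __init__(self, root: str):
        self.inviter = {root: None}   # member -> who invited them
        self.invitees = {root: []}    # member -> who they invited
        self.reputation = {root: 0}
        self.banned = set()

    def invite(self, inviter: str, invitee: str) -> None:
        self.inviter[invitee] = inviter
        self.invitees[invitee] = []
        self.invitees[inviter].append(invitee)
        self.reputation[invitee] = 0

    def mark_bad(self, member: str) -> None:
        # prune the whole branch rooted at this member...
        stack = [member]
        while stack:
            m = stack.pop()
            self.banned.add(m)
            stack.extend(self.invitees[m])
        # ...and damage the reputation of everyone up the chain
        parent = self.inviter[member]
        while parent is not None:
            self.reputation[parent] -= 1
            parent = self.inviter[parent]


tree = InviteTree("root")
tree.invite("root", "alice")
tree.invite("alice", "mallory")
tree.invite("mallory", "sockpuppet")
tree.mark_bad("mallory")
print(sorted(tree.banned))       # ['mallory', 'sockpuppet']
print(tree.reputation["alice"])  # -1
```

which makes the trade-off visible: one boolean at a leaf mechanically damages every ancestor, with no room for the human context shirky says reputation actually lives in.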

i'm reminded of shirky's "a group is its own worst enemy", here. he explicitly warns against reputation systems, since reputation lives inside everyone's heads. but it's hard to reconcile that with a need for guardrails.

GitHub - mitchellh/vouch: A community trust management system based on explicit vouches to participate.



@s3thi nushell, huh.

anyway, i guess somebody had to do it.