@mauve @jonny @akhileshthite you may want to see that.

#utopeer is a decentralized #bittorrent tracker based on #nostr for publishing uploads and reviews, using a #webOfTrust. Nostr relays act as indexers.

Those torrents could be websites đŸ€”

Sorry, the spec is in French đŸ„– but it comes in the wake of the downfall of the biggest French trackers.

Cc @rakoo

@nono2357

@glynmoody
If Txwitter has any use left, it must be either for pulling tweets from an individual via a direct link, or for some means of seeing only tweets from a list of people, built by people you trust introducing them to you.

A Web of Trust.

Heading for a Rampant Orphan Botnet Ecology (ROBE), as in Anathem, I think.

#WebOfTrust

@uncomfyhalomacro we had that.

#WebOfTrust (#WoT) was it called...

Interest in a GPG party at #CLT2026 is there, but many are still undecided (30%). đŸ€” To the "maybe" faction: what would it take to get a firm "yes" from you?
#ChemnitzerLinuxTage #GPG #OpenPGP #Linux #OpenSource #Privacy #ITSec #Datenschutz #Keysigning #WebOfTrust
  ‱ A slot without conflicts 📅 (35.3%)
  ‱ Help with preparation 🆘 (0%)
  ‱ Short & painless (<1h) ⏱ (47.1%)
  ‱ Meet & greet instead of a fixed slot đŸ€ (17.6%)
Poll ended.

Weak "AI filters" are dark pattern design & "web of trust" is the real solution

The worst examples are when bots can get through the “ban” just by paying a monthly fee.

So-called “AI filters”

An increasing number of websites lately are claiming to ban AI-generated content. This is a lie deeply tied to other lies.

Building on a well-known lie: that they can tell what is and isn’t generated by a chat bot, when every “detector tool” has been proven unreliable, and sometimes even we humans can only guess.

Helping slip a bigger lie past you: that today’s “AI algorithms” are “more AI” than the algorithms of a few years ago. The lie that machine learning has suddenly changed at a fundamental level, that it can now truly understand. The lie that this is the cusp of AGI - Artificial General Intelligence.

Supporting future lying opportunities:

  • To pretend a person is a bot, because the authorities don’t like the person
  • To pretend a bot is a person, because the authorities like the bot
  • To pretend bots have become “intelligent” enough to outsmart everyone and break “AI filters” (yet another reframing of gullible people being tricked by liars with a shiny object)
  ‱ Perhaps later - when bots are truly smart enough to reliably outsmart these filters - to pretend it’s nothing new, it was the bots doing it the whole time, don’t look behind the curtain at the humans who helped
  • And perhaps - with luck - to suggest you should give up on the internet, give up on organizing for a better future, give up on artistry, just give up on everything, because we have no options that work anymore

It’s also worth mentioning some of the reasons why the authorities might dislike certain people and like certain bots.

For example, they might dislike a person because the person is honest about using bot tools, when the app tests whether users are willing to lie for convenience.

For another example, they might like a bot because the bot pays the monthly fee, when the app tests whether users are willing to participate in monetizing discussion spaces.

The solution: Web of Trust

You want to show up in “verified human” feeds, but you don’t know anyone in real life who uses a web of trust app, so nobody in the network has verified that you’re a human.

You ask any verified human to meet up with you for lunch. After confirming you exist, they give your account the “verified human” tag too.

They will now see your posts in their “tagged human by me” feed.

Their followers will see your posts in the “tagged human by me and others I follow” feed.

And their followers will see your posts in the “tagged human by me, others I follow, and others they follow” feed.


And so on.

I’ve heard everyone is generally a maximum of six degrees of separation from everyone else on Earth, so this could be a more robust solution than you’d think.

The tag should have a timestamp on it. You’d want to renew it, because the older it gets, the less it will be trusted.
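
Here is a minimal sketch of those mechanics, in Python. Everything in it (the TrustGraph name, the one-year freshness window, and so on) is an illustrative assumption, not any real app's design; for simplicity, trust hops travel through the tags themselves rather than through a separate follow graph:

    # Hypothetical sketch: hop-limited "verified human" feeds over a tag graph.
    from datetime import datetime, timedelta, timezone

    MAX_TAG_AGE = timedelta(days=365)  # assumed freshness window for a tag

    class TrustGraph:
        def __init__(self):
            self.tags = {}  # tags[tagger] = {tagged_account: time_of_tag}

        def tag_human(self, tagger, tagged, when=None):
            # Record that `tagger` met `tagged` in person and vouches for them.
            when = when or datetime.now(timezone.utc)
            self.tags.setdefault(tagger, {})[tagged] = when

        def is_fresh(self, tagger, tagged, now):
            # A tag only counts while recent; renewing it resets the clock.
            when = self.tags.get(tagger, {}).get(tagged)
            return when is not None and now - when <= MAX_TAG_AGE

        def verified_humans(self, viewer, max_hops, now=None):
            # hop 1 = "tagged human by me", hop 2 = "... and others I follow",
            # and so on, out to `max_hops` degrees of separation.
            now = now or datetime.now(timezone.utc)
            seen, frontier = {viewer}, {viewer}
            for _ in range(max_hops):
                frontier = {tagged
                            for tagger in frontier
                            for tagged in self.tags.get(tagger, {})
                            if tagged not in seen
                            and self.is_fresh(tagger, tagged, now)}
                seen |= frontier
            return seen - {viewer}

    g = TrustGraph()
    g.tag_human("me", "alice")   # lunch happened, alice is human
    g.tag_human("alice", "bob")  # alice vouches for bob
    print(g.verified_humans("me", max_hops=1))  # {'alice'}
    print(g.verified_humans("me", max_hops=2))  # {'alice', 'bob'}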

This doesn’t hit the same goalposts, of course.

If your goal is to avoid thinking, and just be told lies that sound good to you, this isn’t as good as a weak “AI filter.”

If your goal is to scroll through a feed where none of the creators used any software “smarter” than you’d want, this isn’t as good as an imaginary strong “AI filter” that doesn’t exist.

But if your goal is to survive, while others are trying to drive the planet to extinction


If your goal is to be able to tell the truth and not be drowned out by liars


If your goal is to be able to hold the liars accountable, when they do drown out honest statements


If your goal is to have at least some vague sense of “public opinion” in online discussion, that actually reflects what humans believe, not bots


Then a “human tag” web of trust is a lot better than nothing.

It won’t stop someone from copying and pasting what ChatGPT says, but it should make it harder for them to copy and paste 10 answers across 10 fake faces.

Speaking of fake faces - even though you could use this system for ID verification, you might never need to. People can choose to be anonymous, using stuff like anime profile pictures, only showing their real face to the person who verifies them, never revealing their name or other details. But anime pictures will naturally be treated differently from recognizable individuals in political discussions, which makes the system harder for anonymous accounts to game.

To flood a discussion with lies, racist statements, etc., the people flooding the discussion should have to take some accountability for those lies, racist statements, etc. At least if they want to show up on people’s screens and be taken seriously.

A different dark pattern design

You could say the human-tagging web of trust system is “dark pattern design” too.

This design takes advantage of human behavioral patterns, but in a completely different way.

When pathological liars encounter this system, they naturally face certain temptations: creating cascading webs of false “human tags” to confuse people and waste time, and meanwhile accusing others of doing the same - wasting even more time.

And a more important temptation: echo chambering with others who use these lies the same way. Saying “ah, this person always accuses communists of using false human tags, because we know only bots are communists. I will trust this person.”

They can cluster together in a group, filtering everyone else out, calling them bots.

And, if they can’t resist these temptations, it will make them just as easy to filter out, for everyone else. Because at the end of the day, these chat bots aren’t late-gen Synths from Fallout. Take away the screen, put us face to face, and it’s very easy to discern a human from a machine. These liars get nothing to hide behind.

So you see, like strong is the opposite of weak [citation needed], the strong filter’s “dark pattern design” is quite different from the weak filter’s. Instead of preying on honesty, it preys on the predatory.

Perhaps, someday, systems like this could even change social pressures and incentives to make more people learn to be honest.

whoeverlovesDigit (npub1wa
6u3l2) on Nostr

A riveting saga of nerds spending a year rearranging deck chairs on the Arch Linux #Titanic đŸšąđŸ’». Spoiler: it involves more acronyms than a government agency đŸ„±. But hey, at least the Web of Trust and the Berblom algorithm are now free to roam the wilds of irrelevance đŸ€–.
https://devblog.archlinux.page/2026/a-year-of-work-on-the-alpm-project/ #ArchLinux #WebOfTrust #BerblomAlgorithm #NerdLife #TechSaga #HackerNews #ngated
A year of work on the ALPM project

An overview of the work done on the ALPM project in 2024 and 2025.

Arch Linux Dev Blog

I'm going to be in #Florence and #Rome for a couple days, followed by #Zermatt for a few more. I've never actually signed anybody's #pgp #gpg keys, but hey! Perhaps this could be a chance to learn how to do that *and* add some trans-atlantic edges to that web of trust!

#Italy #Switzerland #OpenPGP #WebOfTrust

Pretty proud of my second patch sent to the #ClawsMail team.

Hopefully, the next version of this MUA will have a much-improved #E2EE #UX:

  ‱ a new config option in the #PGP plugins to enable automatic online discovery of PGP keys (according to your existing gpg.conf auto-key-locate setting; see the sketch after this list)
  ‱ whenever you receive a mail signed by a public key that is missing (or expired) in your #GPG keyring, you'll have a button to trigger an online search for the key (either through #WKD or the older #keyserver-based approach).
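
For reference, a minimal sketch of the gpg.conf side (the keyserver choice and the address are placeholders; auto-key-locate and --locate-keys are standard GnuPG options):

    # ~/.gnupg/gpg.conf: try the local keyring, then WKD, then a keyserver
    auto-key-locate local,wkd,keyserver
    keyserver hkps://keys.openpgp.org

    # One-off lookup of a missing key from the command line:
    #   gpg --locate-keys someone@example.org
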
In the age of #ChatControl, I think it's time for PGP-based end-to-end #encryption to be enabled by default in #email clients.

Most arguments against the complexity of the #WebOfTrust are moot when applied to mail communications. And given how easy it is to deploy the WKD protocolÂč, key autodiscovery could seriously increase the amount of encrypted mail on the network, improving people's #privacy and heavily reducing the power of passive #surveillance.
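
To give an idea of how easy: deploying WKD's "direct method" boils down to serving the key over HTTPS at a well-known path. A sketch, where example.org and user@ are placeholders:

    # Serve the binary (not ASCII-armored) public key for user@example.org at:
    #   https://example.org/.well-known/openpgpkey/hu/<hash>?l=user
    # where <hash> is the z-base-32 encoding of the SHA-1 of the lowercased
    # local part. Recent GnuPG can compute the URL for you:
    #   gpg-wks-client --print-wkd-url user@example.org
    # An (empty) policy file must also exist:
    #   https://example.org/.well-known/openpgpkey/policy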

#HTTPSEverywhere did not reduce global surveillance, but #PGP could!

___

Âč an Italian tutorial about wkd is in the making, but... #programming was more fun. 😝

@ArneBab Not really; all it does is increase the cost for legitimate users, as spammers and fraudsters just see this as a cost of operation.

#PhoneNumbers are PII because more often than not they require #KYC!

  ‱ And it's not always feasible or even possible to provide users with a fresh & untainted number, because inactive numbers get recycled!

Stop the Escalating Commitment to Schemes that fall flat on their face outside of #EliteProjection from #SiliconValley...

But that too is a #privacy invasion!