for me it adds nothing (like most userdb fields, since i don’t use them) but equally doesn’t remove or compromise anything; userdb is optional

i’m absolutely not acting like it’s being added for no reason, did you read my reply? it’s being added (and i just wrote it) to maliciously comply with upcoming CA laws. you instead just acted like an optional field is the same as MS no-offline setup. “Windows would implement it in an identical way”. do you even use linux?

you claim there’s plenty of evidence and this is not a slippery slope because the goal is deanonymization. this is not how you prove you’re not committing a logical fallacy. “legalize gay marriage and they’ll marry dogs”, “oh i have plenty of evidence queer folks are against nuclear family”. the second statement is true (per this queer folk) but it doesn’t make the first less of a slippery slope.

Meta pushes for age verification? i believe that, not contested. systemd will violate privacy? this is the slippery slope. i know meta wants privacy violated. you’re claiming that having an optional field is a dead giveaway systemd wants to let meta do this.

how? wouldn’t systemd rely on meta services, or third party stuff like persona, to id you if they really wanted to make sure who you are? i see no api calls, i see no system lockdown when not complying, i see no data being sent away.

i see an optional field that nothing uses, that prevents nothing, that is strictly on your device.

you say it’s “just” compliance, but how does it verify? if this is compliance with age verification, it sure lacks a lot of verification and seems to just be age. thus why this is malicious compliance: the bare minimum to be lawful and not compromise user privacy. seems desirable to me

not who you replied to, but it makes linux systems maliciously compliant so that you can still use them (say, in schools) without having your privacy violated.

your slippery slope argument could apply to any field of userdb: real name will require an id, location will require geolocation!

slippery slope is a logical fallacy, complain when systemd requires an id, not when it does the bare privacy-respecting minimum to comply with a silly law

TLDR: an e2ee channel means “everything passing over this channel is super secure and private, but it needs some keys for this to work”. e2ee means something: you don’t have to care about most issues with delivery and protection and such, but you do need to care about the keys. if you don’t, you are probably ruining the security of the e2ee channel

end-to-end encryption solves one issue: transport over untrusted middleware. it doesn’t mean much by itself. it’s being flung around a lot because, without proper understanding, it sounds secure and private.

it’s like saying that i ship you something valuable in a super strong and impenetrable safe. but what do i do with the key? e2ee is the safe: it solves “how can i send you something confidential when i don’t trust those who deliver it”, and that does mean something! it’s a great way to do it.

but solving one problem creates a new one: what to do with the key? this is usually handled by combining e2ee with other technologies, such as asymmetric encryption (e.g. RSA), which allows having keys that can be publicly shared without compromising anything. so i send you an impenetrable code-protected safe with an encrypted code attached, and only your privkey can decrypt the code since i used your pubkey!

(note: RSA is used for small data since encryption/decryption is cpu intensive. usually what happens is that you share an AES key encrypted with RSA, and the payload is encrypted using that AES key. AES is symmetric: one key encrypts and decrypts, but AES keys are small. another piece of technology attached to make this system work!)
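to make the rsa-wraps-a-symmetric-key dance concrete, here’s a toy sketch in python. everything in it is deliberately tiny and insecure: toy RSA primes, and a hash-based XOR keystream standing in for AES. it only shows the shape of the scheme, not real crypto — don’t use it for anything.

```python
import secrets, hashlib

# toy RSA key pair with tiny primes -- utterly insecure, just shows mechanics
p, q = 61, 53
n, e = p * q, 17                    # public key (e, n)
d = pow(e, -1, (p - 1) * (q - 1))   # private key (d, n), needs python 3.8+

# stand-in for AES: XOR the data with a keystream derived from the key.
# real systems use an actual AES mode here; this just mimics a symmetric cipher
def sym_crypt(key: bytes, data: bytes) -> bytes:
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

# sender: pick a random symmetric key (small enough to fit under the toy
# modulus n), encrypt the payload with it, wrap the key with the pubkey
key_int = secrets.randbelow(n - 2) + 2
sym_key = key_int.to_bytes(2, "big")
payload = b"hello across an untrusted relay"
ciphertext = sym_crypt(sym_key, payload)
wrapped_key = pow(key_int, e, n)    # rsa-encrypt the symmetric key

# recipient: unwrap the key with the private exponent, decrypt the payload
recovered_key = pow(wrapped_key, d, n).to_bytes(2, "big")
assert sym_crypt(recovered_key, ciphertext) == payload
```

only the small `wrapped_key` goes through RSA; the bulk payload goes through the (fast, symmetric) cipher. that’s the whole point of the hybrid setup.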

but now comes the user-friendliness issue: very few people are big enough nerds to handle their keys. hell, most folks don’t even want to handle their passwords! so services like matrix offer to hold your keys on the server, encrypted with another passphrase, so you don’t need to bother: just remember 2 passwords or do the emoji compare stuff. it’s meh: compromising the server could expose your keys and kinda spoils e2ee, but it’s convenient and reasonably secure.

what does whatsapp do? i don’t know! but it kind of magically works. if they do e2ee, where are the keys??? how does meta handle reports if messages are e2ee???

also, e2ee works if you can trust the key you’re sending to! as mentioned in the ‘activitypub keys’ section before, if you ask a middleman the key for your recipient, can you trust that’s the real key? e2ee doesn’t cover that, it’s not in its scope

so what does e2ee mean? it means: a super strong channel, ASSUMING the keys are safe and trusted. e2ee as a technology doesn’t solve “all privacy” or guarantee that nobody snoops per se. it offers a super safe channel protected by keys, and lets you handle those keys how you see fit. that means deciding who you trust to send to, how you let others know how to encrypt for you (aka share your pubkey), and how you will keep your privkey safe.

nothing per se, depends on implementation

hi! sorry for throwing this here without explaining much, explaining a bit definitely seems due diligence!

so, i need to make some things clear, skip if you know these already:

fediverse

the fediverse is not a single software, rather a collection of software projects speaking a common language (sharing a protocol: activitypub). the classic example is email: from gmail you can email folks on outlook. they just know how to send messages to other instances/servers/deployments, and how to receive them. for example, email (SMTP) expects data formatted in a certain manner (lots of headers and a body, kinda) on port 25. activitypub expects activities (json-ld documents) delivered to inboxes (POSTed to http endpoints).
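to give an idea of what “an activity delivered to an inbox” looks like, here’s a simplified sketch. the example urls and the trimmed field set are made up for illustration; real activities carry more fields and an http signature.

```python
import json

# a simplified Create activity wrapping a Note, roughly the json-ld shape
# servers exchange (urls are placeholders, fields trimmed for illustration)
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",
    "to": ["https://other.example/users/bob"],
    "object": {
        "type": "Note",
        "attributedTo": "https://example.social/users/alice",
        "content": "hi bob!",
    },
}

# delivery is just an http POST of this document to the recipient's inbox
# endpoint (signed with the sending actor's key in practice)
body = json.dumps(activity)
```

any software that understands this shape can receive posts from any other — that’s the whole “common language” trick.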

compatibility

now, say emissary sends an encrypted message to a mastodon user. mastodon doesn’t know what to do with that document! it’s a garbled mess of encrypted data, what is mastodon supposed to do with it? there are no rules for this in the spec! the post claims “federated” (aka, across multiple servers) e2ee messaging, and that already exists with multiple solutions. what they mean to me is either

  • they are making a new e2ee chat: great! emissary users will get a way to message other emissary users. but that’s it: you need to be on emissary, like with matrix you need to be on matrix
  • they are making a fediverse e2ee chat: this isn’t easy! you can’t just make it for yourself, you need to clearly define how it works, and everyone must implement it too. otherwise mastodon or lemmy won’t know what to do with the message you sent

spec

they link two specs: MLS (an IETF spec defining scalable e2ee messaging), and activitypub-e2ee. the first one is great: i think matrix wants to move their encryption to that? it’s good, but it’s a spec: you need to adapt it to your use. the second one is how MLS can be applied to activitypub communication: the thing we care about! unfortunately the latter spec is just a draft, so it needs more work and it’s unlikely to see adoption in this state.

asymmetric encryption

so now i need to go a bit into asymmetric encryption, in this case RSA. there are a lot of great examples if you put “asymmetric encryption” or “rsa” into google, but i’ll try my best here.

imagine 2 folks trying to communicate, Alice and Bob, but they need a postperson to deliver their messages. they don’t want that postperson reading them! how to do that? A and B each get two “keys”: one private and one public. these keys are related to each other: a pubkey “has” a privkey, and vice versa. these keys are also “magic” (math, good luck if u wanna dig in here; if you’re not into math just trust me the keys are magic).

using a public key, you can encrypt a message so that only the related private key can decrypt it. and using a private key you can encrypt a message so that only its public key can decrypt it. the second case is for identity proofing, we care about the first one: if A and B make their public keys public (heh), they can each use the other’s key to create messages meant only for A or B, assuming they still hold their private keys and nobody else does. because math magic
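a tiny numeric example of the “math magic”, with toy primes. real keys use primes hundreds of digits long; this only shows the mechanics, don’t use it for anything:

```python
# toy RSA: the magic is modular exponentiation
p, q = 61, 53
n = p * q                         # public modulus
phi = (p - 1) * (q - 1)
e = 17                            # public exponent -> pubkey is (e, n)
d = pow(e, -1, phi)               # private exponent -> privkey is (d, n)

message = 1234                    # messages are just numbers under the hood
ciphertext = pow(message, e, n)   # anyone can encrypt with the pubkey...
plaintext = pow(ciphertext, d, n) # ...but only the privkey decrypts it
assert plaintext == message
```

publish `(e, n)`, keep `d` secret, and anyone can create messages only you can read.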

activitypub keys

in activitypub every actor holds a private and public key. this is how the protocol does “authorized fetch”, meaning making sure an activity truly comes from the actor claiming to send it. so we can use these keys for doing e2ee!

Alice <—> A’s server <—> B’s server <—> Bob

Alice can ask her server to get Bob’s public key from Bob’s server, and then encrypt a message for Bob and send it via the servers without anyone snooping in. Great? NO!

A’s server can lie about bob’s key: give a random key, decrypt the message, then encrypt it with bob’s real pubkey and send it on. this way bob knows nothing and A’s server can read the message. the same way, A’s server can give bob a fake pubkey for alice, read the reply, then re-encrypt and send it to alice with her real key. so trust is broken! the spec offers 3 solutions to this:

  • trusting your server, which is kind of the starting point and we don’t want that
  • having a third party validate keys (either a centralized solution which Alice and Bob ask, or some yet-to-be-invented federated way to handle keys. we’re kinda back at point one)
  • having alice and bob exchange keys themselves (maybe send them on matrix or signal, delegating the “identify and trust” issue to those services)
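the key-substitution attack can be sketched with the same kind of toy RSA (tiny primes, made-up numbers, illustration only). the point is that bob decrypts fine and suspects nothing:

```python
# toy RSA keypair helper (tiny primes, insecure, illustration only)
def keypair(p, q, e=17):
    n = p * q
    return (e, n), (pow(e, -1, (p - 1) * (q - 1)), n)

bob_pub, bob_priv = keypair(61, 53)
fake_pub, fake_priv = keypair(67, 71)   # the server's own key, posing as bob's

# alice asks her server for bob's pubkey and gets the fake one instead
message = 42
c = pow(message, *fake_pub)             # alice encrypts "for bob"

# the server decrypts with its own private key and reads the message...
intercepted = pow(c, *fake_priv)
# ...then re-encrypts with bob's REAL pubkey and forwards it along
c2 = pow(intercepted, *bob_pub)

# bob decrypts fine and suspects nothing, but the server read everything
assert pow(c2, *bob_priv) == message
assert intercepted == message
```

nothing in the math tells either party the swap happened; that’s why key trust has to come from somewhere outside the channel.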

“knowing irl”

some users compared the issue with “knowing each other irl” but it’s not the same. on signal, i trust you to be you, and our conversation to be private. if i search you by username, i can just message you. whether your username “really” is you is a moot point here: you are your username. i’m writing this to “Abundance114”; i don’t care who you are, i just want this to reach “Abundance114”. so on signal i plug in your user and our keys automagically reach each other safely. this spec doesn’t explain how that happens: i would need to first identify and trust you, Abundance114, and then find a way to safely communicate with you so we can exchange keys.

i hope this was in-depth enough! i’m not an encryption expert, if any are here i’m open to critique, but this seems reasonable to me given my protocol and encryption experience. basically i believe this post is hype bait: whatsapp is e2ee, but who has the keys? do you trust meta? sure, the message travels encrypted, but who can read it? only you? an e2ee system is not just its encryption tech, but the way keys are securely shared

this is misleading and sensationalistic. if emissary implements e2ee, it’s not “e2ee for the fediverse”, it’s “e2ee for emissary users”. did mastodon talk about e2ee? did lemmy?

also the MLS draft (supposedly “better than signal”) proposes, for trusted key exchange, either “trust the server” (lmao), use a centralized key authority (wow), or have users manually verify their keys out of band (so basically use matrix to assure your chat is encrypted)

fedi devs need to stop clickbaiting, and fedi users should learn a bit more about their protocol to avoid getting misled this way

what os are you going to use on your smartphone if you remove software from google and apple?

aosp, fdroid, no gservices

what VR headset

not into vr so can’t say

what telecom

sadly, not a good one. i wish i had a choice, but this isn’t software

are you only shopping in local food markets?

sort of? i get fresh stuff from actual markets when i can, and when i go for groceries i avoid ultra-processed stuff from big multinationals, making sure of the provenance and the maker of the stuff i get. supermarkets also sell stuff from local producers

lemmy creators are bigots

eh, i’m still leeching off some other person’s hosting, i’m not going to host lemmy, and i’m slowly making my own thing

also can you provide examples? i heard it multiple times, I’m not contesting it, just kinda want to see myself, like with vaxry, and not only trust second hand accusations

i don’t want to be a cop and background check

no, absolutely fine, i don’t background check all my software either, but when i hear a callout i don’t hide behind the “art and artist” mentality: i move off the bigot’s stuff

your argument is a bit extreme, it doesn’t need to only be software from nice folks, it just needs to not be software made by not nice folks

apart from sqlite, i think everything is replaceable with a bit of compromise

what things made by not nice folks are you locked into?

taking care of bad servers is instance admin business, you’re conflating the user concerns with the instance owner concerns

generally this thread and previous ones have such bad takes on fedi structure: a federated and decentralized system must delegate responsibility and trust

if you’re concerned about spam, that’s mostly instance owner business. it’s like that with every service: even signal has spam, and signal staff deals with it, not you. you’re delegating trust

if you want privacy, on signal you need to delegate privacy to software. on fedi to server owners too, but that’s the only extra trust you need to pay

sending private messages is up to you. if i send a note and address it only to you, i’m delegating trust to you to not leak it, to the software to keep it confidential, and to the server owner to not snoop on it. on signal you still need to trust the software and the recipient

this whole “nothing is private on fedi” is a bad black/white answer to a gray issue. nothing is ever private: how can you trust AES and RSA? do you know every computer passing your packet is safe from side-channel attacks that could break your encryption? you claimed to work in security in another thread; i would expect you to know the concept of “threat modeling”

lemmy’s approach still relies on audience targeting for privacy, just like mastodon. using a distinct object type (which is off spec btw) is “more secure” just because nobody else knows what lemmy is doing