It's bizarre watching people realize slowly, in real time, that tech companies do not, in fact, have their backs.

They never did. They only pretended to because it was fashionable.

They would kill you and your entire family if it grew their profits by 0.1% this quarter, and they'd do it with a song in their hearts.

And they would do so without fear of prosecution, because they've basically bought out the entire political system through lobbying and can blatantly bribe Supreme Court Justices without consequence.

Why would they be incentivized to actually protect your privacy? Especially when your data is so valuable for growing their profits?

To a lot of business types, encryption isn't a question about privacy. It's about access controls. And they implicitly believe they get access.

So, too, will their buddies in the government.

You cannot, and should not, expect billionaires to have your backs. They don't give a fuck about you. They never will. Don't believe them.

Today we heard the US government is planning to invest half a trillion dollars into an "AI Infrastructure" project.

Hey, didn't @matthew_d_green just write about this topic?

https://blog.cryptographyengineering.com/2025/01/17/lets-talk-about-ai-and-end-to-end-encryption/

Now, despite all the things we call "oracles" in cryptography, none of us can see the future. This is just the totally foreseeable consequence of the system as it existed yesterday.

I'd like to share a few thoughts on this matter.

People should absolutely learn to break AI systems. I feel this will become crucial to online privacy in the coming years.

But I also implore you to keep AI 0days secret. Don't disclose them publicly--especially to AI companies!

Feel free to share them privately with your friends (over E2EE chats) and only use them if they can help people.

And, to be clear, this is coming from Mr. "I drop 0day on my furry blog" himself.


Also, if you plan on doing anything even shaped like a crime please leave me the fuck out of it
@soatok Having been witness to, and nearly dragged into, them twice?
Omit the please.
@soatok no be gay do crime? 
@spud @soatok
Being gay IS the crime; it was always a one-step process. The conservative majority on the Supreme Court has already signaled that it thinks Lawrence v. Texas was wrongly decided.
@soatok @matthew_d_green

If the client's plaintext is sent to the AI before encryption, there can be no talk of end-to-end encryption. This creates a threat similar to keyloggers and sniffers, making the client environment vulnerable.

Thus, the question of whether AI will pose a threat to end-to-end encryption is moot: end-to-end encryption was never designed to address malware. Other protective measures must be employed for that threat model.
@Seyd @matthew_d_green Correct, but it's not the same as PC malware as we think about it. Phones have much more sensitive data (location data at specific times, for example) and the AI could ingest it completely locally only to snitch on you later.
@matthew_d_green @Seyd @soatok As someone who does malware detection, I would argue the data on PCs is just as sensitive, just of a different nature.
The problem is that the question itself is wrong. End-to-end encryption is mentioned merely for effect. It could be replaced with "a flight to Mars," "red wolves and lynxes," or "yiff drawings," and the meaning would not change. The author should have renamed the article "Let’s talk about AI and yiff drawings," which would have attracted more attention, but the essence would remain the same.
@Seyd @soatok @matthew_d_green The answer is really simple: that "AI" is smoke and mirrors garbage. Yet another instance of fascists intentionally making the world worse in hopes they can profit from it. The sooner the bubble pops the sooner we can put this gratuitous threat behind us.
I don't understand why you'd write about certain things here, but please don't explain it to me. I don't want to give a reason for continuing any ideological discussion. By the way, your message resembles an AI product: a lot of fluff, but little substance.

The issue of privacy is much deeper, and AI doesn't fundamentally change anything here. Even without AI, you can't be sure of your phone's security. The software has grown to enormous size, and auditing tens of gigabytes is difficult. Users can install additional software, and phones contain many specialized processors that can affect privacy.

Cybercriminals sell infected devices on marketplaces. Often, manufacturers themselves install firmware with malware on devices.

https://iz.ru/1823322/dmitrii-bulgakov/nanesti-zarazenie-kak-tehnika-s-marketpleisov-stanovitsa-istocnikom-virusov

https://www.wired.com/story/android-tv-streaming-boxes-china-backdoor/

AI is just another ingredient in this complex issue.

@Seyd The issue I'm talking about is not privacy and malicious user devices as a whole, but specifically the supposed demand for AI access to private user data, and that demand being used as an excuse for client-side backdoors that bypass E2EE.

That excuse goes away when the AI scam does.

Right now, on a planetary scale, personal data is being collected, financial information is being stolen, and devices are being used for DDoS attacks or cryptocurrency mining. This is a common problem. Will AI become a disaster for end-to-end encryption? No. The issues with end-to-end encryption lie in the realm of computational complexity, not in whether your data is being stolen by a regular keylogger or AI.

By the way, my avatar was created by AI, and this message was also translated by AI. Has this knowledge changed your attitude towards the drawing and the text?
@Seyd It's changed my attitude towards you.
@soatok What I personally think is more important is that we don't want things like Nightshade to become a bigger story, so be very careful with the poison datasets you use with it, because if you feed an "illegal imagery" dataset to it on purpose, EVERYONE gets harmed by it.
@JackRacc
first, context: I do not know how nightshade works internally.
I am interested in the topic, but I am having trouble understanding your post. Do you mean the datasets used to train and evaluate Nightshade itself? It might be that I am unclear on what the last two "it"s refer to, or whether "illegal imagery" is serious or sarcastic.
@crypticcelery The poison prompt requires source datasets to be baked into the poisoned image. Nightshade works so that the more poisoned images enter someone else's training dataset, the more a prompt for one thing will return the poison prompt instead (a prompt for "cat" would return a poisoned result for "dog"). Where this could really end up bad is if the poison prompt and the poison source dataset baked into the poisoned images are actually illegal.

@soatok Belated thanks to you, and @matthew_d_green for your columns. Utterly terrifying despite the light tone. How to opt out?

I also appreciate the callback to #JamesMickens' ;login: column (https://www.usenix.org/system/files/1401_08-12_mickens.pdf), it really brightened my day.

When someone says "assume that a public key cryptosystem exists," this is roughly equivalent to saying "assume that you could clone dinosaurs, and that you could fill a park with these dinosaurs, and that you could get a ticket to this 'Jurassic Park,' and that you could stroll throughout this park without getting eaten, clawed, or otherwise quantum entangled with a macroscopic dinosaur particle." With public key cryptography, there’s a horrible, fundamental challenge of finding somebody, anybody, to establish and maintain the infrastructure. For example, you could enlist a well-known technology company to do it, but this would offend the refined aesthetics of the vaguely Marxist but comfortably bourgeoisie hacker community who wants everything to be decentralized and who non-ironically believes that Tor is used for things besides drug deals and kidnapping plots. Alternatively, the public key infrastructure could use a decentralized "web-of-trust" model; in this architecture, individuals make their own keys and certify the keys of trusted associates, creating chains of attestation. "Chains of Attestation" is a great name for a heavy metal band, but it is less practical in the real, non-Ozzy-Osbourne-based world, since I don’t just need a chain of attestation between me and some unknown, filthy stranger — I also need a chain of attestation for each link in that chain. This recursive attestation eventually leads to fractals and H.P. Lovecraft-style madness.

I'm a little surprised Mickens didn't dub his gnu-metal band "Chains of Attestation" instead of XXYM (https://tentimesyourmaster.com/).