It's bizarre watching people realize slowly, in real time, that tech companies do not, in fact, have their backs.

They never did. They only pretended to because it was fashionable.

They would kill you and your entire family if it grew their profits by 0.1% this quarter, and they'd do it with a song in their hearts.

And they would do so without fear of prosecution, because they've basically bought out the entire political system through lobbying and can blatantly bribe Supreme Court Justices without consequence.

Why would they be incentivized to actually protect your privacy? Especially when your data is so valuable for growing their profits?

To a lot of business types, encryption isn't a question about privacy. It's about access controls. And they implicitly believe they get access.

So, too, will their buddies in the government.

You cannot, and should not, expect billionaires to have your backs. They don't give a fuck about you. They never will. Don't believe them.

Today we heard the US government is planning to invest half a trillion dollars into an "AI Infrastructure" project.

Hey, didn't @matthew_d_green just write about this topic?

https://blog.cryptographyengineering.com/2025/01/17/lets-talk-about-ai-and-end-to-end-encryption/

Now, despite all the things we call "oracles" in cryptography, none of us can see the future. These are just the totally foreseeable consequences of the system as it existed yesterday.

I'd like to share a few thoughts on this matter.

People should absolutely learn to break AI systems. I feel this will become crucial to online privacy in the coming years.

But I also implore you to keep AI 0days secret. Don't disclose them publicly--especially to AI companies!

Feel free to share them privately with your friends (over E2EE chats) and only use them if they can help people.

And, to be clear, this is coming from Mr. "I drop 0day on my furry blog" himself.

@soatok @matthew_d_green

If the client's plaintext is sent to the AI before encryption, then end-to-end encryption is out of the picture. This creates a threat similar to keyloggers and sniffers: the client environment itself becomes the vulnerable point.

Thus, the question of whether AI will pose a threat to end-to-end encryption is beside the point: end-to-end encryption was never designed to address malware. Other protective measures must be employed for that threat model.
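The point above can be sketched in a few lines of Python. This is a toy model with invented names (`assistant_hook`, `toy_encrypt`, a XOR stand-in for a real AEAD, which is NOT real cryptography): it only illustrates that an on-device assistant reading plaintext before encryption sits at the same vantage point as a keylogger, outside anything E2EE can guarantee.

```python
# What the assistant ingests, it could later exfiltrate or be subpoenaed for.
captured_by_assistant = []

def assistant_hook(plaintext: str) -> None:
    # Runs client-side, BEFORE encryption -- same position as a keylogger.
    captured_by_assistant.append(plaintext)

def toy_encrypt(plaintext: str, key: int) -> bytes:
    # Toy XOR "cipher" standing in for a real AEAD. Do not use for anything.
    return bytes(b ^ key for b in plaintext.encode())

def send_message(plaintext: str, key: int) -> bytes:
    assistant_hook(plaintext)           # AI sees the message in the clear
    return toy_encrypt(plaintext, key)  # only now does "E2EE" begin

ciphertext = send_message("meet at noon", key=0x42)

# The wire carries only ciphertext, so the encryption "worked"...
assert ciphertext != b"meet at noon"
# ...but the assistant already holds the plaintext, outside E2EE's threat model.
assert captured_by_assistant == ["meet at noon"]
```

The design point is that no strengthening of the cipher changes this picture; the leak happens before the first byte is encrypted.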
@Seyd @matthew_d_green Correct, but it's not the same as PC malware as we traditionally think of it. Phones hold much more sensitive data (location at specific times, for example), and the AI could ingest it entirely locally only to snitch on you later.
The problem is that the question itself is wrong to begin with. End-to-end encryption is mentioned merely for effect. It could be replaced with "a flight to Mars," "red wolves and lynxes," or "yiff drawings," and the meaning would not change. The author might as well have titled the article "Let’s talk about AI and yiff drawings," which would have attracted more attention, but the essence would remain the same.