So Apple has introduced a new system called “Private Cloud Compute” that allows your phone to offload complex (typically AI) tasks to specialized secure devices in the cloud. I’m still trying to work out what I think about this. So here’s a thread. 1/
Apple, unlike most other mobile providers, has traditionally done a lot of processing on-device. For example, all of the machine learning and OCR text recognition on Photos is done right on your device. 2/
The problem is that while modern phone “neural” hardware is improving, it’s not improving fast enough to take advantage of all the crazy features Silicon Valley wants from modern AI, including generative AI and its ilk. This fundamentally requires servers. 3/
But if you send your tasks out to servers in “the cloud” (god using quotes makes me feel 80), this means sending incredibly private data off your phone and out over the Internet. That exposes you to spying, hacking, and the data hungry business model of Silicon Valley. 4/
The solution Apple has come up with is to try to build secure and trustworthy hardware in their own data centers. Your phone can then “outsource” heavy tasks to this hardware. Seems easy, right? Well: here’s the blog post. https://security.apple.com/blog/private-cloud-compute/ 5/
Private Cloud Compute: A new frontier for AI privacy in the cloud - Apple Security Research

Secure and private AI processing in the cloud poses a formidable new challenge. To support advanced features of Apple Intelligence with larger foundation models, we created Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed specifically for private AI processing. Built with custom Apple silicon and a hardened operating system, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user — not even to Apple. We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale.

TL;DR: it is not easy. Building trustworthy computers is literally the hardest problem in computer security. Honestly it’s almost the only problem in computer security. But while it remains a challenging problem, we’ve made a lot of advances. Apple is using almost all of them. 6/
The first thing Apple is doing is using all the advances they’ve made in building secure phones and PCs in their new servers. This involves using Secure Boot and a Secure Enclave Processor (SEP) to hold keys. They’ve presumably turned on all the processor security features. 7/
Then they’re throwing all kinds of processes at the server hardware to make sure the hardware isn’t tampered with. I can’t tell if this prevents hardware attacks, but it seems like a start. 8/
They also use a bunch of protections to ensure that software is legitimate. One is that the software is “stateless” and allegedly doesn’t keep information between user requests. To help ensure this, each server/node reboot re-keys and wipes all storage. 9/
A second protection is that the operating system can “attest” to the software image it’s running. Specifically, it signs a hash of the software and shares this with every phone/client. If you trust this infrastructure, you’ll know it’s running a specific piece of software. 10/
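The attestation idea above can be sketched in a few lines. This is a toy model, not Apple's actual protocol: in the real system the measurement is signed by hardware-rooted keys in the SEP, while here a plain hash comparison against a published allow-list stands in for the signature check. All names are illustrative.

```python
import hashlib

def measure(software_image: bytes) -> str:
    """Hash of the software image, standing in for a signed measurement."""
    return hashlib.sha256(software_image).hexdigest()

def client_accepts(attested: str, published_measurements: set[str]) -> bool:
    """Client-side check: only talk to nodes running published software."""
    return attested in published_measurements

# Hypothetical published releases the client is willing to trust:
published = {measure(b"pcc-release-1.0"), measure(b"pcc-release-1.1")}

assert client_accepts(measure(b"pcc-release-1.1"), published)
assert not client_accepts(measure(b"pcc-release-1.1-backdoored"), published)
```

The point is simply that any change to the software image changes the measurement, so a node running unexpected code fails the check.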

Of course, knowing that the phone is running a specific piece of software doesn’t help you if you don’t trust the software. So Apple plans to put each binary image into a “transparency log” and publish the software.

But here’s a sticky point: not with the full source code. 11/

Security researchers will get *some code* and a VM they can use to run the software. They’ll then have to reverse-engineer the binaries to see if they’re doing unexpected things. It’s a little suboptimal. 12/
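For readers unfamiliar with transparency logs: they are typically Merkle trees, and a client can verify that a given binary image is really in the published log with a short inclusion proof, in the style of Certificate Transparency (RFC 6962). Below is a minimal sketch under that assumption (leaf count a power of two for simplicity); it is not Apple's implementation.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """Merkle tree as a list of levels, leaves first (domain-separated hashing)."""
    levels = [[h(b"\x00" + leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(b"\x01" + prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes (with a left/right flag) from leaf to root."""
    proof = []
    for level in levels[:-1]:
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(b"\x00" + leaf)
    for sibling, sibling_is_left in proof:
        node = h(b"\x01" + sibling + node) if sibling_is_left \
            else h(b"\x01" + node + sibling)
    return node == root

images = [b"img-a", b"img-b", b"img-c", b"img-d"]  # hypothetical binary images
levels = build_levels(images)
root = levels[-1][0]
assert verify(b"img-c", inclusion_proof(levels, 2), root)
assert not verify(b"img-x", inclusion_proof(levels, 2), root)
```

So even without full source, researchers can at least confirm that the binary a node attests to is the same binary everyone else's phone sees in the log.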
When your phone wants to outsource a task, it will contact Apple and obtain a list of servers/nodes and their keys. It will then encrypt its request to all servers, and one will process it. They use a load balancer to make sure no single server processes all your requests. They’re even using fancy anonymous credentials and a third-party relay to hide your IP. 13/
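The "encrypt to all servers, and one will process it" step looks like envelope encryption: a fresh per-request key, wrapped once for each candidate node, so whichever node the load balancer picks can unwrap it. Here is a toy model of just that structure. The XOR "cipher" is deliberately fake (illustration only, not real cryptography), and none of this is Apple's actual protocol.

```python
import hashlib
import os

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Toy cipher: XOR with a SHA-256-derived keystream. Illustration only."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def wrap_request(plaintext: bytes, node_keys: dict[str, bytes]):
    """Encrypt once under a fresh key, then wrap that key per candidate node."""
    data_key = os.urandom(32)
    body = xor_stream(data_key, plaintext)
    wrapped = {node: xor_stream(k, data_key) for node, k in node_keys.items()}
    return body, wrapped

def node_unwrap(body: bytes, wrapped_key: bytes, node_key: bytes) -> bytes:
    """Any listed node can unwrap the data key and decrypt the request."""
    data_key = xor_stream(node_key, wrapped_key)
    return xor_stream(data_key, body)

nodes = {"node-a": os.urandom(32), "node-b": os.urandom(32)}
body, wrapped = wrap_request(b"summarize my photos", nodes)
# Whichever node the load balancer routes to can recover the request:
assert node_unwrap(body, wrapped["node-b"], nodes["node-b"]) == b"summarize my photos"
```

The design point: the client commits up front to which attested nodes may read the request, and the load balancer only chooses among them.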
Ok there are probably half a dozen more technical details in the blog post. It’s a very thoughtful design. Indeed, if you gave an excellent team a huge pile of money and told them to build the best “private” cloud in the world, it would probably look like this. 14/
But now the tough questions. Is it a good idea? And is it as secure as what Apple does today? And most importantly: can users opt out of this thing? 15/
Quick intermission: I also posted this thread on Twitter, and then this happened right about this part. It was… uncomfortable.

Back to the substance.

I admit that as I learned about this feature, it made me kind of sad. The thought that was going through my head was: this is going to be too much of a temptation. Once you can “safely” outsource tasks to the cloud, why bother doing them locally? Outsource everything! 16/

As best I can tell, Apple does not have explicit plans to announce when your data is going off-device to Private Cloud Compute. You won’t opt into this, and you won’t necessarily even be told it’s happening. It will just happen. Magically.

I don’t love that part. 17/

Finally, there are so many invisible sharp edges that could exist in a system like this. Hardware flaws. Issues with the cryptographic attestation framework. Clever software exploits. Many of these will be hard for security researchers to detect. That worries me too. 18/

Wrapping up on a more positive note: it’s worth keeping in mind that sometimes the perfect is the enemy of the really good.

In practice the alternative to on-device processing is: ship private data to OpenAI or someplace sketchier, where who knows what might happen to it. 19/

And of course, keep in mind that super-spies aren’t your biggest adversary. For many people your biggest adversary is the company who sold you your device/software. This PCC system represents a real commitment by Apple not to “peek” at your data. That’s a big deal. 20/
@matthew_d_green It doesn't have to be perfect, it only has to be more costly to compromise this new system than to compromise the phone itself (for instance, through an exploit on the update servers). IMO, the biggest risk would be when compromising this "cloud" would compromise many phones at once, since that would divide the cost by the number of phones affected.
@matthew_d_green Or, will something simply not work because there is no connectivity? Will it quietly work degraded? Or will it work degraded but warn you it's having trouble connecting to a PCC?
@matthew_d_green That sounds like a *fantastic* feature request. Ideally if you can convince a senator to ask for it, maybe.
@matthew_d_green Hi Matthew, looks like Apple feels confident about the security of using their Private Cloud Compute server for “Generative AI” tasks? However, if you want to do something which involves ChatGPT, Apple will ask for your permission first. https://www.youtube.com/watch?v=pMX2cQdPubk
Talking Tech and AI with Tim Cook!

YouTube
@matthew_d_green oh my gods I’d have to take a shower if he replied to me
@matthew_d_green you are safe from this around here...
@matthew_d_green I started calling Tweets "Xcretions" after the man-child took over. His are the smelliest; Musky, if you will.

@matthew_d_green I completely understand this is not the point but it adds extra irony given how much data Teslas suck up about their users. They don’t seem to allow users to opt out of their data collection…

Excellent thread though. You bring up a lot of good points

@matthew_d_green and is it even worth it? By this I mean: Apple still hasn’t nailed some other ease-of-use processing, like recognizing shipping numbers, which afaik only ever worked in the US. So now they have this whole setup for better Memojis and some text generation?
@fl0_id @matthew_d_green It works outside the US, but it could be language-setting related; I’m one of the weirdos who think computing devices have to be set to English.
@matthew_d_green that third-party relay feels like a nice target
@matthew_d_green It might be suboptimal, but it's the same as the code running locally on the phone, which AFAIK you also have to reverse engineer.
@cesarb That’s true but the threat model is different. Assume Apple isn’t deliberately backdooring things (in which case you’re hosed.) A security vulnerability in the code on your phone might be exploitable, but exploiting it at scale probably requires targeted attacks against millions of individual devices. A vulnerability in this code might allow an attacker to achieve similar results after compromising just a handful of servers.
@matthew_d_green @cesarb So, once again, it boils down to who you trust. Personally, I don’t see any reason to not trust Apple, especially when comparing them to the alternatives as opposed to theoretical perfection

@matthew_d_green I read something, somewhat recently, arguing that having the source code was not that important to security researchers. I don’t remember who wrote it, nor the details, but it was well argued and very interesting, especially given how unintuitive the claim was.

Have you read it? Knowing my luck, you probably wrote it and it’s the reason I’m following you. Anyway, I’m not trying to argue, your comment just reminded me of it. 🙂

@matthew_d_green related to Microsoft's "Why Should I Trust Your Code?" paper? https://cacm.acm.org/practice/why-should-i-trust-your-code/
Why Should I Trust Your Code? – Communications of the ACM

@matthew_d_green Step 4 in that screenshot is very much hit-and-miss; I've read far too many reports from researchers that Apple either didn't want to deal with them at all or, if they did, they didn't pay out.
@matthew_d_green and not using their own silicon since they failed to include constant-time operations properly, I hope.
@matthew_d_green This just occurred to me: Might we see a return of the Xserve—running Apple Silicon?
@matthew_d_green

Interesting proposal that seems very well thought through. Let’s see what business people make of it 😏 And, at least, nobody is talking about homomorphic encryption.

@matthew_d_green

Thanks for the great thread:)

Just wondering what this will look like in the context of the EU's new ambitions and its blatant surveillance extremism. They want ‘access by design’.

In a classified report dated May 22, the EU expert group, which is dominated by security authorities, lists 42 recommendations:

https://cdn.netzpolitik.org/wp-upload/2024/06/2024-05-22-Recommendation-HLG-Going-Dark-c.pdf

@matthew_d_green Is there information on the environmental impact of this feature?

@matthew_d_green Suppose, only for the sake of argument, that these technical measures succeed, and Apple’s system is as secure as we want it to be. Here’s my concern.

Years down the line, will their management still be as strongly committed to those goals? It sounds like this comes at considerable cost and effort. Will they *never* give in to the temptation to cut corners?

While reading the thread, I thought of Boeing. Once a model of engineering and safety; look what happened.

@slimhazard @matthew_d_green

I think a more interesting question to ask about Boeing is, "what can we do differently so that failures like this are less likely?"

It's all very well and good to point out potential future problems. The specific problem of what happens if Apple's corporate culture changes is one that sometimes keeps me up at night too—I don't really understand why it's as good as it is, and I want it to stay that way or even improve.

What can we do to facilitate that?

@abhayakara @matthew_d_green In Boeing’s case, it’s usually attributed to management with a certain worldview that took over after the MD merger. Which makes it feel like there’s always a risk. Who can say what will be in someone’s head, at some unknown point in the future?

Change the incentives, so that it’s sure to stick? So that sacrificing privacy is never more profitable than “shareholder value”?

Meaning: legislative and regulatory mandates for computer security.

@abhayakara @slimhazard @matthew_d_green "I think a more interesting question to ask about Boeing is, "what can we do differently so that failures like this are less likely?""

Make much more thorough use of anti-trust law. U.S. companies can commit all kinds of atrocities and walk away with a slap on the wrist equivalent to a few hours of doing business because they spend billions to put their friends in government regulation positions. Boeing was no different. They had friendly people in the FAA that looked the other way.

John Oliver did an excellent episode on this: https://www.youtube.com/watch?v=Q8oCilY4szc&t=1

Boeing: Last Week Tonight with John Oliver (HBO)

YouTube

@Avitus @slimhazard @matthew_d_green Right. I don’t mean theoretically what should have been done, though. I mean what can _we_ do, individually or collectively, to either support a better regulatory regime or do other things to reward good corporate behavior.

This is not a hypothetical question. I think many people have the model of governance as something someone else does, but that’s how we got here.

@abhayakara @slimhazard @matthew_d_green
@pluralistic did an informative piece about Boeing, listing a lot of things that went wrong and could have been prevented, especially by regulators: https://pluralistic.net/2024/05/01/boeing-boeing/#mrsa
Pluralistic: Boeing’s deliberately defective fleet of flying sky-wreckage (01 May 2024) – Pluralistic: Daily links from Cory Doctorow

@bloc @abhayakara @slimhazard @matthew_d_green @pluralistic But even more especially by Engineers and not know-nothing MBAs running an Engineering Company!

@slimhazard @matthew_d_green Related parallel: the proposals to identify child sexual abuse materials shared using messaging systems by creating hashes to "fingerprint" them.

The same technology potentially opens the door to fingerprinting ANYTHING an authoritarian govt doesn't want disseminated, chilling free speech or even criminalising it, including political manifestos, evidence of human rights abuses, environmental info etc.

@matthew_d_green Another aspect is that relying on remote servers means that your phone has diminished functionality whenever you have no service. That feels like a potential problem if this technology becomes too central to phone function.
@KimSJ @matthew_d_green To me it sounds no worse than Apple's OS being proprietary.... Or at least in the same ballpark of troublesome...
@matthew_d_green exactly what I was looking for :D

@matthew_d_green thanks for posting this here in the fedi.

i wonder how this will play out when China demands access to those servers; Apple fell over previously for the CCP...

@matthew_d_green The blog post answers whether it can protect against external attacks, but it doesn't answer the biggest question: will Apple actually keep its word and not sell its clients' data, as it has in the past?
https://fossbytes.com/apple-data-collection-explained/
@matthew_d_green they could call it "Recall" now that the name is free.

@matthew_d_green
Nice thread thanks!

I wonder if they used seL4 (Apple has been a member of the foundation for some months now) in their PCC tech stack

https://sel4.systems/news/#member-apple

News about seL4 and the seL4 Foundation | seL4