Ok, I just don’t see the relevance to this post then. Sure, you’re fine to rant about Apple in any thread you want to, it’s just not particularly relevant to AI, which was the technology in question here.
I hear good things about GrapheneOS, but I just stay away from it because of all the strangeness. I love Olan’s.
Security and privacy in 2024 is unfortunately about trust, not technology, unless you are able to isolate yourself or design and produce all the chips you use yourself.
They have designed a very extensive solution, Private Cloud Compute: security.apple.com/blog/private-cloud-compute/
All I have seen from security researchers reviewing this is that it will probably be one of the best solutions of its kind - they do basically everything correctly, and extensively so.
The only critique I’ve seen is that they could have provided even more source code and easier ways for third parties to verify their claims, though it is understandable that they didn’t.

Secure and private AI processing in the cloud poses a formidable new challenge. To support advanced features of Apple Intelligence with larger foundation models, we created Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed specifically for private AI processing. Built with custom Apple silicon and a hardened operating system, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user — not even to Apple. We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale.
To be honest, I’m not sure what we’re arguing about - we both seem to have a sound understanding of what an LLM is and what it is not.
I’m not trying to defend or market LLMs, I’m just describing the usability of the current capabilities of typical LLMs.
It goes a tad beyond classical conditioning… LLMs provide a much better semantic experience than any previous technology, and are great for relating input to meaningful content. Think of them as an improved search engine that gives you more relevant info / actions / tool suggestions etc. based on where and how you are using it.
Here’s a great article that gives some insight into the knowledge features embedded into a larger model: transformer-circuits.pub/…/scaling-monosemanticit…
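To make the "improved search engine" analogy concrete: the core trick is matching by meaning rather than by keyword overlap. Here's a toy sketch in Python - the hand-written 3-d "embedding" vectors are a completely made-up stand-in for a real learned embedding model, and only the retrieval mechanics (cosine similarity ranking) are the point:

```python
# Toy illustration of semantic matching versus keyword matching.
# FAKE_EMBEDDINGS is a hypothetical stand-in for a learned embedding model;
# the crude axes are (food, travel, code).
from math import sqrt

FAKE_EMBEDDINGS = {
    "apple pie recipe":          (0.9, 0.1, 0.0),
    "baking bread at home":      (0.7, 0.0, 0.2),
    "cheap flights to rome":     (0.0, 0.9, 0.0),
    "python list comprehension": (0.0, 0.0, 0.9),
}

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def best_match(query_vec):
    # Rank stored items by similarity to the query vector, return the top hit.
    return max(FAKE_EMBEDDINGS, key=lambda k: cosine(query_vec, FAKE_EMBEDDINGS[k]))

# A purely food-related query vector, sharing no literal keywords with any entry:
print(best_match((1.0, 0.0, 0.0)))  # -> apple pie recipe
```

A keyword search for the same query would find nothing; the semantic ranking still surfaces the most related content, which is the usability jump being described above.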
That’s fair, but you’re misunderstanding the technology if you’re bashing Apple’s AI for making macOS less secure. Most likely it will be just as secure as, for example, their password functionality, although we don’t have details yet. You either trust the OS or you don’t.
Microsoft Recall was designed so badly that there’s no hope for it.
That’s why it’s at the OS level. For text, for example, it seems to work in any text app that uses the standard text input API, which Apple controls.
The user activates the “AI overlay” at the OS level, not in the app; the OS reads the selected text from the app and sends text suggestions back.
The app is (possibly) unaware that the AI has been used or activated, and has not received any user information.
Of course, if you don’t trust the OS, don’t use this. And I’m 100% speculating here based on what we saw for the macOS demo.
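The speculated flow above can be sketched as a mediator pattern: the OS sits between the app's standard text-input layer and the AI service, so the app only ever sees an ordinary edit. This is purely illustrative Python - every class and method name here is hypothetical, not an Apple API:

```python
# Speculative sketch of an OS-mediated AI overlay. The app exposes only the
# standard text-selection interface; the OS reads the selection, queries the
# AI, and writes the suggestion back. The app never talks to the AI.

class TextApp:
    """Any app using the (hypothetical) standard text-input API."""
    def __init__(self, text, selection):
        self.text = text
        self.selection = selection  # (start, end) indices
        self.ai_calls_seen = 0      # the app never observes the AI directly

    def selected_text(self):
        start, end = self.selection
        return self.text[start:end]

    def replace_selection(self, new_text):
        # From the app's perspective this is just a normal text edit.
        start, end = self.selection
        self.text = self.text[:start] + new_text + self.text[end:]

class AIService:
    """Stand-in for the model; uppercasing is a dummy 'suggestion'."""
    def suggest(self, text):
        return text.upper()

class OSOverlay:
    """The OS-level overlay: mediates between app and AI service."""
    def __init__(self, ai):
        self.ai = ai

    def rewrite_selection(self, app):
        suggestion = self.ai.suggest(app.selected_text())
        app.replace_selection(suggestion)

app = TextApp("hello world", (0, 5))
OSOverlay(AIService()).rewrite_selection(app)
print(app.text)           # -> HELLO world
print(app.ai_calls_seen)  # -> 0 (the app never interacted with the AI)
```

The design point is that the trust boundary sits entirely inside the OS: apps need no AI-specific code, and the AI receives only what the OS chooses to pass along.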