It just clicked in my brain: what I haven't been able to articulate about why I'm so anxious about #Windows Recall. I'm sure others have already gotten to where I am.

It's worse than "a system that tracks everything you do" and stores that info in a basic database that could be easily compromised.
It's worse than a nanny surveillance tool for companies to spy on their employees.

It's inescapable.

It doesn't matter if I make a dozen "how to disable recall" tutorials. The second YOUR data shows up on someone ELSE'S screen, it's in THEIR recall database.

It won't matter if you're a master #security specialist. You can't account for EVERY other computer you've ever interacted with. If a family member looks up an old email with your personal data in it, your data is now at risk.

If THEIR system is compromised, YOUR data is at risk.

I just went from "vague feeling of unease" to "actively writing templates to canvass elected officials, regulators, and attorneys general."

@SomeGadgetGuy remember the "Timeline" feature in Windows 10? That was a bit creepy as well; this is Timeline on steroids.
@mountdiscovery completely unhinged and un-containable.

@SomeGadgetGuy @mountdiscovery Microsoft depends on institutional clients, especially government agencies, many of which have stringent legal rules about access controls. There's no way Recall can be compatible with those rules.

What's astonishing me right now is that I would have expected a whole lot of Microsoft's clients to push back, "We'll have to stay with Windows 10", but I'm not seeing any hint of Microsoft backing down.

@foolishowl yup, and even though this feature is only available on newer PCs with NPUs, folks already hate it. No way it rolls out in the near future.

Microsoft should just fork Windows into consumer and professional (i.e. enterprise) editions.

One a new OS from the ground up with all these gimmicks and Copilot bloatware, the other the continued legacy OS that stays compatible with all kinds of existing software and services.

@mountdiscovery @foolishowl how can it not ship?

Machines are rolling out of shops on June 18th. No way they aren't already loaded with the shipping version. Hardware release is a slow process; you can't roll back on a whim.

@gigantos "roll out" as in product messaging from MS after this PR blowback

The end user is too busy with real life to care about this and how it works.

@gigantos @mountdiscovery I get a lot of advertisements from Lenovo, and all their new laptops have NPUs.

Intel has been saying that going forward, all their processors will have NPUs. I haven't looked as closely at AMD, but I know they're advertising gaming laptops with NPUs.

I think we have to treat hardware made after 2023 as unreliable. Probably after 2022 as well.

Also, the Linux Foundation, OSI, and Red Hat all have public AI projects, so I expect the Linux kernel will support NPUs.

@gigantos @foolishowl @mountdiscovery NPUs aren't inherently a negative. They're just a specialized co-processor for handling low-precision float math quickly, in parallel, and with relatively lower power consumption than using a CPU or GPU for that task. From a hardware perspective it's a non-issue. What software you choose (or are forced) to run that takes advantage of an NPU is where the potential problems are.
@mnemonicoverload @gigantos @foolishowl @mountdiscovery Indeed, NPUs aren't the problem per se. The problem is Windows forcing this "revolutionary idea" on everyone by default, making Windows machines with NPUs a high risk. Linux would still be reliable on newer machines as long as distros don't make "AI desktops" their main focus.
@gigantos @foolishowl @NullTheFool @mountdiscovery Yeah, exactly. There's also nothing stopping Microsoft from rolling out this same "feature" to Windows PCs that don't have NPUs. It would burn a lot of CPU cycles and negatively affect performance to some extent, but there's nothing exclusive to NPUs here.
@mnemonicoverload @gigantos @NullTheFool @mountdiscovery I don't really understand what NPUs do. I'd thought they used GPUs for generative AI processing. What's struck me most is that Intel is making such a big deal of emphasizing that they will now include NPUs on their processors, implying they're as important as GPUs for consumer hardware. So part of my worry is seeing that they're going all in on this on the hardware side.

@foolishowl yes they are...

The NPU becomes a great marketing tactic and a differentiating factor from other chip manufacturers.

@foolishowl @mnemonicoverload @NullTheFool @mountdiscovery the simplified explanation is that if you want to do the basic math operations used by LLMs, image processing, or machine learning, the NPU will do them using less power than a CPU or GPU.
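To make "basic math operations at low precision" concrete, here's a purely illustrative Python sketch (not any real NPU API): an int8 dot product with float scale factors, the kind of quantized multiply-accumulate work NPUs are built to do cheaply. The scale values and numbers are made up for the example.

```python
# Illustrative only: the kind of low-precision math an NPU accelerates.
# Floats are mapped to int8, the dot product runs in integer arithmetic,
# and the result is rescaled back to float at the end.

def quantize(xs, scale):
    """Map floats to int8 by dividing by a scale factor and rounding."""
    return [max(-128, min(127, round(x / scale))) for x in xs]

def int8_dot(a, b, scale_a, scale_b):
    """Dot product over int8 values, rescaled back to float."""
    acc = sum(x * y for x, y in zip(a, b))  # integer multiply-accumulate
    return acc * scale_a * scale_b

weights = [0.123, -0.504, 0.333, 0.071]
inputs = [1.0, 0.25, -0.75, 0.5]

qa = quantize(weights, 0.01)
qb = quantize(inputs, 0.01)
approx = int8_dot(qa, qb, 0.01, 0.01)
exact = sum(w * x for w, x in zip(weights, inputs))
print(approx, exact)  # close but not identical: precision traded for speed
```

Integer multiply-accumulates like this are much cheaper in silicon than full float math, which is where the power savings come from.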

@gigantos @foolishowl @NullTheFool @mountdiscovery There's a lot of confusion about the use cases for NPUs among the general public right now, because they are useful for accelerating some specific "AI" tasks but the general AI hype train is all about LLMs. The current NPUs available in production CPUs are nowhere near powerful enough to usefully run complex LLMs on local hardware; we still need data-centre-level grunt for that.

For a better representation of what kinds of tasks are well suited to running on NPUs, we can look at what Apple is currently using their very similar Neural Engine cores for in iOS: facial and object recognition for automated tagging of photos and video, handwriting recognition, image upscaling, automated audio tagging, voice recognition, background noise removal, etc. All (potentially) useful stuff that falls well outside the current AI buzzword hype in the public consciousness.

@mnemonicoverload @foolishowl @NullTheFool @mountdiscovery this is true, but the 10 TOPS NPU on the Apple M1 can run quantized llama at 30 tokens per second on a friend's machine. The ones required by Windows are supposed to be 4x faster. And with Microsoft's research into their Phi models, it is absolutely feasible to run a reasonably capable LLM locally.
@gigantos @foolishowl @NullTheFool @mountdiscovery I'll take your word for it since that's well outside my wheelhouse. The point I was trying to make was more that NPUs are broadly useful for a variety of tasks that people don't typically think of when hearing the word "AI". For the general public (at least in the midst of the current hype), AI = LLM, and therefore NPUs must be a bad thing that can't be trusted to exist on our computers, when in reality they're just a type of coprocessor that does certain types of math more efficiently than a general-purpose CPU can.

@mnemonicoverload @gigantos @foolishowl @NullTheFool @mountdiscovery I mean, not data centre-level grunt -- you can run Llama 3 70B at home on a Threadripper plus two A100 80 GB GPUs, which run 710 watts after the water cooling, but it'll cost you $50k for the hardware.

You could plug it into the wall socket in your kitchen.
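A quick sanity check of why two 80 GB GPUs are enough for Llama 3 70B. This assumes fp16 weights (2 bytes per parameter) and ignores KV-cache and activation overhead, so it's a rough lower bound on memory, not a full sizing exercise.

```python
# Rough sanity check: at fp16 (2 bytes per parameter), the weights of a
# 70B-parameter model alone need about 140 GB, which fits in 2 x 80 GB
# of GPU memory. Ignores KV cache and activation overhead.

params = 70e9
bytes_per_param = 2  # fp16
weights_gb = params * bytes_per_param / 1e9
gpu_memory_gb = 2 * 80
print(weights_gb, gpu_memory_gb, weights_gb < gpu_memory_gb)
```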

@mnemonicoverload @gigantos @foolishowl @NullTheFool @mountdiscovery The rest of your argument is rock solid; however, all those other AI features are generally loved by the audience.