Microsoft's AI wants to be your medical middleman, but is a "Secure by Design" promise really enough for Copilot?
Betteridge’s law, my beloved
(It isn’t statistically true in practice, though 😔)
Literally adjacent in my feed:
Microsoft’s AI wants to be your medical middleman, but is a “Secure by Design” promise really enough for Copilot? Would you trust Microsoft with the “puzzle” of your medical records?
Short answer? No, and no.
Microsoft’s push to make Copilot a kind of AI medical middleman—especially through the newly announced Copilot Health—raises a real tension: the company is loudly promoting a Secure by Design philosophy, but the sensitivity of health data means the bar is far higher than a general security promise. The short version is that Secure by Design is necessary, but nowhere near sufficient for something that sits between you, your clinicians, your medical records, and your wearables.
Unfortunately, none of the above is anywhere near enough.
We need locally available AI models that can run offline. The AI’s context and history must also be kept separate from the model itself. And if the model needs to communicate with the outside world, the user needs 100% transparency into, and control over, what data it sends.
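The requirements in the comment above can be sketched in code. This is a minimal illustration, not a real Microsoft or model-vendor API: every class and method name here is hypothetical, and the model call is a placeholder standing in for a genuine offline inference library. It shows the two ideas concretely: conversation history stored in a user-owned file separate from the model, and an explicit gate that shows the user exactly what would leave the machine before anything is sent.

```python
# Hypothetical sketch of a privacy-first local assistant.
# Assumptions (not from any real product): the LocalAssistant class,
# its method names, and the placeholder "local model" are invented
# purely to illustrate the architecture the commenter describes.
import json
from pathlib import Path


class LocalAssistant:
    def __init__(self, context_path: Path):
        # Requirement 1: context/history lives in a plain file the user
        # owns and can inspect or delete, not inside the model weights
        # or a vendor cloud.
        self.context_path = context_path
        if context_path.exists():
            self.history = json.loads(context_path.read_text())
        else:
            self.history = []

    def ask(self, prompt: str) -> str:
        # Placeholder for an offline model call (e.g. a llama.cpp
        # binding); nothing here touches the network.
        reply = f"[local model reply to: {prompt}]"
        self.history.append({"user": prompt, "assistant": reply})
        self.context_path.write_text(json.dumps(self.history, indent=2))
        return reply

    def send_externally(self, payload: dict, approve) -> bool:
        # Requirement 2: 100% transparency. Render the exact outbound
        # payload for the user and transmit nothing without approval.
        shown = json.dumps(payload, indent=2)
        if approve(shown):
            return True  # caller may now transmit the approved payload
        return False     # user declined; nothing leaves the machine


if __name__ == "__main__":
    assistant = LocalAssistant(Path("history.json"))
    assistant.ask("What do my lab results mean?")
    # The approval callback could be a GUI dialog; here it always declines.
    sent = assistant.send_externally({"query": "lab results"},
                                     approve=lambda text: False)
    print("sent:", sent)
```

The point of the sketch is the separation of concerns: the model never holds your history, and no payload crosses the egress gate without the user seeing its exact contents first.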
Source code of what?
The AI agent. Also, a way to see all of its training data.