Microsoft 365's buggy Copilot 'Chat' has been summarizing confidential emails for a month — yet another AI privacy nightmare
Unless someone has released something new while I haven’t been paying attention, all the gen AIs are essentially frozen. Your use of them can’t impact the actual weights inside the model.
If it seems like the model remembers things, it’s because the actual input to the LLM is larger than the input you typically give it.
For instance, let’s say the max input for a particular LLM is 9096 tokens. The first part of that will be instructions from the owners of the LLM to prevent their model from being used for things they don’t like; let’s say the first 2000 tokens. That leaves 7000 or so for a conversation that will be ‘remembered’.
Now, if someone were really savvy, they’d have the model generate summaries of the conversation and stick them into another chunk of the context, maybe another 2000 tokens’ worth, so that it would seem to remember more than just the current thread. That would leave you with about 5000 tokens for the running conversation.
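The budgeting described above can be sketched roughly like this. This is a minimal illustration, not any vendor’s real implementation: the numbers mirror the example, `count_tokens` is a crude stand-in for a real tokenizer, and all function names are hypothetical.

```python
# Hypothetical sketch of how a chat frontend might budget a fixed
# context window: system instructions + rolling summary + recent turns.
MAX_TOKENS = 9096        # total context window (example figure)
SYSTEM_TOKENS = 2000     # provider's own instructions
SUMMARY_TOKENS = 2000    # rolling summary of older conversation
CHAT_TOKENS = MAX_TOKENS - SYSTEM_TOKENS - SUMMARY_TOKENS  # 5096 left

def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly one token per word.
    return len(text.split())

def build_prompt(system: str, summary: str, turns: list[str]) -> list[str]:
    """Keep only the newest turns that still fit the chat budget."""
    kept, used = [], 0
    for turn in reversed(turns):            # walk newest-first
        cost = count_tokens(turn)
        if used + cost > CHAT_TOKENS:
            break                           # older turns fall out of context
        kept.append(turn)
        used += cost
    kept.reverse()                          # restore chronological order
    return [system, summary] + kept
```

Anything that falls out of the chat budget is only “remembered” to the extent the rolling summary captured it, which is why these models feel like they have memory without the weights ever changing.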
That is kind of assuming the worst-case scenario, though. You wouldn’t assume that QA can read every email you send through their mail servers “just because”.
This article sounds a bit like engagement bait based on the idea that any use of LLMs is inherently a privacy violation. I don’t see how pushing the text through a specific class of software is worse than storing confidential data in the mailbox though.
That is assuming they don’t leak data for training, but the article doesn’t mention that.
This is some pathetic chuddery you’re spewing…
You wouldn’t assume that QA can read every email you send through their mail servers “just because”
I absolutely would, and Microsoft explicitly maintains the right to do that in their standard T&C, both for emails and for any data passed through their AI products.
www.microsoft.com/en-us/servicesagreement#14s_AIS…
v. Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.
Those seem to be the terms for the personal edition of Microsoft 365, though? I’m pretty sure the enterprise edition, which has features like DLP and tagging content as confidential, would have a separate agreement where they are not passing on the data.
Unless this boundary has actually been crossed in which case, yes. It’s very serious.
This part applies to all customers:
v. Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.
And while Microsoft has many variations of licensing terms for different jurisdictions and market segments, what they generally promise to opted-out enterprise customers is that they won’t use their inputs to train “public foundation models”. They’re still retaining those inputs, and they reserve the right to use them for training proprietary or specialized models, like safety-filters or summarizers meant to act as part of their broader AI platform, which could leak down the line.
That’s also assuming Microsoft are competent, good-faith actors — which they definitely aren’t.