If you are enabling an AI feature scanning all your emails, consider this will also scan the emails people have sent you. This information could include personal or otherwise legally protected information.

If this data leaks later (as it regularly happens with these systems), this could mean severe legal consequences for you down the road.

YOU are responsible for protecting the data of others under your custody.

This includes the messages and emails others send to you.

#NoAI #AI #Privacy

@Em0nM4stodon

Question for anyone enabling this scanning in a corporate setting:

If you receive a GDPR notice requiring you to delete the contents of an email (typically it's more than one, but let's go with one), how do you comply?

Bonus question: How do you demonstrate compliance in such a way that it would convince a court?

@david_chisnall @Em0nM4stodon Super spicy take: Using a third-party email provider in such a setting was *never* compliant with GDPR or with private NDAs your company has with various parties, unless you had a written contract with the email provider including its own NDA applying to all your email contents, without any provisions for the provider to unilaterally change the terms and remove that.

The gold rush to mine that shit for "AI" is just exposing what was already a huge violation of trust for the sake of going cheap.

@Em0nM4stodon I have a meeting on Monday about copolot and I am gonna keep this in mind for the QA portion.
@Em0nM4stodon This is not as simple under GDPR. Google may be responsible if you're under GMail, for instance.

@nojhan I have not examined Google's terms of service specifically, but most companies integrating AI features make it clear in their terms of service that users are responsible for the content they upload, and responsible to ensure no personal or otherwise protected information is uploaded.

Whether a GDPR-protected right or copyright applies in these circumstances would depend on each specific case. But in general, this is a potentially huge legal mess.

@Em0nM4stodon @nojhan
I wonder how that claim will hold up under scrutiny now that Google and Microsoft are funneling people into sharing their data with AI without really explaining that or giving clear options to opt out
@Em0nM4stodon it's OK 20 years after it leaks you will get a check for $3.35 from the class action lawsuit

@Em0nM4stodon

Some common sense advice: don't consider ANYONE who gives you the option to give up your privacy as someone who wants to make your life better.

Giving up your privacy NEVER makes your life better.

@Em0nM4stodon But they put the stupid legal confidentiality footer in their signature!!! Are you implying that AI would ignore the legal confidentiality signature?!?
@Em0nM4stodon So Windows 11 (aka Recall at some point) is not only a security and privacy disaster, but your ticket to being sued into oblivion? While staying on 10 will soon force your data to OneCloud … which is not much different?

@steltenpower I am so sick of Windows 10 screaming at me about how my data will be lost if I don't back it up to one drive.

Which is a lie, I back up my data in others ways.

@Em0nM4stodon > this could mean severe legal consequences
Its also just disrespectful to put the data of others on risk. Weither legal consequences happen or not, you are a douch for doing it in the first place.

(Same for sharing contact info to apps that the other person doesn't have themselves)

@shadowwwind @Em0nM4stodon I wonder how many people (especially not tech savvy majority) still think privacy and data protection is individual-level issue. I guess most of "I don't have anything to hide" crowd don't even consider they put others' data at risk.
And this is the reason I think about most people as backdoored vulnerability and don't trust them with any important information.
@Em0nM4stodon the only option is to not write emails to people with those AI scans

@Em0nM4stodon @CriticalSilence People didn’t give a damn back then, when WhatsApp aired. They granted access to their contacts, et voila WhatsApp got everybody’s contact data.

I don’t think that people now thinking twice before granting access to their mails, to keep other people’s data safe.

@bastian_S fully agree. Marketing made this a comfortable feature and people (especially w/o tech knowledge) used it that extreme, they cannot live without anymore @Em0nM4stodon
@Em0nM4stodon inb4: i am a private person, i didn't sign TLP, so everything you sent to me will be public available since i am not required to keep it non-disclosed
@Em0nM4stodon I don't know if it's realistic to expect non-techies to understand the issues, much less follow an unstated protocol. What do the TOS say about it?
@fembot Indeed, they shouldn't have to. But businesses using this should know better, and legislators should be protecting their citizens against such corporate abuse.
@Em0nM4stodon
I’d be interested in clever malicious prompt injections one could put in my email footers?
@Em0nM4stodon Just slap “CONFIDENTIAL” in the subject line ..😅
Jokes aside, if your AI features are scanning inboxes, real guardrails need to be in place. Not just "you can turn this off"

@confidentsecurity @Em0nM4stodon You joke, but if you put certain words (such as the F bomb, lol) in there it may actually refuse to output a summary.

Unfortunately, it will still have been sent and processed in the first place, it just might generate a refusal.

Perhaps there is something less... unbusinesses-like than the F-bomb that could be used? This might have an overall effect of discouraging people from using such a thing knowing it would just keep refusing anyway.

Or maybe a way to poison it in general. Just so it could be made useless for summaries and ultimately discourage people from using that?

@Em0nM4stodon I hate that I have no way of opting myself out of other people sharing my data.
@ExplodingLemur @Em0nM4stodon Damn right! People should be more respectful and understanding! Not ignorant and "pff, fuck you"
@Em0nM4stodon Not just emails. If you're reviewing a paper for a journal and you ask an LLM to summarize it for you, you've leaked that paper.
@nikhil @Em0nM4stodon Indeed, this is why it is forbidden by journals to upload any part of a manuscript you're reviewing to a genAI! And people should really not use AI to review anyway!!
@elduvelle @Em0nM4stodon And, to make it even worse, some authors have added "hidden text" (white font) in their papers that prompt the LLM to provide a good review.

@nikhil
Why would that be worse? I think it's great - reviewers shouldn't use AI to review anyway. The hidden prompt will have no influence on a normal reviewing process.
if a reviewer uses AI, I have no problem with them being tricked into writing a positive review. Hopefully that will make them realise that what they are doing is wrong...

@Em0nM4stodon

@elduvelle @Em0nM4stodon Yes, but what about the extra unfairness towards other authors who submitted to the same conference/journal? That's why it's worse.
@nikhil what do you mean? I don't see what is unfair in this situation
@Em0nM4stodon preaching to the billions of facebook users ....
@Em0nM4stodon
You likely know about LLM “deception”, including strange emergent behavior from scanning emails, but followers may not. Recent article in The Economist:
https://bsky.app/profile/johnmashey.bsky.social/post/3lo2oom32ic2x
Thread links to talk on subject and detailed paper.
Free copy:
https://archive.is/1KfR7
John Mashey (@johnmashey.bsky.social)

AI1/ Emergent behaviors of “scheming” and even outright deception were just covered by The Economist: https://www.economist.com/science-and-technology/2025/04/23/ai-models-can-learn-to-conceal-information-from-their-users (Paywall, may be able to try for free, but in any case, read on for a lecture this week on exactly this topic and link to (open access) technical paper cited.)

Bluesky Social
@Em0nM4stodon
Luckily I have a disclaimer that any information you send to me can be made public by me at any time and you have no expectation of privacy, so I'm good!
@Em0nM4stodon which means a server bill for self hosted services and best spam detection tool chain , since the expertise to do it isn't universal, this c
An only be community hosted and we get back to #fedi

@Em0nM4stodon I've known some people who were proudly bragging about how they use chatgpt for dealing with their client emails for their business… Especially unhappy/unsatisfied clients…

Most people just don't care just like they use gmail even though google is an advertising spyware company…

I hope these people get sued and lose trials but it's unlikely, in France… The DPA is way too much pro-business…

@Em0nM4stodon

(Apologies if this has been addressed; our tiny instance sees few replies to original posts.)

One thing folks might overlook is that, by allowing AI to scan emails, they could arguably lose protection for their trade secrets (e.g. Coca-Cola's secret formula).

🛡️ Trade secret protection is only retained as long as "reasonable efforts" are taken to keep that information a secret.

🚫 Is it "reasonable" to think AI will keep such content private?

I trust the interns who work for our nonprofit org more than I trust AI -- and our interns all sign NDAs.

#noAI #AI #IP #tradesecret #privacy #tech #law @law

@Em0nM4stodon

All of this, but it would be naive for anyone using something like gmail to think google will actually respect users' opt out preferences. They will scan the email anyway, either to train their A.I. or for other purposes like displaying you ads

When Google's own A.I. model claimed it was trained on gmail data, Google never specifically denied the claim. They also documented training their auto-complete features on users' emails. Y'all need to switch to more respectable providers!