Slack is now using all content, including DMs, to train LLMs

https://lemmy.ml/post/15741611

cross-posted from: https://lemmy.ml/post/15741608

> They offer a thing they're calling an "opt-out."
>
> The opt-out (a) is only available to companies who are Slack customers, not end users, and (b) doesn't actually opt out.
>
> When a company account holder tries to opt out, Slack says their data will still be used to train LLMs, but the results won't be shared with other companies.
>
> LOL no. That's not an opt-out. The way to opt out is to stop using Slack.
>
> https://slack.com/intl/en-gb/trust/data-management/privacy-principles

Instead of working on their platform to get Discord users to jump ship, they decide to go in the same direction.
Also pretty sure training LLMs after someone opts out is illegal?
Wait, discord is also doing this?
Not currently, and not publicly at least. They're feeding your messages into an LLM (x.com/DiscordPreviews/status/1790065494432608432), but that's not as bad as training one on your messages.
> Discord Previews (@DiscordPreviews) on X:
>
> Discord has been using machine learning models to determine the gender and age group of some of its users since at least August 2022. The data can be found in the "activity/analytics/events-[...].json" file of some Discord data packages, though the exact requirements are unknown.

> Also pretty sure training LLMs after someone opts out is illegal?

Why? There have been a couple of lawsuits launched in various jurisdictions claiming LLM training is copyright violation but IMO they're pretty weak and none of them have reached a conclusion. The "opting" status of the writer doesn't seem relevant if copyright doesn't apply in the first place.

> but IMO they're pretty weak

Well, thankfully, it’s not up to you.

Nor is it up to you. But the fact remains: it's not illegal until there are actually laws against it. The court cases that might determine whether current laws apply are still ongoing.

If copyrights apply, only you and Slack own the data. You can opt out, but 99% of users don't. No users get any money. Google or Microsoft buys Slack so only they can use the data. We only get subscription-based AI, and open source dies.

If copyrights don’t apply, everyone owns the data. The users still don’t get any money but they get free open source AI built off their work instead of closed source AI built off their work.

Having the website have copyright of the content in the context of AI training would be a fucking disaster.

1. It's not illegal.
2. "Law" isn't a real thing in an oligarchy, except insofar as it can be used by those with capital and resources to oppress and subjugate those they consider their lessers and to further perpetuate the system for self-gain.