WARNING: Lemmy Self-Hosters, There Have Been CSAM Attacks taking place against [email protected]

https://jamie.moe/post/113630

There have been users spamming CSAM content in [email protected], causing it to federate to other instances. If your instance is subscribed to this community, you should take action to rectify it immediately. I recommend performing a hard delete via the command line on the server. I personally deleted every image from the past 24 hours, using the following command:

```
sudo find /srv/lemmy/example.com/volumes/pictrs/files -type f -ctime -1 -exec shred {} \;
```

Note: Your local jurisdiction may impose a duty to report or other obligations. Check these, but always prioritize ensuring that the content does not continue to be served.

## Update

Apparently the Lemmy Shitpost community is shut down as of now.

I nuked my personal instance because of this :(

Dealing with pictrs is just frustrating currently since there’s no tools for its database format and no frontend for the API. I half-expected this outcome but I hope it gets better in the future.

yeah this has got me second guessing hosting my own instance as well.
That finalized my decision to not self-host. I’m savvy enough to set it up but not enough to keep up with maliciousness like this. I’d never even considered a deliberate CSAM attack as a possibility - I thought it was just something (atrocious) users might inadvertently post.
You always gotta prepare for the worst case. It’s certainly why I am never going to bother with hosting something like this unless I’m serious about it akin to a job. If there’s even a remote chance of CSAM getting on your machine, you gotta assume it will and be prepared to fight to prevent it/remove it.
Agreed, pict-rs is not ready for this. Not having an easy way to map URL to file name is a huge issue. I still don’t understand why non-block storage doesn’t just use the UUID it generates for the URL as a filename. There is zero reason to not have a one-to-one mapping.
yeah, I just spent the last hour writing some python to grab all the mappings via the pict-rs api. Didn’t help that the env var for the pictrs api token was named incorrectly (I should probably make a PR to the Lemmy ansible repo).
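For anyone attempting the same thing, here is a minimal sketch of what that script might look like. The endpoint path (`/internal/aliases`), the `X-Api-Token` header, and the local address are assumptions about the pict-rs internal API; verify them against the docs for your pict-rs version before relying on this.

```python
# Sketch: query the pict-rs internal API to map a stored file's hash to its
# aliases (the IDs that appear in image URLs). Endpoint path and auth header
# are assumptions -- check your pict-rs version's documentation.
import json
import urllib.request

PICTRS_URL = "http://127.0.0.1:8080"  # hypothetical local pict-rs address
API_TOKEN = "changeme"                # whatever your pict-rs api key is set to

def build_alias_request(file_hash: str) -> urllib.request.Request:
    """Build the request for listing aliases that point at a stored file."""
    url = f"{PICTRS_URL}/internal/aliases?hash={file_hash}"
    return urllib.request.Request(url, headers={"X-Api-Token": API_TOKEN})

def fetch_aliases(file_hash: str) -> list[str]:
    """Call the (assumed) internal endpoint and return the alias list."""
    with urllib.request.urlopen(build_alias_request(file_hash)) as resp:
        return json.load(resp).get("aliases", [])
```

Once you have the hash-to-alias mapping you can translate a reported URL back to the file on disk and shred it directly.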

If you aren’t going to fully wipe your drive in horrible events like this, at the very least use shred instead of rm. rm simply removes references to the file in the filesystem, leaving the data behind on the disk until other data happens to be written there.

Do not ever allow data like that to exist on your machines. The law doesn’t care how it got there.

Was going to say the same. Windows and Linux both use “lazy” ways of deleting things, because there’s not usually a need to actually wipe the data. Overwriting the data takes a lot more time, and on an SSD it costs valuable write cycles. Instead, it simply marks the space as usable again, and removes any associations to the file that the OS had. But the data still exists on the drive, because it’s simply been marked as writeable again.

You need a form of secure delete, which doesn’t just mark the space as usable. A secure delete will overwrite the data with junk data. Essentially white noise 1’s and 0’s, so the data is completely gone instead of simply being marked as writeable.

Would rm be okay if you regularly fstrim?

No, fstrim just tells your drive it doesn’t need to care about existing data when writing over it. Depending on your drive, direct access to the flash chips might still reveal the original data.

If you want to ensure data deletion, you’d need to zero out the whole drive and then fstrim to regain performance. Also see ATA Secure Erase.

Or physically destroy the whole drive altogether.

TRIM tells the SSD to mark an LBA region as invalid and subsequent reads on the region will not return any meaningful data. For a very brief time, the data could still reside on the flash internally. However, after the TRIM command is issued and garbage collection has taken place, it is highly unlikely that even a forensic scientist would be able to recover the data.

From: en.m.wikipedia.org/wiki/Trim_(computing)#Operatio…

So: probably yes.

That makes perfect sense. So basically normal delete just removes the reference, but the actual data is still there until something overwrites it. I guess for sensitive files I should use a secure-delete tool. Do you recommend any specific software for that on Windows or Linux?
Wouldn’t the images still be stored on Lemmy.world’s servers? I thought federation simply meant you host a link to it (still bad, but it just goes away if you rm the post).
The only 100% foolproof way is to physically destroy the server disk where that image is stored. And don’t put the drive fragments in a recycling center or a landfill.
I’m not surprised. It was quite common for shitheads on reddit to make an account, post a few comments on /r/againsthatesubreddits, then post CP on other subreddits to spin the narrative that AHS was trying to shut down hate subs.
What’s a CSAM attack? Sounds so serious, but I’ve never heard of it.

It is where scum spam a site with illegal images, which can result in the site being taken down and in some instances the site owners being prosecuted.

Depending on where you live you may have a legal obligation to report the incidents and to prove actions taken to remove the content.

related in the US: safe harbor laws
Spamming pornographic depictions of minors
I had to google it but that stands for child sexual abuse material
Pedophiles ruin everything.

From what I understand from federation, shouldn’t the images still live on whatever server they were uploaded to? Sure the post gets federated and now you have links to csam on your server but you just need to delete the post, not the image, unless the attacker uploaded it to your pict-rs (how?).

To me this is a cornerstone of federation: it ensures both security for federating instances and optimal disk space usage, at the cost of extra bandwidth for popular hosts.

What kind of depraved piece of shit does this?

Welcome to the internet!

I do think social media (or becoming a carrier of other peoples’ data) is incredibly difficult. How do you moderate? How much liability do you take on? etc. I’m very interested to see how this gets tackled over time in the Fediverse.

Naive question here: would it be valuable to generate hashes of those images and provide them as a public database? Seems like it would be valuable to reject known images using some mechanism to prevent this from happening broadly. It wouldn’t stop someone from on-the-fly systematically editing/saving/uploading CSAM, but hashes are cheap to store and it would at least provide one barrier to entry.
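The core of that idea fits in a few lines. The hedge in the comment above is the important part: a cryptographic hash like SHA-256 only catches byte-identical files, which is why real-world systems (PhotoDNA, Meta's PDQ) use perceptual hashes that survive re-encoding and small edits. The entries below are obviously hypothetical placeholders.

```python
# Sketch of a hash blocklist for uploads: keep a shared set of digests of
# known-bad files and reject anything that matches. SHA-256 only matches
# exact bytes; perceptual hashing is needed to catch edited copies.
import hashlib

KNOWN_BAD_HASHES = {
    # hypothetical entry -- in practice this set would come from a shared DB
    hashlib.sha256(b"example-bad-image-bytes").hexdigest(),
}

def is_blocked(upload: bytes) -> bool:
    """Return True if the upload's digest appears in the blocklist."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_BAD_HASHES
```

Hashes are cheap to store and compare, so even this naive version is a usable first barrier at upload time, as the comment suggests.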

@Jamie some recommended reading here for hosting ActivityPub services: https://github.com/FediFence/fedifence/blob/main/LegalRegulatory.md

Cloudflare has a free CSAM filter: https://developers.cloudflare.com/cache/reference/csam-scanning/

IFTAS is working on an opt-in CSAM scanner for service providers, follow this account to be notified

Lemmy moderators should fill out this needs assessment: https://cryptpad.fr/form/#/2/form/view/thnEBypiNlR6qklaQNmWAkoxxeEEJdElpzM7h2ZIwXA/
