uwu.social administrator
cool guy
| email | [email protected] |
| location | australia |
Quick message to all fediverse newcomers:
Moderation in the fediverse is done by humans. There are no bots that shadowban you based on detecting certain keywords. This has some important consequences:
1) If you see a rule-breaking post, click the "report" button and write a report. The actual living, breathing creatures who run your instance will review the report.
The location of the report button varies depending on your client/app software: it's usually near "reply" or "boost."
2) The text you write in a report matters, because it will be read by an actual person. The mods are not just counting the number of reports.
3) If you're talking about a sensitive topic like suicide, you should not bleep it out like "sui-de". You won't be banned for typing the actual word out. You should instead use a content warning for your post to warn your readers. Some users who don't wish to see certain topics can voluntarily apply word filters to their own timelines, and bleeping words is rude because it bypasses their filters.
4) If you get a DM saying something like
uwu.social had degraded performance plus some downtime for about 7 hours due to a hardware issue on the media storage server.
Some media may be missing from remote posts made within the last 7 hours.
We successfully migrated all Mastodon media to its new home. Please let me know if you see any serious issues with media (like no media loading at all, no incoming posts having media for more than a few minutes, uploads failing, etc.). Please don't notify me about random posts not having media - this happens all the time.
For the curious: we remounted the old data disk read-only and set up an overlayfs over it, with the writable layer in a new directory on the new server (with the help of NFS). Then, in the background, we copied the raw block device to its new home. After that completed, we deleted from the volume any files that had tombstones in the overlayfs upperdir, then copied every other file over with rsync.
Total downtime was less than an hour for the initial setup, and about 1h30m for the resync and finalization.
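For anyone curious how the finalization step fits together, it could look roughly like the sketch below. All paths are hypothetical, it needs root, and it assumes the raw-copied volume is already mounted on the new server:

```shell
# Sketch of the finalization described above (hypothetical paths; needs root).
UPPER=/mnt/overlay/upper   # the overlayfs writable layer (assumption)
NEWVOL=/mnt/newvol         # the raw-copied volume, mounted on the new server

# Overlayfs records deletions as "whiteouts": 0/0 character device nodes in
# the upperdir. Remove the matching paths from the copied volume first.
find "$UPPER" -type c -printf '%P\0' |
  while IFS= read -r -d '' path; do
    rm -rf -- "$NEWVOL/$path"
  done

# Then replay every real change. Note: plain `rsync -a` implies -D, which
# would copy the whiteout device nodes across too, so use the long form
# of -a without it.
rsync -rlptgo "$UPPER"/ "$NEWVOL"/
```

Dropping `-D` from rsync matters: with it, the whiteout character devices themselves would be copied onto the new volume instead of being interpreted as deletions.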
uwu.social will be down for up to 2 hours later today (unsure when we'll start - probably in 10 or so hours) as we begin the server data migration. Once the migration is underway we will restore service; when the data copy finishes, we will go down again for up to 6 hours to finalize the move.
We've decided to go with the overlayfs route, copying the raw partition over in the background rather than listing each file individually. We've tested this procedure on smaller disks and it seems reliable.
Thanks to everyone who gave suggestions about the Linux moving thing. We'll be combining some of the tips we received to make the process smoother.
We're struggling to figure out a good way to migrate Mastodon media storage from one server to another... Mastodon stores files in millions of directories keyed by hash (kinda like aaa/bbb/ccc/ddd), which makes something simple - an initial rsync, then taking Mastodon down for a quick resync - impractical. We initially went this route, but after the initial file listing had run for more than 24h we cancelled it and gave up.
So now we're looking at just copying the raw filesystem over, but if we want to do that without taking Mastodon down for the entire sync, we need a way of copying it and then resyncing the changed blocks afterwards.
One way could be to use overlayfs. Remount the old volume R/O, create a temporary upperdir, and create an overlay between them. Then, copy the R/O image to its new home, expand it or whatever, and apply the upperdir onto it. This way we only need to list the directories that actually had writes. Special care will need to be taken to ensure we delete any files that have overlayfs tombstones. IDK if anyone has ever done this before.
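The overlay setup itself would be something like this minimal sketch - device and directory names are made up, it needs root, and note that upperdir and workdir must live on the same filesystem:

```shell
# Minimal overlayfs sketch for the idea above; all paths are hypothetical.
mount -o remount,ro /mnt/media                 # freeze the old volume
mkdir -p /scratch/upper /scratch/work /srv/media
mount -t overlay overlay \
  -o lowerdir=/mnt/media,upperdir=/scratch/upper,workdir=/scratch/work \
  /srv/media
# Point Mastodon at /srv/media: new writes and deletions land in
# /scratch/upper, while the underlying block device stays frozen and can
# be copied bit-for-bit in the background.
```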
Another way could be to use devmapper snapshots: create a new COW-backed volume, copy the R/O underlying block device over, and then apply the COW to the new volume with snapshot-merge. We tried testing this out and caused devmapper to die horribly and spit out kernel bug log lines, so we had to reboot and run e2fsck for 2 hours.
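For reference, the dmsetup incantation would look roughly like the sketch below - a sketch, not a recommendation, given how it fell over for us. Every device name here is invented and it all needs root:

```shell
# Rough sketch of the devmapper-snapshot idea (hypothetical devices; root).
ORIGIN=/dev/sdb1                       # old media volume, treated as R/O
COW=/dev/sdc1                          # scratch device holding the copy-on-write data
SECTORS=$(blockdev --getsz "$ORIGIN")

# Writes go to the snapshot device; $ORIGIN itself is never modified.
dmsetup create media-snap --table "0 $SECTORS snapshot $ORIGIN $COW P 8"
mount /dev/mapper/media-snap /srv/media

# ...copy $ORIGIN raw to the new server in the background...

# Later, with the COW device shipped over as well: merge it into the copy.
dmsetup create media-merge --table "0 $SECTORS snapshot-merge /dev/vdb1 /dev/vdc1 P 8"
dmsetup status media-merge             # poll until the merge finishes
```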
At this point it might be better to just take everything down for as long as it takes. I'm extremely annoyed at Mastodon's file structure making it impossible to move without major downtime. Their solution just seems to be "use S3 lol". It would probably take 24 hours (8TB at 1Gbps is roughly 17 hours). We could shrink it first since we don't use all the space, but resize2fs will take a while as well.
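The 17-hour figure is easy to sanity-check with shell arithmetic (assuming a fully saturated 1 Gbit/s link and decimal units):

```shell
# Sanity check of the estimate above: 8 TB over a sustained 1 Gbit/s link.
BYTES=$((8 * 1000 * 1000 * 1000 * 1000))   # 8 TB, decimal
LINK_BPS=$((1000 * 1000 * 1000))           # 1 Gbit/s
XFER_SECONDS=$((BYTES * 8 / LINK_BPS))
echo "$XFER_SECONDS seconds = ~$((XFER_SECONDS / 3600)) hours"
# prints: 64000 seconds = ~17 hours
```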
If anyone has any tips or ideas for doing this with minimal downtime, I'd like to hear them. And if you're an uwu.social user and don't care about extended downtime, I'd like to hear your thoughts too.