What is the long-term storage plan for Lemmy instances?

https://lemmy.world/post/1334724

Over time, Lemmy instances are going to keep acquiring more and more data. Even if, in the best case, they are not caching content and are only storing the data posted to communities local to the server, there will still be virtually limitless growth in server storage requirements. Eventually, it may get to a point where it is no longer economically feasible to host all of the infrastructure to keep expanding the server’s storage. What happens at this point? Will servers begin to periodically purge old content? I have concerns that there will be a permanent horizon (as Lemmy becomes more popular, the rate of growth in storage requirements will also increase, thereby reducing the distance to this horizon) beyond which old – and still very useful – data will cease to exist. Is there any plan to archive this old data?

Pictrs 0.4 recently added support for object storage. This is fantastic, because object storage is dirt cheap compared to traditional block storage (like a VM filesystem).

I know Lemmy uses Postgres, but they should really invest time into moving towards something more sustainable for long term/permanent hosting. Paid Postgres services are obscenely upcharged and prohibitively expensive, so that’s not an option.

It’s difficult to run a DB off object storage, but letting Lemmy use SQLite instead would be amazing. If Lemmy supported SQLite, everyone could use Cloudflare R2, which is dirt cheap and doesn’t have egress fees.

Couple that with Pictrs supporting object storage, and the major instances could be saving hundreds of dollars a month off block storage fees alone.

Is the 700MB the postgres data, or everything including the images?

I’m under the impression that text should be very cheap to store inside postgres.

Keep in mind that you are also storing metadata for the post (e.g. creation time), relations (e.g. which user posted it), and an index.

Might not be much now but these things really add up over the years.

Yes but those are in general a couple of bytes at most. The average comment will be less than 1KB. Metadata that goes with it will be barely more.

On the other hand, most images will be around 1MB, roughly 1000× larger. Sure, it depends on the type of instance, but text should be a long way from filling a hard drive. From what I’ve seen on GitHub, the database size is actually mostly debugging information, so that might explain the weirdness.
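A quick back-of-the-envelope check of that ~1000× ratio (the per-comment and per-image sizes are rough assumptions, not measurements):

```shell
# Rough assumed sizes: ~1 KB of text plus metadata per comment, ~1 MB per image.
comment_bytes=1024
image_bytes=$((1024 * 1024))

# One image costs roughly as much space as this many comments:
echo $((image_bytes / comment_bytes))   # 1024

# How many of each fit in 100 GB:
awk -v c=1024 -v i=1048576 'BEGIN {
  gb = 100 * 1024^3
  printf "%d comments vs %d images\n", gb / c, gb / i
}'
```

Under those assumptions a 100 GB disk holds on the order of a hundred million comments but only about a hundred thousand images, which is why text alone takes a long time to matter.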

Configurable Activity Cleanup Duration · Issue #3103 · LemmyNet/lemmy

On average, 500MB is Postgres, 200MB is Pictrs thumbnails. Postgres is growing faster than Pictrs is.

My local instance that I run for myself is about a week old. It has 2.5G in pictrs and 609M in postgres. One of those things that’ll vary for every setup.

I’m not really sure that a K/V service is a more scalable option than Postgres for storing text posts and the like. If you’re not performing complex queries or requiring microsecond latencies then Postgres doesn’t require that much compute or memory.

Object storage for pictrs is definitely a fantastic addition, though.

The largest table holds data that is only needed by Lemmy briefly. There is a scheduled job to clear it… Every 6 months. There are active discussions on how best to handle this.

On my instance I’ve set a cronjob to delete everything but the most recent 100k rows of that table every hour.
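For what it’s worth, that cron job would look roughly like this (the table name `activity`, the `id` column, and the `psql` connection details are all assumptions; the activity table’s name has changed across Lemmy versions, so adjust for your setup):

```
# Hourly: delete everything but the newest 100k rows of the activity table.
0 * * * * psql -U lemmy -d lemmy -c "DELETE FROM activity WHERE id NOT IN (SELECT id FROM activity ORDER BY id DESC LIMIT 100000);"
```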

I saw that issue, and then I saw people having problems after clearing it, so I’m just going to wait until they figure that out in a stable version. Looking forward to it though!

AWS Postgres instances aren’t that expensive, and they handle upgrades and backups for you.

That said, I’m interested in distributed storage, and maybe this fall/winter when I get some time off I’ll try making a Lemmy fork that’s based on a distributed hash table. There are going to be a ton of issues (e.g. the data will be immutable), but I have a few ideas on how to mitigate most of the issues I know about.

Isn’t it mostly pictures and videos taking up space? Posts and comments are just text and don’t take up much.

I would be fine with text being kept forever while pictures and videos are deleted after some time.

Just think of all those old, helpful forum posts from years past with TinyPic and Photobucket links that are dead. I agree memes can probably die out over time, but losing anything informative would be bad imo.

For large instances, pictures are probably the bigger consumer of space, but for small instances the database size is the bigger issue because of federation. Also, mass storage for media is cheap; fast storage for databases is not. With my host I can get 1TB of object storage for $5 a month, while attached NVMe storage is $1 per month per 10 GB.
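Putting those two prices side by side per GB (using only the numbers quoted above):

```shell
# Object storage: $5/month for 1 TB (~1000 GB).
awk 'BEGIN { printf "object storage: $%.3f per GB\n", 5 / 1000 }'
# Attached NVMe: $1/month per 10 GB.
awk 'BEGIN { printf "nvme storage:   $%.2f per GB\n", 1 / 10 }'
# Ratio: NVMe costs this many times more per GB.
awk 'BEGIN { print (1 / 10) / (5 / 1000) }'   # 20
```

So at those rates, database-grade block storage is about 20x the per-GB price of object storage, which is why offloading media matters so much.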

For my small instance the database is almost 4x as large as pictrs, and growing fast.

There is a good writeup on how to do the migration here. I went through it myself since I host my tiny Lemmy instance on an AWS EC2 instance. It went pretty smoothly, but obviously larger instances will have to take a longer downtime to perform the migration.

Pro-tip: Self-hosting Lemmy? You can use object storage to back pict-rs (image hosting) to save a lot of money - federate.cc

Just thought I’d share this since it’s working for me at my home instance of federate.cc, even though it’s not documented in the Lemmy hosting guide.

The image server used by Lemmy, pict-rs, recently added support for object storage like Amazon S3, instead of serving images directly off the disk. This is potentially interesting to you because object storage is orders of magnitude cheaper than disk storage attached to a VM. By way of example, I’m hosting my setup on Vultr, but this applies to Digital Ocean or AWS as well. Going from a 50GB to a 100GB VM instance on Vultr takes you from $12 to $24/month; up to 180GB, $48/month. Of course these include CPU and RAM step-ups too, but I’m focusing only on disk space for now. Vultr’s object storage, by comparison, is $5/month for 1TB of storage and includes a separate 1TB of bandwidth that doesn’t count against your main VM. Plus, this content is served off Vultr’s CDN instead of your instance, meaning even less CPU load for you.

This is pretty easy to do. What we’ll be doing is diverging slightly from the official Lemmy ansible setup [https://github.com/LemmyNet/lemmy-ansible] to add some different environment variables to pict-rs. After step 5, before running the ansible playbook, we’re going to modify the ansible template slightly:

cd templates/
cp docker-compose.yml docker-compose.yml.original

Now edit docker-compose.yml with your favourite text editor (personally I like micro, but vim, emacs, nano or whatever will do). Down around line 67 begins the section for pictrs; under its environment section there are a bunch of things the Lemmy devs predefined.

At the bottom of that environment section, add these new vars to take advantage of the new support for object storage in pict-rs 0.4+ [https://git.asonix.dog/asonix/pict-rs/#user-content-filesystem-to-object-storage-migration]:

- PICTRS__STORE__TYPE=object_storage
- PICTRS__STORE__ENDPOINT=Your Object Store Endpoint
- PICTRS__STORE__BUCKET_NAME=Your Bucket Name
- PICTRS__STORE__REGION=Your Bucket Region
- PICTRS__STORE__USE_PATH_STYLE=false
- PICTRS__STORE__ACCESS_KEY=Your Access Key
- PICTRS__STORE__SECRET_KEY=Your Secret Key

So your whole pictrs section looks something like this: https://pastebin.com/X1dP1jew

The actual bucket name, region, access key and secret key will come from your provider. If you’re using Vultr like me, they are under the details after you’ve created your object store, under Overview -> S3 Credentials. On Vultr your endpoint will be something like sjc1.vultrobjects.com, and your region is the domain prefix, so in this case sjc1.

Now you can install as usual. If you have an existing instance already deployed, there is an additional migration command you have to run to move your on-disk images into the object storage. [https://git.asonix.dog/asonix/pict-rs/#filesystem-to-object-storage-migration]

You’re now good to go, and things should pretty much behave like before, except pict-rs will save images to your designated cloud/object store, and when serving images it will redirect clients to pull directly from the object store, saving you a lot of storage, CPU use and bandwidth, and therefore money. Hope this helps someone. I am not an expert in either Lemmy administration or Linux sysadmin stuff, but I can say I’ve done this on my own instance at federate.cc and so far I can’t see any ill effects. Happy Lemmy-ing!
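For reference, a rough sketch of what the pictrs service ends up looking like with those vars added (the image tag, endpoint, bucket name, keys, and volume path below are all placeholders; keep the predefined vars from your version of the lemmy-ansible template):

```yaml
pictrs:
  image: asonix/pictrs:0.4   # example tag; use the one from your template
  restart: always
  environment:
    # ... predefined vars from the lemmy-ansible template stay here ...
    - PICTRS__STORE__TYPE=object_storage
    - PICTRS__STORE__ENDPOINT=https://sjc1.vultrobjects.com  # your provider's endpoint
    - PICTRS__STORE__BUCKET_NAME=my-lemmy-images             # placeholder
    - PICTRS__STORE__REGION=sjc1
    - PICTRS__STORE__USE_PATH_STYLE=false
    - PICTRS__STORE__ACCESS_KEY=XXXXXXXX                     # from your provider
    - PICTRS__STORE__SECRET_KEY=XXXXXXXX
  volumes:
    - ./volumes/pictrs:/mnt   # path assumed from the default template
```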

Hey, that’s a Vultr guide! I use Vultr, thanks!

By the way, how are your costs on EC2? My understanding is that hosting on EC2 would be cost prohibitive from data transfer costs alone, not to mention their monthly rates for instances are pretty much always above the cost of a comparable VPS.

Currently I’m just running a single-user instance on a t2.micro. I’ve locked it up at least twice after subscribing to a big batch of external communities, so it’s definitely undersized if I were to open it up to more users. I only have one other small service running on that instance, so Lemmy is using the bulk of that capacity, at least when it’s got work to do.

Costs are about $11.25 a month for the instance and about $2.50 for block storage (which is oversized now that pict-rs is on S3). I’m guessing that pict-rs s3 costs will be just a few pennies a day unless I start posting a lot on my own instance, probably less than a dollar a month.

Data transfer costs for me are zero though. I’m not using a load balancer or moving things between regions so I don’t expect that to change.

Just FYI, you could save about $5 a month and get 2x the performance if you moved that to a VPS. $11 a month for t2.micro is basically you being scammed if I’m being honest 😅

Yeah, it’s likely that I’ll move this eventually. This instance was only set up so I had a test environment to learn AWS.

As for the data transfer costs, any network data originating from AWS that hits an external network (an end user or another region) typically will incur a charge. To quote their blog post:

> A general rule of thumb is that all traffic originating from the internet into AWS enters for free, but traffic exiting AWS is chargeable outside of the free tier, typically in the $0.08–$0.12 range per GB, though some response traffic egress can be free. The free tier provides 100GB of free data transfer out per month as of December 1, 2021.

So you won’t be charged for incoming federated content, but serving content to the end user will count as traffic exiting AWS. I am not sure of your exact setup (AWS pricing is complex) but typically this is charged. This is probably negligible for a single-user instance, but I would be careful serving images from your instance to popular instances as this could incur unexpected costs.
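As a purely hypothetical example of how that adds up, say your instance serves 500 GB of images out to other instances in a month, with the 100 GB free tier and a $0.09/GB rate (the traffic volume and exact rate here are assumptions; AWS pricing varies by region and tier):

```shell
# Billable egress = outbound GB minus the free tier, floored at zero.
awk -v out=500 -v free=100 -v rate=0.09 'BEGIN {
  billable = out - free
  if (billable < 0) billable = 0
  printf "egress bill: $%.2f\n", billable * rate
}'
# prints: egress bill: $36.00
```

Even modest image traffic to a couple of big instances could therefore dwarf the cost of the t2.micro itself.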

AWS Data Transfer Charges for Server and Serverless Architectures | Amazon Web Services

I’m in a similar boat: I’m gaining about 300 MB/day on my small instance, which doesn’t yet have any local communities.