Just migrated my #Akkoma instance over to #OracleCloud after moving my projects around. Took nearly 4 hours to restore my #PostgreSQL #database from my #pgdump, as the number of posts has gone up considerably over the past month.

Would this eventually mean in the future it could take a day to restore a #pgsql database? I sure hope not šŸ˜•...

#fediadmin
Tux Zone

@deltatux

I don't have a lot of data as I haven't been #selfHosting my instance for long, but since the waves of #twitter people started coming, post counts have gone up.

@deltatux it probably depends on what options you've used for your pg_dump and pg_restore. A dump is more of a data extract, easy to re-import into another system (or PG version). But if your cluster size is growing that big, it might be a good idea to have a look at PG physical backups
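A rough sketch of the dump-side options being hinted at here, assuming the database is named `akkoma` (adjust names and paths to your setup): the custom and directory formats are compressed and, unlike plain SQL, allow pg_restore to work in parallel later.

```shell
# Custom-format dump (-Fc): compressed, restorable selectively or in parallel.
sudo -Hu postgres pg_dump -d akkoma -Fc -f akkoma.pgdump

# Directory-format dump (-Fd) is the only format that also allows a
# parallel *dump* (-j), here with 4 worker jobs writing into a directory.
sudo -Hu postgres pg_dump -d akkoma -Fd -j 4 -f akkoma_dump_dir
```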

@pgstef

I was thinking maybe in the future to just archive the whole pgsql directory into a compressed tar ball and just decompress it during migration, not sure if that would work.
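That approach does work, but only as a cold backup: the cluster has to be fully shut down first, and the archive can only be restored on the same PG major version, OS, and architecture. A minimal sketch, assuming a Debian-style layout (the PGDATA path below is an assumption; check yours with `SHOW data_directory;` in psql):

```shell
# Assumption: data directory location varies by distro and PG version.
PGDATA=/var/lib/postgresql/14/main

sudo systemctl stop postgresql   # cluster must be completely stopped
sudo tar -C "$(dirname "$PGDATA")" -czf pgdata-backup.tar.gz "$(basename "$PGDATA")"
sudo systemctl start postgresql
```

On the new machine you'd extract the tarball into the same path with the same ownership and permissions before starting PostgreSQL.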

During the pg_restore, I just used this command as per the #Akkoma instructions:

sudo -Hu postgres pg_restore -d akkoma -v -1 </path/to/backup_location/akkoma.pgdump>

https://docs.akkoma.dev/stable/administration/backup/

Is there any way to optimize this? I’m not a database admin so not a pgsql expert.

Thanks


@deltatux tbh I don't know akkoma, but looking at it pretty quickly, those instructions seem pretty safe to me. The -1 flag, for example, is certainly safer but prevents using parallelism with multiple restore jobs. Once you take the time to learn a bit about how it works and what's inside the database, there are options to speed up both the dumps and the restores. pg_back is a pretty good example of pg_dump management ( https://github.com/orgrim/pg_back )
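To make the trade-off concrete, here's a hedged sketch of a parallel restore. `-j` and `-1` cannot be used together, and `-j` needs a custom- or directory-format dump (which Akkoma's .pgdump appears to be). The cost is losing the single-transaction safety net: if a parallel restore fails partway, you'd drop the database and start over.

```shell
# Restore into a fresh, empty database with 4 parallel jobs instead of
# one big transaction. Assumes the dump is custom/directory format.
sudo -Hu postgres createdb akkoma
sudo -Hu postgres pg_restore -d akkoma -v -j 4 /path/to/backup_location/akkoma.pgdump
```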
@deltatux that being said, the PG docs ( https://www.postgresql.org/docs/current/backup.html ) describe this pretty well if you want to go deeper than SQL dumps.
A filesystem copy is possible when PG is shut down, though there are some restrictions (simply put, it must be restored on the same OS and PG version). There are also options to copy the data while PG is running, like pg_basebackup. But there's probably more to discuss than is possible here šŸ™‚
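For the online option, a minimal pg_basebackup sketch (the target directory is a placeholder; it also assumes the local replication connection is permitted in pg_hba.conf):

```shell
# Physical copy of the whole running cluster: tar format (-Ft),
# gzip-compressed (-z), with progress reporting (-P) and the WAL
# needed for a consistent restore streamed alongside (-X stream).
sudo -Hu postgres pg_basebackup -D /path/to/backup_dir -Ft -z -P -X stream
```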
@pgstef

Perfect, thanks for the documentation and hints. It's likely the -1 option is causing the hours-long restore process as well. I guess there's a bit more trial & error required here too.

Thanks so much, much appreciated!