Given Denmark's history, you'd think someone would know better.
Race car fuel melts brains.
| homepage | https://shey.ca |
| current project | https://opensourcerails.dev |
Ruby Central should be dissolved and a new organization should replace it that has sound governance policies suitable for free and open source software ecosystems.
There's no coming back from the multiple bad faith acts they have taken under the current leadership.
New post: Five Postgres anti-patterns I keep seeing in Rails apps and how to fix them.
Long time no talk. Here's a short post on why you shouldn't use UUIDv4 as primary keys: they destroy database performance.
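A minimal sketch (not from the post) of the underlying issue: UUIDv4 values arrive in effectively random order, so each insert lands at a random position in the primary-key B-tree instead of appending to the right edge, touching pages all over the index.

```python
# Sketch: why random UUIDv4 primary keys hurt B-tree locality.
# Sequentially assigned keys are always inserted "at the end" of
# the index; UUIDv4 keys are inserted at random positions.
import uuid

v4_keys = [uuid.uuid4() for _ in range(1000)]

# Count how many consecutive inserts are already in index (sorted)
# order. For random UUIDv4s this hovers around 50%, meaning roughly
# every other insert jumps to a different part of the tree.
in_order = sum(a < b for a, b in zip(v4_keys, v4_keys[1:]))
print(f"adjacent inserts already in index order: {in_order}/999")

# Sequential keys, by contrast, are always in index order:
seq_keys = list(range(1000))
assert all(a < b for a, b in zip(seq_keys, seq_keys[1:]))
```

Time-ordered identifiers (e.g. UUIDv7 or bigint sequences) restore that locality while keeping keys globally unique.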
I was going to share my own benchmarks, but Umang Sinha's write-up already covers them well:
My write-up on the mistake I made and what I'd do instead:
For anyone deep into Postgres, this blog post on replication slots is basically required reading:
https://www.morling.dev/blog/mastering-postgres-replication-slots/
Over the last couple of years, I've helped dozens of users and organizations build Change Data Capture (CDC) pipelines for their Postgres databases. A key concern in that process is setting up and managing replication slots, which are Postgres' mechanism for making sure that any segments of the write-ahead log (WAL) of the database are kept around until they have been processed by registered replication consumers. Without care, a replication slot can cause the database to retain unduly large amounts of WAL segments. This post describes best practices that help prevent this and other issues, discussing aspects like heartbeats, replication slot failover, monitoring, the management of Postgres publications, and more. While this is primarily based on my experience of using replication slots via Debezium's Postgres connector, the principles are generally applicable and are also worth considering when using other CDC tools for Postgres based on logical replication.
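On the monitoring point: a common check (a generic sketch, not taken from the post) is to watch how much WAL each slot is holding back, since an abandoned or stalled slot will retain WAL indefinitely and eventually fill the disk.

```sql
-- How much WAL each replication slot is retaining.
-- Alert when an inactive slot's retained WAL keeps growing.
SELECT slot_name,
       active,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
FROM pg_replication_slots;
```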
I wrote a post on reliability practices. No chaos monkeys, just quiet drills: restarting services, simulating load, and seeing what breaks before it actually does.