Not only am I changing geographic #locations; I'll be migrating #digitally over time as well. I don't mind maintaining multiple copies of my data and managing an outside copy somewhere: #RClone plus credible #external #secured storage. I was a #DBA for two and a half decades. It's still a habit.

Most monitoring tools forget you the second you close the tab. The pgEdge AI DBA Workbench remembers.

Pin context like "our busiest period is 2 to 4pm EST" once and it rides along in every future investigation, even across team handoffs. Stored as embeddings with semantic recall, so the right note surfaces automatically.
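Under the hood, this kind of recall is essentially nearest-neighbor search over embedded notes. A minimal sketch of the idea with toy vectors (hypothetical names; not the Workbench's actual pipeline, which would use a real embedding model):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recall(notes, query_vec, top_k=1):
    # Return the pinned note(s) most similar to the query embedding.
    ranked = sorted(notes, key=lambda n: cosine(n["vec"], query_vec), reverse=True)
    return [n["text"] for n in ranked[:top_k]]

# Toy 3-d "embeddings" standing in for a real model's output.
notes = [
    {"text": "busiest period is 2-4pm EST", "vec": [0.9, 0.1, 0.0]},
    {"text": "reports DB is read-mostly",   "vec": [0.0, 0.8, 0.6]},
]
print(recall(notes, [0.95, 0.05, 0.0]))  # surfaces the peak-traffic note
```

A query vector about traffic patterns lands closest to the peak-traffic note, so that note surfaces without anyone re-typing it.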

Open source under the PostgreSQL License.

📥 https://www.pgedge.com/download/ai-dba-workbench

#postgres #postgresql #dba #monitoring #aiengineering #opensource

pgEdge AI DBA Workbench delivers AI-powered monitoring, proactive alerts, and deep insights across all your PostgreSQL instances — putting expert-level guidance within reach any time of day.

#Spock 5.0.7 is out. 🐘

Logical slot failover on #PostgreSQL 17 and 18 now integrates with PG's native slotsync worker. On PG18+, Spock's own failover_slots worker is retired entirely.
Plus fixes for add-node data races, apply worker crashes after provider disconnects, and exception_log error message quality.

Open source under the PostgreSQL License. Logical multi-master replication for PostgreSQL 15, 16, 17, and 18.

📖 Release notes: https://github.com/pgEdge/spock/blob/v5_STABLE/docs/spock_release_notes.md

#postgres #programming #dba #tech

At my first job in IT (as a junior dev), a senior developer once told me, "They can pay me to draw. Or they can pay me to erase."

It was in response to my frustration with the ever-changing project goals I was dealing with: chasing my tail, pissing in the wind, and all that.

There's a lot of wisdom in that saying.

I enjoy my paycheck. But I'd like to get some drawing in every now and then.

#DBA

Sweet lord...

Started asking a few questions of my coworkers today. Here's what I've determined: a data analyst is going to build a 3,000-line stored procedure that brings our primary SQL host to its knees, all to generate a horrific SSRS report for a C-level type, who will print it out, put it on their desk, and most likely never look at it.

#DBA
#SQLServer

By far, my greatest failure as a DBA is teaching others. Especially the 'older' folks. Convincing them there are better ways to do the things they've done for a long time and persuading them to change... it's drudgery.

That reality hit me square in the face today. 🙁

#DBA

Two exciting days at PGConf.DE 2026 are behind us!

As a Gold sponsor, we were in Essen at the largest German PostgreSQL conference.

Thank you to everyone who visited our booth – we're already looking forward to next time!

👉 More impressions and a recap on the blog: https://www.credativ.de/?p=18684&preview=true

#PGConfDE #PostgreSQL #OpenSource #Database #DataManagement #ITCommunity #TechEvents #DBA #Linux #credativ

“A 30-hour timeline of how Cursor's agent, Railway's #API, and an industry that markets #AISafety faster than it ships it took down a small business serving rental companies across the country. I'm Jer Crane, founder of PocketOS. We build #software that rental businesses — primarily car rental operators — use to run their entire operations: reservations, payments, customer management, vehicle tracking, the works. Some of our customers are five-year subscribers who literally cannot operate their businesses without us. Yesterday afternoon, an #AICodingAgent — #Cursor running #Anthropic's flagship #ClaudeOpus 4.6 — deleted our production database and all volume-level backups in a single API call to Railway, our infrastructure provider.

It took 9 seconds.

The agent then, when asked to explain itself, produced a written #confession enumerating the specific safety rules it had violated.”

When you use a cheap-arse #DBA.

#AI / #WhiteCollar / #ZeroHourWork source <https://x.com/lifeof_jer/status/2048103471019434248> comments <https://news.ycombinator.com/item?id=47911524>

An AI Agent Just Destroyed Our Production Data. It Confessed in Writing.


If you run #Postgres at scale, you know the loop: something slows down, and you're suddenly hand-running EXPLAIN ANALYZE, chasing pg_stat views, and correlating WAL and vacuum state until the culprit finally surfaces.

The AI DBA Workbench automates that investigation against any #PostgreSQL 14+ instance. Ellie pulls the metrics, runs EXPLAIN on the suspect queries, and drafts the SQL she thinks will fix it — you read it and decide whether it runs. 🔍
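The triage loop described above can be sketched in a few lines: rank statements by cost from pg_stat_statements-shaped rows, then emit the EXPLAIN a reviewer would hand-run next. The function names, thresholds, and row shapes here are illustrative, not the Workbench's actual internals:

```python
def suspect_queries(stat_rows, min_mean_ms=100.0, top_n=3):
    # Rank statements by total execution time, keeping only slow ones.
    # Rows are shaped like pg_stat_statements output (times in ms).
    slow = [r for r in stat_rows if r["mean_exec_time"] >= min_mean_ms]
    slow.sort(key=lambda r: r["total_exec_time"], reverse=True)
    return slow[:top_n]

def explain_stmt(query_text):
    # The SQL a human (or Ellie) would run next; returned as text so a
    # reviewer decides whether it actually executes.
    return f"EXPLAIN (ANALYZE, BUFFERS) {query_text}"

rows = [
    {"query": "SELECT * FROM orders WHERE status = $1",
     "mean_exec_time": 850.0, "total_exec_time": 425000.0},
    {"query": "SELECT 1",
     "mean_exec_time": 0.1, "total_exec_time": 12.0},
]
for r in suspect_queries(rows):
    print(explain_stmt(r["query"]))
```

The point of returning the EXPLAIN as text rather than executing it mirrors the post's "you read it and decide whether it runs" model.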

⭐ github.com/pgEdge/ai-dba-workbench 🐘

#dba

Vertical scaling #Postgres works... until it doesn't. The wall is architectural, not hardware.

Early signals of a crowded instance:

→ Autovacuum falling behind on some DBs, fine on others
→ Replica lag climbing during an unrelated batch job
→ Checkpoint duration creeping up
→ Multixact warnings no one has alerts for

Planning a split is a lot easier than executing one during an incident.
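The early signals above amount to simple threshold checks over a metrics snapshot. A hedged sketch (field names and limits are made up for illustration; real values come from the pg_stat_* views and depend entirely on the workload):

```python
def crowding_signals(m):
    # Check a metrics snapshot dict against illustrative thresholds
    # and report which "crowded instance" signals are firing.
    signals = []
    if m["autovacuum_backlog_tables"] > 0:
        signals.append("autovacuum falling behind")
    if m["replica_lag_seconds"] > 30:
        signals.append("replica lag climbing")
    if m["checkpoint_write_seconds"] > 270:
        signals.append("checkpoint duration creeping up")
    if m["multixact_age"] > 1_000_000_000:
        signals.append("multixact age nearing wraparound")
    return signals

snapshot = {
    "autovacuum_backlog_tables": 4,
    "replica_lag_seconds": 95,
    "checkpoint_write_seconds": 120,
    "multixact_age": 50_000_000,
}
print(crowding_signals(snapshot))  # two of the four signals fire
```

Even a crude checklist like this, run on a schedule, turns "we noticed during the incident" into "we noticed last month".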

Read more in Shaun Thomas' PG Phriday blog post:

🐘 https://www.pgedge.com/blog/the-scaling-ceiling-when-one-postgres-instance-tries-to-be-everything

#programming #postgresql #dba

The Scaling Ceiling: When One Postgres Instance Tries to Be Everything

There's a persistent belief in the database world that vertical scaling solves all problems. Need more throughput? Add CPUs. Running out of cache? More RAM. Queries hitting disk? Higher IOPS. It's a comforting philosophy because it's simple, and for a surprisingly long time, it works. A single beefy Postgres instance can handle an enormous amount of punishment before collapsing under the strain.