One "strange" case of index scanning

This story began with an investigation of performance problems on a highly loaded Postgres database. The table under investigation was fairly small (~100,000 rows) but very heavily used. During the investigation I saw that Postgres was using index access on a completely non-selective criterion; effectively, it was an "INDEX FULL SCAN" in Oracle terminology. Intuition honed on another industrial database screamed: "something is wrong here!" But what?

https://habr.com/ru/companies/gnivc/articles/992660/

#execution_plan #postgresql #базы_данных #pgsql #sql #postgres

#pgsql PostgreSQL @ #SeaGL2025
++ DB Migration from MySQL to Postgres ++

I had always wanted to get into PgSQL ever since MySQL was sold several years back. I kept procrastinating because the MySQL Community version remained reliable.

Having migrated several DBs, such as NC, LMS, Mail data stores, I scolded myself for delaying it until last week.

Some services, however, only work best with MySQL as the official DB, e.g. Yourls (mysql/sqlite). Those will remain as they are.

In summary, I am more than impressed with PgSQL. The memory footprint is tiny: 60GB utilised by all services on this server, despite additional DB servers, and memory consumption has stayed at roughly that level for years. MySQL would not manage that with additional instances running.

#pgsql #mysql #engineering

pgxWrappy: because life is too short for manual scanning in Go!

If you're tired of endless rows.Scan(), manually parsing nested structs, and kilometers of boilerplate code, pgxWrappy was made for you! 🚀 Making the inconvenient convenient!

https://habr.com/ru/articles/931242/

#Go #golang #pgx #sql #gorm #orm #postgres #pgsql #database #opensource


Professionally, I'm slowly starting to think about life after my fixed-term contract, that is, spring-summer 2026.
I'd like to move toward #ClermontFerrand to be closer to people I love.

So if you're in a public service, an association, or a SCOP in the area that is looking for, or might be looking for, a GIS specialist, don't hesitate to get in touch.

#PGSQL #RStats #Python #Flask #QGIS #ecologie #leaflet #geoserver #SIG #logicielLibre

A strange PostgreSQL benchmark (looks like it's still unsolved?)

Last week I saw "Benchmarking is hard, sometimes ..." on Lobsters. It hadn't been solved at the time, so I set it aside; a week later it still seems unsolved, but it's quite interesting. The author was running pgbench on a 176-core (352-thread) machine and found that everything was normal up to 22 clients, but beyond that it stalled, and then at 100+ clients it inexplicably recovered. Quite a few things were investigated, but the cause apparently still hasn't been found; I'll probably keep checking back periodically for new progress...

Gea-Suan Lin's BLOG
Installing Dotclear on FreeBSD 13

Barman woes on OVH

So, owing to a cascade of Barman errors and backups corrupted by running out of backup space, I had to pretty much clear out my Barman backup storage.

After resetting everything, however, I was not able to make a full backup, because Barman was not able to receive write-ahead logs from PostgreSQL.

2025-06-04 22:00:21,209 [523687] barman.cli ERROR: [Errno 5] Input/output error: '/backup/barman/pg/wals/00000001000000CD/00000001000000CD00000009.tmp'
See log file for more details.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/barman/cli.py", line 2390, in main
    args.func(args)
  File "/usr/lib/python3/dist-packages/barman/cli.py", line 1600, in archive_wal
    server.archive_wal()
  File "/usr/lib/python3/dist-packages/barman/server.py", line 2651, in archive_wal
    self.backup_manager.archive_wal(verbose)
  File "/usr/lib/python3/dist-packages/barman/backup.py", line 847, in archive_wal
    archiver.archive(verbose)
  File "/usr/lib/python3/dist-packages/barman/wal_archiver.py", line 213, in archive
    self.archive_wal(compressor, wal_info)
  File "/usr/lib/python3/dist-packages/barman/wal_archiver.py", line 356, in archive_wal
    shutil.copystat(src_file, tmp_file)
  File "/usr/lib/python3.12/shutil.py", line 388, in copystat
    _copyxattr(src, dst, follow_symlinks=follow)
  File "/usr/lib/python3.12/shutil.py", line 338, in _copyxattr
    os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
OSError: [Errno 5] Input/output error: '/backup/barman/pg/wals/00000001000000CD/00000001000000CD00000009.tmp'

Input/output error? That’s odd, but the stack trace tells us a lot: from the function name (os.setxattr) we can deduce that Barman is trying to set extended attributes (xattrs) on our WAL files. The underlying storage for my backups is OVH’s Backup Storage, accessible over NFS. And NFS, for most of its life, could not support xattrs even when the underlying filesystem does. Support was added in NFS 4.2, while OVH (still) uses 4.1.
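This failure mode is easy to probe for directly. A minimal sketch (assuming Linux, where os.setxattr exists; the helper name and the probe attribute are mine, not Barman's):

```python
import os
import tempfile

def supports_user_xattrs(directory):
    """Return True if the filesystem under `directory` accepts user
    extended attributes, False if it rejects them (as NFS < 4.2 does)."""
    with tempfile.NamedTemporaryFile(dir=directory) as probe:
        try:
            # Roughly the call Barman's shutil.copystat path ends up making
            os.setxattr(probe.name, "user.xattr_probe", b"1")
            return True
        except OSError:
            return False

# e.g. run this against the NFS mount before pointing Barman at it
print(supports_user_xattrs("/tmp"))
```

Running it against the backup mount would have flagged the problem before any WAL archiving was attempted.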

So, how to fix this?

Initially, I thought of downgrading Barman, because things had worked before. But that did not help, so I had to go digging into the source code (which was painful, as I am not a Python guy).

# Perform the real filesystem operation with the xlogdb lock taken.
# This makes the operation atomic from the xlogdb file POV
with self.server.xlogdb("a") as fxlogdb:
    # If the content has changed, it means the file was either compressed
    # or encrypted or both. In this case, we need to update its metadata
    if content_changed:
        shutil.copystat(src_file, current_file)
        stat = os.stat(current_file)
        wal_info.size = stat.st_size

So, if content_changed is true, Barman calls shutil.copystat, which copies xattrs from the original file along with the other metadata.

# If the bits of the file has changed e.g. due to compression or encryption
content_changed = False
# Compress the file if not already compressed
if compressor and not wal_info.compression:
    compressor.compress(src_file, tmp_file)
    files_to_remove.append(current_file)
    current_file = tmp_file
    content_changed = True
    wal_info.compression = compressor.compression
# Encrypt the file
if encryption:
    encrypted_file = encryption.encrypt(current_file, dst_dir)
    files_to_remove.append(current_file)
    current_file = encrypted_file
    wal_info.encryption = encryption.NAME
    content_changed = True

Ah, so therein lies the rub: Barman assumes it needs to copy xattrs whenever the content is being compressed or encrypted. And it just so happens that I had also enabled GZIP compression so as not to run out of space again. So compression has to go, and the space problem gets handled the old-fashioned way (by lowering the retention policy) instead. After disabling compression, Barman was able to make backups again.
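For reference, the knob in question is the per-server `compression` option in barman.conf; commenting it out is what avoided the copystat/xattr path here (the server name is illustrative):

```ini
[pg]
; With compression enabled, content_changed becomes true and Barman
; ends up in shutil.copystat -> os.setxattr, which NFS 4.1 rejects.
; compression = gzip
```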

Hope this helps someone, because it sure as fuck would’ve helped me.

#Barman #lighthearted #NFS #NFSv4 #OVH #pgsql #PostgreSQL #SelfHosting

With enough search iterations, I was able to find enough references to cobble together what I needed. So, now I have my minimally-permissioned query-user.

Told the person who submitted the automation-request, "hey: I worked out creating a minimally-permissioned
#PGSQL user to support this effort. I wrote it up and it's available at ".

They responded, "oh. We already have a query-user. It's ".

Curious to see if I simply re-did what they did, I fired up a permissions-query:

psql -c "SELECT * FROM information_schema.role_table_grants WHERE grantee = '';"
I ran it for each user. My user had 40 grants, all at just the SELECT level. Their user had nearly 170 grants at various levels. I'm probably going to recommend that they go with my user, since it's sufficient for the tasks they want to accomplish but has significantly fewer grants associated with it (and I'm the kind of person who errs toward the "smaller permission-sets are better" #security posture).
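For anyone wanting the same shape of minimally-permissioned query-user, it boils down to a login role plus explicit SELECT grants; the role, schema, and table names below are made up for illustration:

```sql
-- Hypothetical minimal query-user: login only, read-only privileges
CREATE ROLE query_user LOGIN PASSWORD 'change-me';
GRANT USAGE ON SCHEMA app TO query_user;
GRANT SELECT ON app.orders, app.customers TO query_user;

-- Verify: this should list only SELECT-level grants
SELECT grantee, table_name, privilege_type
  FROM information_schema.role_table_grants
 WHERE grantee = 'query_user';
```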