Looks like not only backups but also my obsession^Wpassion for writing detailed entries in my "selfhosting journal" pays off. Every change I make on my main home server has a date and a detailed description. The #NetBSD installation and each service setup are documented too, alongside a list of running services, open ports, cron jobs, etc.
One bad day, my main server started to hang at around 18:00 and around 08:00. There were no cron (or any other) jobs scheduled at those times. The logs and monitoring showed problems with mosquitto (my MQTT server): somehow it ate nearly 100% of CPU, then monit restarted it, then things worked for a while, then the server hung completely. I stopped it to see if the problem would disappear, but the same thing happened with Prosody. In the end, the root cause of the slowdowns was PostgreSQL: investigation showed that writes to my second ZFS disk (where the PostgreSQL DB lives) were extremely slow, so ZFS panicked and crashed the kernel:
[ 204836.661198] wd0d: device timeout writing fsbn 123148477 of 123148477-123148478 (wd0 bn 123148477; cn 122171 tn 1 sn 46), xfer 38, retry 1
[ 204863.837664] wd0: soft error (corrected) xfer 38
[ 206810.672323] wd0: autoconfiguration error: wd_flushcache: status=0x5128<TIMEOU>
[ 212327.420695] SLOW IO: zio timestamp 211326864412007ns, delta 1000556283358ns, last io 211280726737075ns
[ 212327.420695] panic: I/O to pool 'zfs' appears to be hung on vdev guid 1299234741086050345 at '/dev/wd0'.
[ 212327.420695] cpu0: Begin traceback...
[ 212327.420695] vpanic() at netbsd:vpanic+0x183
[ 212327.420695] panic() at netbsd:panic+0x3c
[ 212327.420695] vdev_deadman() at zfs:vdev_deadman+0x15e
[ 212327.420695] vdev_deadman() at zfs:vdev_deadman+0x31
[ 212327.420695] spa_deadman_wq() at zfs:spa_deadman_wq+0xe0
[ 212327.430704] workqueue_worker() at netbsd:workqueue_worker+0xef
[ 212327.430704] cpu0: End traceback...
Around 08:00 I also heard strange metallic noises coming from the server, so the fate of the second drive was sealed.
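For anyone hitting similar symptoms, a hedged sketch of the checks I'd run on NetBSD to confirm a dying disk. The device name wd0 and pool name 'zfs' come from the panic messages above; the `command -v` guards are just there to keep the snippet runnable on systems without these tools.

```shell
#!/bin/sh
# Pool health: a hung or failing vdev shows up as DEGRADED/FAULTED
command -v zpool >/dev/null && zpool status -x zfs

# SMART data for the suspect drive, via NetBSD's atactl(8)
command -v atactl >/dev/null && atactl wd0 smart status

# Count recent timeouts / soft errors for wd0 in the kernel message buffer
dmesg 2>/dev/null | grep -cE 'wd0.*(timeout|soft error)' || true
```

A steadily growing error count here, combined with SMART attributes like reallocated sectors, usually means the drive should be replaced before ZFS gives up on it.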
Restoring the server will take some time, but since everything was written down in the journal, I can just replay those steps and get all systems back up as soon as possible.
#selfhosting #HomeServer