Customers locked out, transactions missing, and a month of chaos. 🤯 BNF Bank's recent system update in Malta turned into a nightmare. This article analyzes the critical missteps, from potentially inadequate preparation to poor communication, and offers vital lessons for anyone in tech or finance.

Find out what happened: https://medium.com/@chribonn/bnf-banks-gone-wrong-system-update-3d9b57e5aa29

#BNF #BNFBankMalta #TechDisaster #CoreBankingUpgrade #Banking #LessonsLearned #DigitalTransformation #OperationsManagement #MFSA #TTMO

BNF Bank’s Gone Wrong System Update - Alan C. Bonnici - Medium

In March 2025, Maltese BNF Bank plc (€1.28 billion in assets, 40,000 customers) initiated a critical IT modernization project involving its core banking systems, digital channels, a transition from…

Medium

I'm grateful that I was able to handle a stressful situation, and maybe I've learned to find it less stressful for next time.
😊💪🧠 🧘‍♀️ 🙏 ✨
#growth #selfimprovement #stressmanagement #resilience #lessonslearned #mindfulness #gratitude

In Los Angeles, creepy weirdos are speedrunning #LessonsLearned from Charlottesville.

Lessons learned from today's match against Death Guard:

1) Celestian Sacresants are still worthless
2) Paragons in reserve is really good
3) Scout move: watch out not to expose your units
4) BSS+Palatine+Dialogus is strong

#warhammer40k #adeptasoritas #sistersofbattle #lessonslearned

In 2022, I ran a workshop on the design of participation-rich events. Here are some lessons learned from this online workshop.

https://www.conferencesthatwork.com/index.php/event-design/2022/05/lessons-learned-online-workshop

#online #workshop #LessonsLearned #ContextSwitching #Zoom #fishbowl #Keynote #eventprofs #assnchat

My nuclear physics lesson today was, “hot rock make steam, steam make boat go.”
#lessonslearned

Donald Trump Is Losing Support With Hispanics

The 2024 election saw Trump make substantial gains among the Hispanic community. But polls now suggest he is hemorrhaging support from these voters.

Newsweek

Omg, turns out, suggesting things at work comes with a hidden clause: 'You break it, you buy it'. In this case, 'You suggest it, you build it' 😭

#lessonslearned

Okay, so I wanted to share a little incident from a few months back that really hammered home the power of knowing your Linux internals when things go sideways. I got a frantic call, "something weird is going on with our build server, it's acting sluggish and our monitoring is throwing odd network alerts." No fancy EDR on this particular box, just the usual ssh and bash. My heart always sinks a little when it's a Linux box with vague symptoms, because you know it's time to get your hands dirty.

First thing I did, even before reaching for any specific logs, was to get a quick snapshot of the network. Instead of netstat, which honestly feels a bit dated now, I immediately hit ss -tunap. That -p is crucial because it ties every socket to its owning process. What immediately jumped out was an outbound TCP connection on a high port to a sketchy-looking IP, and it was tied to a process that definitely shouldn't have been making external calls. My gut tightened. I quickly followed up with lsof -i just to be super sure no deleted binaries were clinging on to network connections.
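For anyone curious, that first pass looks roughly like this. The extra flags beyond plain ss -tunap and lsof -i are just my habits, and obviously none of this is the real IP or PID from the incident:

# every TCP/UDP socket, numeric, with the owning process (run as root to see everything)
ss -tunap

# narrow it to established TCP sessions when the output gets noisy
ss -tnap state established

# cross-check with lsof; -nP skips DNS and port-name lookups so it doesn't hang
lsof -i -nP

# processes still holding open files that have been deleted from disk
lsof +L1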

With that IP and PID in hand, I moved to process investigation. pstree -ap was my next stop. It showed the suspicious process, and more importantly, its parent. It wasn't a child of systemd or a normal service. It was spawned by a build script that shouldn't have been executing anything like this. That hierarchical view was key. Then, to really understand what this thing was doing, I dared to strace -p <PID>. Watching the system calls unfurl was like watching a movie of its malicious intent: it was reading from /etc/passwd, making connect() calls, and trying to write to some odd /tmp directories. Simultaneously, I checked ls -l /proc/<PID>/exe to confirm the actual binary path (it was indeed in /tmp) and /proc/<PID>/cwd to see its working directory. No doubt, this was a rogue process.
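Roughly what that looked like, with 4242 standing in for the real PID and a couple of strace flags I usually tack on beyond the bare -p:

# process tree with PIDs and full arguments
pstree -ap

# attach to the suspect process; -f follows children, -tt timestamps every call
strace -f -tt -e trace=network,file -p 4242

# the actual binary behind the process (a deleted one shows up with '(deleted)')
ls -l /proc/4242/exe

# its working directory and the command line it was started with
ls -l /proc/4242/cwd
tr '\0' ' ' < /proc/4242/cmdline; echo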

Knowing it was a fresh infection, I immediately shifted to the filesystem. My go-to is always find / -type f -newermt '2 days ago' -print0 | xargs -0 ls -latr. This quickly pulls up any files modified in the last 48 hours, sorted by modification time. It's often where you find dropped payloads, modified configuration files, or suspicious scripts. Sure enough, there were a few more binaries in /tmp and even a suspicious .sh script in a developer's home directory. I also scanned for SUID/SGID binaries with find / -perm /6000 just in case they'd dropped something for privilege escalation. And while stat's timestamps can be tampered with, I always glance at atime, mtime, and ctime on suspicious files; sometimes, a subtle mismatch offers a tiny clue if the attacker wasn't meticulous.
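The sweep itself, more or less as I run it. The -xdev is just my habit to keep find off /proc and network mounts, and the stat target is a placeholder, not the actual file:

# files modified in the last 48 hours, oldest first
find / -xdev -type f -newermt '2 days ago' -print0 | xargs -0 ls -latr

# SUID/SGID binaries, the classic privilege-escalation drop
find / -xdev -type f -perm /6000 -ls

# all three timestamps on anything suspicious
stat /tmp/suspicious_binary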

The final piece of the puzzle, and often the trickiest, is persistence. I checked the usual suspects: crontab -l for root and every other user account I could find. Then I cast a wider net with grep -r "suspect_domain_or_ip" /etc/cron.* /etc/systemd/system/ /etc/rc.d/ and similar common boot directories. Sure enough, a new systemd timer unit had been added that was scheduled to execute the /tmp binary periodically. Finally, I didn't forget the user dotfiles (~/.bashrc, ~/.profile, etc.). It’s surprising how often an attacker will drop a malicious alias or command in there, assuming you won't dig deep into a developer's setup.
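Roughly how that persistence sweep goes; the dotfile grep pattern at the end is just a crude heuristic I start with, nothing exhaustive:

# cron for root and every local user
for u in $(cut -d: -f1 /etc/passwd); do echo "== $u =="; crontab -u "$u" -l 2>/dev/null; done
cat /etc/crontab; ls -la /etc/cron.*

# systemd timers, plus anything referencing the bad IP or domain
systemctl list-timers --all
grep -r "suspect_domain_or_ip" /etc/cron.* /etc/systemd/system/ /etc/rc.d/ 2>/dev/null

# quick-and-dirty pass over user dotfiles for anything that fetches or decodes payloads
grep -nE 'curl|wget|base64|nc ' /root/.bashrc /root/.profile /home/*/.bashrc /home/*/.profile 2>/dev/null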

Long story short, we quickly identified the ingress vector, isolated the compromise, and cleaned up the persistence. But what really stuck with me is how quickly you can triage and understand an incident if you're comfortable with these fundamental Linux commands. There's no substitute for getting your hands dirty and really understanding what strace is showing you or why ss is superior to netstat in a high-pressure situation. These tools are your best friends in a firefight.

#linux #incidentresponse #blueteam #forensics #shell #bash #sysadmin #infosec #threathunting #lessonslearned