Stephan Berger


📢 Hands-On Training: Anti-Forensics (and Anti-Anti-Forensics) Techniques for Incident Responders @ BruCON 2026

I’m excited to announce my upcoming hands-on training at BruCON 2026 in Mechelen. This in-depth technical course is designed for Incident Responders who want to understand and defeat modern anti-forensics techniques actively used by threat actors.

The training progresses from foundational anti-forensic concepts to advanced techniques observed on Windows and Linux systems, with a strong focus on real-world detection and analysis.

Key Learning Objectives:

🔹 Identify and analyze classic and modern anti-forensic techniques
🔹 Correlate specific anti-forensic techniques with telltale forensic artifacts, understanding what remains and what's altered
🔹 Learn real-world analytical methods to detect, reconstruct, and recover evidence affected by anti-forensic methods

📍 Location: Mechelen, Belgium (BruCON 2026)
📅 Training Dates: April 22–23, 2026

Register here: https://www.brucon.org/training-details/anti-forensics-

"Reverse Evidence", Log clearing, Anti-Forensics.

VoidLink, a stealthy, cloud-native Linux malware framework discovered by Check Point this week, is equipped with techniques to delete or manipulate logs and traces, making it harder for Incident Response teams or security software to find forensic evidence.

I will be teaching my new course, Anti-Forensics (and Anti-Anti-Forensics) Techniques for Incident Responders, in Belgium this April at the BruCON Spring Training (22–23 April), presenting a wide range of anti-forensic techniques and how to analyze your way around them.

Sign up to learn more about how to defeat modern threats 🤓

Here is the link to the training:
https://www.brucon.org/training-details/anti-forensics-

I recently thought about the different pop-ups I receive every day on my Mac, and how malware mimics them to trick people into entering their password. I wondered if I could tell a legitimate prompt from a malicious one, and I found a good article covering exactly this topic:

"One of the primary aims of most malware is to trick you into giving it your password. Armed with that, there’s little to stop it gathering up your secrets and sending them off to your attacker’s servers. One of your key defences against that is to know when a password request is genuine, and when it’s bogus." [1]

If you are like me, worry no more. Read the article, and maybe be a bit safer out there :)

[1] https://eclecticlight.co/2025/12/18/how-to-recognise-a-genuine-password-request/

Neshta. The gift that keeps on giving. I wrote about Neshta two years ago, and now this week, we found traces of this malware strain on two domain controllers in a breached network. [1]

As last time, the TA brought infected files into the compromised network, helping spread the infection. The file and registry paths in our case have not changed and are still the same as in my old X post.

What's funny (not funny) is that I browsed the Malware Analysis section of VX Underground yesterday, and in 2006 (when this section started), only two papers about malware families were uploaded that year. One of them was Neshta! [2]

19 years later - still alive and kicking 😂 Cheers to that!

[1] https://x.com/malmoeb/status/1646324779849482241
[2] https://vx-underground.org/Malware%20Analysis/2006/2006-01-15%20-%20Win32-Neshta/Paper

Companies frequently approach us to discuss their security posture, playbooks, architecture, etc., but I wonder how many of them also regularly check basic configuration settings. An example from a recent case:

We were investigating yet another compromised network, where we were at first puzzled by the missing logon records inside the Security event logs. Log clearing, anti-forensics?

It turned out to be something simpler. The company, for whatever reason, had turned off logging for Logons, as a quick check with auditpol revealed (see image). Notably, "Logon and Logoff" auditing is enabled by default. [1]

You might want to consider checking your audit policy settings before writing yet another playbook 🤓
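Such a check is easy to automate across hosts. Here is a minimal Python sketch that parses the textual output of `auditpol /get /category:"Logon/Logoff"` and flags disabled subcategories; the sample output below is illustrative (not from the case), and in practice you would capture the real output on the host you are checking:

```python
# Triage sketch: flag audit subcategories that are set to "No Auditing".
# SAMPLE mimics the standard auditpol text layout; replace it with the
# real output of: auditpol /get /category:"Logon/Logoff"

SAMPLE = """\
System audit policy
Category/Subcategory                      Setting
Logon/Logoff
  Logon                                   No Auditing
  Logoff                                  No Auditing
  Special Logon                           Success
"""

def disabled_subcategories(auditpol_output: str) -> list[str]:
    """Return subcategory names whose setting is 'No Auditing'."""
    disabled = []
    for line in auditpol_output.splitlines():
        # Subcategory lines are indented; the setting is the trailing column.
        if line.startswith("  ") and line.rstrip().endswith("No Auditing"):
            disabled.append(line.strip().rsplit("  ", 1)[0].strip())
    return disabled

print(disabled_subcategories(SAMPLE))  # ['Logon', 'Logoff']
```

Anything this prints for a domain controller is worth a conversation before the next playbook workshop.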

[1] https://learn.microsoft.com/en-us/windows-server/identity/ad-ds/plan/security-best-practices/audit-policy-recommendations?tabs=winclient

During a recent engagement, we reviewed the collected AutoRuns data from all endpoints on the network. In that dataset, we identified the following scheduled task:

Name: 523135538
Command Line: C:\programdata\cp49s\pythonw.exe

There are a few things odd here. First, the name of the Scheduled Task (some random numbers). Second, the installation path (ProgramData\cp49s\). Third, Python is launched without any command-line arguments or a reference to a Python script, meaning the interpreter is started by itself.

Our initial hypothesis was DLL sideloading. After examining the Python directory, we identified a file named sitecustomize[.]py:

"Python's sitecustomize[.]py and usercustomize[.]py are scripts that execute automatically when Python starts, allowing for environment-specific customizations. Adversaries can exploit these files to maintain persistence by injecting malicious code." [1]

Path: C:\ProgramData\cp49s\Lib\sitecustomize[.]py
Content: See the image below.

So, this means that every time the Scheduled Task runs, the Python interpreter is executed, effectively loading the malicious Python file named b5yogiiy3c.dll. A pretty sneaky technique, and something you should watch out for during your next hunting session or IR gig. 🤓
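Hunting for this persistence mechanism is straightforward: any sitecustomize.py or usercustomize.py outside a known-good Python installation deserves a look, because Python's site machinery imports these files automatically at interpreter startup. A minimal sketch (the staged directory below is made up, mimicking the case's ProgramData\cp49s\Lib path):

```python
# Hunting sketch: walk a directory tree (collected triage data, a mounted
# image, or a live system) and report any sitecustomize.py / usercustomize.py
# files, since Python imports these automatically when the interpreter starts.
import tempfile
from pathlib import Path

SUSPECT_NAMES = {"sitecustomize.py", "usercustomize.py"}

def find_customize_files(root: str) -> list[Path]:
    """Return all sitecustomize/usercustomize files under root, sorted by path."""
    return sorted(p for p in Path(root).rglob("*.py") if p.name in SUSPECT_NAMES)

# Demo: stage a fake hit resembling C:\ProgramData\cp49s\Lib\sitecustomize.py
with tempfile.TemporaryDirectory() as d:
    lib = Path(d) / "cp49s" / "Lib"
    lib.mkdir(parents=True)
    (lib / "sitecustomize.py").write_text("# payload would live here\n")
    print([p.name for p in find_customize_files(d)])  # ['sitecustomize.py']
```

In a real hunt you would additionally diff any hits against the files shipped with legitimate Python installations on the host.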

[1] https://detection.fyi/elastic/detection-rules/linux/persistence_site_and_user_customize_file_creation/

You might have a detection gap here. I was reading about some bash history shenanigans, where I learnt that if a terminal is stopped with the `kill` command, commands are still written to .bash_history. This is because, by default, kill sends the SIGTERM signal, which terminates the process gracefully and allows Bash to write to .bash_history as it shuts down.

However, if the -SIGKILL switch is used, commands are not written to .bash_history. SIGKILL kills the process immediately before the commands can be written to the file. [1]

Matt noted in his scenario that using the -SIGKILL switch ends the terminal process and SSH session immediately, but still writes the commands run during the session to the user's .bash_history file. This contradicts my testing on a fresh Ubuntu machine (connected via SSH), where SIGKILL prevented the bash history from being written.

Looking at the Elastic rule for "defense evasion - tampering of Bash command line history", SIGKILL is missing. [2] However, writing effective detection logic for this specific use case might not be as straightforward as one might think, because we are passing the PID of Bash to the kill command, and that PID varies every time I open a new instance of Bash. Plus, kill is a frequently used command. What are your thoughts here?
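The underlying signal semantics are easy to reproduce outside of Bash. In this minimal Python sketch, the child process and its in-memory "history" are hypothetical stand-ins for Bash: SIGTERM can be caught and used for a graceful shutdown, while SIGKILL cannot be caught at all, so nothing gets flushed to disk:

```python
import signal, subprocess, sys, tempfile, time
from pathlib import Path

# The child mimics Bash: it keeps its "history" in memory and writes it to
# disk only during a graceful shutdown (its SIGTERM handler). The file name
# and the command list are made up for the demo.
CHILD = r"""
import signal, sys, time
path = sys.argv[1]
history = ["whoami", "cat /etc/shadow"]
def flush(signum, frame):
    with open(path, "w") as f:
        f.write("\n".join(history))
    sys.exit(0)
signal.signal(signal.SIGTERM, flush)  # graceful path, like Bash on SIGTERM
while True:
    time.sleep(0.1)
"""

def history_survives(sig: int) -> bool:
    with tempfile.TemporaryDirectory() as d:
        hist = Path(d) / "history"
        proc = subprocess.Popen([sys.executable, "-c", CHILD, str(hist)])
        time.sleep(1)          # give the child time to install its handler
        proc.send_signal(sig)
        proc.wait()
        return hist.exists()

print(history_survives(signal.SIGTERM))  # True:  handler ran, history written
print(history_survives(signal.SIGKILL)) # False: no chance to write anything
```

This is only an analogy for Bash's shutdown path, not a reimplementation of it, but it shows why `kill -9` on the shell's PID leaves no history behind.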

[1] https://mattcasmith.net/2022/02/22/bash-history-basics-behaviours-forensics
[2] https://github.com/elastic/protections-artifacts/blob/cce5ebfcaf4288a77a369546f0dd21b1dd549e99/behavior/rules/cross-platform/defense_evasion_tampering_of_bash_command_line_history.toml

My team colleague, Yann Malherbe, worked on a case where the attacker used Everything [1] (locate files and folders by name instantly) to search for password files on the beachhead.

The interesting thing here is that Everything keeps track of files opened from within its interface. This information is stored in the file Run History.csv.

The file consists of Filename (self-explanatory, with the full path), Run Count (how many times the file was opened), and Last Run Date (stored as a Windows FILETIME, a count of 100-nanosecond ticks since 1 January 1601 00:00:00 UTC).

Besides the classic forensic artifacts that could show the attacker opened various files of interest, this was another fun artifact to discover (and Everything is surely yet another tool attackers use to stay under the radar).
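Converting the Last Run Date values is a one-liner once you know the epoch. A small Python sketch, sanity-checked against the well-known FILETIME value of the Unix epoch:

```python
# Windows FILETIME: number of 100-nanosecond ticks since 1601-01-01 00:00:00 UTC.
from datetime import datetime, timedelta, timezone

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(filetime: int) -> datetime:
    # 1 tick = 100 ns = 0.1 microseconds, hence the integer division by 10.
    return FILETIME_EPOCH + timedelta(microseconds=filetime // 10)

# Sanity check: the Unix epoch expressed as FILETIME.
print(filetime_to_datetime(116444736000000000))  # 1970-01-01 00:00:00+00:00
```

Feed the Last Run Date column from Run History.csv through this and you get human-readable timestamps for your timeline.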

[1] https://www.voidtools.com/

The picture below depicts a (malicious) Inbox Rule. I slightly modified it to protect our customer, but the gist is that it filters incoming mail from a specific bank employee, moves it to the RSS folder, and marks it as read.

The owner of the mailbox will never see that email, because honestly, who is looking at the RSS folder anyway? At least not your regular employee.

This is a super common pattern in our investigations. Silly rule names (three stars in this example, sometimes a dash, sometimes three dots, you get it), moving emails to specific folders (RSS, Conversation History), and marking them as read. Nothing you could not spot in your own investigations. It's also interesting how much of a giveaway such an Inbox Rule can be. Once you have found such a rule in the mailbox of one of your employees, the chance that it is a false positive is really small.
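The pattern is simple enough to encode. Here is a triage sketch; the rule dictionaries, field names, and scoring thresholds are my own assumptions, not any specific API (in practice you would export the rules from your tenant, e.g. via Get-InboxRule, and map them into this shape):

```python
# Score mailbox rules against the common BEC pattern: a low-effort name,
# a move to a rarely-viewed folder, and mark-as-read.
SUSPICIOUS_FOLDERS = {"rss feeds", "rss subscriptions", "conversation history"}

def score_rule(rule: dict) -> int:
    score = 0
    name = rule.get("name", "").strip()
    if len(name) <= 3:                      # "***", "-", "...", etc.
        score += 1
    if rule.get("move_to_folder", "").lower() in SUSPICIOUS_FOLDERS:
        score += 1
    if rule.get("mark_as_read"):
        score += 1
    return score

rule = {"name": "***", "move_to_folder": "RSS Feeds", "mark_as_read": True}
print(score_rule(rule))  # 3 - worth an immediate look
```

Anything scoring 2 or higher is, in my experience, almost never a false positive.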

So, just based on such an Inbox Rule, you can immediately tell that the account is compromised, and you can start the full investigation cycle. I still recommend PwC's Business Email Compromise Guide left and right, because it sums up all these cases around Inbox Rules well.

If you haven't read it or never heard of it, now is the time to read it 🤓

BEC Guide: https://github.com/PwC-IR/Business-Email-Compromise-Guide/blob/main/PwC-Business_Email_Compromise-Guide.pdf

For a new project, I started to dig into older threat reports, like "The ProjectSauron APT" from 2016. [1]

The interesting thing about these old reports is that you see techniques described back then that are still used 10 years later.

"ProjectSauron usually registers its persistence module on domain controllers as a Windows LSA (Local System Authority) password filter. This feature is typically used by system administrators to enforce password policies and validate new passwords to match specific requirements, such as length and complexity. This way, the ProjectSauron passive backdoor module starts every time any domain, local user, or administrator logs in or changes a password, and promptly harvests the passwords in plaintext."

There are various ways to register such "password filters", but the screenshot is from a recent case (and from one of my presentations) in which the attacker registered a new NetworkProvider to steal cleartext credentials. Techniques that are 10+ years old still work and are (mis)used by attackers. 🤷
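Password filters register as DLL names in the "Notification Packages" multi-string value under HKLM\SYSTEM\CurrentControlSet\Control\Lsa, so a quick hunt is to diff that value against a baseline. A minimal sketch; the baseline below is an assumption (build your own from a known-good system of the same Windows version), and the malicious name is made up:

```python
# Hunting sketch for LSA password-filter persistence: compare the
# "Notification Packages" entries (HKLM\SYSTEM\CurrentControlSet\Control\Lsa)
# against a baseline of expected DLL names.
BASELINE = {"scecli"}  # assumed default; verify per environment/OS version

def unexpected_filters(notification_packages: list[str]) -> list[str]:
    """Return entries that are not in the known-good baseline."""
    return [p for p in notification_packages if p.lower() not in BASELINE]

# Example with a value exported from a (hypothetical) compromised host:
print(unexpected_filters(["scecli", "passfltr"]))  # ['passfltr']
```

On a live Windows host you would read the value with winreg or reg.exe; every non-baseline entry is a DLL that LSASS loads and hands cleartext passwords to.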

[1] https://media.kasperskycontenthub.com/wp-content/uploads/sites/43/2018/03/07190154/The-ProjectSauron-APT_research_KL.pdf