Patch diffing + RCA for clfs.sys can take a while.

I gave the diff + binary to a local LLM.

It mapped the UAF path, race condition, all IOCTLs in <20 min

LLMs don't replace the work; they are momentum.

New blog post following the UAF trail of CVE-2025-29824:

https://clearbluejar.github.io/posts/how-llms-feed-your-re-habit-following-the-uaf-trail-in-clfs/

How LLMs Feed Your RE Habit: Following the Use-After-Free Trail in CLFS

Dive into how LLMs and pyghidra-mcp accelerate reverse engineering by tracing a UAF vulnerability in CLFS through a patch diff.

clearbluejar
@clearbluejar I've been considering trying local LLMs, and this is a use case I could actually use. However, all my hardware is pretty old. Do you have any recommendations for hardware for a setup like this? Or any resources you found helpful for learning about hardware requirements for local models for RE purposes? Thanks.

@rickoooooo

Depends on your goals. This Nemotron model from NVIDIA needs at least 26GB VRAM, which I can provide with my 32GB MacBook Pro. That being said, I've been looking at getting a 128GB machine. LocalLLM on Reddit is a good source to get an idea of what you need. You don't have to use a Mac. GPT-OSS-20B is another good small local model.

@clearbluejar thanks for the tips. I've been avoiding using LLMs in general for various reasons, but I think if I could self-host something that was actually useful I could get into it. It's hard to just dabble without investing a lot up front in hardware just to try it out.