515 Followers
56 Following
121 Posts
Native Hawaiian Hacker | Co-captain of @Shellphish CTF Team | PhD Student in Comp Sci @asu | Decompilation research
Blog: https://mahaloz.re
GitHub: https://github.com/mahaloz
Twitter: https://twitter.com/mahal0z
We have achieved PGD (Perfect General Decompilation) internally 🚀🚀🚀!!!1!

I passed my defense! After a small celebration, it's time to get back to work.

Too many supply chain attacks in one year...

It's time to wrap things up! On Monday at 4 pm MST, I'll be streaming my PhD defense! Come one, come all!

The talk is titled "Toward a Science of Software Reverse Engineering".

Twitch: https://www.twitch.tv/maha1oz/schedule?seriesID=31a496ab-a842-483c-a825-808a53df601a

If any of this resonates with you, feel free to reach out.

I'm excited to start this next phase of my career and make a real impact on the world.

https://zionbasque.com

I’m also looking to collaborate more broadly, across industry, academia, and government.

Especially with folks thinking about real-world security problems, large-scale systems, or building tools people actually use.

If you’re a student interested in these problems, I’m recruiting.

I’m especially interested in people who want to go deep, whether that’s systems, human factors, or both.

Reverse engineering sits at that intersection.

I see three directions that need immediate work:

– how we measure understanding (humans + AI studies)
– how we represent low-level code (decompilers, debuggers)
– how we automate reasoning (AI, explanations, tooling)

And I’m sure there is even more.

One issue: we don’t have a clear notion of what “understanding” even means.

Two experts can analyze the same binary and reach different conclusions.

We rarely measure this; we simply accept it as an inherent limitation of expert-driven fields.

Reverse engineering sits right in the middle of this.

It’s how we explain systems we didn’t build, debug what’s broken, and analyze what might be vulnerable.

But much of it still relies on intuition rather than well-defined methods.

We’re getting very good at producing code and worse at understanding it.

AI-generated code and growing complexity in compiled systems are pushing us toward a breaking point with real safety consequences.

What does society look like when no one understands the software it runs on?