
@kupiakos Thank you! And thanks for the detailed thoughts!

I think I basically agree with everything you said. I'm having a hard time seeing what in the post you disagree with; would you mind elaborating a bit? I certainly agree that Rust would make the bug harder to miss!

The tension between vulnerability power and exploit technique flexibility

https://pacibsp.github.io/2025/the-tension-between-vulnerability-power-and-exploit-technique-flexibility.html


When trying to exploit a memory corruption bug on a reasonably hardened target, there’s a tradeoff around where to invest time. Would it be most efficient to try and find the most powerful, most readily exploitable bug possible? Or would it be better to stick with the first decent bug you find and invest time instead in developing a really great exploit technique that will make up for the bug’s lack of power?

PACIBSP security
“Invariant inversion” in memory-unsafe languages

One way of seeing the difference between memory-safe and memory-unsafe languages is that in a memory-safe language, the invariants used to uphold memory safety only “lean on” invariants that are enforced entirely by the language, compiler, and runtime, while in a memory-unsafe language the invariants used to uphold memory safety can “lean on” programmer-created (and thus programmer-breakable) invariants. This latter case can lead to a weird situation that I call “invariant inversion”, where code breaks a safe-looking logical invariant and ends up creating subtle memory unsafety issues.

@x43r0 Thank you! I'd love to hear: how would you frame things?
Code auditing is not the same as vulnerability research

One thing I’ve often been frustrated by while working on a security team at a large company is a seeming lack of understanding of the difference between code auditing and vulnerability research. These two activities appear superficially similar: they both involve looking at code to find vulnerabilities and improve security. But they have fundamentally different goals, different framings during the research process, and lead to different outputs. This causes problems when a project’s security goals call for one of these activities but the company effectively asks its researchers to perform the wrong one, leading to wasted work or not-useful findings.


Some thoughts on memory safety

https://pacibsp.github.io/2024/some-thoughts-on-memory-safety.html

This post briefly describes some theoretical aspects of memory safety that feel important to me but that aren't always obvious from how I see memory safety being discussed:

1. Memory unsafety is a specific instance of a more general pattern of handle/object unsafety

2. Memory unsafety is relative to a particular layer in a stack of abstract machines

3. Memory unsafety matters because it violates local reasoning about state

4. Safe languages use invariants to provide memory safety, but these invariants do not define memory safety

Also, not sure what was up with the embed in my last post, hopefully this one works.


In this post I want to share a few thoughts on some more theoretical aspects of memory safety. These points aren’t necessarily new, but I feel like they’re sometimes underappreciated.

Why exploits prefer memory corruption

Why do most in-the-wild exploits that target end-user platforms use memory corruption?
