Jann Horn

@jann@infosec.exchange
2.4K Followers
140 Following
1.2K Posts

human borrow checker (but logic bugs are best bugs).
works at Google Project Zero.

The density of logic bugs (compared to memory corruption bugs) goes down as the privilege differential between attacker context and target context goes up.

Homepage: https://thejh.net
The Intel SDM's "Examples Illustrating the Memory-Ordering Principles" section:

randomly wondering: on x86, could you use INVLPGB or RAR to make all other processors go through a memory barrier, or at least something like a memory barrier limited to a specific virtual memory location? (and would it be faster than a broadcast IPI?)

A TLB invalidation has to ensure that writes to the location being invalidated become visible before the TLB invalidation completes, so as long as these mechanisms are instruction-serializing, I guess it should be possible?
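For context: the closest existing userspace mechanism on Linux is the membarrier() syscall, whose expedited private command sends IPIs to the CPUs currently running the calling process's threads, so the INVLPGB/RAR idea would essentially be a hypothetical kernel-internal alternative to that. A minimal sketch of the existing syscall:

#define _GNU_SOURCE 1
#include <linux/membarrier.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    // Opt in once; afterwards each barrier call makes every CPU that is
    // currently running a thread of this process execute a full memory
    // barrier before the syscall returns (implemented via IPIs).
    if (syscall(SYS_membarrier, MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0, 0))
        perror("membarrier register");
    if (syscall(SYS_membarrier, MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0, 0))
        perror("membarrier");
    return 0;
}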

fun fact: the commit with the most parents in Linux kernel history is https://git.kernel.org/linus/2cde51fbd0f310c8a2c5f977e665c0ac3945b46d, with a whopping 66 parents:
Merge remote-tracking branches 'asoc/topic/ad1836', 'asoc/topic/ad193x', 'asoc/topic/adav80x', 'asoc/topic/adsp', 'asoc/topic/ak4641', 'asoc/topic/ak4642', 'asoc/topic/arizona', 'asoc/topic/atmel', 'asoc/topic/au1x', 'asoc/topic/axi', 'asoc/topic/bcm2835', 'asoc/topic/blackfin', 'asoc/topic/cs4271', 'asoc/topic/cs42l52', 'asoc/topic/da7210', 'asoc/topic/davinci', 'asoc/topic/ep93xx', 'asoc/topic/fsl', 'asoc/topic/fsl-mxs', 'asoc/topic/generic', 'asoc/topic/hdmi', 'asoc/topic/jack', 'asoc/topic/jz4740', 'asoc/topic/max98090', 'asoc/topic/mxs', 'asoc/topic/omap', 'asoc/topic/pxa', 'asoc/topic/rcar', 'asoc/topic/s6000', 'asoc/topic/sai', 'asoc/topic/samsung', 'asoc/topic/sgtl5000', 'asoc/topic/spear', 'asoc/topic/ssm2518', 'asoc/topic/ssm2602', 'asoc/topic/tegra', 'asoc/topic/tlv320aic3x', 'asoc/topic/twl6040', 'asoc/topic/txx9', 'asoc/topic/uda1380', 'asoc/topic/width', 'asoc/topic/wm8510', 'asoc/topic/wm8523', 'asoc/topic/wm8580', 'asoc/topic/wm8711', 'asoc/topic/wm8728', 'asoc/topic/wm8731', 'asoc/topic/wm8741', 'asoc/topic/wm8750', 'asoc/topic/wm8753', 'asoc/topic/wm8776', 'asoc/topic/wm8804', 'asoc/topic/wm8900', 'asoc/topic/wm8901', 'asoc/topic/wm8940', 'asoc/topic/wm8962', 'asoc/topic/wm8974', 'asoc/topic/wm8985', 'asoc/topic/wm8988', 'asoc/topic/wm8990', 'asoc/topic/wm8991', 'asoc/topic/wm8994', 'asoc/topic/wm8995', 'asoc/topic/wm9081' and 'asoc/topic/x86' into asoc-next
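(If you want to verify that from a local clone of the kernel tree, one way is to count the parent lines in the raw commit object:

    git cat-file -p 2cde51fbd0f310c8a2c5f977e665c0ac3945b46d | grep -c '^parent '

which should print 66.)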

@securepaul by the way, do you happen to know why the selinux wiki http://selinuxproject.org/page/Main_Page is down?
If you are in a large org, the #1 most useful thing you can do in security when given a seemingly crazy task is to go back down the chain and find the original requirement the task came from. Then read it carefully.
1/2

Linux kernel quiz: Why is this program so slow, taking around 50ms to run?
What line do you have to add to make it run in ~3ms instead, without interfering with what this program does?

user@debian12:~/test$ cat > slow.c
#include <pthread.h>
#include <unistd.h>
#include <err.h>
#include <sys/socket.h>

static void open_sockets(void) {
  for (int i = 0; i < 256; i++) {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock == -1)
      err(1, "socket");
  }
}

static void *thread_fn(void *dummy) {
  open_sockets();
  return NULL;
}

int main(void) {
  pthread_t thread;
  if (pthread_create(&thread, NULL, thread_fn, NULL))
    errx(1, "pthread_create");
  open_sockets();
  if (pthread_join(thread, NULL))
    errx(1, "pthread_join");
  return 0;
}
user@debian12:~/test$ gcc -O2 -o slow slow.c -Wall
user@debian12:~/test$ time ./slow

real 0m0.041s
user 0m0.003s
sys 0m0.000s
user@debian12:~/test$ time ./slow

real 0m0.053s
user 0m0.003s
sys 0m0.000s
user@debian12:~/test$
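(One plausible answer, offered here as an assumption rather than the post's confirmed solution: when a multithreaded process has to grow its fd table, expand_fdtable() in the kernel calls synchronize_rcu() and sleeps for a grace period each time the table doubles, and with two threads opening 256 sockets each, that happens several times. Under that assumption, the fix is to pre-expand the fd table while the process is still single-threaded, e.g. as the first statement of main():

    /* hypothetical fix: force the fd table to grow to >= 1024 slots now,
       while only one thread exists, so the later socket() calls never hit
       expand_fdtable()'s synchronize_rcu() path */
    close(dup2(0, 1023));

The dup2() target of 1023 is an arbitrary choice that stays under the default RLIMIT_NOFILE of 1024.)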

Digging into the drive in my NAS that faulted, I'm reminded that magnetic hard drives are preposterously magical technology.

Case in point, using Seagate's tools I can get the drive to tell me how much it's adjusted the fly height of each of its 18 heads over the drive's lifetime, to compensate for wear and stuff. The drive provides these numbers in _thousandths of an angstrom_, or 0.1 _picometers_.

For reference, one helium atom is about 49 picometers in diameter. The drive is adjusting each head individually, in increments of a fraction of a helium atom, to keep it at the right height. I can't find numbers for modern drives, but what I can find from circa ten years ago says the overall fly height had been reduced to under a nanometer, so the drive head is hovering on a gas bearing that's maybe 10-20 helium atoms thick, and adjusting its position even more finely than that.

This is _extremely_ silly. You can buy a box that contains not just one, but several copies of a mechanism capable of sub-picometer altitude control, and store shitposts on it! That's wild.

Anyway, my sad drive apparently had a head impact: not a full crash, but I guess it clipped a tiny peak on the platter and splattered a couple thousand sectors. Yow. But I'm told this isn't too uncommon, and isn't the end of the world? Which is, again, just ludicrous to think about. The drive head that appears to have bonked something has adjusted its altitude by almost 0.5 picometers over its 2.5 years in service. Is that a lot? I have no idea!

Aside from my having to resilver the array, and the reallocated sector count taking a big spike, the drive is now fine; both SMART and vendor data say it could eat this many sectors again 8-9 times before hitting the warranty RMA threshold. Which is very silly. But I guess I should keep an eye on it.

I found a Linux kernel security bug (in AF_UNIX) and decided to write a kernel exploit for it that can go straight from "attacker can run arbitrary native code in a seccomp-sandboxed Chrome renderer" to kernel compromise:
https://googleprojectzero.blogspot.com/2025/08/from-chrome-renderer-code-exec-to-kernel.html

This post includes fun things like:

  • a nice semi-arbitrary read primitive combined with an annoying write primitive
  • slowing down usercopy without FUSE or userfaultfd
  • CONFIG_RANDOMIZE_KSTACK_OFFSET as an exploitation aid
  • a rarely-used kernel feature that Chrome doesn't need but is reachable in the Chrome sandbox
  • sched_getcpu() usable inside Chrome renderers despite getcpu being blocked by seccomp (thanks to the vDSO; see the sketch below)
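On that last point, a minimal illustration (assuming a glibc that routes sched_getcpu() through the vDSO, as typical x86-64 builds do): the library call never executes a real syscall instruction, so a seccomp filter blocking getcpu only bites on the raw syscall path.

#define _GNU_SOURCE 1
#include <sched.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    // Library path: typically backed by __vdso_getcpu, so execution never
    // enters the kernel and seccomp never gets a chance to filter it.
    printf("sched_getcpu() says cpu %d\n", sched_getcpu());

    // Raw syscall path: this is what a seccomp filter that blocks getcpu
    // actually intercepts.
    unsigned cpu = 0;
    long ret = syscall(SYS_getcpu, &cpu, NULL, NULL);
    printf("raw getcpu syscall: ret=%ld cpu=%u\n", ret, cpu);
    return 0;
}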

I first experimented with this idea years ago; it was really cool to be able to see information like "show me memory usage grouped by object type" and "for all allocations of this object type, show me what the most common values of each member are, and how often those values occur".

I landed an LLVM change today that plumbs LLVM's existing !heapallocsite metadata into DWARF: https://github.com/llvm/llvm-project/commit/3f0c180ca07faf536d2ae0d69ec044fcd5a78716

This associates allocator call sites (in particular calls to C++ new) with DWARF type information; see the corresponding DWARF standard enhancement proposal, which has landed in the DWARF 6 Working Draft, and Microsoft's prior work that this is based on.

If you have C++ code that allocates heap objects with operator new and use a memory allocator that records the addresses from which it is called, this can be used by debugging/profiling tools to determine the types of heap allocations at runtime.

(LLVM does not yet support this for C-style malloc() calls, though.)
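As a tiny, hypothetical illustration of the kind of code this covers (Widget and make_widget are made-up names; this assumes a Clang new enough to carry the !heapallocsite metadata into DWARF, with debug info enabled):

struct Widget { int id; double value; };

Widget *make_widget(void) {
    // The operator new call below is a heap allocation site; with the
    // extension, DWARF can record "this call site allocates a Widget".
    // An allocator or heap profiler that logs the return address of each
    // allocation can then group live memory by type.
    return new Widget();
}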
