My resolutions for this year are:
- name
- dependency
- macro, if you want to count it
- maybe a bit of type if I'm feeling cheeky
For Krabby (my very-very-WIP Rust compiler), I did find a sensible use case for my parallel name resolver; a query on thread A ("look up core::iter::Iterator") can depend on a query being processed by thread B ("parse core/iter.rs"). Small atomics + a concurrent hash table let me detect such conflicts, but I don't want to block the thread when this happens (parsing can take a while); instead, the hash table lets thread A pass its work on to thread B, to execute when B's query finishes. Thread A will simply report "the query would block, come back later" and move on to other stuff. Once thread B is done, it will re-enqueue the blocked work. I'm excited to implement this, it's going to be so cool!
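To make the handoff idea concrete, here's a single-threaded sketch of the bookkeeping, with plain Mutexes standing in for the atomics and concurrent hash table; all the names (QueryTable, start/lookup/finish, etc.) are made up for illustration, not Krabby's actual API.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Result of polling a query: either the answer, or "would block".
#[derive(Debug, PartialEq)]
enum Poll {
    Ready(String),
    WouldBlock,
}

struct QueryTable {
    // query key -> Some(result) once finished, None while in flight.
    results: Mutex<HashMap<String, Option<String>>>,
    // work handed off to the thread that owns an in-flight query.
    waiters: Mutex<HashMap<String, Vec<String>>>,
    // work re-enqueued by finishing threads, for any worker to pick up.
    requeued: Mutex<Vec<String>>,
}

impl QueryTable {
    fn new() -> Self {
        QueryTable {
            results: Mutex::new(HashMap::new()),
            waiters: Mutex::new(HashMap::new()),
            requeued: Mutex::new(Vec::new()),
        }
    }

    // A worker claims a query before processing it.
    fn start(&self, key: &str) {
        self.results.lock().unwrap().insert(key.to_string(), None);
    }

    // A dependent query checks for a result. If the query is still in
    // flight on another thread, the dependent work is handed off to
    // that thread instead of blocking, and the caller moves on.
    fn lookup(&self, key: &str, dependent_work: &str) -> Poll {
        match self.results.lock().unwrap().get(key) {
            Some(Some(result)) => Poll::Ready(result.clone()),
            _ => {
                self.waiters
                    .lock()
                    .unwrap()
                    .entry(key.to_string())
                    .or_default()
                    .push(dependent_work.to_string());
                Poll::WouldBlock
            }
        }
    }

    // When a query finishes, re-enqueue everything that was waiting on it.
    fn finish(&self, key: &str, result: &str) {
        self.results
            .lock()
            .unwrap()
            .insert(key.to_string(), Some(result.to_string()));
        if let Some(work) = self.waiters.lock().unwrap().remove(key) {
            self.requeued.lock().unwrap().extend(work);
        }
    }
}

fn main() {
    let table = QueryTable::new();
    // "Thread B" starts parsing; "thread A"'s lookup would block, so
    // its work is handed off and A reports "come back later".
    table.start("parse core/iter.rs");
    let poll = table.lookup("parse core/iter.rs", "resolve core::iter::Iterator");
    assert_eq!(poll, Poll::WouldBlock);
    // B finishes; the blocked work has been re-enqueued.
    table.finish("parse core/iter.rs", "AST for core/iter.rs");
    assert_eq!(
        table.requeued.lock().unwrap().as_slice(),
        ["resolve core::iter::Iterator".to_string()]
    );
}
```

Note that `lookup` registers the waiter while still holding the results lock, so a query can't slip into `finish` between the "is it done?" check and the handoff.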
It's time for a little update on my very-very-WIP Rust compiler, Krabby! I haven't posted about it much recently, but I've been working at it behind the scenes. Slowly but surely, name resolution is coming together.
I've figured out my approach for local nameres, and have all the right infrastructure set up. Global nameres was trickier, but I have a satisfactory plan for the underlying database. You can check it out at https://codeberg.org/bal-e/krabby/pulls/18.
A random musing about parking_lot somehow led me to rethinking the way tasks are defined and distributed across Krabby. Implementing it is going to take some time -- expect a blog post!
My next step is dependency resolution, so that the name resolver will have more data to play with. I'm just going to read Cargo.lock and resolve feature flags for now.
I've been having a lot of fun, and I'm excited to see where things go in 2026 (and in time for RustWeek!).
How can I ever explain the joy of sitting at my desk before the morning light, pulling out three whiteboards covered (on both sides!) with notes, and asking myself "what am I going to do with my compiler in the next 8 hours?".
My current side-side project is phonebook, a fast multi-threaded identifier interner for Rust. I've been banging my head against the wall for the last few weeks, trying to solve concurrent reallocation; and for the last week, I've been taking a "break" by trying to build a reader-writer split API for rustls. And while phonebook is still not feeling very appealing, the project I needed it for -- Krabby -- has plenty of other work to do. So I spent today catching up on it. I hope I can find the inspiration to tackle that concurrent reallocation issue again.
I did end up building some nice stuff; a friend had contributed progress bar support a while back, and I've revamped it using ratatui. It also includes a lot of cool detail now. Have a look!
I expect the next few days to be pretty busy, so I plan to pick up phonebook next week; I'll focus on Krabby and rustls until then.
Here's some simple math to help calculate performance changes when you're profiling some code. Suppose you're rewriting some function F within your program -- before the change, perf told you it took up N% of your runtime, and after the change it's M%.
The speedup of the whole program is:

(100 - M) / (100 - N)

and the speedup of F itself is:

(1 - 100 / M) / (1 - 100 / N)

I'm defining "speedup" here as the ratio between the old and new runtimes, i.e. how many times faster the new code is.
For example, I was optimizing my very-very-WIP Rust compiler, and I fixed a nasty performance bug in identifier interning. According to perf, interning went from 53% of my total runtime to 12%. By the above formulae, that implies a ~1.87x speedup of total runtime, and a ~8.3x speedup of interning itself.
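As a sanity check, here's that arithmetic as a tiny Rust program (the function names are mine, not anything standard):

```rust
// Speedup of the whole program, given the old (n) and new (m)
// percentages that perf reports for the rewritten function.
fn total_speedup(n: f64, m: f64) -> f64 {
    (100.0 - m) / (100.0 - n)
}

// Speedup of the rewritten function itself.
fn function_speedup(n: f64, m: f64) -> f64 {
    (1.0 - 100.0 / m) / (1.0 - 100.0 / n)
}

fn main() {
    // The interning example: 53% of runtime before, 12% after.
    let (n, m) = (53.0, 12.0);
    println!("total: {:.2}x", total_speedup(n, m)); // ~1.87x
    println!("interning: {:.2}x", function_speedup(n, m)); // ~8.27x
}
```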
One of my issues with Rust is the "weakly typed" nature of conditional compilation. The compiler can't tell you whether your code will compile under all combinations of cfgs (including feature flags), or suggest whether you missed a cfg somewhere.
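A contrived sketch of what I mean (the `fast` feature is made up): only whichever definition the active cfgs select gets type-checked, so you can break the inactive one and `cargo check` stays green until somebody builds that combination.

```rust
// Only compiled with `--features fast`; rename or break this and a
// default `cargo check` won't notice.
#[cfg(feature = "fast")]
fn lookup() -> u32 {
    1
}

// The fallback, compiled by default.
#[cfg(not(feature = "fast"))]
fn lookup() -> u32 {
    2
}

fn main() {
    println!("lookup -> {}", lookup());
}
```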
I spent an hour trying to design a name resolution algorithm that is cfg-independent (i.e. one that returns something to which any combination of cfgs can be applied to get a resolved AST), and I don't think it's practical. cfgs are too flexible; conditionally-compiled declarations can override each other in weird ways. Such an algorithm is exactly what could have provided those "strong typing" features for cfgs, so with it ruled out, I don't think there's much hope for them.
At least for Krabby (my very-very-WIP Rust project), I'm going to write a standard cfg-dependent name resolver. I have a pretty thorough design for it already.
Update on my very-very-WIP Rust compiler: I'm in the process of my third rewrite of the task queue system internals, this time as a separate crate that I'll publish on crates.io. Krabby isn't the only highly parallel, CPU-bound application where work is divided into small, synchronous tasks, so I may as well make this useful to others. I hope to get a blog post out over this week.
Fun note about my very-very-WIP Rust compiler: almost 9% of the runtime is spent calling std::io::stdio::_print (i.e. println! and friends), even when I'm redirecting stdout to /dev/null. A little of that time goes to contended locks, since I'm printing from many threads, but I'm not even calling it that often! Lesson learnt: watch out for string formatting if you're writing high-performance Rust code.
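One mitigation I know of (a generic sketch, not Krabby's actual code): write through a BufWriter over the locked stdout handle, so output accumulates in memory and the lock is taken once, instead of a lock acquisition and write per println!.

```rust
use std::io::{self, BufWriter, Write};

// Writing against `impl Write` keeps this testable against a Vec<u8>.
fn write_report(out: &mut impl Write) -> io::Result<()> {
    for i in 0..3 {
        writeln!(out, "item {i}")?;
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let stdout = io::stdout();
    // Lock once, buffer everything, flush once at the end.
    let mut out = BufWriter::new(stdout.lock());
    write_report(&mut out)?;
    out.flush()
}
```

This doesn't remove the formatting cost itself, but it does cut out the per-call locking and the syscall-sized writes.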
Edit: Look at my reply for some concrete numbers.