ahhhh, "unroll loops", the "it's gonna get worse before it gets better" of compiler optimizations.
@fasterthanlime hmm, I thought LLVM had been good at this for quite a while now
@esoteric_programmer I'm sure LLVM is doing great, but my compiler is not doing great yet. What I was getting at is that unrolling explodes the size of the intermediate representation before you get to eliminate a lot of it.
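(to illustrate the "worse before it gets better" point — a hypothetical source-level sketch in Rust, nothing to do with kajit's actual IR: unrolling first duplicates the loop body, and only then can constant folding collapse it)

```rust
// Original loop: small, fixed trip count, a classic unrolling candidate.
fn sum_rolled() -> u32 {
    let mut s = 0;
    for i in 0..4 {
        s += i * 2;
    }
    s
}

// Step 1: unrolling *grows* the code — four copies of the body.
fn sum_unrolled() -> u32 {
    let mut s = 0;
    s += 0 * 2;
    s += 1 * 2;
    s += 2 * 2;
    s += 3 * 2;
    s
}

// Step 2: with the loop gone, constant folding shrinks it all back down.
fn sum_folded() -> u32 {
    12
}

fn main() {
    // All three are equivalent; only the intermediate form blew up.
    assert_eq!(sum_rolled(), sum_unrolled());
    assert_eq!(sum_unrolled(), sum_folded());
    println!("all equal: {}", sum_folded());
}
```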
@fasterthanlime your compiler? are you writing a programming language, or a JIT compiler of wasm or something like that? sounds interesting, is there an article about it on your blog?
@esoteric_programmer JIT (for serialization/deserialization), no article yet and documentation is out-of-date: https://github.com/bearcove/kajit
@esoteric_programmer the interesting bits: RVSDG IR (!), interpreters+text format+reducers for all levels of IR (as much as possible), differential harness to run LLDB on machine code in lockstep with some IR to find divergence automatically. It's fun :)
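(the differential-harness idea, boiled down to a hypothetical Rust sketch — the real thing drives LLDB over machine code, but the core shape is "run two implementations in lockstep and flag the first divergence"; the functions here are made-up stand-ins)

```rust
/// Run two implementations of the same computation over shared inputs
/// and report the first input where their outputs diverge.
fn first_divergence<I, O>(
    interpret: impl Fn(&I) -> O, // stand-in for the IR interpreter
    compiled: impl Fn(&I) -> O,  // stand-in for the JIT-compiled code
    inputs: impl IntoIterator<Item = I>,
) -> Option<I>
where
    O: PartialEq,
{
    inputs
        .into_iter()
        .find(|input| interpret(input) != compiled(input))
}

fn main() {
    // Two ways of computing x * 2 + 1; `x << 1 | 1` is the "optimized" form.
    let diverged = first_divergence(
        |x: &i32| x * 2 + 1,
        |x: &i32| (x << 1) | 1,
        0..1000,
    );
    assert!(diverged.is_none());
    println!("no divergence found");
}
```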
@fasterthanlime yo, holy boing! this is awesome! the big question everyone is asking, possibly maybe, does this run faster than serde?
@esoteric_programmer it used to!! (for some cases) and then I switched from "mostly assembly templates" to a full-blown optimizing compiler backend, and I'm missing a few optimization passes to close the gap!
@fasterthanlime can't cranelift do most of those optimizations, or are you not using cranelift at all?
@esoteric_programmer I'm hoping to use kajit for another use case, which is why I'm not just falling back to "assembly templates" — and even though it's slower than cranelift, it's way too much fun to work on your own compiler, so.. I'm not switching back either :P
@fasterthanlime ha! I was talking about cranelift right as you sent that haha