Chris Leary

computers/compilers/accelerators, mostly · projects! XLS http://bit.ly/xlsynth TLBHit http://tlbh.it · prev: XLA ML compiler, JAX, accelerator co-design, dynlang JITs
☕️ LATTE, our little workshop on hardware design languages/compilers/etc., has 24 (!) rad-looking position papers this year. It’s on Monday, and you can attend on Zoom or in Pittsburgh: https://capra.cs.cornell.edu/latte26/

One of the cool things about our line of work is sometimes you learn a technique and really feel like you’ve “leveled up” in the video game sense.

I remember one time a colleague/friend came in to work and excitedly said “I can see in the SWAR dimension now”. Stuff like that rocks. 🤘
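For the curious, a minimal sketch of the kind of trick "seeing in the SWAR dimension" unlocks (my illustration, not from the post): testing whether any byte of a 64-bit word is zero, treating the word as eight parallel byte lanes instead of looping over bytes.

```rust
/// SWAR (SIMD Within A Register) sketch: detect whether any byte of a
/// u64 is zero. The classic trick: subtracting 0x01 from every lane
/// underflows exactly the lanes that held 0x00, and `& !x` filters out
/// lanes whose own high bit was already set.
fn has_zero_byte(x: u64) -> bool {
    const LO: u64 = 0x0101_0101_0101_0101; // 0x01 in every byte lane
    const HI: u64 = 0x8080_8080_8080_8080; // high bit of every byte lane
    (x.wrapping_sub(LO)) & !x & HI != 0
}

fn main() {
    assert!(has_zero_byte(0x1122_0044_5566_7788)); // contains a 0x00 byte
    assert!(!has_zero_byte(0x1111_1111_1111_1111)); // no zero bytes
    println!("ok");
}
```

Eight byte comparisons for the price of a subtract, a couple of ANDs, and a NOT.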

One of my favorite things about Rust, after properly using it for about two years now, is that it puts what I think is “the correct amount” of annoyance on using polymorphism (in its various forms).

Balling up function pointers in a struct back in the C days was maybe a bit too much friction; Rust is a nice middle ground vs. the languages that leaned into OOP fanaticism, and monomorphization being clean and easy is a great default that people naturally lean toward for systems programming.
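A quick sketch of that spectrum (my example, not from the post): generics monomorphize into a specialized copy per concrete type with static dispatch, while `dyn Trait` opts you into a vtable, roughly the function-pointers-in-a-struct pattern from C, with just enough extra ceremony to make you notice.

```rust
trait Area {
    fn area(&self) -> f64;
}

struct Square(f64);
struct Circle(f64);

impl Area for Square {
    fn area(&self) -> f64 { self.0 * self.0 }
}
impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.0 * self.0 }
}

// Monomorphized: the compiler emits one specialized copy per T,
// with statically dispatched (and inlinable) calls.
fn total_area_static<T: Area>(shapes: &[T]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

// Dynamic: one copy of the function, calls dispatched through a
// vtable; note the visible opt-in cost (`dyn`, boxing).
fn total_area_dyn(shapes: &[Box<dyn Area>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let squares = [Square(2.0), Square(3.0)];
    let mixed: Vec<Box<dyn Area>> = vec![Box::new(Square(2.0)), Box::new(Circle(1.0))];
    assert_eq!(total_area_static(&squares), 13.0);
    assert!((total_area_dyn(&mixed) - (4.0 + std::f64::consts::PI)).abs() < 1e-9);
    println!("ok");
}
```

The "annoyance dial" is the syntax itself: the cheap, monomorphized path is the path of least resistance.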

De gustibus non est disputandum tho

Abstract org-dynamics thought experiment: as people become arbitrarily careful, things get missed that are /intuitively/ quite clear, because nobody has the time to spend on the analysis that would confirm them beyond the "background-radiation level of doubt" applied to any proposition.

"Background doubt radiation" is interesting. You don't want it to be zero, but when it's high it's stifling. You want it to be roughly proportional to how reversible/costly a decision is, with a bias toward action so you collect empirical data. Cost is not absolute but evaluated vs. an alternative (which could be the result of inaction). Mostly I've seen people mitigate its impact by having clear notions of being responsible for what you put in place, but ownership and responsibility are often more inherently diffuse. For example, teams often own code bases collectively, and there is the "haunted graveyard" effect for a component where nobody actively champions the functionality/purpose.

Must be a theory of “satisficing for orgs” this ties to…

Remedies, I believe, include either staying small or a lot of ingrained belief in the power of the vertical.

Every org I've seen thus far eventually asks "where can we make a hard partition for effective management purposes?!" and the answer ends up "Software | Hardware".

And that's where it all starts to go sideways.

Been thinking about "doing science" as an inner loop. One useful split: experiment-heavy vs hypothesis-heavy. When experiments are cheap, I think the ideal workflow looks different vs when they’re expensive.

🤔 (Hardware-code thoughts.) Having parameterized code is a kind of superpower, in that it gives you something to ablate.

I guess it's really the gift of having a problem space with functional smoothness to it, one where you can dial the space requirement up and down.
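A tiny sketch of what "something to ablate" can mean (a hypothetical example, not from the post): a popcount routine parameterized by lookup-table width. Sweeping `chunk_bits` trades table space (2^chunk_bits entries) against the number of lookups per word, and because every setting computes the same function, you can ablate the parameter and compare cost curves directly.

```rust
/// Popcount via a lookup table whose width is a parameter: the knob
/// trades space (table size) against time (lookups per 64-bit word).
fn popcount_lut(x: u64, chunk_bits: u32) -> u32 {
    // Precompute popcounts for every possible chunk value.
    let table: Vec<u32> = (0u64..(1 << chunk_bits)).map(|v| v.count_ones()).collect();
    let mask = (1u64 << chunk_bits) - 1;
    let mut total = 0;
    let mut v = x;
    // Assumes chunk_bits divides 64 evenly (e.g. 4, 8, 16).
    for _ in 0..(64 / chunk_bits) {
        total += table[(v & mask) as usize];
        v >>= chunk_bits;
    }
    total
}

fn main() {
    for chunk_bits in [4, 8, 16] {
        // Same answer at every point in the sweep; only the cost changes.
        assert_eq!(popcount_lut(0xF0F0_F0F0_F0F0_F0F0, chunk_bits), 32);
    }
    println!("ok");
}
```

The functional smoothness is exactly what makes the sweep meaningful: every point in the parameter space is a valid implementation, so the differences you measure are pure cost.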

If you feel like you've really got stuff to teach, then you should talk

But if you're talking, then you're not listening

It's a conundrum

There's a bunch of things we (as a culture) papered over in terms of "correct by construction" focus because we could lean on human trust.

"Just don't do those things that are bad," we'd say!

But if you think about the old IBM sign that makes you think about "where does a notion of responsibility for correctness really come from?", it seems clear that if we can't hold machines accountable (from a technical standpoint) we /can/ build (and have been building) tools and systems that keep outcomes on the rails as much as we can, and help us define with precision what the rails are. https://simonwillison.net/2025/Feb/3/a-computer-can-never-be-held-accountable/

IMO language platforms that enable only-good-things and disallow bad things, in a way that dovetails with really effective ability-to-explore, are going to be key to leveraging AI tech well.
