I think often about Dart’s move to sound null safety (introduced in Dart 2.12 and made mandatory in Dart 3), and the Dart team’s demo of how it reduced the compiled output for some common operations by almost 50%.

The news about DeepSeek reportedly running comparable algorithms to OpenAI’s in a fraction of the cycles has me feeling philosophical.

At runtime, how many processor instructions are actually reflective of the Platonic processes we had in mind when we wrote the code?

How many could be removed with no downside?

#coding

I know optimization is not a new topic. I’m familiar with the law of leaky abstractions, and high-level languages have never pretended to be as efficient as hand-written assembly.

Even so, this feels like a new angle. Should programming languages have microsyntactic tokens that can be optionally added, where useful, to help compilers generate more efficient code? Is there a world where programmers could say “I know you’re trying to protect me but don’t worry about that here”?

#coding #programming

@isaaclyman
That's pretty much how #Python optimizing compilers like #cython, #mypyc, #numba, and #TaichiLang work, and if I understand correctly it's also the idea behind #MOJOlang.
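A minimal sketch of that opt-in style, assuming mypyc: the annotations below are ordinary Python type hints that the interpreter ignores, but a compiler like mypyc treats them as permission to emit unboxed native code. (`dot` is a hypothetical example, not taken from any of those projects.)

```python
# Plain Python: runs unchanged in the interpreter.
# Under mypyc, the annotations let the compiler skip Python-object
# boxing and dispatch for the floats in the hot loop.
def dot(xs: list[float], ys: list[float]) -> float:
    total: float = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
```

The annotations are exactly the kind of "microsyntactic token" the question describes: optional, ignorable, and purely a hint to the optimizer.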

As for leaky abstractions, I'd mitigate that by moving the lower-level algorithms into a separate module and limiting the optimization pass to that module. Higher-level modules, like CLI entry points or API server route handlers, shouldn't need the extra optimization.
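A hedged sketch of that split, assuming mypyc and hypothetical module names (`kernels.py`, `cli.py`): only the hot module is fully annotated and compiled; the entry point stays interpreted.

```python
# Hypothetical layout:
#   myapp/kernels.py  <- fully annotated, compiled with `mypyc kernels.py`
#   myapp/cli.py      <- plain entry point, left interpreted

# kernels.py -- the only module the optimization pass touches
def moving_average(xs: list[float], window: int) -> list[float]:
    out: list[float] = []
    for i in range(window - 1, len(xs)):
        out.append(sum(xs[i - window + 1 : i + 1]) / window)
    return out

# cli.py -- thin, uncompiled wrapper; no annotations needed here
def main(argv: list[str]) -> None:
    data = [float(a) for a in argv]
    print(moving_average(data, 2))

main(["1", "2", "3", "4"])  # prints [1.5, 2.5, 3.5]
```

Any abstraction leaks from the compiler stay contained in `kernels.py`, where the performance work was already expected.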

@dragon0 Thanks, I’ve spent very little time with Python so this is interesting.