Paul Khuong

@pkhuong@discuss.systems
441 Followers
358 Following
1.7K Posts

I can already see my future: writing a constexpr printf -> fmt string converter…

<shatner>Shakes fist @vitaut !</shatner>

Happy "significant AI added value" day to everyone else with a google domain.
I have a method that's (absl) printf-compatible. Can I convince clang-tidy to convert calls to a new fmt-compatible method?
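
For concreteness, here is a minimal sketch of what such a constexpr printf -> fmt converter might look like. It is a toy, not the actual absl or clang-tidy tooling: it handles only bare conversions ("%d", "%s", ...) and "%%", and ignores flags, widths, precision, argument type checking, and '{'/'}' escaping (which would grow the output).

```cpp
#include <array>
#include <cstddef>

// Toy constexpr printf -> {fmt} format-string converter.  Only bare
// conversions ("%d", "%s", "%f", ...) and "%%" are handled; flags, widths,
// precision, length modifiers, and brace escaping are out of scope.
template <std::size_t N>
constexpr std::array<char, N> printf_to_fmt(const char (&src)[N]) {
    std::array<char, N> out{};  // "%d" -> "{}" keeps the length, so N suffices.
    std::size_t j = 0;
    for (std::size_t i = 0; i + 1 < N; ++i) {  // stop before the trailing NUL
        if (src[i] == '%' && src[i + 1] == '%') {
            out[j++] = '%';  // "%%" is a literal '%' in both syntaxes
            ++i;
        } else if (src[i] == '%') {
            out[j++] = '{';  // "%<conv>" -> "{}"
            out[j++] = '}';
            ++i;             // consume the conversion character
        } else {
            out[j++] = src[i];
        }
    }
    return out;  // the remainder of `out` stays NUL from value-initialization
}

// The conversion happens entirely at compile time (C++17 or later).
constexpr auto kFmt = printf_to_fmt("x = %d, name = %s\n");
static_assert(kFmt[4] == '{' && kFmt[5] == '}');
```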

I wrote a short rant about what irks me when people anthropomorphize LLMs:

https://addxorrol.blogspot.com/2025/07/a-non-anthropomorphized-view-of-llms.html

A non-anthropomorphized view of LLMs

In many discussions where questions of "alignment" or "AI safety" crop up, I am baffled by seriously intelligent people imbuing almost magic...

FractalFir (GSoC student for the Rust GCC backend) wrote a new blog post on their work: https://fractalfir.github.io/generated_html/cg_gcc_bootstrap.html

Enjoy! :)

#rust #rustlang

Building the Rust compiler with GCC

is it e-graph or egraph

*Boosts very welcome*

A collective I am part of is looking for a SuperMicro X11 server. We're hoping to find a second-hand one we could buy!

Do any of you tech people know where one could buy a second-hand SuperMicro X11 server? Or know a company that might be getting rid of their old ones?

If you have leads on an X12, we would also like to hear about it.

Ideal situation would be in Montreal, so we could pick it up, but also open to hearing about any and all opportunities!

I think I have a design problem that wants an ECS. Tell me why I'm wrong :D
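
(For context on what an ECS buys you: entities are just ids, each component type is stored on its own, and a "system" is a loop over the entities that carry the components it cares about. A hypothetical bare-bones sketch, not any particular ECS library:)

```cpp
#include <cstdint>
#include <unordered_map>

// Minimal entity-component-system sketch: entities are plain ids, each
// component type lives in its own store keyed by entity, and a system is a
// loop over one store that looks up the other components it needs.
using Entity = std::uint32_t;

struct Position { float x, y; };
struct Velocity { float dx, dy; };

struct World {
    Entity next_id = 0;
    std::unordered_map<Entity, Position> positions;
    std::unordered_map<Entity, Velocity> velocities;

    Entity spawn() { return next_id++; }
};

// The "physics system": acts on every entity that has both components.
void integrate(World& w, float dt) {
    for (auto& [id, vel] : w.velocities) {
        if (auto it = w.positions.find(id); it != w.positions.end()) {
            it->second.x += vel.dx * dt;
            it->second.y += vel.dy * dt;
        }
    }
}

int main() {
    World w;
    Entity e = w.spawn();
    w.positions[e] = {0.0f, 0.0f};
    w.velocities[e] = {1.0f, 2.0f};
    integrate(w, 0.5f);  // e is now at (0.5, 1.0)
}
```
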
@cfbolz When sampling at a fixed byte period, I worry about aliasing between the fixed-period sampling process and the profilee's potentially periodic allocation pattern. Does PyPy naturally introduce enough nondeterminism to make that a non-issue?
https://mastoxiv.page/@arXiv_csPL_bot/114731825902548197
arXiv cs.PL bot (@arXiv_csPL_bot@mastoxiv.page)

Low Overhead Allocation Sampling in a Garbage Collected Virtual Machine
Christoph Jung, C. F. Bolz-Tereick
https://arxiv.org/abs/2506.16883 (arXiv:2506.16883v1)
Abstract: Compared to the more commonly used time-based profiling, allocation profiling provides an alternate view of the execution of allocation-heavy dynamically typed languages. However, profiling every single allocation in a program is very inefficient. We present a sampling allocation profiler that is deeply integrated into the garbage collector of PyPy, a Python virtual machine. This integration ensures tunable low overhead for the allocation profiler, which we measure and quantify. Enabling allocation sampling profiling with a sampling period of 4 MB leads to a maximum time overhead of 25% in our benchmarks over un-profiled regular execution.
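
(Aside on the aliasing question above: the standard trick in sampling heap profilers is to draw each inter-sample gap from an exponential distribution whose mean is the desired sampling period, so the expected overhead is unchanged but no allocation pattern can stay in phase with the sampler. A sketch of that general technique, in C++ for concreteness and not a description of what PyPy's profiler actually does:)

```cpp
#include <cstddef>
#include <random>

// Byte-period allocation sampling with randomized intervals: instead of
// sampling every `period` bytes exactly (which can alias with a profilee
// that allocates in a regular pattern), draw each inter-sample gap from an
// exponential distribution with the same mean.
class AllocationSampler {
public:
    explicit AllocationSampler(double mean_period_bytes)
        : dist_(1.0 / mean_period_bytes) { reset(); }

    // Returns true if this allocation should be sampled.
    bool on_allocation(std::size_t size_bytes) {
        if (size_bytes < bytes_until_sample_) {
            bytes_until_sample_ -= size_bytes;
            return false;
        }
        reset();  // the next gap is drawn independently, so there is no fixed phase
        return true;
    }

private:
    void reset() { bytes_until_sample_ = dist_(rng_); }

    std::mt19937_64 rng_{0xdeadbeef};             // any seed; could be per-thread
    std::exponential_distribution<double> dist_;  // mean = mean_period_bytes
    double bytes_until_sample_ = 0.0;
};
```

With a 4 MB mean period, as in the paper's benchmarks, the sampler still fires roughly once per 4 MB allocated, just without a fixed phase for a periodic allocator to lock onto.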

Since execution performance is readily quantified, it is most often measured and optimized--even when increased performance is of marginal value; viz. ("Dubious Achievement", Comm. of the ACM 34, 4 (April 1991), 18.)

[arXiv] TreeTracker Join: Simple, Optimal, Fast
https://arxiv.org/abs/2403.01631

TreeTracker gives a very simple breakdown of the core differences between a naive binary join and an optimal multi-way join.
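
A toy illustration of that contrast (explicitly not the TreeTracker algorithm itself): on an acyclic query R(a,b) JOIN S(b,c) JOIN T(c) whose output is empty, a naive left-deep binary plan still materializes a quadratic intermediate relation, which is exactly the work that output-sensitive evaluation in the Yannakakis/TreeTracker family avoids.

```cpp
#include <cstddef>
#include <iostream>
#include <set>
#include <unordered_map>
#include <utility>
#include <vector>

int main() {
    const int n = 1000;
    std::vector<std::pair<int, int>> R, S;  // R(a, b), S(b, c)
    std::set<int> T;                        // T(c): empty, so the output is empty
    for (int i = 0; i < n; ++i) R.push_back({i, 0});  // every R row has b = 0
    for (int j = 0; j < n; ++j) S.push_back({0, j});  // S pairs b = 0 with n c's

    // Naive left-deep binary plan: materialize R JOIN S, then filter on T.
    std::unordered_multimap<int, int> s_by_b;
    for (auto [b, c] : S) s_by_b.insert({b, c});
    std::size_t intermediate = 0, output = 0;
    for (auto [a, b] : R) {
        (void)a;  // a would just be carried along into the output tuple
        auto [lo, hi] = s_by_b.equal_range(b);
        for (auto it = lo; it != hi; ++it) {
            ++intermediate;                     // builds n * n = 1,000,000 rows
            if (T.count(it->second)) ++output;  // ...every one of them doomed
        }
    }
    std::cout << "intermediate rows: " << intermediate
              << ", output rows: " << output << "\n";
    // A semijoin-aware evaluation (Yannakakis, or TreeTracker's failure
    // tracking) would notice that no c from S appears in T and never
    // enumerate that quadratic intermediate result.
}
```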