I can already see my future: writing a constexpr printf -> fmt string converter…
<shatner>Shakes fist @vitaut !</shatner>
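For flavor, here is what the mapping such a converter would perform looks like, sketched in Python rather than constexpr C++: simple printf conversion specifiers become fmt's `{}` replacement fields, `%%` becomes a literal percent, and braces get escaped. The set of specifiers handled here is a deliberately tiny illustration, not a complete printf grammar.

```python
def printf_to_fmt(s: str) -> str:
    """Translate a simple printf format string into an fmt-style one."""
    out = []
    i = 0
    while i < len(s):
        ch = s[i]
        if ch == '%' and i + 1 < len(s):
            nxt = s[i + 1]
            if nxt == '%':
                out.append('%')      # "%%" -> literal percent
            elif nxt in 'dsfxu':
                out.append('{}')     # simple specifiers -> "{}"
            else:
                out.append(ch)
                out.append(nxt)      # leave anything fancier untouched
            i += 2
        elif ch in '{}':
            out.append(ch * 2)       # braces must be escaped for fmt
            i += 1
        else:
            out.append(ch)
            i += 1
    return ''.join(out)

assert printf_to_fmt("x = %d, name = %s") == "x = {}, name = {}"
assert printf_to_fmt("100%% {done}") == "100% {{done}}"
```

The real joke, of course, is doing all of this at compile time with `constexpr`, where width, precision, and flag handling make it considerably less trivial.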
I wrote a short rant about what irks me when people anthropomorphize LLMs:
https://addxorrol.blogspot.com/2025/07/a-non-anthropomorphized-view-of-llms.html
FractalFir (GSOC student for the Rust GCC backend) wrote a new blog post on their work: https://fractalfir.github.io/generated_html/cg_gcc_bootstrap.html
Enjoy! :)
*Boosts very welcome*
A collective I am part of is looking for a SuperMicro X11 server. We're hoping to find a second-hand one we could buy!
Do any of you tech people know where one could buy a second-hand SuperMicro X11 server? Or know a company that might be getting rid of their old ones?
If you have leads on an X12, we would also like to hear about it.
The ideal situation would be in Montreal, so we could pick it up, but we're also open to hearing about any and all opportunities!
Low Overhead Allocation Sampling in a Garbage Collected Virtual Machine
Christoph Jung, C. F. Bolz-Tereick
arXiv:2506.16883v1
https://arxiv.org/abs/2506.16883

Abstract: Compared to the more commonly used time-based profiling, allocation profiling provides an alternate view of the execution of allocation-heavy dynamically typed languages. However, profiling every single allocation in a program is very inefficient. We present a sampling allocation profiler that is deeply integrated into the garbage collector of PyPy, a Python virtual machine. This integration ensures tunable low overhead for the allocation profiler, which we measure and quantify. Enabling allocation sampling profiling with a sampling period of 4 MB leads to a maximum time overhead of 25% in our benchmarks, over un-profiled regular execution.
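The core idea of the paper's sampling scheme can be sketched in plain Python: instead of recording every allocation, keep a running count of allocated bytes and take one stack sample each time the total crosses the sampling period (4 MB in the paper's benchmarks, where the counting lives inside PyPy's GC). The class below is a standalone illustration of that counting logic, not the actual PyPy integration.

```python
import traceback

class SamplingAllocProfiler:
    """Illustrative sketch: sample one stack trace per `period_bytes`
    of allocation, carrying over the remainder so sample spacing stays
    uniform regardless of individual allocation sizes."""

    def __init__(self, period_bytes=4 * 1024 * 1024):
        self.period = period_bytes
        self.bytes_since_sample = 0
        self.samples = []

    def on_allocation(self, size):
        self.bytes_since_sample += size
        if self.bytes_since_sample >= self.period:
            self.bytes_since_sample -= self.period   # carry over the overshoot
            self.samples.append(traceback.extract_stack())

# With a 1 KiB period, ten 300-byte allocations (3000 bytes) yield 2 samples.
profiler = SamplingAllocProfiler(period_bytes=1024)
for _ in range(10):
    profiler.on_allocation(300)
assert len(profiler.samples) == 2
```

The tunability the abstract mentions falls out directly: a larger sampling period means fewer samples and lower overhead, at the cost of coarser data.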