Juuso Haavisto

@juuso
7 Followers
24 Following
42 Posts
Currently researching #APL at University of Oxford. Part-time #NixOS software entrepreneur https://github.com/ponkila
Homepage: https://juuso.dev
Some Trezor changes finally got upstreamed in nixpkgs, so here's a quick write-up of how I managed without them: https://juuso.dev/blogPosts/patching-nixpkgs-flake/patching-nixpkgs-in-a-flake.html #nixos
Patching nixpkgs in a flake

An advanced flake tactic for a very particular problem.

@geerlingguy Hi, do you plan to benchmark gaming on the Ampere computer using fex-emu.com, which seems to be the x86-to-AArch64 translation layer used by the Steam Frame?
Forth: The programming language that writes itself: The Web Page

An exploration of the evolution and meaning of the Forth programming language and its context in history.

ALGOL 68 Programming Language Support Still Being Worked On For GCC https://www.phoronix.com/news/GCC-ALGOL-68-Still-Going
ALGOL 68 Programming Language Support Still Being Worked On For GCC

At the start of the year, a new GCC compiler front-end was proposed for the half-century old ALGOL 68 programming language

Going abroad this week, so I prepared by finishing a LUKS setup I should have done eons ago: https://juuso.dev/blogPosts/fido2-luks/multi-token-fido2-luks.html #fido2 #linux
Multi-token FIDO2 LUKS

What happens to your data if your computer gets lost or stolen?

Finland bans smartphones in schools

Pupils will be able to use their phones in some circumstances, but they will need to get permission from teachers.

"Discuss preprints based on W3C ActivityPub federation" - https://nlnet.nl/project/Sciety-ActivityPub/
NLnet: An OpenScience flavour of Bonfire on NixOS for preprints

"A 20% win on benchmarks is irrelevant if it comes at a 20% cost to roofline efficiency." https://jax-ml.github.io/scaling-book/
How To Scale Your Model

Training LLMs often feels like alchemy, but understanding and optimizing the performance of your models doesn't have to. This book aims to demystify the science of scaling language models: how TPUs (and GPUs) work and how they communicate with each other, how LLMs run on real hardware, and how to parallelize your models during training and inference so they run efficiently at massive scale. If you've ever wondered “how expensive should this LLM be to train” or “how much memory do I need to serve this model myself” or “what's an AllGather”, we hope this will be useful to you.

"When you talk to someone older today, they have a richer language. They have more words about nature, about formations in nature, animals and reindeer especially" https://www.noemamag.com/the-languages-lost-to-climate-change/
The Languages Lost To Climate Change | NOEMA

Climate catastrophes and biodiversity loss are endangering languages across the globe.

Nix as a Static Site Generator

A pathway to incremental builds and reproducibility.