Blog post: why I continue to blog with WordPress
https://thinking.ajdecon.org/2026/01/12/why-i-continue-to-blog-with-wordpress/
Note! I’m open to other recommendations for tools that fit my idiosyncratic requirements. I don’t actually love WordPress 😬
It’s really just a short post about why you should read Henry Oliver’s excellent article on how English prose has become much easier to read.
https://www.worksinprogress.news/p/english-prose-has-become-much-easier
“People We Meet On Vacation” was an adorable, if forgettable, rom-com and a good Sunday evening watch.
My partner had gripes about how it compared to the novel, but the leads had good chemistry and there was some witty banter.
There was some discussion on bsky of the usefulness of Tailscale, and I’ll just note here how very handy it is for running a personal homelab that includes cloud instances, as well as for having lab connectivity from a laptop or phone on the go!
Services I run over Tailscale, just for myself, include:
- An RSS feed reader
- A personal git forge
- An IRC bouncer
- A (poorly maintained) wiki
- JupyterLab
- Open WebUI for playing with local LLMs on a GPU workstation
- SSH to a powerful workstation, hosted at home but without complex configs
And probably a few things I’ve forgotten! It’s really just very neat. Sure, I could do it all with manual WireGuard configs, but Tailscale makes the underlying primitive much more ergonomic.
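For anyone who hasn’t done it by hand, here’s a rough sketch of the per-node bookkeeping a manual WireGuard setup involves (keys, addresses, and hostnames below are all hypothetical placeholders). You need a stanza like this for every peer, on every node, with keys generated and exchanged out of band, which is exactly the part Tailscale automates:

```ini
# wg0.conf on one node (hypothetical values throughout)
[Interface]
PrivateKey = <this-node-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

# Repeat a [Peer] section for every other device in the lab
[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 10.0.0.2/32
Endpoint = peer.example.com:51820
PersistentKeepalive = 25
```

Multiply that by a laptop, a phone, a workstation, and a few cloud instances, and the appeal of having it managed for you is obvious.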
Blog post: Links to some interesting coverage of the LAVD scheduler on Linux, optimizing for latency-critical tasks for gaming workloads
https://thinking.ajdecon.org/2026/01/10/latency-critical-linux-task-scheduling-for-gaming/
The closest thing I’m aware of is the EuroLLM project, which plans to train on “open data”, though I’m not sure how that’s defined. I’ll be interested to see its progress, though.
https://www.eurohpc-ju.europa.eu/eurollm_en
Context here is that I’ve been playing extensively with locally-hosted models (MacBook-scale), and I’m curious to see whether a “vegan” model exists and how it acts.