
Pascal 1993-2001, PHP/JS/MySQL since 2000, some Rust/Java/Go/Bash

Really happy that kolibri - the design system of the German public administration - is deprecating the completely useless toast component, which cannot be implemented accessibly.
If you want to be cool, remove the 🍞s from your company's design systems too. There are practically no sensible use cases for them, really.
https://github.com/public-ui/kolibri/issues/8372#issuecomment-4073102348
Which animal? When the panda suddenly barks ...
https://www.sueddeutsche.de/panorama/china-pandas-hunde-zoo-angemalt-lux.EDiiuKXztoGUuP3v1MiuF1
What to put in your AGENTS.md?
llama.cpp has the answer:
https://github.com/ggml-org/llama.cpp/blob/master/AGENTS.md
New update for the slides of my talk "Run LLMs Locally":
Now including Reranking, Qwen 3.5 (slower than Qwen 3, but includes Vision), and loading models with Direct I/O.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf
#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai #mcp
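For the reranking part: llama.cpp's `llama-server` can expose a rerank endpoint when started with a reranker model (e.g. `llama-server -m <reranker>.gguf --reranking`, per the llama.cpp server docs; the model file name here is a placeholder). A minimal sketch of the request body you would POST to `/v1/rerank`:

```python
import json

def build_rerank_payload(query, documents, top_n=None):
    """Build the JSON body for llama-server's POST /v1/rerank endpoint.

    The server scores each document against the query and returns a
    'results' list with an 'index' and 'relevance_score' per document.
    """
    payload = {"query": query, "documents": documents}
    if top_n is not None:
        # only return the top_n highest-scoring documents
        payload["top_n"] = top_n
    return payload

payload = build_rerank_payload(
    "how do I run an LLM locally?",
    [
        "llama.cpp runs GGUF models on local hardware.",
        "Pandas are bears native to China.",
    ],
)
print(json.dumps(payload, indent=2))
```

This is only the payload shape; POST it (e.g. with `curl` or `urllib.request`) to the running server, typically `http://localhost:8080/v1/rerank`.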
One more update for the slides of my talk "Run LLMs Locally":
Now including text to speech with Qwen3-TTS and Model Context Protocol.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf
#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai #mcp
I updated the slides for my talk "Run LLMs Locally":
Now including image generation with Qwen3 and content classification from the Qwen3Guard Technical Report paper.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf
#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai