For those using Ollama, the M5 is pretty sexy for the cost.
But the M6 may be coming out soon too.
And with Apple rushing releases out, a price drop on the M5s might come once the Ultra ships. Maybe this year.
I suggest waiting.
New blog post: wiring n8n + Ollama into my K3s homelab so Prometheus alerts get AI triage before hitting Telegram 🤖
Highlights:
- CNPG + Redis as deps
- Python in n8n requires an external task runner sidecar ("Enterprise" feature, but easily bypassed with extraContainers 🙃)
- Telegram MarkdownV2 is cursed. Solved it with a system prompt instead of post-processing 🧠
https://cowley.tech/posts/2026/03/n8n_ollama/
#Kubernetes #Homelab #n8n #Ollama #Prometheus #SelfHosted #LocalAI #DevOps
Once again I have a new tool I have been playing with, and once again it is AI related. The tool in question is n8n, a workflow automation platform that lets us integrate multiple applications and services through a visual interface. While it is very much an enterprise solution, it is FLOSS and we can deploy it at home, albeit with some caveats and/or workarounds. One of the really powerful parts of n8n is that it can integrate with various AI platforms, including all the usual suspects: Claude, ChatGPT, etc. Of course I want to keep things local, which n8n caters for with Ollama.
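Under the hood, the Ollama integration n8n uses boils down to plain HTTP calls against Ollama's local REST API. As a rough sketch of what that looks like outside n8n (the model name `llama3` and the default `localhost:11434` endpoint are assumptions; swap in whatever you actually run):

```python
import json
import urllib.request

# Ollama's default local endpoint; adjust host and port to your setup.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one JSON response instead of chunked output
    }

def generate(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to a locally running Ollama instance and return the text."""
    payload = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a local Ollama server with the model already pulled.
    print(generate("Summarise: disk usage alert at 92% on node-1"))
```

Nothing here ever leaves the machine, which is the whole point: n8n's Ollama node is essentially a visual wrapper around this request/response loop.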
Beelink announces Lobster Red OpenClaw mini PCs built for local AI
https://fed.brid.gy/r/https://nerds.xyz/2026/03/beelink-openclaw-mini-pc/
Plugable TBT5-AI enclosure lets Windows laptops run local AI with a desktop GPU
https://fed.brid.gy/r/https://nerds.xyz/2026/03/plugable-tbt5-ai-enclosure/
One more update for the slides of my talk "Run LLMs Locally":
Now including text to speech with Qwen3-TTS and Model Context Protocol.
https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf
#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai #mcp
Giving #ZimaBoard2 a brain of its own and building a local AI #agent that EARNS money? 🤯
Such a fun “Building Chappie” experiment!
https://youtu.be/gu14bTBv3eA?si=UES9lRxy-dhRP5us

We've been building ENGIOS because hardware deserves to live longer and software should respect the person running it.
Today — AIDA.
An intelligent OS deserves an intelligent heart.
Your machine deserves a kind one.
AIDA is the intelligence layer woven into ENGIOS. Actual local inference via Ollama — Phi-3 Mini. No internet required. Nothing leaving the machine. Ever.
engios.dev · github.com/ENGIOS-DEV/ENGIOS
Build log — March 10, 2026
Shipped today:
• **Mirror Life Suite** (repo root, docs/, apps/*/manifest.jso