For those using Ollama, the M5 is pretty sexy for the cost.

But the M6 may be coming out soon too.

In the rush to get things out, a price drop on the M5s might come once Apple releases the Ultra. Maybe this year.

I suggest waiting.

https://www.notebookcheck.net/Apple-M5-Pro-M5-Max-GPU-Analysis-M5-Max-GPU-on-par-with-the-GeForce-RTX-5070-and-faster-than-Strix-Halo.1246060.0.html

#m5 #ollama #localai

Apple M5 Pro & M5 Max GPU Analysis - M5 Max GPU on par with the GeForce RTX 5070 and faster than Strix Halo

Notebookcheck analysis of the Apple M5 Pro / M5 Max GPU with performance and efficiency measurements compared to Nvidia Blackwell.

Notebookcheck

New blog post: wiring n8n + Ollama into my K3s homelab so Prometheus alerts get AI triage before hitting Telegram 🤖

Highlights:
- CNPG + Redis as deps
- Python in n8n requires an external task runner sidecar ("Enterprise" feature, but easily bypassed with extraContainers 🙃)
- Telegram MarkdownV2 is cursed. Solved it with a system prompt instead of post-processing 🧠
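For context on why MarkdownV2 is cursed: the post-processing route the post avoids would mean escaping every reserved character before sending. A minimal sketch of that approach (the character list comes from Telegram's Bot API formatting rules; the function name is my own):

```python
# Characters Telegram's MarkdownV2 parse mode treats as markup
MDV2_SPECIALS = "_*[]()~`>#+-=|{}.!"

def escape_mdv2(text: str) -> str:
    # Backslash-escape every reserved character so plain text
    # survives Telegram's MarkdownV2 parser unchanged
    return "".join("\\" + ch if ch in MDV2_SPECIALS else ch for ch in text)
```

The catch, and likely why a system prompt won out: escaping must not touch the markup you *do* want, so a naive pass like this only works on fully plain text.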

https://cowley.tech/posts/2026/03/n8n_ollama/

#Kubernetes #Homelab #n8n #Ollama #Prometheus #SelfHosted #LocalAI #DevOps

N8n with Ollama on Kubernetes

Once again I have a new tool I have been playing with, and once again it is AI related. One of the tools I have been using is n8n, a workflow automation platform. It enables us to integrate multiple applications and services through a visual interface. While it is very much an enterprise solution, it is FLOSS and we can deploy it at home, albeit with some caveats and/or workarounds. One of the really powerful parts of n8n is that we can integrate with various AI platforms, including all the usual suspects: Claude, ChatGPT, etc. Of course I want to keep things local, which n8n caters for with Ollama.

Chris' Tech Blog

Beelink announces Lobster Red OpenClaw mini PCs built for local AI

https://fed.brid.gy/r/https://nerds.xyz/2026/03/beelink-openclaw-mini-pc/

Plugable TBT5-AI enclosure lets Windows laptops run local AI with a desktop GPU

https://fed.brid.gy/r/https://nerds.xyz/2026/03/plugable-tbt5-ai-enclosure/

One more update for the slides of my talk "Run LLMs Locally":

Now including text to speech with Qwen3-TTS and Model Context Protocol.

https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf

#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai #mcp

Giving #ZimaBoard2 a brain of its own and building a local AI #agent that EARNS money? 🤯
Such a fun “Building Chappie” experiment!
https://youtu.be/gu14bTBv3eA?si=UES9lRxy-dhRP5us

#LocalAI #SelfHosted #HomeServer #ubuntu #json

[Scary lol] I tried giving an AI a computer it could use with complete freedom

YouTube
Local AI Text-to-Speech Demo with Coqui TTS

Coqui TTS is an AI-powered text-to-speech synthesis platform that can automatically convert written text into natural-sounding speech. The system is based on modern deep learning models and can run entirely locally, making it particularly suitable for privacy-friendly applications and offline projects.

In this example, Coqui TTS is used directly through the Python API. This allows the model to be flexibly integrated into custom scripts and controlled automatically, for example to convert text into audio files or to process larger amounts of text.

Since many text-to-speech models struggle with very long inputs, the input text is split into smaller sections (chunks) before processing. These are synthesized one after another and then concatenated into a single audio output.
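A sketch of that chunking step, assuming simple sentence-boundary splitting; the Coqui model name in the comment is an illustrative choice, not necessarily the one used in the demo:

```python
import re

def chunk_text(text: str, max_len: int = 250) -> list[str]:
    # Split on sentence boundaries, then pack sentences into chunks
    # no longer than max_len characters each
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for s in sentences:
        if current and len(current) + len(s) + 1 > max_len:
            chunks.append(current)
            current = s
        else:
            current = f"{current} {s}".strip()
    if current:
        chunks.append(current)
    return chunks

# Usage with the Coqui TTS Python API (requires `pip install TTS`):
# from TTS.api import TTS
# tts = TTS("tts_models/de/thorsten/tacotron2-DDC")  # a German model; runs on CPU by default
# for i, chunk in enumerate(chunk_text(story)):
#     tts.tts_to_file(text=chunk, file_path=f"part_{i:03d}.wav")
```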

In this example, the model is executed locally on the CPU. Although some AI models support GPU acceleration, Coqui TTS can run reliably without specialized hardware and can therefore be used on many different systems.

The audio output generated by the model is initially a raw file. To improve sound quality, additional post-processing is recommended, such as removing clicks or artifacts, slightly smoothing audio transitions, or applying other minor corrections.
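One cheap form of that post-processing is a short linear fade at each chunk boundary to remove clicks. A library-free sketch, assuming decoded PCM samples as a plain Python list of ints (a real pipeline would more likely reach for sox or pydub):

```python
def apply_fade(samples: list[int], fade_len: int = 200) -> list[int]:
    # Apply a linear fade-in over the first fade_len samples and a
    # linear fade-out over the last fade_len samples, so concatenated
    # chunks meet at zero amplitude instead of clicking
    out = list(samples)
    n = len(out)
    for i in range(min(fade_len, n)):
        gain = i / fade_len
        out[i] = int(out[i] * gain)          # fade in
        out[n - 1 - i] = int(out[n - 1 - i] * gain)  # fade out
    return out
```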

The Creepypasta used in this demo is in German and contains disturbing content.

https://creepypasta.fandom.com/de/wiki/Trypophobia

Video workflow:

- Recorded with OBS
- Edited in Kdenlive
- Transcoded with VAAPI (H.264)

No cloud, no API keys, real hardware, just Python.
Everything runs on Linux + Python (FOSS), so anyone can set this up.
No GPU? In this case… it doesn't matter.

#AI #TextToSpeech #CoquiTTS #Python #AIVoice #SpeechSynthesis #foss #LocalAI #OpenSourceAI #AItools #ArtificialIntelligence #AIDevelopment
Big thanks to Prince Canuma for MLX Audio and to Awni Hannun, Angelos Katharopoulos, and David Koski for MLX, MLX Swift, and MLX Swift LM.
This is a preview and we want your help. Find something broken? Drop it in the replies or open an issue.
What would you like to see in Perspective Studio next?
https://github.com/Techopolis/Perspective-Studio
#MLX #OpenSource #AppleSilicon #SwiftUI #LocalAI #macOS #Swift (2/2)
@mikedoise

We've been building ENGIOS because hardware deserves to live longer and software should respect the person running it.

Today — AIDA.

An intelligent OS deserves an intelligent heart.
Your machine deserves a kind one.

AIDA is the intelligence layer woven into ENGIOS. Actual local inference via Ollama — Phi-3 Mini. No internet required. Nothing leaving the machine. Ever.
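Local inference like this typically goes through Ollama's HTTP API on localhost port 11434. A minimal Python sketch, assuming a stock Ollama install with the model pulled; the `phi3:mini` tag and helper names are illustrative:

```python
import json
import urllib.request

def build_request(prompt: str, model: str = "phi3:mini") -> bytes:
    # Assemble a non-streaming payload for Ollama's /api/generate endpoint
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    # POST the prompt to the local Ollama daemon and return the completion text;
    # nothing leaves the machine
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=build_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```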

engios.dev · github.com/ENGIOS-DEV/ENGIOS

#ENGIOS #AIDA #FOSS #LocalAI #Privacy #Linux #OpenSource

Build log — March 10, 2026

Shipped today:
• **Mirror Life Suite** (repo root, docs/, apps/*/manifest.jso

https://youtu.be/_bt9qF1p9Bw

#BuildInPublic #SovereignAI #LocalAI #MirrorBrain

Day 2: Building a Sovereign AI OS [2026] | Mirror Life Suite (No Cloud)

YouTube