I wanted local AI I could take anywhere — on a plane, off Wi‑Fi, on a cheap laptop like the new MacBook Neo.
So I built LocalMind: a single Rust binary that gives any Ollama model persistent memory. One SQLite file, hybrid recall every turn, small-model defaults (1.9 GB).
No cloud. MIT.
https://github.com/nevenkordic/localmind
#LocalFirst #LLM #Ollama #Rust #OpenSource
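For readers wondering what "SQLite-backed hybrid recall" can look like, here is a minimal sketch (Python for brevity, though LocalMind itself is Rust) of the keyword half of such a scheme, assuming an FTS5-enabled SQLite build. The table name, schema, and ranking are illustrative, not LocalMind's actual code.

```python
import sqlite3

# Illustrative only: a tiny conversation memory in SQLite, recalled
# by BM25 keyword match via the FTS5 extension. A "hybrid" recall
# would combine this with a semantic (embedding) score.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE memory USING fts5(role, content)")
db.executemany(
    "INSERT INTO memory VALUES (?, ?)",
    [
        ("user", "my cat is named Miso"),
        ("assistant", "Noted: your cat is Miso."),
        ("user", "remind me to water the plants"),
    ],
)

def recall(query: str, k: int = 2) -> list[str]:
    """Return the k best-matching stored turns for a query."""
    rows = db.execute(
        "SELECT content FROM memory WHERE memory MATCH ? "
        "ORDER BY bm25(memory) LIMIT ?",
        (query, k),
    )
    return [r[0] for r in rows]

print(recall("cat"))  # returns the two turns mentioning the cat
```

On every turn, an agent like this would prepend the recalled rows to the model's prompt, which is how a stateless local LLM ends up with persistent memory.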

Reminder that we're going live in a few hours to start building a local AI server! 1pm Eastern / 5pm UTC. Ubuntu + Ollama to start.

Come say hi! https://www.youtube.com/live/arEART8guos

#Homelab #SelfHosted #Ollama #LocalAI #Linux #Ubuntu

How AI coding agents steal secrets

Modern AI coding agents like Claude Code, Cursor, Windsurf, and similar tools have become an integral part of many developers' workflows. They analyze the codebase, suggest solutions, and automate routine work...

#DST #DSTGlobal #ДСТ #ДСТГлобал #искусственныйинтеллект #ИИагенты #код #программирование #секреты #ClaudeCode #Cursor #Windsurf #Мониторинг #DNS #Ollama #LMStudio

Source: https://dstglobal.ru/club/1175-kak-ii-agenty-po-programmirovaniyu-pohischayut-sekrety

We're going live tomorrow at 1pm Eastern / 5pm UTC! Kicking off a multi-part series building a local AI server from scratch. First up: Ubuntu + Ollama setup.

Come hang out!
https://www.youtube.com/live/arEART8guos

#Homelab #SelfHosted #Ollama #LocalAI #Linux #Ubuntu

📝 Daily report 📈

Here are today's most popular trending hashtags #⃣ on our website 🌐️:

#kde, #solus, #nsfw, #freebsd, #arte, #opensource, #ollama, #foss, #drawing, #terminal, #video, #fedora, #watercolor

🔥 Stay tuned! 🔥

Local AI without the headaches.
I built xagentai-net-coding-agent: a free, cross-platform app bundling chat, a file-handling agent, plan mode, image generation (AUTOMATIC1111), and MemPalace, a top-rated free AI memory system.

All you need is LMStudio or Ollama.

https://xagentai.net/xagentai-net-coding-agent/

#AI #OpenSource #LocalLLM #Ollama #LMStudio #AIAgents



Slackware Cloud Server Series, Episode 12: Local AI

The world is on fire, thanks to the orange clown who wages war for personal gain. Or is it because data centers are super-heated running all these AI models 24/7? In any case, the AI boom wreaked havoc with my plans to purchase a new computer to replace my ageing build server here at home. RAM sticks are 4 to 5 times as expensive now compared to half a year ago...

#Slackware #Ollama #openwebui #nvidia

https://blog.slackware.nl/slackware-cloud-server-series-episode-12-local-ai/

@davidlohner I had to dig around a bit for that: the local #ollama models are detected automatically, but MCP has to be enabled for each one individually for tool use to work.

Qwen 3.6 is now available on Ollama, open source.

https://ollama.com/library/qwen3.6

A model clearly aimed at agentic coding, with:
• improved reasoning
• extended context (256K)
• multimodal support

Note: for now, only one size is available (~35B, ~24GB).
You need a solid machine to take advantage of it.

First impressions:
• very strong on code and agent workflows
• but still heavy for everyday use

One to watch once lighter versions become available.

#AI #LLM #Ollama #OpenSource #SelfHosted
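Once Ollama is serving the model, it can be exercised from any script via Ollama's local HTTP API (default port 11434). A minimal sketch; the prompt is illustrative, and the model tag comes from the library page above:

```python
import json
from urllib import request, error

# Build a non-streaming request for Ollama's /api/generate endpoint.
payload = {
    "model": "qwen3.6",
    "prompt": "Write a function that reverses a string.",
    "stream": False,
}
req = request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    with request.urlopen(req, timeout=120) as resp:
        # Non-streaming responses carry the full completion in "response".
        print(json.loads(resp.read())["response"])
except error.URLError:
    # Ollama is not running; start it with `ollama serve` first.
    print("Could not reach the local Ollama server.")
```

First run `ollama pull qwen3.6` to fetch the ~24GB of weights; with `"stream": True` (the default) the endpoint instead returns one JSON object per generated chunk.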
