I said "hey." One word. Three hours after my last benchmark run. DeepSeek R1 8B responded with 1,360 tokens of unprompted Python code — its best output of the entire test series. Then it explained why. And got everything wrong. Perfect recall. Wrong count. Misread my mood. It didn't lose data — it rewrote the narrative.
Turns out the best output comes when you ask for nothing.

Full breakdown below. 👇

#AIatHome #LocalLLM #DeepSeek #Ollama #HomeLab #AI #MachineLearning

https://goarcherdynamics.com/2026/03/23/deepseek-r1-8b-lost-in-time/?utm_source=mastodon&utm_medium=jetpack_social

DeepSeek R1 8B – Lost in Time

Conditions & context: This is a follow-up to my earlier AI@Home DeepSeek R1 8B article. If you haven’t read that one yet, go read it first — this one won’t make nearly as much …

Archer Dynamics

I've been running various LLMs locally on my Mac with LM Studio, and the one I'm most satisfied with is the Qwen 3.5 35B A3B model.

I'm clueless about AI, so I don't know why it differs from the other models, but the answer quality and response speed are really satisfying. Best of all, when it hits something it doesn't know, instead of writing fiction it just says "I don't know this" lol

#AI #localAI #localLLM #Qwen

Anyone running local LLMs on their shiny new M5 Max MacBook Pros? I’m curious as to how you’re getting on.

#LocalLLM #M5 #Mac #MacbookPro

AI arms race aside - sometimes a small model is exactly enough. RechnungsDoc uses Apple Intelligence to read medical invoices locally - on Mac and iPhone. No cloud, no GDPR headache.
What are you using small LLMs for? 👇
applicay.com/rechnungsdoc
#AppleIntelligence #LocalLLM #Privacy #PKV #Beihilfe

MakeUseOf: I switched to a local LLM for these 5 tasks and the cloud version hasn’t been worth it since. “Local LLMs have also come a long way, to the point where you can run lightweight AI models on just about every device. They’re not good at everything, but they do some tasks so well you’d want to cancel that cloud AI subscription right away.”

https://rbfirehose.com/2026/03/19/makeuseof-i-switched-to-a-local-llm-for-these-5-tasks-and-the-cloud-version-hasnt-been-worth-it-since/
MakeUseOf: I switched to a local LLM for these 5 tasks and the cloud version hasn’t been worth it since


ResearchBuzz: Firehose

After quite a while, another sleepless night — but in return there's a new #OpenClaw agent in the Discord! 🙂

So alongside #IronAxon there's now #ChromeAxon too!

IronAxon runs on my @raspberrypi and ChromeAxon runs on my @EndeavourOS Arch Linux.

Next step: connect ChromeAxon to Ollama. I've installed two models for now: qwen3:30b & qwen3:14b.

The 14b should run quite well; we'll see how it compares to Codex, which is currently powering IronAxon. #localllm 👨‍💻
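For anyone wiring an agent to Ollama the same way: a minimal sketch of what that hookup can look like, assuming Ollama's default local endpoint (`http://localhost:11434/api/generate`) and one of the model tags above. The helper names `build_request` and `ask` are my own, not OpenClaw/ChromeAxon code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    # Minimal non-streaming request body for Ollama's /api/generate.
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    # POST the JSON payload and return the model's text from the "response" field.
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (needs a running Ollama with the model pulled):
#   ask("qwen3:14b", "Say hi in one word.")
```

With `"stream": False`, Ollama returns a single JSON object instead of newline-delimited chunks, which keeps the client trivial.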

Could local LLMs be used to empower people?

https://lemmy.world/post/44388552

Could local LLMs be used to empower people? - Lemmy.World

I understand the sentiments against AI, tech oligarchs investing in data centers, etc. But could local LLMs be used to empower people and ignite more startup projects? I use an LLM to draft all sorts of writing. It’s not perfect, but it’s an easy way to flesh out my ideas in an outline or rough draft. Other open-source projects like “openclaw” are a great way to create a personal assistant. Neural networks are here and aren’t going away anytime soon; they’ll probably get better over time. Should people be thinking “how can I use local AI to help me” rather than “anti-AI”?

Find out which AI models your machine can actually run. #localllm
https://www.canirun.ai/
CanIRun.ai — Can your machine run AI models?

Detect your hardware and find out which AI models you can run locally. GPU, CPU, and RAM analysis in your browser.

CanIRun.ai
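CanIRun.ai's exact formula isn't published, but the core of a "can my machine run it" check is simple arithmetic: weight memory is roughly parameter count times bits per weight. A hedged sketch of that rule of thumb (the function name is mine, and KV cache and runtime overhead are deliberately left out):

```python
def estimate_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-only memory footprint in GB (decimal), ignoring KV cache."""
    # parameters × bits per weight → bits; /8 → bytes; /1e9 → gigabytes
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9


# e.g. an 8B model: ~16 GB at FP16, ~4 GB at 4-bit quantization,
# before KV cache and runtime overhead are added on top.
```

This is why 4-bit quantization is the usual entry point on laptops: it cuts the weight footprint to a quarter of FP16, which is often the difference between fitting in unified memory and not.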
I built “Local LLM Hub”, an AI plugin that connects Obsidian with local LLMs - Qiita

Introduction: This plugin is a local-LLM-only version of obsidian-gemini-helper, which I develop. obsidian-gemini-helper lets you use RAG, MCP, and Skills in a chat interface inside the Obsidian app, build workflows, and...

Qiita

CanIRun.ai is a web tool that uses the browser's WebGPU to estimate which AI models your PC or laptop can run. For each model it shows memory requirements, token speed, context length, and an S-to-F grade, letting you quickly judge whether major models like Qwen, Llama, Gemma, Mistral, and GPT-OSS can run locally. The results are estimates, though, and users have asked for accuracy improvements around MoE, quantization, and mobile-device detection.

https://news.hada.io/topic?id=27483

#canirun.ai #localllm #webgpu #modelbenchmark #qwen

CanIRun.ai — Can I run AI models on my computer?

A web-based tool for checking which AI models your local machine can actually run; it uses the browser's We...

GeekNews