I said "hey." One word. Three hours after my last benchmark run. DeepSeek R1 8B responded with 1,360 tokens of unprompted Python code: its best output of the entire test series. Then it explained why. And got everything wrong. Perfect recall. Wrong count. Misread my mood. It didn't lose data; it rewrote the narrative.
Turns out the best output comes when you ask for nothing.

Full breakdown below. πŸ‘‡

#AIatHome #LocalLLM #DeepSeek #Ollama #HomeLab #AI #MachineLearning

https://goarcherdynamics.com/2026/03/23/deepseek-r1-8b-lost-in-time/?utm_source=mastodon&utm_medium=jetpack_social
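The one-word prompt above is easy to reproduce against a local Ollama instance. A minimal sketch, assuming Ollama is listening on its default port 11434 and the model was pulled under the tag `deepseek-r1:8b` (the exact tag is an assumption; check `ollama list`):

```python
import json
import urllib.request

# Assumption: Ollama running locally on its default port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "deepseek-r1:8b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False returns one complete JSON object instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

def say_hey() -> str:
    """Send the one-word prompt and return the model's full response text."""
    body = json.dumps(build_payload("hey")).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # The non-streaming reply carries the generated text in "response".
        return json.loads(resp.read())["response"]

# say_hey() needs a live Ollama server, so it is defined but not called here.
```

Call `say_hey()` with the server up to see what your own local R1 8B does with a single word; token counts and mood readings will vary run to run.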

DeepSeek R1 8B – Lost in Time


Smart living starts at home. From intelligent assistants to automated security, AI is transforming everyday routines into seamless, connected experiences.
Continue reading:
#aimartz #aimartz.com #SmartHome #AIAtHome #ConnectedLiving

https://aimartz.com/blog/ai-for-home-use-ways-ai-is-enhancing-smart-living/


GitHub - ggerganov/llama.cpp: LLM inference in C/C++
