Archer Dynamics

0 Followers
1 Following
33 Posts
MD, recovering corporate worker, founder of Archer Dynamics. Pi fleets and digital sovereignty. Mostly posting, rarely scrolling.
Website: https://goarcherdynamics.com

Tested DeepSeek R1 8B Q8 on my Linux server.
48 tokens/sec. 9.6 GB VRAM. Correct code, solid reasoning.
Paste the same prompt 4 times and watch it spiral into an identity crisis while writing novels!
It spent 92 seconds debating whether 3.0 is a prime number. Yep!
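For a rough sanity check, the 9.6 GB figure lines up with a simple back-of-envelope: an 8B model at 8-bit quantization needs about 8 GB for weights, plus runtime overhead. A minimal sketch (the ~1.5 GB overhead term for KV cache and buffers is my assumption, not a measured value):

```python
# Back-of-envelope VRAM estimate for a quantized LLM.
# Assumed formula: weights = params * bits / 8 bytes, plus a fixed
# overhead for KV cache and runtime buffers (~1.5 GB is an assumption).

def est_vram_gb(params_billion: float, quant_bits: int, overhead_gb: float = 1.5) -> float:
    """Rough VRAM footprint in GB for a quantized model."""
    weights_gb = params_billion * quant_bits / 8  # 1B params at 8-bit ~ 1 GB
    return weights_gb + overhead_gb

# DeepSeek R1 8B at Q8: 8 * 8 / 8 + 1.5 = 9.5 GB, near the observed 9.6 GB
print(round(est_vram_gb(8, 8), 1))
```

The same formula puts Nemo 12B at Q5 around 9 GB, which also matches what I saw.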

Read more fun below.

#AI #LocalAI #OpenSource #DeepSeek #MachineLearning

https://goarcherdynamics.com/2026/03/18/aihome-deepseek-r1-8b/?utm_source=mastodon&utm_medium=jetpack_social

AI@Home – DeepSeek R1 8B

Conditions & context Today we are poking needles into the DeepSeek R1 8B AI model, specifically the 0528 update, which is based on the Qwen3 8B model. Since I have a 16GB card I’ve picked the 8-…

Archer Dynamics

Meet Astra. That's the name Mistral Nemo 12B chose for herself: poetic, self-aware, just like her 7B sibling Elysium. 47 tokens/sec. 9.2 GB VRAM. Best-explained code in all my tests. Responses got richer and more thorough across repeated runs, unprompted. This is the patient teacher you reach for when you need to understand the answer, not just receive it.
Europeans are doing something right. And they're keeping it open source.

Full breakdown below.

#AI #LocalAI #OpenSource #Mistral #MachineLearning

https://goarcherdynamics.com/2026/03/16/aihome-mistral-nemo-12b/?utm_source=mastodon&utm_medium=jetpack_social

AI@Home – Mistral Nemo 12B

Conditions & context Today we’re looking at the Mistral Nemo 12B model with 5-bit quantization, and if my great experience with their 4B model was any indication, this 3-times-larger model with …

Archer Dynamics

Tested Google's Gemma3 12B QAT on my home Linux server. Stable 97% GPU utilization, no CPU spill, no logic errors. Mistral Nemo 12B beats it on speed and uses 2 GB less VRAM. Those extra 2 GB could run a second model on a 16GB card.
Gemma 12B is correct, thorough and about as warm as a DMV waiting room.
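The second-model claim is easy to quantify. A minimal sketch (Gemma's ~11.2 GB is inferred from "uses 2 GB less VRAM" against Nemo's 9.2 GB; it is not a number from the post):

```python
# VRAM headroom on a 16 GB card, using the figures from the posts:
# Mistral Nemo 12B ~9.2 GB measured; Gemma3 12B QAT ~2 GB more (inferred).
CARD_GB = 16.0
usage_gb = {"mistral-nemo-12b-q5": 9.2, "gemma3-12b-qat": 11.2}

for model, vram in usage_gb.items():
    headroom = CARD_GB - vram
    print(f"{model}: {headroom:.1f} GB free")
```

Nemo's ~6.8 GB of headroom is enough for a small quantized model alongside it; Gemma's ~4.8 GB is tighter.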

Full breakdown below.

#AI #LocalAI #OpenSource #Gemma #MachineLearning

https://goarcherdynamics.com/2026/03/13/aihome-gemma-3-12b/?utm_source=mastodon&utm_medium=jetpack_social

AI@Home – Gemma 3 12B

Conditions & context Today we are diving into a quick test of Google’s Gemma3, again a QAT quantized model, but this time a 12B. So, let’s hope that those 12 billion parameters do a…
