Over the past few days I experimented with running OpenClaw locally (RX 6700 XT (12 GB) / 16 GB RAM).
For anyone who somehow missed it: OpenClaw is an (in my opinion overhyped) FOSS AI agent framework that lets an LLM use tools to interact with the system and perform tasks.
I pulled OpenClaw via Docker:
- https://hub.docker.com/r/alpine/openclaw
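For reference, the setup is roughly this (the image name is from the Docker Hub link above; the run flags and volume path are assumptions for a typical local setup, so check the image docs for the real ones):

```shell
# Pull the OpenClaw image and start it interactively.
# --network host is assumed here so the container can reach a
# locally running Ollama; the /data mount is a made-up example.
docker pull alpine/openclaw
docker run -it --network host \
  -v "$HOME/openclaw:/data" \
  alpine/openclaw
```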
For the LLM I used Qwen3:14B via Ollama:
- https://ollama.com/library/qwen3:14b
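Getting the model is a one-liner (tag taken from the Ollama library link above); Ollama then serves it on localhost:11434 by default:

```shell
# Download the ~9 GB Qwen3 14B weights and do a quick smoke test.
ollama pull qwen3:14b
ollama run qwen3:14b "Say hello in one sentence."
```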
Before settling on Qwen3 I tested several models, including gpt-oss:20B, but their tool calls didn't work reliably.
After doing some research I found that the issue usually isn't the tool itself but the API / function-calling interface. Many models aren't specifically trained to produce structured outputs that exactly match the expected JSON schema. When the JSON format deviates even slightly, the tool call fails.
Qwen3, however, is trained to understand function schemas and tool calling, which makes it much more reliable for this kind of setup.
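To make the "exact JSON schema" point concrete, here is a minimal sketch of what a tool definition looks like in Ollama's /api/chat request (the `run_command` function and its parameters are hypothetical; only the request shape is the real API):

```shell
# Build a tool-calling request for Ollama's /api/chat endpoint.
# The model must answer with arguments matching this JSON Schema
# exactly -- this is precisely where weaker models fail.
cat > /tmp/tools.json <<'EOF'
{
  "model": "qwen3:14b",
  "messages": [
    { "role": "user", "content": "List the PCI devices on this machine" }
  ],
  "tools": [{
    "type": "function",
    "function": {
      "name": "run_command",
      "description": "Run a shell command and return its output",
      "parameters": {
        "type": "object",
        "properties": {
          "command": { "type": "string", "description": "The command to run" }
        },
        "required": ["command"]
      }
    }
  }],
  "stream": false
}
EOF

# Strict JSON check; a single stray comma or quote breaks the call:
python3 -m json.tool /tmp/tools.json > /dev/null && echo "valid JSON"

# With Ollama running, you would POST it like this:
#   curl http://localhost:11434/api/chat -d @/tmp/tools.json
```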
In the video I tested a few simple tasks:
- generating and saving a short sci-fi story
- writing and compiling a small C program
- plotting a mathematical function using gnuplot
- summarizing detected hardware using lspci (this took two attempts: on the first, the LLM received the device list but didn't know what to do with it)
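The gnuplot task boils down to a script like the one below (the function, size, and file paths are made up for illustration, not what the agent actually wrote):

```shell
# Write a small gnuplot script and render it to a PNG.
cat > /tmp/plot.gp <<'EOF'
set terminal pngcairo size 800,600
set output '/tmp/sinc.png'
set title 'sin(x)/x'
set grid
plot [-20:20] sin(x)/x with lines
EOF

# Render only if gnuplot is installed; writes /tmp/sinc.png.
command -v gnuplot > /dev/null && gnuplot /tmp/plot.gp
```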
The full recording took about 9 minutes.
For the video I cut out the model's reasoning steps and sped up the longer text outputs by about 50%.
(Due to the character limit, the rest is in the comments.)
Video workflow:
- Recorded with OBS
- Edited in Kdenlive
- Transcoded with VAAPI (H.264)
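For anyone curious about the VAAPI step, a hardware H.264 transcode looks roughly like this (filenames and the quality setting are assumptions; /dev/dri/renderD128 is the usual first render node on AMD):

```shell
# Hardware-accelerated H.264 encode via VAAPI:
# upload frames to the GPU in NV12 format, encode there,
# and pass the audio track through untouched.
ffmpeg -vaapi_device /dev/dri/renderD128 \
  -i recording.mkv \
  -vf 'format=nv12,hwupload' \
  -c:v h264_vaapi -qp 22 \
  -c:a copy \
  final.mp4
```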
No cloud, real hardware.
Everything runs on Linux + Docker + Ollama (FOSS), so anyone can set this up.
No GPU? No problem: Ollama can also run the model entirely on the CPU, just much slower.
#OpenClaw #AI #LocalAI #OpenSource #FOSS #LLM #Qwen3 #Ollama #SelfHosted #Linux #Docker #AMD #ROCm