I've never seen more hangs with ROCm 7 than at the Tyburn Tree.

Just tried to use it on a Radeon 760M with 32 GB of RAM. I had to resort to Vulkan.

#AMD #AI #Troubleshooting #ROCm #ROCm7

AMD goes all in on RADV - LinuxFr.org

News about free software and neighboring topics (DIY, open hardware, open data, the commons, etc.), on a contributory French-language site run by a volunteer team, by and for enthusiastic free-software advocates

Over the past few days I experimented with running OpenClaw locally (RX 6700 XT, 12 GB VRAM / 16 GB system RAM).

For anyone who somehow missed it: OpenClaw is a FOSS AI agent framework (overhyped, in my opinion) that lets an LLM use tools to interact with the system and perform tasks.

I pulled OpenClaw via Docker:

- https://hub.docker.com/r/alpine/openclaw

For the LLM I used Qwen3:14B via Ollama:

- https://ollama.com/library/qwen3:14b

Before that I tested several other models, including gpt-oss:20B, but tool calls didn't work reliably with them.

After doing some research I found that the issue usually isn't the tool itself but the API / function-calling interface. Many models aren't specifically trained to produce structured outputs that exactly match the expected JSON schema. When the JSON format deviates even slightly, the tool call fails.

Qwen3, however, is trained to understand function schemas and tool calling, which makes it much more reliable for this kind of setup.
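To make the failure mode concrete, here's a minimal sketch (the payloads and tool names are made up for illustration, not OpenClaw's actual API): a tool call emitted as strict JSON parses fine, while one using single quotes, as some models like to emit, is rejected outright.

```shell
# Hypothetical tool-call payloads; only the format matters here.
good='{"name":"write_file","arguments":{"path":"story.txt","content":"..."}}'
bad="{'name':'write_file','arguments':{'path':'story.txt'}}"  # single quotes: invalid JSON

echo "$good" | python3 -m json.tool > /dev/null && echo "good: accepted"
echo "$bad"  | python3 -m json.tool > /dev/null 2>&1 || echo "bad: rejected"
```

A strict parser on the framework side means even this small a deviation kills the whole tool call, which matches what I saw with the other models.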

In the video I tested a few simple tasks:

- generating and saving a short sci-fi story
- writing and compiling a small C program
- plotting a mathematical function using gnuplot
- summarizing detected hardware using lspci (this took two attempts; on the first, the LLM received the device list but didn't know what to do with it)

The full recording took about 9 minutes.
For the video I cut out the model's reasoning steps and sped up the longer text outputs by about 50%.

(Due to the character limit, the rest is in the comments.)

Video workflow:

- Recorded with OBS
- Edited in Kdenlive
- Transcoded with VAAPI (H.264)

No cloud, real hardware.
Everything runs on Linux + Docker + Ollama (FOSS), so anyone can set this up.
No GPU? No problem: you can also run it using PyTorch’s CPU backend, just much slower.

#OpenClaw #AI #LocalAI #OpenSource #FOSS #LLM #Qwen3 #Ollama #SelfHosted #Linux #Docker #AMD #ROCm

Local math LLM in action: Qwen2.5-Math-7B-Instruct.Q6_K solving calculus and quantum mechanics problems.

I tested it with problems from my university exams (curve length, extrema, gradient fields, 2D quantum box). Calculus worked surprisingly well.

I might also test it on matrix-related problems in the future.

Running fully local on AMD + ROCm.

Model quirks

- Curve length: Interprets inputs context-sensitively, e.g., e-t is correctly read as e^(-t).
- Gradient fields: Often overshoots and automatically computes the antiderivative; needed a stop condition.
- Step-by-step: Solves problems in a very textbook-like manner; some steps could be skipped when doing calculations by hand.
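As an illustration of the e^(-t) point (a made-up example, not one of the exam problems): for the logarithmic spiral r(t) = (e^(-t) cos t, e^(-t) sin t), the arc length on [0, T] works out cleanly:

```latex
\[
  \|r'(t)\| = \sqrt{e^{-2t}(\cos t + \sin t)^2 + e^{-2t}(\cos t - \sin t)^2}
            = \sqrt{2}\, e^{-t},
\qquad
  L = \int_0^{T} \sqrt{2}\, e^{-t}\, dt = \sqrt{2}\,\bigl(1 - e^{-T}\bigr).
\]
```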

Video workflow:

- Recorded with OBS
- Edited in Kdenlive
- Transcoded with VAAPI (H.264)

No cloud, real hardware.
Everything runs on Linux + Text Generation Web UI (FOSS), so anyone can set this up.
No GPU? No problem: you can also run it using PyTorch’s CPU backend, just much slower.

Background music: Evanescence - Haunted (https://www.youtube.com/watch?v=tjDlL87sHMw)

#LocalAI #LLM #Qwen #MathAI #FOSS #GenerativeAI #Linux #ROCm #math

Joint Training on AMD and NVIDIA GPUs

#CUDA #ROCm #LLM #NVIDIA #AMD

https://hgpu.org/?p=30616

As large language models continue to scale, training demands on compute and system capacity grow rapidly, making single-vendor homogeneous clusters insufficient. This paper presents a technical sol…

RE: https://framapiaf.org/@journalduhacker/116129981046063955

A short memo article of my own about adding OpenCL support for AMD GPUs on Debian.

As a reminder, OpenCL support is not included in the open-source `amdgpu` driver; you need to add the ROCm-related package… which is also made by AMD and open source

#Debian #amdgpu #ROCm #OpenCL #OpenMP #HIP

Diving into LTXV: my latest video diffusion experiments.

I’ve been experimenting with LTXV (ltxv-2b-0.9.8-distilled-fp8), combined with the text encoder umt5_xxl_fp8_e4m3fn_scaled.

The renderings showcase the hackercat, cherry blossoms, and a surreal city tour.

What it does:
- Generates latent video clips from text prompts
- Can produce a wide range of scenes, from surreal to photorealistic and beyond
- Perfect for short 1-2 second clips with creative prompts

Caution! 12 GB VRAM is tight:
- On my RX 6700 XT, it easily runs into OOM
- Frames, steps, and resolution need careful tuning
- FP8 helps, but some layers get upcast → memory can still fill up

Conclusion: Extremely powerful, but you need to tweak VRAM and settings to get stable results.

#AI #VideoDiffusion #LTXV #FP8 #GPU #CreativeAI #ShortVideos #Surreal #Photorealistic #StableVRAM #RX6700XT #AMD #ROCm #ComfyUI

The first two WebPs (WAN 2.1) show a cat on a motorcycle (front view and side view). They're based on a test prompt from Z-Image, adapted for motion.

The third WebP is my (so far unpublished) first attempt with Stable Video Diffusion from early January 2026... image-to-video instead of text-to-video.

Model: stableVideoDiffusion_img2vidXt11.safetensors

First generated a still image with SD1.5, then added subtle motion using this model.

All rendered locally on my RX 6700 XT.

#StableDiffusion #SD15 #SDV #Img2Vid #AIAnimation #LocalAI #AMD #StableVideoDiffusion #ComfyUI #AIVideo #VideoGeneration #OpenSource #FOSS #ROCm #RDNA2 #AIGenerated #CreativeAI #ExperimentalAI #wan21

Urban skate sequence, rear view tracking shot.

Two variations of the same scene, followed by the final workflow setup in ComfyUI.

65 frames rendered locally, iterative prompt refinement, stabilized motion and clean landing.

The animated WebP was downscaled using ImageMagick before upload, as it otherwise stuttered when played in the browser.

#Skateboarding #UrbanSkate #AIVideo #GenerativeArt #WAN21 #Diffusion #LocalAI #OpenSourceAI #ROCm #CreativeWorkflow #AIProcess #Pixelfed #ComfyUI #ImageMagick #WebP

A cat jump is a much better “hello world” than a cyberpunk car. 🐈✨

Two jumps, generated locally with WAN 2.1 (1.3B, fp16) in ComfyUI.
Rendered on my AMD RX 6700 XT via ROCm.

Everything runs locally, no cloud processing, no external APIs.
Just privacy-friendly, open tools and feline physics. 💜

#cat #aicats #CatVideo #AIVideo #TextToVideo #ComfyUI #WAN21 #LocalAI #OpenSource #Privacy #ROCm #AIArt #Fediverse #Cute #linux #foss