Monthly reminder: do fuck off about #ROCm.

It will take combined effort between #AMD, #Intel and #Qualcomm to offer a stack that can compete against #NVIDIA and #Apple.

Until then, be happy knowing software only runs on either a Mac or an RTX.

One of the main reasons I want to use #ubuntu 26.04 is for the #framework desktop. 😅 All that #amd #rocm package handling. 😇

I didn't need more reasons for the release. 😁

https://www.phoronix.com/review/ubuntu-2604-ryzen-ai-max

Ubuntu 26.04 Provides More Performance For AMD Ryzen AI Max "Strix Halo"

Last week I provided benchmarks quantifying how AMD Strix Halo graphics performance has evolved since launch one year ago. Today's article looks at how Zen 5 CPU performance with the flagship Ryzen AI Max+ 395 has evolved under Linux in the year since these exciting APUs began making their way into high-end laptops and desktops.

A follow-up on Blender, which I couldn't convince to change its mind even after installing the remaining packages from the ROCm meta-package.
It seems to come down to my card; apparently it's the wrong architecture.

Maybe I'll still try the build that Blender ships itself. At least for this, it's not that important to me.

Thanks for the support.

#blender #rocm #linux #fedora

#AMD #ROCm 7.12 Tech Preview Brings More Consumer #APU & #GPU Support
The exciting part of the ROCm 7.12 Tech Preview is its expanded consumer hardware support. The #RyzenAI400 series is now supported along with the #Ryzen 200 series, the old Instinct #MI100 support is restored, and there is now official support for #Radeon #RX7600 and #RX7700 XT graphics cards. The Radeon RX 7600 is nearly three years old, so it's long overdue, but nice to finally see it officially listed for ROCm support.
https://www.phoronix.com/news/AMD-ROCm-7.12-Tech-Preview
AMD ROCm 7.12 Tech Preview Brings More Consumer APU & GPU Support

In addition to this week's ROCm 7.2.1 stable point release, ROCm 7.12 was also released as the newest tech preview in working toward what will presumably be called ROCm 8.0.

Run AI locally on your AMD RX 6600! 🚀

Learn how to set up AMD ROCm on Ubuntu with the necessary overrides to run large language models using Ollama.

If you want to see the code, visit the website:
Read More... https://www.ctcservers.com/tutorials/howto/install-rocm-amd-gpu-ubuntu/
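For flavor, here is a minimal sketch of the kind of override the tutorial covers, assuming an RX 6600 (which reports the gfx1032 ISA). The model name is just an example; exact override values depend on your card and ROCm version.

```shell
# Install Ollama via the official install script.
curl -fsSL https://ollama.com/install.sh | sh

# ROCm's prebuilt kernels don't ship for gfx1032 (RX 6600).
# Overriding the reported GFX version to 10.3.0 (gfx1030, the
# RX 6800/6900 ISA) lets ROCm load compatible kernels instead.
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Start the server with the override in its environment,
# then pull and chat with a small model.
ollama serve &
ollama run llama3.2 "Hello from my RX 6600"
```

The full tutorial at the link above covers the details and caveats.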

#AMD #ROCm #Ubuntu #MachineLearning

Once I got hardware-accelerated #AI working under #Linux on my AI mini workstation from HP, my next goal was to make it easier to use. From this blog, you can read about my initial experiments with #OpenWebUI on @fedora Linux.

https://peter.czanik.hu/posts/new-toy-openwebui-first-steps/

#ollama #AMD #ryzen #ROCm

My new toy: Open WebUI first steps

Once I got hardware-accelerated AI working under Linux on my AI mini workstation from HP, my next goal was to make it easier to use. From this blog, you can read about my initial experiments with Open WebUI on Fedora Linux.

[Screenshot: Open WebUI talking about central log collection :-)]

Everything in containers

As Open WebUI is not yet available as a package in Fedora, my initial approach was to use containers.
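Not from the post itself, but a minimal sketch of what running Open WebUI in a container on Fedora typically looks like, assuming Podman and the upstream container image:

```shell
# Pull and run the upstream Open WebUI image with Podman.
# Host port 3000 maps to the app's port 8080 inside the container;
# a named volume keeps chat history across restarts.
podman run -d \
  --name open-webui \
  -p 3000:8080 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main

# The UI is then reachable at http://localhost:3000
```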

Ever since I bought my #AI mini #workstation from HP, my goal was to run hardware accelerated #ArtificialIntelligence workloads in a #Linux environment. Read more to learn how things turned out on #Ubuntu and @fedora !

https://peter.czanik.hu/posts/new-toy-first-steps-with-ai-on-linux/

#AMD #ROCm #llama #pytorch

My new toy: first steps with AI on Linux

Ever since I bought my AI mini workstation from HP, my goal was to run hardware accelerated artificial intelligence workloads in a Linux environment. Read more to learn how things turned out on Ubuntu and Fedora! I have been using various AI tools for a while now. Generating pictures about some impossible situations, like a dinosaur climbing the Hungarian parliament building, finding information where a simple web search is useless, or explaining syslog-ng code to me.

Just ran Demucs completely locally on my system (RX 6700 XT / 16 GB RAM).

Demucs is an open source AI model for music source separation, developed by Meta. It can split a full song into individual stems like vocals, drums, bass, and other instruments, making it useful for remixing, transcription, and audio analysis.

Test track: Fear of the Dark by Iron Maiden
(https://www.youtube.com/watch?v=bePCRKGUwAY)

Setup:

- Demucs installed via pip
- Model: htdemucs (default)
- Input converted to WAV using ffmpeg
- GPU acceleration via ROCm

Setting it up is tricky because Demucs is tightly pinned to older PyTorch versions, so you have to install dependencies manually and use "--no-deps" to avoid breaking your (ROCm-)PyTorch setup.
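A rough sketch of that install order, assuming a ROCm build of PyTorch is already in place (the extra dependency list is illustrative, not from the post):

```shell
# Convert the input to WAV first; Demucs handles WAV most reliably.
ffmpeg -i fear_of_the_dark.webm fear_of_the_dark.wav

# Install Demucs without letting pip touch the existing ROCm PyTorch:
# --no-deps skips Demucs' pinned torch/torchaudio requirements.
pip install --no-deps demucs

# Pull in the remaining runtime dependencies by hand.
pip install julius einops openunmix lameenc

# Separate into the four default stems (vocals, drums, bass, other)
# using the default htdemucs model.
demucs -n htdemucs fear_of_the_dark.wav
```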

Result:
Very clean vocal separation in most parts. Some artifacts appear during very loud or distorted sections (e.g. emotional peaks or shouting).

Next steps / possibilities:

- Normalize and filter audio before separation
- Extract vocals for transcription or remixing
- Create karaoke / instrumental versions
- Combine with Whisper for lyrics
- Batch processing for datasets
- Model: htdemucs_ft (higher quality)

Video workflow:

- Recorded with OBS
- Edited in Kdenlive
- Transcoded with VAAPI (H.264)
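The VAAPI transcode step might look something like this (device path, file names, and quality setting are assumptions, not from the post):

```shell
# Hardware-accelerated H.264 encode via VAAPI on an AMD GPU.
# Frames are converted to NV12 and uploaded to the GPU before encoding.
ffmpeg -vaapi_device /dev/dri/renderD128 \
  -i demucs_demo_raw.mkv \
  -vf 'format=nv12,hwupload' \
  -c:v h264_vaapi -qp 23 \
  demucs_demo.mp4
```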

No cloud, real hardware.
Everything runs on Linux, so anyone can set this up.
Works on CPU as well, but much slower.

#Demucs #AI #MachineLearning #AudioSeparation #MusicAI #OpenSource #Linux #ROCm #AMD #DeepLearning #AudioProcessing #Vocals #Karaoke #StemSeparation #SelfHosted #NoCloud #FOSS #Tech #LocalAI #MetaAI

Triton-Sanitizer: A Fast and Device-Agnostic Memory Sanitizer for Triton with Rich Diagnostic Context

#Triton #ROCm #DeepLearning #Package

https://hgpu.org/?p=30696

Triton-Sanitizer: A Fast and Device-Agnostic Memory Sanitizer for Triton with Rich Diagnostic Context

Memory access errors remain one of the most pervasive bugs in GPU programming. Existing GPU sanitizers such as compute-sanitizer detect memory access errors by instrumenting every memory instructio…
