Current status: Attempting to fix the build of devel/libclc on #HardenedBSD 16-CURRENT. #FreeBSD recently bumped #llvm in their main development branch to llvm 21.

Whenever the llvm major version is bumped in src, we need to bump the default llvm version in ports. This is because we build base libraries with LTO, and llvm's LTO implementation is not forward-compatible.

In other words, the default llvm version in ports must be greater than or equal to the version in src.
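The constraint can be sketched as a one-line check (a toy Python sketch; the function name is mine and is not part of the ports tree):

```python
def ports_llvm_ok(src_llvm_major: int, ports_llvm_major: int) -> bool:
    """True if ports' default llvm can consume LTO bitcode built by src's llvm."""
    # LTO bitcode is backward- but not forward-compatible: a newer llvm
    # can read older bitcode, but an older llvm cannot read newer bitcode.
    return ports_llvm_major >= src_llvm_major

# src just bumped to llvm 21, so ports must default to llvm >= 21.
assert ports_llvm_ok(21, 21)      # equal versions: fine
assert ports_llvm_ok(21, 22)      # ports newer: fine
assert not ports_llvm_ok(21, 19)  # ports older: cannot read the bitcode
```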

K3 is now natively supported in the mainline Linux kernel, with key SoC enablement already merged for Linux 7.0!

You can find the K3 DTS files under arch/riscv/boot/dts/spacemit/ in the upstream tree: k3.dtsi, k3-pico-itx.dts.

What’s in Linux 7.0 already (merged upstream):
- Basic DeviceTree
- Pinctrl, GPIO
- Clock, Reset
- UART, PMIC (p1), SDHCI (eMMC)

#RISCV #Spacemit #K3 #Linux #OpenSBI #Uboot #LLVM

My vision for the loop vectorizer in #LLVM has finally crystallized into a first patch! This line of work should have a major impact on both optimization results and compile time! 🎉

https://github.com/llvm/llvm-project/pull/195385

[VPlan] Embed widening decisions in recipes by artagnon · Pull Request #195385 · llvm/llvm-project

We currently make widening decisions in an ad-hoc fashion, and have helpers that unnecessarily do recursive reasoning over and over: isSingleScalar, onlyFirstLaneUsed, and onlyScalarValuesUsed. The...

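A toy sketch of the idea behind the patch (plain Python, not LLVM's actual VPlan API; all names here are illustrative): recomputing a property like isSingleScalar walks the operand graph on every query, while embedding the decision in the recipe turns later queries into a field read.

```python
# Toy model: each "recipe" has operands; deciding whether only a single
# scalar value is needed requires walking the operand graph.
class Recipe:
    def __init__(self, operands=()):
        self.operands = list(operands)
        self.single_scalar = None  # decision embedded after one analysis

def is_single_scalar_recursive(r):
    # ad-hoc helper: re-walks the whole graph on every call
    return all(is_single_scalar_recursive(op) for op in r.operands)

def embed_decision(r):
    # analyze once, store the result in the recipe itself
    if r.single_scalar is None:
        r.single_scalar = all(embed_decision(op) for op in r.operands)
    return r.single_scalar

leaf = Recipe()
root = Recipe([Recipe([leaf]), Recipe([leaf])])
embed_decision(root)
assert root.single_scalar is True  # later queries just read the field
```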

Low-Compilation-Cost Register Allocation in LLVM-Based Binary Translation

https://dl.acm.org/doi/abs/10.1145/3767295.3803591

#llvm

Title: P4: I have compiled PyTorch with CUDA and cuDNN. [2025-06-03 Tue]

my current process, which I share in my Gentoo
overlay as a package. #dailyreport #deeplearning #gentoo #llvm #clang #programming #toolchain #pytorch #caffe2

Title: P3: I have compiled PyTorch with CUDA and cuDNN. [2025-06-03 Tue]

(GCC: binutils, LLVM: lld) and ABI, and between “toolchain”
and “build pipeline”.

Gentoo STL:
- libstdc++: sys-devel/gcc
- libc++: llvm-runtimes/libcxx

Gentoo libc: sys-libs/glibc and sys-libs/musl

I learned how NVIDIA CUDA and cuDNN are distributed and what
tools PyTorch has.

Also, I updated my daemon+script to get most heavy #dailyreport #deeplearning #gentoo #llvm #clang #programming #toolchain #pytorch #caffe2

Title: P0: I have compiled PyTorch with CUDA and cuDNN. [2025-06-03 Tue]

PyTorch is mainly a Python library whose core is the
Caffe2 C++ library.

The main dependency of Caffe2 with CUDA support is
NVIDIA's "cutlass" library (a collection of CUDA C++
template abstractions). This library has CUDA code
that may be compiled either with nvcc, the NVIDIA CUDA
compiler distributed with nvidia-cuda-toolkit, or with LLVM #dailyreport #deeplearning #gentoo #llvm #clang #programming #toolchain #pytorch #caffe2

Title: P2: I have compiled PyTorch with CUDA and cuDNN. [2025-06-03 Tue]

compile PyTorch CUDA code with the Clang++ compiler.

I learned CMake config files and the difference between the
compiler runtime library (GCC: libgcc and libatomic,
LLVM/Clang: compiler-rt, MSVC: vcruntime.lib), the C
standard library (glibc, musl), the C++ standard library
(GCC: libstdc++, LLVM: libc++, MSVC: MSVC STL), and the linker #dailyreport #deeplearning #gentoo #llvm #clang #programming #toolchain #pytorch #caffe2
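The breakdown above can be summarized as a small table (a Python sketch; the MSVC linker name `link.exe` is my addition, the rest follows the post's breakdown):

```python
# Which component fills each toolchain role, per toolchain family.
toolchain = {
    "GCC":  {"compiler_rt": "libgcc/libatomic", "cxx_stdlib": "libstdc++", "linker": "binutils (ld)"},
    "LLVM": {"compiler_rt": "compiler-rt",      "cxx_stdlib": "libc++",    "linker": "lld"},
    "MSVC": {"compiler_rt": "vcruntime.lib",    "cxx_stdlib": "MSVC STL",  "linker": "link.exe"},
}

# The C standard library (glibc or musl on Linux) is chosen per-system,
# orthogonally to the compiler toolchain.
libc_options = ["glibc", "musl"]

assert toolchain["LLVM"]["linker"] == "lld"
assert toolchain["GCC"]["cxx_stdlib"] == "libstdc++"
```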