Interestingly, when building PoCL with the option to statically link against LLVM 18, the crash can be reproduced by simply running `clinfo -l` (basically the least intrusive OpenCL-using command you can run), and the error is the good old

: CommandLine Error: Option 'internalize-public-api-file' registered more than once!
LLVM ERROR: inconsistency in registered CommandLine options

during PoCL ICD device enumeration.
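For reference, a rough repro sketch (the `STATIC_LLVM` option and the `ocl-vendors` path are assumptions based on PoCL's usual CMake setup; check the build docs for your version):

```shell
# Configure PoCL to link LLVM statically (option name may vary by version)
cmake -S pocl -B build -DCMAKE_BUILD_TYPE=Release -DSTATIC_LLVM=ON
cmake --build build -j"$(nproc)"

# Point the ocl-icd loader at the freshly built driver; merely enumerating
# platforms/devices is enough to hit the duplicate-option registration error
OCL_ICD_VENDORS="$PWD/build/ocl-vendors" clinfo -l
```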

*sigh*

#llvm #clang #pocl #rusticl #mesa

Pimcore Inspire 2025: The +Pluswerk review

Two inspiring days in Salzburg are behind us - with exciting keynotes, great community spirit and lots of input on the future of digital platforms. Of course, +Pluswerk was also there. In our review, we take a look back at the highlights:

💡 The path from PIM/DAM to PXM
🧑‍💻 The new Pimcore Studio
🔐 The Pimcore Open Core Licence
🤝 Many exciting conversations

โžก๏ธ Read now: https://t1p.de/gkclz

Many thanks to the Pimcore community and the entire Pimcore team for the great organisation and the inspiring exchange in Salzburg! 💜

📌 Early Bird Pimcore Dev Day - Code: PimcoreInspire2025 until 9 May 🔗 https://t1p.de/0ldye

#PimcoreInspire #Pimcore #POCL #PXM #PIM #DAM #TechEvent


Oh look, #Nvidia makes CPUs now! And I got my hands on one! 🖖😋
Today I benchmarked #FluidX3D on Nvidia's #GH200, both #GPU and #CPU with #PoCL. Finally I can answer the question: How does that exotic 2-chip #HPC APU show up in #OpenCL?
--> It's 2 separate devices, a GPU with 94GB @ 4TB/s and a 72-core CPU with 480GB @ 384GB/s. The NVLink interconnect between the two is much faster than PCIe, achieving ~380GB/s host<->device bandwidth, only limited by poor misaligned VRAM BW on the GPU or RAM BW.
When I say #FluidX3D #CFD runs on every toaster, I mean it. I finally got it running on the AMD Athlon X2 QL-65 dual-core CPU of my very first computer, a Toshiba Satellite L500D I got in 2009. The CPU itself is from 2008, a year before #OpenCL even existed. Modern #PoCL makes it compatible. Does close to 3 MLUPs/s! 🔥🔥🔥
https://opencl.gpuinfo.org/listreports.php?devicename=cpu-x86-64-AMD+Athlon%28tm%29+X2+Dual-Core+QL-65&platform=
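For context, MLUPs/s (million lattice-cell updates per second) is just grid cells × timesteps ÷ runtime; a quick sketch with made-up numbers, not the actual benchmark grid:

```python
def mlups(nx, ny, nz, steps, seconds):
    """Million Lattice-cell Updates Per Second for an nx*ny*nz LBM grid."""
    return nx * ny * nz * steps / seconds / 1e6

# Hypothetical run: 128^3 grid, 100 timesteps in 70 s on an old dual-core CPU
print(round(mlups(128, 128, 128, 100, 70.0), 1))  # roughly 3 MLUPs/s
```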

I really don't understand why the #PoCL team won't review my patches on a Saturday night. It's a #weekend, what are you going to do otherwise?

;-)

So I actually tried to give the #HSA #PoCL driver a go, and while I didn't get support for my #AMD integrated #GPU (it should be doable), I discovered something interesting that I hadn't noticed before, since I had never compared the `clinfo` output for my iGPU between #Rusticl and the proprietary driver.

So here's the weird thing: they report a different number of compute units!
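One way to eyeball the discrepancy is to collect the "Max compute units" lines per platform from `clinfo` output; a small sketch (the sample text and numbers below are made up, and real `clinfo` output has far more fields):

```python
import re

def compute_units_by_platform(clinfo_text):
    """Collect 'Max compute units' values from clinfo-style output,
    grouped by the most recent 'Platform Name' line."""
    units, platform = {}, None
    for line in clinfo_text.splitlines():
        if m := re.match(r"\s*Platform Name\s+(\S.*\S|\S)", line):
            platform = m.group(1)
        elif (m := re.match(r"\s*Max compute units\s+(\d+)", line)) and platform:
            units.setdefault(platform, []).append(int(m.group(1)))
    return units

# Made-up sample mimicking two drivers reporting the same iGPU differently
sample = """\
  Platform Name    rusticl
  Max compute units    6
  Platform Name    AMD Accelerated Parallel Processing
  Max compute units    12
"""
print(compute_units_by_platform(sample))
```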

I'm still moderately annoyed by the fact that there's no single #OpenCL platform to drive all compute devices on this machine. #PoCL comes close because it supports both the CPU and the #NVIDIA dGPU through #CUDA, but not the #AMD iGPU (there's an #HSA driver, but…). #Rusticl supports the iGPU (radeonsi) and the CPU (llvmpipe), but not the dGPU (partly because I'm running that on the proprietary driver for CUDA). Everything else has at best one supported device out of the three available.

#PoCL 4.0 @openclapi Implementation Released With @IntelGraphics #oneAPI Level Zero Driver

https://www.phoronix.com/news/PoCL-4.0-Released

Original tweet: https://twitter.com/phoronix/status/1671820647540957185


🧵9/9
The source code for the experimental @FluidX3D P2P is available in this branch on #GitHub: https://github.com/ProjectPhysX/FluidX3D/tree/experimental-p2p

The PR for #PoCL with cudaMemcpy is available here: https://github.com/pocl/pocl/pull/1189

Credit and many thanks to Jan Solanti from Tampere University for visiting me at University of Bayreuth and testing this together with me, in his endeavour to implement/optimize #PoCL-Remote.
Thanks to @ShmarvDogg for testing P2P mode on his 2x A770 16GB "bigboi" PC!
🧵8/9