OpenCL 3.1 is Here

On the eve of IWOCL 2026, the Khronos® OpenCL Working Group has released OpenCL™ 3.1, bringing widely deployed, field-proven capabilities, including SPIR-V ingestion, into the core specification, so that developers can rely on them across conformant implementations. The new specification arrives into a growing OpenCL ecosystem, with implementations from multiple silicon vendors, particularly in mobile and embedded markets.

The Khronos Group

OpenCL 3.1 is here.

The Khronos Group has moved several capabilities into the core spec, including SPIR-V kernels, subgroups, and integer dot products.
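To illustrate what the newly core integer dot product feature (from cl_khr_integer_dot_product) computes, here is a plain-Python sketch of a packed 4x8-bit signed dot product with 32-bit accumulation; the helper names are illustrative, not part of the OpenCL API.

```python
# Sketch of the 4x8-bit packed dot product that OpenCL 3.1's integer
# dot product feature (formerly cl_khr_integer_dot_product) provides
# in hardware. Helper names here are illustrative, not OpenCL APIs.

def unpack_s8x4(packed: int) -> list[int]:
    """Split a 32-bit word into four signed 8-bit lanes (little-endian)."""
    lanes = []
    for i in range(4):
        b = (packed >> (8 * i)) & 0xFF
        lanes.append(b - 256 if b >= 128 else b)  # sign-extend
    return lanes

def dot_4x8_acc(a: int, b: int, acc: int) -> int:
    """Signed 8-bit dot product of two packed words, accumulated in 32 bits."""
    return acc + sum(x * y for x, y in zip(unpack_s8x4(a), unpack_s8x4(b)))

# Example: a = (1, 2, 3, -4), b = (5, 6, 7, 8) packed little-endian.
a = (1 & 0xFF) | (2 << 8) | (3 << 16) | ((-4 & 0xFF) << 24)
b = 5 | (6 << 8) | (7 << 16) | (8 << 24)
print(dot_4x8_acc(a, b, 0))  # 1*5 + 2*6 + 3*7 + (-4)*8 = 6
```

This is the arithmetic that quantized machine-learning inference leans on, which is why having it guaranteed in core matters.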

The release also includes improvements to the memory model and synchronization, plus better alignment with Vulkan via device UUID queries.

Implementations are already underway across major vendors and open source projects.

- Full Blog: https://www.khronos.org/blog/opencl-3.1-is-here
- OpenCL specification GitHub
- Khronos Discord

#OpenCL #HPC #GPU #Compute #SPIRV

Newest #IntelArc #GPU family member is here, the Panther Lake Arc B390... and it... purrs? 🖖 🥺 🐈‍⬛
My OpenCL-Benchmark on the B390 measures ~7.4 TFLOPs FP32 and ~120 GB/s memory bandwidth. hw-smi also works with the B390.
#FluidX3D benchmarks here: https://github.com/ProjectPhysX/FluidX3D#single-gpucpu-benchmarks
And the #OpenCL info:
- Arc B390: https://opencl.gpuinfo.org/displayreport.php?id=6718
- Core Ultra X7 358H: https://opencl.gpuinfo.org/displayreport.php?id=6717

The OpenCL Working Group has published the first in a series of cooperative matrix extensions — and your feedback can help shape them before finalization.

cl_khr_cooperative_matrix brings cooperative matrix load, store, and multiply-add to OpenCL, developed with Arm, Intel, and Qualcomm. A companion OpenCL C language extension is also in RFC.
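The core operation the extension exposes is a tile-level multiply-add, D = A×B + C, performed cooperatively by a subgroup. As a rough illustration of the arithmetic only (not the extension's API, which is still a draft under review), here is a scalar Python sketch:

```python
# Scalar sketch of the cooperative matrix multiply-add, D = A*B + C,
# that cl_khr_cooperative_matrix exposes at the subgroup level.
# In the real extension the tile is distributed across subgroup lanes;
# here one function computes the whole MxN tile for clarity.

def matmul_add(A, B, C):
    """D = A @ B + C for an MxK A, KxN B, and MxN C (lists of lists)."""
    M, K, N = len(A), len(B), len(B[0])
    return [[C[i][j] + sum(A[i][k] * B[k][j] for k in range(K))
             for j in range(N)] for i in range(M)]

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
C = [[1, 0],
     [0, 1]]
print(matmul_add(A, B, C))  # [[20, 22], [43, 51]]
```

Hardware matrix units execute this tile product in a handful of instructions, which is what makes exposing it through a portable extension worthwhile.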

Review and comment:
🔗 Spec draft: https://github.com/KhronosGroup/OpenCL-Docs/pull/1533
🔗 Clang RFC: https://discourse.llvm.org/t/rfc-clang-frontend-changes-for-opencl-c-cooperative-matrix-extension/90148
🔗 Full blog: https://www.khronos.org/blog/opencl-cooperative-matrix-extensions-are-here
#OpenCL #SPIRV

IWOCL 2026 is next week!

Join the global OpenCL and SYCL community in Heilbronn, Germany (May 6–8) for the premier forum dedicated to open compute languages and heterogeneous platform programming. The program includes the latest technical talks, Khronos Working Group updates, application case studies, and ample opportunity to connect with peers across industry and academia.

Registration remains open: www.iwocl.org

See you there.
#IWOCL #OpenCL #SYCL #HPC #Heterogeneous #Compute

The countdown is on — IWOCL 2026 is just two weeks away.

Join the global OpenCL and SYCL community in Heilbronn, Germany (May 6–8) for the premier forum dedicated to open compute languages and heterogeneous platform programming. Expect the latest technical talks, Khronos Working Group updates, and ample opportunity to connect with peers across industry and academia.

Registration is open: www.iwocl.org
#IWOCL #OpenCL #SYCL #HPC #Khronos #HeterogeneousComputing

Training an LLM from Scratch in C#

We'll write a tiny 422 KB model from scratch in C#, save it in GGUF format, and run it in LM Studio. All it takes is one single component: ILGPU, which lets the model be trained via OpenCL, specifically on AMD integrated graphics.

https://habr.com/ru/articles/1017484/

#opencl #ai_and_machine_learning #ai #csharp #ai_model #development

Training an LLM from Scratch in C#

I don't have a powerful NVIDIA graphics card, and so none of the CUDA horsepower that has become the de facto standard in machine learning (ML). But I do want to build something like a neural network...

Habr