Intel killed Arc Celestial gaming GPUs, and said nothing.
The B580 has no successor. Druid is "up in the air."
Gamers got ghosted. AI got everything.
Full story:
https://geekrealmhub.com/intel-arc-celestial-gaming-gpu-cancelled/
Akshay (@akshay_pachaar)
NVIDIA and Unsloth have published a guide that speeds up fine-tuning by 25%. It covers system-level techniques that make GPU training faster, such as packed-sequence metadata caching and double-buffered checkpointing. Practical material for improving AI model training efficiency.

NVIDIA + Unsloth just dropped a guide on making fine-tuning 25% faster. This is hands-down the cleanest systems-level writeup I've read. You'll learn how 3 optimizations help your GPU train models faster: 1. packed-sequence metadata caching 2. double-buffered checkpoint
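To make the first optimization concrete, here is a minimal, hypothetical sketch of the packed-sequence metadata caching idea (not the guide's actual code): packed "varlen" attention kernels need cumulative sequence-length offsets, and since those depend only on the tuple of per-sequence lengths, batches with recurring length patterns can reuse a memoized result instead of recomputing it every step.

```python
from functools import lru_cache
from itertools import accumulate

# Hypothetical sketch; function names here are illustrative, not from the guide.
# Packed-sequence attention kernels consume cumulative sequence-length offsets
# (often called cu_seqlens). They depend only on the tuple of lengths, so we
# can memoize them rather than rebuild (and re-upload) them on every step.

@lru_cache(maxsize=1024)
def cu_seqlens(lengths: tuple) -> tuple:
    """Prefix sums with a leading 0: boundaries of each packed sequence."""
    return (0, *accumulate(lengths))

# Three sequences of lengths 3, 5, 2 packed into one flat buffer:
offsets = cu_seqlens((3, 5, 2))   # (0, 3, 8, 10)
```

In a real training loop the cached value would be a device tensor, so a cache hit also saves a host-to-device copy, which is where most of the win comes from.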
ACCU on Sea 2026 SESSION ANNOUNCEMENT: Bridging CPUs and GPUs with std::execution - Using Senders / Receivers as a Frame Graph by Al-Afiq Yeong
Register now at https://accuonsea.uk/tickets/
Build fixes for #FreeBSD #ports graphics/drm-61-kmod and graphics/drm-66-kmod are landed. This was the show-stopper.
Now submitted a patch to upgrade the #NVIDIA #GPU #driver set to 595.71.05 as Bug 295058
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=295058
and opened corresponding review D56851.
https://reviews.freebsd.org/D56851
This seems to be a bugfix release.
https://www.nvidia.com/en-us/drivers/details/267226/
Info about Linux counterpart is here.
https://www.nvidia.com/en-us/drivers/details/267223/
GPU-accelerated terminal environment in Zig
Attyx is a GPU-accelerated terminal environment written in Zig that ships tmux-like features out of the box: sessions, splits, tabs, a status bar, a command palette, and more. It uses Metal on macOS and OpenGL on Linux, and targets a lightweight binary under 5 MB. The author built it to understand how terminals work and to learn Zig, and it is currently stable enough for daily use. It was implemented independently of the existing GPU terminals Ghostty and Kitty.
Hackable PyTorch RL Library with Distributional Algorithms (D4PG, DSAC, DPPO)
e3rl is a PyTorch-based reinforcement learning (RL) library designed for full GPU utilization, including distributional RL algorithms such as D4PG, DSAC, and DPPO. It supports CUDA, Apple Silicon (MPS), and CPU, and ships working hyperparameters so it can be run easily across various gym environments. It is an open-source tool that lets researchers and developers quickly apply and experiment with distributional RL.
https://github.com/e3ntity/e3rl
#reinforcementlearning #pytorch #distributionalrl #gpu #deeplearning
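What makes algorithms like D4PG "distributional" is that the critic predicts a probability distribution over returns rather than a single value. Below is a hypothetical, dependency-free sketch of the categorical projection step those methods inherit from C51, where the Bellman-updated distribution is projected back onto a fixed support; this illustrates the idea only and is not e3rl's implementation.

```python
import math

def project_distribution(probs, atoms, reward, gamma, v_min, v_max):
    """Project the Bellman-shifted categorical distribution r + gamma*z
    back onto the fixed support `atoms` (the C51/D4PG projection step).
    `probs` are the next-state atom probabilities; illustrative sketch."""
    n = len(atoms)
    dz = (v_max - v_min) / (n - 1)
    m = [0.0] * n
    for p, z in zip(probs, atoms):
        tz = min(max(reward + gamma * z, v_min), v_max)  # clip to support
        b = (tz - v_min) / dz                            # fractional atom index
        l, u = math.floor(b), math.ceil(b)
        if l == u:                     # lands exactly on an atom
            m[l] += p
        else:                          # split mass between neighbouring atoms
            m[l] += p * (u - b)
            m[u] += p * (b - l)
    return m

# Support {0, 1, 2}, next-state probs [0.2, 0.5, 0.3], reward 0.5, gamma 1:
m = project_distribution([0.2, 0.5, 0.3], [0.0, 1.0, 2.0], 0.5, 1.0, 0.0, 2.0)
# m is approximately [0.1, 0.35, 0.55], still a valid distribution
```

The critic is then trained with cross-entropy against this projected target instead of an MSE value loss.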
Sudo su (@sudoingX)
A call for anyone who has hit very high numbers on a single-GPU setup using TurboQuant or any KV-cache compression scheme to share their results. The author says that if the effect checks out, they will test it on their own machines and publish the results so other developers can build on them.

if you or someone you know has hit real crazy numbers on a single gpu setup with turboquant or any kv-cache compression scheme, point me. i will test it on my machines. if it delivers, i amplify you and your work, and ship the receipts publicly so the next builder does not have
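For readers new to the topic, the simplest form of KV-cache compression is low-bit quantization of the cached keys and values. The sketch below is a hypothetical illustration of symmetric per-tensor int8 quantization; real schemes (including the TurboQuant mentioned above) are considerably more sophisticated, but the basic store-small / dequantize-on-read trade-off is the same.

```python
# Hypothetical illustration only; not TurboQuant's actual scheme.
# Symmetric int8 quantization: store one float scale plus int8 values,
# cutting fp32 KV-cache memory roughly 4x at the cost of rounding error.

def quantize_int8(values):
    """Map floats to int8 range [-127, 127] with one shared scale."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # avoid zero scale
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats on read."""
    return [x * scale for x in q]

kv = [0.03, -1.5, 0.75, 0.0]
q, s = quantize_int8(kv)
approx = dequantize(q, s)  # each entry within half a quantization step of kv
```

The quality question the post is asking about is exactly how much of this rounding error a model can absorb before generation quality drops.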
What is AMD? Learn the history of one of the leaders in the CPU and GPU markets
OpenCL 3.1 is here.
The Khronos Group has moved several capabilities into the core spec, including SPIR-V kernels, subgroups, and integer dot products.
Also includes improvements to the memory model and synchronization, plus better alignment with Vulkan via device UUID queries.
Implementations are already underway across major vendors and open source projects.
- Full Blog: https://www.khronos.org/blog/opencl-3.1-is-here?utm_medium=social&utm_source=mastodon&utm_campaign=OpenCL_3.1_is_here&utm_content=blog
- OpenCL specification GitHub
- Khronos Discord

On the eve of IWOCL 2026, the Khronos® OpenCL Working Group has released OpenCL™ 3.1, bringing widely deployed, field-proven capabilities into the core specification to expand functionality, including SPIR-V ingestion, that developers will be able to rely on across conformant implementations. The new specification arrives into a growing OpenCL ecosystem, with implementations from multiple silicon vendors, particularly in mobile and embedded