Intel killed Arc Celestial gaming GPUs, and said nothing. 🚨

The B580 has no successor. Druid is "up in the air."

Gamers got ghosted. AI got everything.

Full story 👇
https://geekrealmhub.com/intel-arc-celestial-gaming-gpu-cancelled/

#IntelArc #GPU #PCGaming

Intel cancels Arc Celestial Gaming GPUs: Xe3 architecture pivots to AI

Intel has cancelled Arc Celestial gaming GPUs. Xe3P pivots to AI and datacenter. Will Druid save Intel's gaming GPU future in 2027?

Gaming & Tech Content for Geeks | Geek Realm Hub

Akshay (@akshay_pachaar)

NVIDIA and Unsloth have published a guide on making fine-tuning 25% faster. It presents systems-level techniques, such as packed-sequence metadata caching and double-buffered checkpointing, as the key optimizations that speed up GPU training. A useful hands-on resource for improving the efficiency of AI model training.

https://x.com/akshay_pachaar/status/2052029497386672510

#nvidia #unsloth #finetuning #gpu #llm

Akshay 🚀 (@akshay_pachaar) on X

NVIDIA + Unsloth just dropped a guide on making fine-tuning 25% faster. this is hands-down the cleanest systems-level writeup i've read. you'll learn how 3 optimizations help your gpu train models faster: 1. packed-sequence metadata caching 2. double-buffered checkpoint

X (formerly Twitter)
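
The two optimizations named in that thread are systems-level tricks. As a rough illustration (hypothetical code, not Unsloth's implementation), packed-sequence metadata caching amounts to memoizing the cumulative sequence-length offsets (`cu_seqlens`) that varlen attention kernels consume, instead of rebuilding them on every training step:

```python
from functools import lru_cache

# Hypothetical sketch of packed-sequence metadata caching.
# Varlen attention kernels consume cumulative sequence-length
# offsets ("cu_seqlens") for a packed batch. Recomputing them
# every step is redundant when packing layouts repeat, so we
# memoize on the tuple of sequence lengths.

@lru_cache(maxsize=1024)
def cu_seqlens(seq_lens: tuple) -> tuple:
    """Cumulative offsets for a packed batch: (0, l0, l0+l1, ...)."""
    offsets = [0]
    for n in seq_lens:
        offsets.append(offsets[-1] + n)
    return tuple(offsets)

def attention_metadata(seq_lens):
    """Look the offsets up from the cache instead of rebuilding them."""
    offs = cu_seqlens(tuple(seq_lens))
    return {"cu_seqlens": offs, "max_seqlen": max(seq_lens)}
```

For example, a packed batch of lengths [3, 5, 2] yields offsets (0, 3, 8, 10); the second call with the same layout is a cache hit.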

ACCU on Sea 2026 SESSION ANNOUNCEMENT: Bridging CPUs and GPUs with std::execution - Using Senders / Receivers as a Frame Graph by Al-Afiq Yeong

https://accuonsea.uk/2026/sessions/bridging-cpus-and-gpus-with-stdexecution-using-senders-receivers-as-a-frame-graph/

Register now at https://accuonsea.uk/tickets/

#cpu #gpu #cpp #coding

ACCU on Sea

Build fixes for #FreeBSD #ports graphics/drm-61-kmod and graphics/drm-66-kmod have landed. This was the show-stopper.

Now submitted a patch to upgrade the #NVIDIA #GPU #driver set to 595.71.05 as Bug 295058
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=295058

and opened corresponding review D56851.
https://reviews.freebsd.org/D56851

This seems to be a bugfix release.
https://www.nvidia.com/en-us/drivers/details/267226/

Info about Linux counterpart is here.
https://www.nvidia.com/en-us/drivers/details/267223/

295058 โ€“ x11/nvidia-driver{-devel}, x11/nvidia-kmod{-devel}, x11/linux-nvidia-libs{-devel}, graphics/nvidia-drm*-kmod{-devel}, x11/nvidia-settings, x11/nvidia-xconfig: Update to 595.71.05

GPU-accelerated terminal environment in Zig
Attyx is a GPU-accelerated terminal environment written in Zig that ships tmux-like features out of the box: sessions, splits, tabs, popups, a status line, and a command palette. It uses Metal on macOS and OpenGL on Linux, and weighs in at under 5 MB. The developer built it from scratch to understand how terminals work and to learn Zig, and it is now stable enough for daily use. It is an independent implementation, separate from existing GPU terminals such as Ghostty and Kitty.

https://github.com/semos-labs/attyx

#gpu #terminal #zig #programming #opensource

Hackable PyTorch RL Library with Distributional Algorithms (D4PG, DSAC, DPPO)
e3rl์€ PyTorch ๊ธฐ๋ฐ˜์˜ ๊ฐ•ํ™”ํ•™์Šต(RL) ๋ผ์ด๋ธŒ๋Ÿฌ๋ฆฌ๋กœ, GPU ์™„์ „ ํ™œ์šฉ์„ ๋ชฉํ‘œ๋กœ ์„ค๊ณ„๋˜์—ˆ์œผ๋ฉฐ D4PG, DSAC, DPPO ๋“ฑ ๋ถ„ํฌ์  ๊ฐ•ํ™”ํ•™์Šต ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ํฌํ•จํ•œ๋‹ค. CUDA, Apple Silicon(MPS), CPU๋ฅผ ์ง€์›ํ•˜๋ฉฐ, ๋‹ค์–‘ํ•œ gym ํ™˜๊ฒฝ์—์„œ ์‰ฝ๊ฒŒ ์‹คํ—˜ํ•  ์ˆ˜ ์žˆ๋„๋ก ์˜ˆ์ œ์™€ ํ•˜์ดํผํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ์ œ๊ณตํ•œ๋‹ค. ์—ฐ๊ตฌ ๋ฐ ๊ฐœ๋ฐœ์ž๋“ค์ด ๋ถ„ํฌ์  ๊ฐ•ํ™”ํ•™์Šต์„ ๋น ๋ฅด๊ฒŒ ์ ์šฉํ•˜๊ณ  ์‹คํ—˜ํ•  ์ˆ˜ ์žˆ๋Š” ์˜คํ”ˆ์†Œ์Šค ๋„๊ตฌ๋กœ ํ™œ์šฉ ๊ฐ€๋Šฅํ•˜๋‹ค.

https://github.com/e3ntity/e3rl

#reinforcementlearning #pytorch #distributionalrl #gpu #deeplearning

GitHub - e3ntity/e3rl: Fast and simple implementation of RL algorithms, designed to run fully on GPU.

Fast and simple implementation of RL algorithms, designed to run fully on GPU. - e3ntity/e3rl

GitHub
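
For context on the distributional algorithms the library lists (D4PG, DSAC, DPPO): their critics model a distribution over returns rather than a single scalar Q-value. A minimal sketch of that idea in plain NumPy, deliberately not using e3rl's API (all names and shapes here are illustrative):

```python
import numpy as np

# Illustrative sketch of the distributional-critic idea behind
# algorithms like D4PG (not e3rl's actual API). Instead of a scalar
# Q(s, a), the critic outputs a probability mass over a fixed
# support of return "atoms"; the scalar Q is the distribution's mean.

N_ATOMS, V_MIN, V_MAX = 51, -10.0, 10.0
support = np.linspace(V_MIN, V_MAX, N_ATOMS)  # fixed return atoms

def q_value(atom_logits: np.ndarray) -> float:
    """Collapse a categorical return distribution to a scalar Q."""
    probs = np.exp(atom_logits - atom_logits.max())
    probs /= probs.sum()                  # softmax over atoms
    return float(probs @ support)         # expected return
```

With uniform logits the distribution is flat and Q is the mean of the support (0 for a symmetric [-10, 10] range); logits concentrated on the top atom push Q toward 10.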

Sudo su (@sudoingX)

๋‹จ์ผ GPU ํ™˜๊ฒฝ์—์„œ TurboQuant ๋˜๋Š” KV-cache ์••์ถ• ๊ธฐ๋ฒ•์œผ๋กœ ๋งค์šฐ ๋†’์€ ์„ฑ๋Šฅ์„ ๋‹ฌ์„ฑํ•œ ์‚ฌ๋ก€๊ฐ€ ์žˆ์œผ๋ฉด ๊ณต์œ ํ•ด ๋‹ฌ๋ผ๋Š” ์š”์ฒญ์ด๋‹ค. ์‹ค์ œ๋กœ ํšจ๊ณผ๊ฐ€ ๊ฒ€์ฆ๋˜๋ฉด ์ง์ ‘ ํ…Œ์ŠคํŠธํ•˜๊ณ , ๊ฒฐ๊ณผ๋ฅผ ๊ณต๊ฐœํ•ด ๋‹ค์Œ ๊ฐœ๋ฐœ์ž๋“ค์ด ์ฐธ๊ณ ํ•  ์ˆ˜ ์žˆ๊ฒŒ ํ•˜๊ฒ ๋‹ค๊ณ  ๋ฐํ˜”๋‹ค.

https://x.com/sudoingX/status/2051747777814909353

#kvcache #quantization #gpu #llm #optimization

Sudo su (@sudoingX) on X

if you or someone you know has hit real crazy numbers on a single gpu setup with turboquant or any kv-cache compression scheme, point me. i will test it on my machines. if it delivers, i amplify you and your work, and ship the receipts publicly so the next builder does not have

X (formerly Twitter)
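
For readers unfamiliar with KV-cache compression: a common baseline is per-channel int8 quantization of the cached keys and values. The sketch below illustrates that generic baseline only (it is not TurboQuant, whose details the post does not describe):

```python
import numpy as np

# Generic per-channel int8 KV-cache quantization sketch (illustrative).
# Keys/values are stored as int8 codes plus one float scale per
# channel, cutting cache memory ~4x vs fp32 (~2x vs fp16), and are
# dequantized on read for the attention matmul.

def quantize_kv(kv: np.ndarray):
    """kv: (seq, channels) float -> (int8 codes, per-channel scales)."""
    scale = np.abs(kv).max(axis=0) / 127.0
    scale = np.where(scale == 0, 1.0, scale)      # avoid div-by-zero
    codes = np.clip(np.round(kv / scale), -127, 127).astype(np.int8)
    return codes, scale

def dequantize_kv(codes: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return codes.astype(np.float32) * scale

kv = np.random.randn(16, 8).astype(np.float32)
codes, scale = quantize_kv(kv)
# reconstruction error is bounded by half a quantization step per channel
err = np.abs(dequantize_kv(codes, scale) - kv).max()
```

The per-channel scale matters because key/value channels can have very different magnitudes; a single tensor-wide scale would waste most of the int8 range on the quiet channels.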

O que รฉ AMD? Conheรงa a histรณria de uma das lรญderes nos mercados de CPUs e GPUs

https://fed.brid.gy/r/https://tecnoblog.net/responde/o-que-e-amd-conheca-a-historia-de-uma-das-lideres-nos-mercados-de-cpus-e-gpus/

OpenCL 3.1 is here.

The Khronos Group has moved several capabilities into the core spec, including SPIR-V kernels, subgroups, and integer dot products.

Also includes improvements to the memory model and synchronization, plus better alignment with Vulkan via device UUID queries.

Implementations are already underway across major vendors and open source projects.

- Full Blog: https://www.khronos.org/blog/opencl-3.1-is-here?utm_medium=social&utm_source=mastodon&utm_campaign=OpenCL_3.1_is_here&utm_content=blog
- OpenCL specification GitHub
- Khronos Discord

#OpenCL #HPC #GPU #Compute #SPIRV

OpenCL 3.1 is Here

On the eve of IWOCL 2026, the Khronos® OpenCL Working Group has released OpenCL™ 3.1, bringing widely deployed, field-proven capabilities into the core specification to expand functionality, including SPIR-V ingestion, that developers will be able to rely on across conformant implementations. The new specification arrives into a growing OpenCL ecosystem, with implementations from multiple silicon vendors, particularly in mobile and embedded

The Khronos Group
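
One of the newly core features, integer dot products, targets int8 inference workloads. A rough reference model of the packed 4×8-bit dot product, written in Python purely for illustration (the real OpenCL built-ins operate on device vectors, and the little-endian lane packing assumed here is an illustration choice):

```python
# Reference model (in Python, for illustration only) of the 4x8-bit
# integer dot product that OpenCL 3.1 pulls into the core spec:
# a dot product over two 32-bit words viewed as four 8-bit lanes,
# the workhorse operation of int8 ML inference.

def dot_4x8_packed(a: int, b: int, signed: bool = False) -> int:
    """Dot product of two 32-bit words as four 8-bit lanes each."""
    acc = 0
    for i in range(4):
        x = (a >> (8 * i)) & 0xFF
        y = (b >> (8 * i)) & 0xFF
        if signed:                       # reinterpret lanes as int8
            x = x - 256 if x >= 128 else x
            y = y - 256 if y >= 128 else y
        acc += x * y
    return acc
```

For example, lanes (1, 2, 3, 4) against (4, 3, 2, 1) give 4 + 6 + 6 + 4 = 20; doing this as one hardware instruction instead of four multiplies and three adds is what makes the feature worth standardizing.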