merve (@mervenoyann)

A new repository, Kernels, has been released at @huggingface. It lets you package and distribute optimized kernels, and provides a workflow for benchmarking performance gains and sharing the improved kernels on the Hub.

https://x.com/mervenoyann/status/2044080953648128073

#huggingface #kernels #opensource #aiinfrastructure #benchmark


we just shipped Kernels, it's a new repo at @huggingface 💚 it allows for packaging and distribution of optimized kernels 🔥 vibe-optimize Kernels, benchmark gains and share them on Hub 🫵


clem (@ClementDelangue)

An announcement introducing the 'Kernels' feature on the Hugging Face Hub. GPU kernels can be shipped as easily as models: they come pre-compiled for your exact GPU, PyTorch version, and OS, are torch.compile compatible, and deliver 1.7x–2.5x speedups over PyTorch baselines.

https://x.com/ClementDelangue/status/2044053580504584349

#huggingface #gpu #pytorch #torchcompile #kernels


Introducing Kernels on the Hugging Face Hub ✨ What if shipping a GPU kernel was as easy as pushing a model? - Pre-compiled for your exact GPU, PyTorch & OS - Multiple kernel versions coexist in one process - torch.compile compatible - 1.7x–2.5x speedups over PyTorch baselines
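The workflow the tweet describes ("pushing a kernel like a model") can be sketched as below. This is a minimal, hypothetical sketch: the `kernels` package name, the `get_kernel` entry point, the `kernels-community/activation` repo, and the `gelu_fast` function are assumptions based on the announcement, and actually running it requires a CUDA GPU plus network access to the Hub.

```python
# Usage sketch for the Hugging Face `kernels` package.
# Assumptions (not verified here): `pip install kernels`, a CUDA GPU,
# Hub access, and a kernel repo named `kernels-community/activation`
# exposing a `gelu_fast` function.

def run_optimized_gelu():
    """Fetch a pre-compiled GELU kernel from the Hub and apply it."""
    import torch
    from kernels import get_kernel  # assumed entry point

    # get_kernel() would resolve a pre-built binary matching this exact
    # GPU, PyTorch version, and OS -- no local compilation step.
    activation = get_kernel("kernels-community/activation")

    x = torch.randn(8, 1024, device="cuda", dtype=torch.float16)
    y = torch.empty_like(x)
    activation.gelu_fast(y, x)  # assumed signature: writes result into y
    return y

if __name__ == "__main__":
    run_optimized_gelu()
```

Because the kernel is fetched as a versioned artifact rather than compiled at install time, multiple kernel versions can coexist in one process, as the tweet notes.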

Identify and clean up kernels from the terminal | Gnuxero

There are many ways to identify our kernel on a GNU/Linux system, both graphically and by using the l
