The French Navy has received its first «Balbuzard» maritime support aircraft
DRTriton: Large-Scale Synthetic Data Reinforcement Learning for Triton Kernel Generation
AutoKernel: Autonomous GPU Kernel Optimization via Iterative Agent-Driven Search
Clean up on aisle 7! Interesting idea - who will pay the bill?
US-based Portal Space Systems and Australian startup Paladin Space are combining forces to create and launch a scalable, commercial space debris clean-up service.
Paladin supplies its Triton debris identification and capture system, while Portal provides its maneuverable Starburst spacecraft. Target launch: Q2 2027. https://www.inc.com/chloe-aiello/these-two-startups-are-teaming-up-to-prevent-a-pearl-harbor-moment-in-space/91318935
#Portal #Paladin #Triton #Space #SpaceJunk #LEO #Starburst #SpaceCraft #SpaceDebris
Triton-Sanitizer: A Fast and Device-Agnostic Memory Sanitizer for Triton with Rich Diagnostic Context

Memory access errors remain one of the most pervasive bugs in GPU programming. Existing GPU sanitizers such as compute-sanitizer detect memory access errors by instrumenting every memory instruction…
SOL-ExecBench: Speed-of-Light Benchmarking for Real-World GPU Kernels Against Hardware Limits
The latest open source "release" branch build of Triton DataCenter is up. ( #triton )
https://smartdatacenter.topicbox.com/groups/sdc-discuss/Te13cfdcee3cec2a1-M70cf242fe8a7c9b3b23350fe
An Efficient Heterogeneous Co-Design for Fine-Tuning on a Single GPU
Learn the critical failure points when running LLM inference on Kubernetes, including resource constraints, operator compatibility, security, scalability, and monitoring best practices for production workloads.
#Kubernetes #LLMInference #Dynatrace #GPUResourceAllocation #ServiceMesh #NetworkPolicies #KEDA #TritonInferenceServer #Redis #Prometheus
https://dasroot.net/posts/2026/02/running-llm-inference-on-kubernetes-what-breaks-first/

Github Awesome (@GithubAwesome)
AutoKernel is a tool that automates GPU profiling and kernel optimization, using autonomous agents inspired by Andrej Karpathy's autoresearch. When a user specifies a PyTorch model, it automatically optimizes Triton kernels in the background, sparing model developers the considerable time otherwise spent manually watching and tuning profilers.

Building AI models and tired of staring at GPU profilers? AutoKernel does it for you. Inspired by Karpathy's autoresearch, it brings autonomous AI agents to GPU kernel optimization. Point it at any PyTorch model, go to sleep, and wake up to optimized Triton kernels.
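
The core loop such a tool runs is straightforward to picture: generate candidate kernel variants, benchmark each, keep the fastest. Here is a minimal, hypothetical sketch of that autotuning loop in plain Python; the names (`benchmark`, `autotune`, the candidate dict) are illustrative assumptions, not AutoKernel's actual API, and the lambdas stand in for agent-generated Triton kernels:

```python
import time

def benchmark(fn, n_iters=100):
    """Average wall-clock time of one call to a candidate implementation."""
    start = time.perf_counter()
    for _ in range(n_iters):
        fn()
    return (time.perf_counter() - start) / n_iters

def autotune(candidates):
    """Measure every candidate and return (best_name, all_timings)."""
    timings = {name: benchmark(fn) for name, fn in candidates.items()}
    best = min(timings, key=timings.get)
    return best, timings

# Stand-ins for generated kernel variants; a real agent would emit and
# compile Triton code here, then benchmark it on the GPU.
candidates = {
    "naive": lambda: sum(i * i for i in range(1000)),
    "builtin": lambda: sum(map(lambda i: i * i, range(1000))),
}

best, timings = autotune(candidates)
print(f"best variant: {best}")
```

An agent-driven system wraps this loop with a code-generation step (propose a new variant based on profiler feedback) and a correctness check before accepting a faster kernel.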