Github Awesome (@GithubAwesome)

AutoKernel is a tool that automates GPU profiling and kernel optimization, using autonomous agents inspired by Andrej Karpathy's autoresearch. Point it at a PyTorch model and it optimizes Triton kernels automatically in the background, saving model developers the hours otherwise spent manually watching and tweaking profilers.

https://x.com/GithubAwesome/status/2031933791342674364

#autokernel #pytorch #triton #gpuoptimization #autoresearch

The Hidden Engineering Behind Fast AI: How LLM Inference Actually Works

A deep dive into PagedAttention, speculative decoding, FlashAttention, and continuous batching — the clever tricks that make modern LLMs respond in milliseconds instead of minutes.

TechLife
YES SUCCEEDED!!!

Just rendered an image at 944×1152 (slightly above 1024×1024) using Flux1-Schnell-FP8 on my 6700 XT, and it works! (Image 1 is the Real-ESRGAN 2× upscaled version)

Workflow 1: Sampling (Image 2)

Prompt executed → UNet generates the latent

Step 1 (model load + latent generation) took 419 seconds

Output: Latent tensor saved to disk

Workflow 2: VAE Decode (Image 3)

Latent loaded → VAE decodes the image

Duration: 7.5 seconds

Advantage: UNet doesn’t need to stay in VRAM → VRAM freed, even on 12 GB GPUs

The problem with the stock LoadLatent Node

Dropdown only shows files if they were produced / annotated by a previous SaveLatent Node

Node is designed to pass latents inside a graph, not load arbitrary files from disk

Purpose: prevents accidentally loading wrong files

Workaround (Image 4)

Edited /ComfyUI/nodes.py, class LoadLatent

Hardcoded latent path → Node now loads directly from disk

Result: Workflow 2 runs instantly, UNet can be unloaded
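The split above can be sketched in miniature. This is a hedged stand-in, not ComfyUI code: NumPy arrays replace torch latents, `np.save`/`np.load` replace the SaveLatent/LoadLatent nodes, and the file path, channel count, and shapes are made up for illustration:

```python
import os
import tempfile

import numpy as np

# Hypothetical hardcoded path, standing in for the patched LoadLatent node
LATENT_PATH = os.path.join(tempfile.gettempdir(), "flux_latent.npy")

def workflow_1_sampling():
    """Stand-in for Workflow 1: UNet produces a latent, which is saved to disk."""
    # Latents are 8x smaller per side than the image: 944x1152 -> 118x144.
    # 4 channels is a placeholder; real Flux latents differ.
    latent = np.random.default_rng(0).standard_normal((1, 4, 144, 118)).astype(np.float32)
    np.save(LATENT_PATH, latent)  # SaveLatent stand-in
    # ...at this point the heavy UNet could be unloaded, freeing VRAM...
    return LATENT_PATH

def workflow_2_decode(path):
    """Stand-in for Workflow 2: load the latent from disk and 'decode' it."""
    latent = np.load(path)  # LoadLatent stand-in: direct path, no dropdown
    b, c, h, w = latent.shape
    # Fake VAE decode: just produce an image buffer at 8x the latent size
    return np.zeros((b, h * 8, w * 8, 3), dtype=np.uint8)

path = workflow_1_sampling()
image = workflow_2_decode(path)
print(image.shape)  # (1, 1152, 944, 3)
```

The point of the split is visible in the two functions: everything Workflow 2 needs lives in the file on disk, so nothing from Workflow 1 has to stay resident in VRAM.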

Timing

Step 1 (model load + latent generation): 419 s

Step 2 (VAE decode): 7.5 s

Result: High-res images on a 12 GB RDNA2 GPU are now possible on Flux1-Schnell-FP8 without ComfyUI crashing! (Image 5 is the original output)

This might actually become my new Flux workflow: render quick 512×512 previews first (which works perfectly on RDNA2 GPUs), sort out the good ones, extract the seed from the PNG metadata, and then re-render only the selected images with the same seed using the split workflow at higher resolutions. This way, high-resolution Flux1-Schnell-FP8 renders become possible on 12 GB RDNA2 GPUs :D
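Pulling the seed back out of a finished preview can be scripted. Below is a hedged sketch assuming Pillow is available and that the seed sits in a ComfyUI-style `prompt` text chunk; the chunk key, node layout, and sampler class name are assumptions, so adapt them to what your PNGs actually contain:

```python
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def extract_seed(png_path):
    """Read an assumed ComfyUI-style 'prompt' text chunk, return the first sampler seed."""
    chunks = Image.open(png_path).text  # tEXt/iTXt chunks as a dict
    prompt = json.loads(chunks["prompt"])
    for node in prompt.values():
        if "Sampler" in node.get("class_type", ""):
            return node["inputs"].get("seed")
    return None

# Demo: build a tiny PNG with a fake embedded prompt, then read the seed back.
meta = PngInfo()
meta.add_text("prompt", json.dumps(
    {"3": {"class_type": "KSampler", "inputs": {"seed": 271828}}}))
Image.new("RGB", (8, 8)).save("demo.png", pnginfo=meta)
print(extract_seed("demo.png"))  # 271828
```

With the seed in hand, it can be plugged back into the sampler node of the split high-resolution workflow to reproduce the chosen preview.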

Question at the end: Has anyone ever done this before? Because I have no clue xD

#ComfyUI #flux #Flux1SchnellFP8 #FP8 #AMD #RDNA2 #VAE #AIArt #Pixelfed #HighResolution #GPUOptimization #LatentWorkflow #AIWorkflow #AIHacks #RealESRGAN #Upscale #AIExperiment #CreativeAI #DigitalArt #AICommunity #python #linux #opensource #foss

⚡️ PP/s jumps 90% but TPS only improves 10–20% when using 2 GPUs (RTX Pro 6000 & 5090). Does anyone know how to optimize this? I'm running an AI server and need it fast! #AI #GPUOptimization #LlamaServer #MáyHọc #CôngNghệThôngTin

https://www.reddit.com/r/LocalLLaMA/comments/1qopgpp/llama_server_using_dual_gpus_pp_is_amazing_tps/

A look at Microsoft's Phi-2 AI model, a good fit for a PC with 12 GB RAM + 3 GB VRAM + GTX 1050 + Linux Mint. With Q4_K quantization, Phi-2 performs well on mid-range GPUs. Grab it from Hugging Face or TheBloke and try this non-commercial AI model yourself! #AIModel #Linux #TechVietnam #LocalLLaMA #Phi2 #GPUOptimization #AICommunity

https://www.reddit.com/r/LocalLLaMA/comments/1qm2yns/any_good_model_for_12_gb_ram_3_gb_vram_gtx_1050/

Qwen3 Next 80B with a 250k-token context runs entirely on a single 7900 XTX (24 GB) at 41 tok/s, using IQ2_XXS quantization with Q4_0 KV cache & FlashAttention. A big shift for LLM applications on a single card, with excellent code-handling ability. #Qwen3 #AILocal #GPUOptimization #LocalLLM #AIProgramming #MôHìnhHóaAI #LậpTrìnhViên

https://www.reddit.com/r/LocalLLaMA/comments/1pnnkxc/qwen3_next_80b_w_250k_tok_context_fits_fully_on/

A 5060 Ti build with overclocked RAM (6000 MHz) and a CUDA switch boosted LLaMA speed from 22 t/s to nearly 37 t/s. Total cost ~$2200, less than a 5090. #GPUoptimization #LLaMA #AI #tech #Performance

https://www.reddit.com/r/LocalLLaMA/comments/1oe8v21/5060ti_chads_ram_overclocking_the_phantom_menace/

Lenovo launches GPU Advanced Services, promising up to 30 percent faster AI performance

https://web.brid.gy/r/https://nerds.xyz/2025/09/lenovo-gpu-ai/

🎨💡 Imagine spending hours optimizing a GPU only to discover it's as pointless as a #penguin with a solar panel. 🤔 But hey, at least it makes for a riveting blog post nobody will read! 📚🔍
https://blog.speechmatics.com/pointless-gpu-optimization-exercise #GPUoptimization #blogpost #humor #techfails #HackerNews #ngated
An Almost Pointless Exercise in GPU Optimization | Speechmatics

Experience converting a multi-threaded C++ application to run faster on GPU. How to interpret Nsight Compute recommendations to improve an algorithm on GPU.
