Red Hat and Tesla engineers tackled a real production problem together.
The result: 3x output tokens/sec and 2x faster time to first token (TTFT) on Llama 3.1 70B with KServe + llm-d + vLLM. Fixes pushed upstream to KServe along the way.
This is what open source looks like. 🤝 🚀
https://llm-d.ai/blog/production-grade-llm-inference-at-scale-kserve-llm-d-vllm
#RedHat #Tesla #RedHatAI #vLLM #PyTorch #Kubernetes #OpenShift #KServe #llmd #Llama #OpenSource

Production-Grade LLM Inference at Scale with KServe, llm-d, and vLLM | llm-d
How migrating from a simple vLLM deployment to a robust MLOps platform built on KServe, llm-d's intelligent routing, and vLLM solved significant scaling and operational challenges, using deep customization and prefix-cache-aware routing to maximize GPU utilization.
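For the curious, here's a toy sketch of the prefix-cache-aware routing idea, not llm-d's actual scheduler: requests sharing a prompt prefix are pinned to the same vLLM replica so that replica's KV cache for the shared prefix can be reused. The replica names, `PREFIX_CHARS` cutoff, and `route` helper are all hypothetical, for illustration only.

```python
import hashlib

# Toy illustration of prefix-cache-aware routing (NOT llm-d's real implementation):
# requests that share a prompt prefix are hashed to the same vLLM replica,
# so the KV cache that replica already built for the prefix can be reused.

REPLICAS = ["vllm-pod-0", "vllm-pod-1", "vllm-pod-2"]  # hypothetical pod names
PREFIX_CHARS = 256  # how much of the prompt counts as the routable prefix


def route(prompt: str) -> str:
    """Pick a replica by hashing the prompt's leading characters."""
    prefix = prompt[:PREFIX_CHARS]
    digest = hashlib.sha256(prefix.encode("utf-8")).digest()
    return REPLICAS[int.from_bytes(digest[:8], "big") % len(REPLICAS)]


if __name__ == "__main__":
    system = "You are a helpful assistant. " * 8  # shared system prompt
    # Both requests share the prefix, so they land on the same replica,
    # and the second one can hit that replica's prefix cache.
    print(route(system + "Summarize this article."))
    print(route(system + "Translate this sentence to French."))
```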
