Introducing Phind-405B and faster, high-quality #AI answers for everyone

🚀 Phind-405B: New flagship #llm, based on Meta Llama 3.1 405B, designed for programming & technical tasks. #Phind405B

⚡ 128K-token context (32K window at launch), 92% on HumanEval, great for web app design. #Programming #AIModel

💡 Trained on 256 H100 GPUs with FP8 mixed precision, 40% memory reduction. #DeepSpeed #FP8

⚡ Phind Instant Model: Super fast, up to 350 tokens/sec, based on Meta Llama 3.1 8B. #PhindInstant

🚀 Runs on NVIDIA TensorRT-LLM with flash decoding, fused CUDA kernels. #NVIDIA #GPUs

๐Ÿ” Faster Search: Prefetches results, saves up to 800ms latency, better embeddings. #FastSearch

๐Ÿ‘จโ€๐Ÿ’ป Goal: Help developers experiment faster, new features coming soon! #DevTools #Innovation

https://www.phind.com/blog/introducing-phind-405b-and-better-faster-searches