LLaMA: Open and Efficient Foundation Language Models - Meta Research

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to...


#ChatLLaMA is here!?

It took only a few days for someone to implement a #ChatGPT clone based on Meta AI's #LLaMA.

So far this is just an optimized, open-source implementation of the training code, created by Nebuly. We don't yet know how good the model will be with the currently available training data.
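
For context, a ChatGPT-style system is typically produced by fine-tuning a base language model with reinforcement learning from human feedback (RLHF): a reward model scores the policy model's outputs, and the policy is updated to increase expected reward. That training loop is the part ChatLLaMA reimplements on top of LLaMA. Below is a minimal toy sketch of the core idea in plain PyTorch; the tiny models, shapes, and names are illustrative stand-ins, not ChatLLaMA's actual API.

```python
# Toy RLHF-style policy update (REINFORCE): illustrative only, not ChatLLaMA's API.
import torch
import torch.nn as nn

vocab, d = 100, 32

# Stand-in "policy LM" and "reward model" (real ones would be LLaMA-scale transformers).
policy = nn.Sequential(nn.Embedding(vocab, d), nn.Linear(d, vocab))
reward_model = nn.Sequential(nn.Embedding(vocab, d), nn.Linear(d, 1))

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

prompt = torch.randint(0, vocab, (8,))               # batch of 8 single-token "prompts"
dist = torch.distributions.Categorical(logits=policy(prompt))
response = dist.sample()                             # policy samples responses
reward = reward_model(response).squeeze(-1).detach() # reward model scores them

# Push up the log-probability of high-reward responses.
loss = -(dist.log_prob(response) * reward).mean()
opt.zero_grad()
loss.backward()
opt.step()
```

Production systems (ChatGPT included) use PPO rather than plain REINFORCE, plus a KL penalty against the original model to keep outputs on-distribution, but the reward-weighted update above is the essential mechanism.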

https://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllama
