LLaMA: Open and Efficient Foundation Language Models - Meta Research

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to...


#ChatLLaMA is here!?

It only took a few days for someone to implement a #ChatGPT clone based on Meta AI's #LLaMA

So far this is just an optimized open-source implementation of the training code, created by Nebuly. We don't really know how good the model will be with the currently available training data.

https://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllama

nebullvm/apps/accelerate/chatllama at main · nebuly-ai/nebullvm

Plug and play modules to optimize the performances of your AI systems 🚀

GitHub
@gdm The license on the checkpoints, and the approval process, don't seem very "open" to me...

@msw

Indeed, for now this is just open science; the models are not genuinely open source.

I have heard that there are plans to make the models available via HuggingFace, which should make it easier to access them.

More broadly, this is just one of several efforts towards more open models.