LLaMA: Open and Efficient Foundation Language Models - Meta Research

We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to...

Meta Research
@gdm The license on the checkpoints, and the approval process, doesn't seem very "open" to me...

@msw

Indeed, for now this is just open science; the models are not genuinely open source.

I have heard that there are plans to make the models available via Hugging Face, which should make them easier to access.

More broadly, this is just one of several efforts towards more open models.