Llama 3.1 AI Models Have Officially Released

https://lemm.ee/post/37785388


Big day for people who use AI locally. According to the benchmarks, this is a big step forward for free, small LLMs.

128k token context is pretty sweet. Mistral Nemo also just launched with a similar context window. Good times.
How does the Nemo 12B compare to the Llama 3.1 8B?

At long context, Nemo is way better than llama 8B in my testing.

Turns out they are both very sensitive to quantization though.

Yeah, there’s a massive negative circlejerk going on, but mostly with parroted arguments. Being able to locally run a model with this kind of context is huge. Can’t wait for the finetunes that will result from this (*cough* NeverSleep’s *-maid models come to mind).
I'm looking into doing it on the 12B for myself, TBH; not so much for RP as for novel-style prose.

Ah, that’s a wonderful use case. One of my favourite models has a storytelling LoRA applied to it; maybe that would be useful to you too?

At any rate, if you’d end up publishing your model, I’d love to hear about it.

NyxKrage/Chronomaid-Storytelling-13b · Hugging Face


Oh, my friend, you have to switch to this: huggingface.co/BeaverAI/mistral-doryV2-12b

It’s so much smarter than llama 13B. And it goes all the way out to 128K!

BeaverAI/mistral-doryV2-12b · Hugging Face


Oof - not on my 12 GB 3060 it doesn’t :/ Even at 48k context and Q4_K quantization, ollama is doing a lot of offloading to the CPU. What kind of hardware are you running it on?
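
(For the curious, a rough back-of-envelope on why it spills, assuming Nemo’s usual specs of 40 layers, 8 KV heads and head_dim 128 with an fp16 KV cache; the numbers are estimates, not measurements:)

```python
# Rough estimate of VRAM use for Nemo 12B at 48k context on a 12 GiB card.
# Architecture numbers are assumptions based on Mistral Nemo's published specs.

layers, kv_heads, head_dim = 40, 8, 128
bytes_per_token = 2 * layers * kv_heads * head_dim * 2   # K and V, fp16
ctx = 48 * 1024

kv_cache_gib = bytes_per_token * ctx / 1024**3           # ~7.5 GiB
weights_gib = 12.2e9 * 4.85 / 8 / 1024**3                # ~6.9 GiB at ~Q4_K_M
print(f"KV cache ~{kv_cache_gib:.1f} GiB, weights ~{weights_gib:.1f} GiB")
# Already past 12 GiB before activations/overhead, so layers get
# pushed off to the CPU.
```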

A 3090.

But it should be fine on a 3060

Dump ollama for long context. Grab a 6bpw exl2 quantization and load it with a Q4 or Q6 cache, depending on how much context you want. I personally use EXUI, but text-gen-webui and tabbyapi (with some other frontend) will also load them.
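
If you’d rather script it directly, here’s a minimal sketch using the exllamav2 Python API with a quantized cache. The model path and context length are placeholders, and the exact API can differ between exllamav2 versions:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q4, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Placeholder path to a 6bpw exl2 quant of the model
config = ExLlamaV2Config("/models/mistral-doryV2-12b-6bpw-exl2")
config.max_seq_len = 48 * 1024                 # dial context to what your VRAM allows

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q4(model, lazy=True)    # Q4 KV cache, roughly 4x smaller than fp16
model.load_autosplit(cache)                    # spread layers across available VRAM
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Once upon a time,", max_new_tokens=200))
```

Same idea applies in tabbyapi/EXUI: pick the exl2 quant, set the cache mode to Q4 or Q6, and set max_seq_len to whatever context your card can actually hold.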