Llama 3.1 AI Models Have Officially Released

https://lemm.ee/post/37785388

Big day for people who use AI locally. According to the benchmarks, this is a big step forward for free, small LLMs.

128k token context is pretty sweet. Mistral Nemo also just launched with a similar context. Good times.
How does the Nemo 12B compare to the Llama 3.1 8B?

I haven’t given it very thorough testing, and I’m by no means an expert, but from the few prompts I’ve run so far, I’d have to hand it to Nemo on quality.

Using openrouter.ai, I’ve also given llama3.1 405B a shot, and that seems to be at least on par with (if not better than) Claude 3.5 Sonnet, whilst being a bit cheaper as well.
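If anyone wants to try it without setting anything up, OpenRouter exposes an OpenAI-compatible endpoint, so it’s only a few lines. A minimal sketch, assuming the model slug below is right (check their model list) and that your key is in an environment variable:

```python
# Minimal sketch: call Llama 3.1 405B through OpenRouter's OpenAI-compatible API.
# The model slug and env var name are assumptions; check openrouter.ai for the exact ID.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="meta-llama/llama-3.1-405b-instruct",  # assumed slug
    messages=[{"role": "user", "content": "Summarize the Llama 3.1 release in two sentences."}],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```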

Llama 70B is probably where it’s at if you go the API route. It’s distilled from 405B, and its benchmarks are pretty close.

At long context, Nemo is way better than llama 8B in my testing.

Turns out they are both very sensitive to quantization though.

Yeah, there’s a massive negative circlejerk going on, but mostly with parroted arguments. Being able to locally run a model with this kind of context is huge. Can’t wait for the finetunes that will result from this (*cough* NeverSleep’s *-maid models come to mind).
I am looking into doing it on the 12B for myself TBH, not so much for RP as for novel-style prose.

Ah, that’s a wonderful use case. One of my favourite models has a storytelling LoRA applied to it; maybe that would be useful to you too?

At any rate, if you’d end up publishing your model, I’d love to hear about it.

NyxKrage/Chronomaid-Storytelling-13b · Hugging Face

Oh, my friend, you have to switch to this: huggingface.co/BeaverAI/mistral-doryV2-12b

It’s so much smarter than llama 13B. And it goes all the way out to 128K!

Oof - not on my 12GB 3060 it doesn’t :/ Even at 48k context and the Q4_K quant, ollama is doing a lot of offloading to the CPU. What kind of hardware are you running it on?

A 3090.

But it should be fine on a 3060

Dump ollama for long context. Grab a 6bpw exl2 quantization and load it with Q4 or Q6 cache, depending on how much context you want. I personally use EXUI, but text-generation-webui and tabbyapi (with some other frontend) will also load them.
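For what it’s worth, the fp16 KV cache is probably what’s eating your VRAM: if I have Nemo’s config right (40 layers, 8 KV heads, head dim 128), the cache costs about 2 × 40 × 8 × 128 × 2 bytes ≈ 160 KB per token, so 48k tokens is roughly 7.5 GB before you even count the ~7 GB of weights. Quantizing the cache to Q4 roughly quarters that. Here’s a rough sketch of what that looks like with the exllamav2 Python API directly (the path and context length are just examples, not anything official):

```python
# Rough sketch: load a 6bpw exl2 quant with a Q4-quantized KV cache so a
# long context fits in limited VRAM. Model path is hypothetical.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q4, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config()
config.model_dir = "/models/mistral-doryV2-12b-6bpw-exl2"  # example local path
config.prepare()
config.max_seq_len = 49152  # ~48k context

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q4(model, max_seq_len=config.max_seq_len, lazy=True)
model.load_autosplit(cache)  # fills the GPU(s) instead of offloading to CPU

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Once upon a time", max_new_tokens=200))
```

tabbyapi wraps the same thing behind a config file if you’d rather keep an OpenAI-style endpoint in front of it.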

If forced to characterize the attitude of Lemmy towards LLMs/“AI,” I’d say people here are broadly interested in the tech but critical of the way it’s often used.

I dunno, with image models specifically it seems like they’re the devil because of the datasets they’re trained on, killing artists, and… that’s that. And LLMs to a lesser extent.

I think most people don’t realize how much of an inflection point local running vs. corporate hosting could be, which is especially ironic on Lemmy.

If by interested you mean willing to bullshit… Talking about AI here is like talking about evolution at bible camp in the deep south.
The loud minority is really loud.
My impression is that the general consensus is we don’t want huge corporations stealing data to train their AI models, only to turn around and cram them down our throats anywhere they can, with increasingly negative experiences. That said, while I generally agree, I still find the tech interesting, especially if I can host it myself.