Sam Altman says ChatGPT should be 'much less lazy now'

https://lemmy.world/post/11597944

ChatGPT users previously complained that the chatbot was slacking off and refusing to complete some tasks.

PSA: give open-source LLMs a try, folks. If you’re on Linux or macOS, ollama makes it incredibly easy to try most of the popular open-source LLMs like Mistral 7B, Mixtral 8x7B, CodeLlama, etc. Obviously it’s faster if you have a CUDA/ROCm-capable GPU, but it still works in CPU mode too (albeit slowly if the model is huge), provided you have enough RAM.

You can combine that with a UI like ollama-webui or a text-based UI like oterm.
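
If you want to give it a spin, the quick start is something like this (a sketch assuming ollama is already installed; model names are whatever the ollama library offers at the time):

```sh
# Download a model from the ollama library (Mistral 7B here, ~4 GB quantized)
ollama pull mistral

# Chat with it interactively in the terminal
ollama run mistral

# Or ask a one-off question non-interactively
ollama run mistral "Summarize what a Mixture-of-Experts model is."
```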

ROCm? Is that even supported now? Last time I checked it was still a dumpster fire. What are the RAM and VRAM requirements for Mixtral 8x7B?

ROCm is decent right now; I can do deep learning stuff and CUDA-style (HIP) programming with it on an AMD APU. However, ollama doesn’t work out-of-the-box with APUs yet, though users report that it works with dedicated AMD GPUs.
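
For anyone who wants to try anyway: the workaround people usually mention for unsupported APUs is spoofing a supported GPU target via an environment variable. Whether it works depends on your exact chip, so treat this as a sketch:

```sh
# Common workaround for officially-unsupported AMD GPUs/APUs:
# pretend to be a supported gfx target (10.3.0 = RDNA2; adjust for your chip)
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Check whether PyTorch's ROCm build can see the GPU now
# (ROCm builds of PyTorch reuse the torch.cuda API)
python -c "import torch; print(torch.cuda.is_available())"
```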

As for Mixtral 8x7B, I couldn’t run it on a system with 32GB of RAM and an RTX 2070S with 8GB of VRAM; I’ll probably try another system soon. That same system runs CodeLlama-34B fine, though.
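
Rule of thumb: the quantized weights have to fit in VRAM + RAM with some headroom, and ollama reports model sizes, so you can sanity-check before pulling something you can’t run:

```sh
# List locally pulled models with their on-disk sizes;
# the default 4-bit Mixtral 8x7B is on the order of 26 GB,
# which is why 32GB RAM + 8GB VRAM ends up borderline
ollama list
```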

So far I’m happy with Mistral 7B: it’s extremely fast on my RTX 2070S, and not really slow in CPU mode on an AMD Ryzen 7. Its speed is okayish (~1 token/sec) when I try it in CPU mode on an old ThinkPad T480 with an 8th-gen i5 CPU.
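
If you want actual numbers instead of vibes, ollama can print its own timing stats:

```sh
# --verbose prints timing stats after each response,
# including the eval rate in tokens/second
ollama run mistral --verbose "Explain attention in one paragraph."
```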

I have a Ryzen APU, so I was curious and fiddled with it yesterday. I managed to bump the “VRAM” (the iGPU’s share of system RAM) up to 16GB. But xformers and flash-attention for LLM support aren’t officially supported on iGPUs, and I couldn’t get anything installed past PyTorch. It’s a step forward for sure, but it still needs lots of work.
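
For anyone poking at the same setup, the basic sanity checks I’d run first, assuming the ROCm userspace tools are installed:

```sh
# List the devices the ROCm runtime can see
# (the APU should show up as a gfx* target)
rocminfo | grep -i gfx

# Show VRAM allocation/usage as ROCm sees it
rocm-smi --showmeminfo vram
```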