Going down the rabbit hole of testing local LLMs right now. I don't have a dedicated GPU, but 32 GiB of RAM should be enough for anyone.
#ai #huggingface #selfhost #localai #ollama #heretic #qwen #mistral