Mind blown: lightweight models like SmolLM2, Granite 3.1 3B, or the tiny Qwen variants can no longer run CPU-only on a mini PC the way they used to. The community has been buzzing about it on Reddit/Discord, but there's no fix yet. CPU users are calling on Ollama to release an older build tagged "for CPU folks" instead of expecting everyone to own a 5090. #Ollama #AIđộibộ #CPUVina #Smollm2 #AIcreator


https://www.reddit.com/r/ollama/comments/1okhoit/since_123_ollama_doesnt_work_on_cpu_only_how_has/

Today I just want to quote smollm2:1.7b by Hugging Face:

"I don't know what to say, so I'll just say this:
Merry Christmas!"

hf_commit_hash=80befba1f034a5408e46a9aa03834e804170d7dc
prompt="Merry Christmas to you!"
seed=42
temperature=0.4
do_sample=true
num_beams=1
max_length=50
top_k=50
top_p=0.1
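For anyone who wants to reproduce the quote: the settings above read like plain `generate()` kwargs from the transformers library. A minimal sketch, with the settings collected into one place (the model id `HuggingFaceTB/SmolLM2-1.7B` and the exact call are assumptions, left commented rather than claimed):

```python
# Generation settings as listed in the post, reproduced verbatim.
# Pinning revision=HF_COMMIT is what makes the run reproducible.
HF_COMMIT = "80befba1f034a5408e46a9aa03834e804170d7dc"
PROMPT = "Merry Christmas to you!"
SEED = 42

GEN_KWARGS = dict(
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.4,  # low temperature keeps sampling fairly tame
    num_beams=1,      # no beam search
    max_length=50,    # total token cap, prompt included
    top_k=50,
    top_p=0.1,        # very tight nucleus -> conservative output
)

# With transformers + torch installed, the run would look roughly like
# this (untested here; model id is an assumption):
#   from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed
#   tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-1.7B", revision=HF_COMMIT)
#   model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-1.7B", revision=HF_COMMIT)
#   set_seed(SEED)
#   out = model.generate(**tok(PROMPT, return_tensors="pt"), **GEN_KWARGS)
#   print(tok.decode(out[0], skip_special_tokens=True))
```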

#huggingface #llm #smollm #smollm2 #python #torch #transformers #ai #citation #quote #quotes #commit #merrychristmas #merryxmas #christmas #tech #programming

Hugging Face's SmolLM2 is a new family of compact language models that outperform Llama 3.2 and Qwen 2.5 at comparable parameter counts on various benchmarks, available in sizes of 135M, 360M, and 1.7B parameters.

I just tested the 135M version with #ollama. It's very fast but not particularly intelligent, so it's better suited to simpler tasks like text classification or data preprocessing; helpfully, it also supports tool use.
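As a sketch of the kind of simple task it fits: Ollama exposes a local REST endpoint (`/api/generate`), so a one-off classification call can be built like this (the `smollm2:135m` tag and the prompt are my assumptions; the actual HTTP call is left commented since it needs a running daemon):

```python
import json

# Request body for Ollama's /api/generate endpoint.
# "stream": False asks for a single JSON response instead of chunks.
payload = {
    "model": "smollm2:135m",  # tag assumed from the ollama library
    "prompt": "Label the sentiment of this review as positive or negative: "
              "'The battery died after a day.'",
    "stream": False,
}

body = json.dumps(payload)

# With a running Ollama daemon this would be posted (untested here):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:11434/api/generate",
#       data=body.encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(json.loads(urllib.request.urlopen(req).read())["response"])
```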

#smollm2 #huggingface #ollama #llm #ai #programming #compact #llama #qwen #benchmarks