llama.cpp for GPU only
What use case would that be?
I can get around 12 tokens/s running 13B models in Q3_K_L quantization on my laptop, about 2.2 for 33B, and 1.5 for 65B (I bought 64 GB of RAM to be able to run the larger models lol). 7B was STUPID fast because the entire model fits inside my (8 GB) GPU, but 7B models mostly suck (Wizard-Vicuna-Uncensored is decent; every other one I've tried was not).
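For context, llama.cpp isn't GPU-only or CPU-only: it can offload a configurable number of layers to the GPU with the `-ngl`/`--n-gpu-layers` flag and run the rest on the CPU, which is why a 7B model that fits entirely in VRAM is so much faster than a 13B that only partially fits. A rough sketch (the model path and layer count here are placeholders, not from the thread):

```shell
# Hypothetical invocation: the model filename is a placeholder, and -ngl 20
# (number of transformer layers offloaded to the GPU) should be tuned to
# whatever fits in your VRAM; the remaining layers run on the CPU.
./main -m models/llama-13b.Q3_K_L.bin -ngl 20 -p "Hello" -n 64
```

Raising `-ngl` until you run out of VRAM is the usual way to find the sweet spot for a given model size.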