Show HN: sllm – Split a GPU node with other developers, unlimited tokens

Running DeepSeek V3 (685B) requires 8×H100 GPUs, which costs about $14k/month. Most developers only need 15-25 tok/s. sllm lets you join a cohort of developers sharing a dedicated node. You reserve a spot with your card, and nobody is charged until the cohort fills. Prices start at $5/mo for smaller models.

The LLMs are completely private (we don't log any traffic).

The API is OpenAI-compatible (we run vLLM), so you just swap the base URL. We currently offer a handful of models.
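
The swap looks roughly like this with the official OpenAI Python client; the base URL and model name below are illustrative placeholders, not confirmed endpoints:

    # Point the standard OpenAI Python client at the shared node instead of api.openai.com.
    # base_url and model are placeholders for illustration.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://sllm.cloud/v1",   # placeholder endpoint
        api_key="YOUR_SLLM_API_KEY",
    )

    resp = client.chat.completions.create(
        model="deepseek-v3",                # placeholder model id
        messages=[{"role": "user", "content": "Hello from the cohort!"}],
    )
    print(resp.choices[0].message.content)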

https://sllm.cloud

This is a great idea! I saw a similar (inverse) idea the other day for pooling compute (https://github.com/michaelneale/mesh-llm). What are you doing for compute in the backend? Are you locked into a cohort from month to month?