Want to experiment with LLM serving? With the Docker backend, MLOX can provision LiteLLM + Ollama and wire them into your stack. Quickly test prompt flows, local models, or gateways without learning yet another deployment toolchain.

#LLM #MLOps #SelfHosting

@drbusysloth I have never tried LiteLLM, what is that?

@radhitya
It’s quite useful 🙂

LiteLLM is basically an open-source LLM gateway (plus a UI and much more), i.e. it gives you a single, OpenAI-style API to talk to many different models (OpenAI, Anthropic, local models via Ollama, etc.).

So instead of rewriting code for every provider, you can:
1. switch models easily
2. route between them
3. mix local + cloud setups
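
The "single OpenAI-style API" idea above can be sketched like this — a minimal illustration, assuming a LiteLLM proxy listening at `http://localhost:4000` (the address and model names here are placeholders, not MLOX defaults):

```python
import json

def chat_request(model: str, prompt: str) -> dict:
    # The payload shape is identical no matter which provider
    # the gateway routes to -- only the model name changes.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Cloud model vs. local model via Ollama: same code path.
cloud_req = chat_request("gpt-4o-mini", "Summarize this log file.")
local_req = chat_request("ollama/llama3", "Summarize this log file.")

# Either payload would be POSTed to the gateway's
# OpenAI-compatible endpoint, e.g.:
#   POST http://localhost:4000/v1/chat/completions
print(json.dumps(cloud_req, indent=2))
```

Swapping providers becomes a one-string change instead of a code rewrite, which is exactly what makes routing and mixing setups cheap.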

In MLOX I use it as a simple layer to experiment with different models / setups without much friction 🦥
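
For a feel of what "mixing local + cloud" looks like on the gateway side, here is a rough sketch of a LiteLLM proxy config — treat the exact keys and ports as assumptions to check against the LiteLLM docs, not as the MLOX-shipped config:

```yaml
# Hypothetical LiteLLM proxy config: two routes behind one API.
model_list:
  - model_name: gpt-4o            # name clients request
    litellm_params:
      model: openai/gpt-4o        # routed to the OpenAI API
  - model_name: local-llama       # name clients request
    litellm_params:
      model: ollama/llama3        # routed to a local Ollama instance
      api_base: http://localhost:11434   # assumed default Ollama port
```

Clients just pick a `model_name`; the gateway decides whether the request goes to the cloud or stays on your machine.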

@drbusysloth It looks helpful to me, to be honest.