Prompt caching: 10x cheaper LLM tokens, but how?

A far more detailed explanation of prompt caching than anyone asked for.