If you're interested in using Mistral Vibe for LLM-aided development - here's a tidbit of info that will make your life easier:

The context length you set in Vibe's config needs to be some amount of tokens _lower_ than the largest context your hosted LLM supports.

Vibe will automatically compact the history when you're close to maxing out the context length, but the compaction won't reliably stay below that setting. So if you set it equal to your LLM's maximum, the hosted model can run out of memory.
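As a concrete sketch: if your server exposes a 131072-token context window, set Vibe's limit a few thousand tokens below it to leave headroom for the overshoot. The key names below are illustrative assumptions, not the exact Vibe config schema - check your own config file for the real field names:

```toml
# Hypothetical Vibe config sketch - key names are assumptions.
# The hosted model advertises 131072 tokens of context;
# we leave headroom so auto-compaction can overshoot safely.
[model]
context_length = 120000  # several thousand tokens below the server's 131072 max
```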

#MistralVibe