Python Trending (@pythontrending)
Mistral has released 'mistral-vibe', a minimal CLI-based coding agent. It appears to be a lightweight command-line tool for assisting with code writing and automation, designed so developers can easily use an agent from a CLI environment.
I started experimenting with #opencode and #mistralvibe for some parts of my work, and either I'm dumb or I'm missing something, but can we somehow see individual changes inside #neovim?
I'm pretty sure I saw a solution for this in the past, but I can't find it now. When I recently opened undotree after changes made by Vibe, it was empty, and even my previous changes weren't in undotree.
#mistral #vibecoding #ai #gemini #geminicli #claudecode (the last two because the solution I saw was about them)
FUCK
I just had my first "LLMs can actually do shit" moment when it comes to software development. A while back I created two Python scripts that took the output from the meshtastic CLI application and rendered some nice output.
Today I asked the LLM (still a small local model, in no way as capable as the large ones) to refactor those two Python scripts into a full-fledged backend service with a web frontend and a few bells and whistles too.
... and it just did it. I had to fix some extremely minor stuff, but all in all, yes, this was mostly just a background thing running on my own local hardware for a few hours today.
Well ok. I have a nefarious reason for doing this. Once complete I will then attack the security of the application/service. I expect there to be issues, but my confidence has been slightly shaken just now.
If you're interested in using Mistral Vibe for LLM-aided development, here's a tidbit of info that will make your life easier:
The context length you set in Vibe's config needs to be some number of tokens _lower_ than the largest context your hosted LLM supports.
Vibe will automatically compact the history when you're close to maxing out the context length, but it won't manage to stay below the setting, so without this margin your hosted LLM will run out of memory.
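The rule of thumb above is just arithmetic, sketched here for clarity. The function name and the 8192-token headroom are illustrative assumptions, not Vibe's actual config schema or a recommended value; pick a margin that fits your model and workload.

```python
# Hedged sketch: picking a context length for Vibe's config that leaves
# room for auto-compaction to overshoot. All names/values are illustrative.

def safe_context_length(model_max_context: int, headroom: int = 8192) -> int:
    """Return a context length some margin below the model's maximum,
    so compaction overshoot never exceeds the hosted LLM's limit."""
    if headroom >= model_max_context:
        raise ValueError("headroom must be smaller than the model's max context")
    return model_max_context - headroom

# e.g. for a model serving a 131072-token window:
print(safe_context_length(131072))  # 122880
```

Put the resulting number in Vibe's config instead of the model's raw maximum; the headroom absorbs the overshoot from automatic compaction.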
I know I am late to the party. But holy cow, #vibecoding feels like magic.
Using #mistralvibe with iTerm2 on a simple laptop and … it just makes things work 🤯
🚀 #Devstral2 & #MistralVibe CLI: State-of-the-art #opensource agentic coding models #AI #coding #developers
🔥 #Devstral2 (123B) achieves 72.2% on SWE-bench Verified — SOTA for open-weight code agents under modified #MIT license
⚡ #DevstralSmall2 (24B) scores 68.0% on SWE-bench, runs locally on consumer hardware under #Apache2 license
💰 Up to 7x more cost-efficient than Claude Sonnet at real-world tasks
🧵 👇