Gotta say, I have been quite impressed with MiMo V2 Pro. I have been kicking the tires (via free preview) in OpenCode for a few weeks. So far, it is the only LLM that I have used where the code output is mostly correct.

> "Yeah, this is how I would write it."

For context, I have access to both Claude Code and Cursor. My employer is footing the bill. But using MiMo is the first time AI output actually seems worthwhile, from a "bang for the buck" standpoint.

> "Hmm. I could see myself paying out-of-pocket for this one."

Whereas with both Claude and Cursor, it is more like… Meh. They exist. I can see their merits. But a personal recurring subscription would hardly seem justifiable, given the rework required to produce something usable.

That said, Cursor is absolutely unrivaled in terms of UX in their app. Pressing `Ctrl+L` to send lines of code to the AI agent chat is genius. Beyond that though, the actual quality generated by Cursor (and Claude) tends to be hit-or-miss.

https://mimo.xiaomi.com/mimo-v2-pro

#ai #claude #cursor #mimo


@nathansmith Trying it now, seems quite good so far - but do you find it slow?

@theothernt

In what way?

– Laggy?

– Or the "think" time?

Occasionally, the connection itself can be a little flaky.

But I assume that is because it is being stress tested via lots of free users.

So far today, I have hit a few throttling issues. That is the first time I have run into that, though. Normally it has been pretty stable.

@nathansmith The "Think" time, but I think you're right - it was probably due to very high usage at the time.

Results were very good with a Kotlin/Android project. I just wish I had known about the free promotion a week ago, so I would have had more time to test the model.

There are lots of Android/Compose-related things that most models, even Codex 5.4, get wrong. Opus 4.6 usually saves the day!