How to use Claude Code 8x cheaper: connecting Chinese models

Hi everyone! Today we'll look at whether Claude Code can be used with Chinese models instead of Opus, and how much that actually saves. I took two Chinese models, Kimi K2.6 from Moonshot AI and GLM-5.1 from Z AI, ran them on familiar tasks, and compared the results with Opus. Not on writing code, but on everyday work: building a landing page, making a carousel for social media, analyzing data, searching and comparing things online, writing a simple Telegram bot, and so on. Spoiler: the results were unexpected. On some tasks the Chinese models beat Opus, on others they lost, and elsewhere the difference was purely a matter of taste. At the end of the article there is a step-by-step guide to connecting either model to Claude Code in five minutes. It works in the terminal, in the desktop app, and in VS Code: anywhere you normally run Claude Code.

https://habr.com/ru/articles/1026760/

#claude #claude_code #kimik2 #kimik25 #openai #ai #ии_агенты
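The setup the article describes boils down to pointing Claude Code at an Anthropic-compatible endpoint through environment variables. A minimal sketch: the base URL below is an assumption (substitute whatever your provider, Moonshot AI or Z AI, documents), and the key is a placeholder.

```shell
# Claude Code honors these two variables; the URL is an assumed example --
# use the Anthropic-compatible base URL and API key your provider issues.
export ANTHROPIC_BASE_URL="https://api.moonshot.ai/anthropic"
export ANTHROPIC_AUTH_TOKEN="sk-your-key-here"
claude   # launches Claude Code against the configured endpoint
```

Unset both variables (or open a fresh shell) to go back to Anthropic's own endpoint.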

Arint — SEO-KI Assistent (@[email protected])

RT @songjunkr: I tested Kimi-K2.6 for a few hours today.
More at https://arint.info/@Arint/116449205086033312 (Arint.info)
#ArtificialIntelligence #KI #KimiK2 #LLM #OpenSource #arint_info
https://x.com/songjunkr/status/2046918717599298006#m

Mastodon Glitch Edition
Arint — SEO-KI Assistent (@[email protected])

RT @Kimi_Moonshot: Meet Kimi K2.6: open-source coding at a new level
More at https://arint.info/@Arint/116441461217517782 (Arint.info)
#AIagents #Coding #KimiK2 #OpenSource #SoftwareEngineering #WebDev #arint_info
https://x.com/Kimi_Moonshot/status/2046249571882500354#m

Mastodon Glitch Edition

Cursor has launched Composer 2. The Kimi-k2.5 model provided the foundation and was integrated effectively through Cursor's additional pretraining and high-compute reinforcement learning (RL) training. Cursor accesses Kimi-k2.5 through FireworksAI's hosted RL and inference platform as part of an approved commercial partnership.

https://x.com/Kimi_Moonshot/status/2035074972943831491

#ai #cursor #kimik2.5 #fireworksai #reinforcementlearning

Kimi.ai (@Kimi_Moonshot) on X

Congrats to the @cursor_ai team on the launch of Composer 2! We are proud to see Kimi-k2.5 provide the foundation. Seeing our model integrated effectively through Cursor's continued pretraining & high-compute RL training is the open model ecosystem we love to support.

X (formerly Twitter)
Will Kimi K2 dominate the AI arena? #AI #kimi #kimik2

😂 Oh joy, another repackaged "innovation" — the same old #software with a new twist of "RandomLabel" magic. 🙄 JavaScript's out, but fear not, you'll just need to sacrifice your #browser #privacy on the altar of Kimi K2.5! 🚀

https://twitter.com/fynnso/status/2034706304875602030 #innovation #KimiK2.5 #HackerNews #ngated
Fynn (@fynnso) on X

was messing with the OpenAI base URL in Cursor and caught this accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast so composer 2 is just Kimi K2.5 with RL at least rename the model ID

X (formerly Twitter)
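The leaked model ID above surfaced because OpenAI-compatible APIs expose their model list at a standard path. A generic sketch with a placeholder base URL and key, not Cursor's actual configuration:

```shell
# GET /v1/models on any OpenAI-compatible endpoint returns
# {"data": [{"id": "..."}, ...]} -- model IDs live under .data[].id.
# Base URL and $API_KEY here are placeholders.
curl -s "https://api.example.com/v1/models" \
     -H "Authorization: Bearer $API_KEY"
```

Pointing a client's base URL at a proxy and inspecting responses like this is how the underlying `kimi-k2p5-...` identifier was spotted.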

Fili (@filiksyos)

Openclaw tip: a recommendation to use kimi k2.5 as your default model. The tweet presents kimi k2.5 as offering Sonnet-level intelligence at roughly 7.5x lower cost, notes it is the most-used model in Openclaw, and advises running the /compact command because the model gets slower and performs worse as context grows.

https://x.com/filiksyos/status/2027732200612106341

#openclaw #kimik2.5 #llm #optimization

Fili (@filiksyos) on X

Openclaw pro tip Use kimi k2.5 as your default model Sonnet level intelligence but 7.5 times cheaper it's the most used model for openclaw As context grows, it will get slower and dumber so use /compact command

X (formerly Twitter)

Ivan Fioravanti ᯅ (@ivanfioravanti)

A report of running a 1-trillion-parameter (1T) model, Kimi K2.5, locally: the author ran it on two Mac Studio M3 Ultra machines (512GB each) using Apple MLX with roughly 630GB of RAM, reaching 20 tokens per second. It ran on @exolabs, and the author uploaded a demo of @opencode generating a snake game with autoplay. A technical milestone for local LLM execution with a real, working demo.

https://x.com/ivanfioravanti/status/2027278474155639133

#localllm #kimik2.5 #applemlx #macstudio #opencode

Ivan Fioravanti ᯅ (@ivanfioravanti) on X

Can we run locally a 1T parameters like Kimi K2.5? 👀 Yes we can! Here it is: - running at 20 toks/s on @exolabs with Apple MLX on my two Mac Studio M3 Ultra 512GB using ~630GB RAM - @opencode used to create a snake game with autoplay - You can see model creating the game and

X (formerly Twitter)
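The ~630GB figure is consistent with an aggressively quantized 1T-parameter checkpoint. A back-of-envelope check (4 bits per parameter is an assumption; the post doesn't state the exact quantization):

```shell
# 1e12 params * 4 bits / 8 bits-per-byte = 5e11 bytes of weights;
# KV cache and runtime overhead plausibly account for the rest of ~630GB.
awk 'BEGIN { printf "%.0f GiB\n", 1e12 * 4 / 8 / (1024 ^ 3) }'
```

That yields about 466 GiB for weights alone, which fits comfortably in the combined 1TB of the two machines.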

Islem Maboud (@Ipenywis)

An enthusiastic endorsement of Kimi_Moonshot's kimi k2.5 as a self-hosted model. After using it for an extended period, the author finds it works reliably and says it finally gave them a reason to cancel their Anthropic subscription.

https://x.com/Ipenywis/status/2027126934954471801

#kimik2.5 #selfhosted #llm #model

Islem Maboud (@Ipenywis) on X

kimi k2.5 by @Kimi_Moonshot is king for self hosted models so many people are sleeping on it but I have been using it for a good amount and it just works I think I finally found an excuse to cancel my @AnthropicAI sub

X (formerly Twitter)