Ollama cloud models for code review: an honest comparison with examples

AI is used more and more in development: code generation, autocompletion, agentic IDEs. A natural question follows: can an LLM be trusted with a full code review? In this article I put that to the test. I compared several models available through Ollama Cloud (Qwen 3.5, GPT-OSS and DeepSeek v3.1) and had them analyze real pull requests from a legacy Python project. Spoiler: some of the models did unexpectedly well.

https://habr.com/ru/articles/1010048/

#code_review #ollama #llm #ai_code_review #pull_request #github #open_source #deepseek #qwen #gptoss
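The experiment described above can be sketched roughly as follows, assuming the official `ollama` Python client and a cloud-capable model tag; the model name and the prompt wording here are illustrative, not the article's actual setup.

```python
# Sketch: asking an Ollama-hosted model to review a unified diff.
# Assumes the `ollama` Python client (pip install ollama) with a local or
# cloud-backed Ollama endpoint; the model tag below is an assumption.

def build_review_prompt(diff: str) -> str:
    """Wrap a unified diff in a code-review instruction."""
    return (
        "You are a senior Python reviewer. Point out bugs, style issues, "
        "and risky changes in this pull request diff:\n\n" + diff
    )

def review_diff(diff: str, model: str = "gpt-oss:120b") -> str:
    import ollama  # deferred so the prompt helper works without the client
    resp = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": build_review_prompt(diff)}],
    )
    return resp["message"]["content"]
```

Feeding each PR's diff through a helper like this, once per model, is enough to compare review quality side by side.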


New update for the slides of my talk "Run LLMs Locally":

Now including Reranking, Qwen 3.5 (slower than Qwen 3, but includes Vision) and loading models with Direct I/O.

https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf

#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai #mcp

One more update for the slides of my talk "Run LLMs Locally":

Now including text to speech with Qwen3-TTS and Model Context Protocol.

https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf

#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai #mcp

I updated the slides for my talk "Run LLMs Locally":

Now including image generation with Qwen3 and content classification from the Qwen3Guard Technical Report paper.

https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf

#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai

Python Trending (@pythontrending)

gpt-oss is introduced as OpenAI's open-weight language model project, comprising two open-weight models at 120B and 20B parameters (e.g. gpt-oss-120b and gpt-oss-20b). The release of open-weight models is significant for expanding community-driven research and applications.

https://x.com/pythontrending/status/2029150901940719972

#gptoss #openai #openweight #opensource #languagemodels

Python Trending 🇺🇦 (@pythontrending) on X

gpt-oss - 120b and gpt-oss-20b are two open-weight language models by OpenAI https://t.co/9P4PDeljog

X (formerly Twitter)
Tell ya what, #Mailroute + #Sortana / #gptoss has made my #email manageable for the first time in maybe 2 decades. Truly amazing.

Nico Martin (@nic_o_martin)

A tweet explaining how GPT-OSS (21B parameters) can run at 40 tokens per second in the browser on a laptop. The key is the MoE (Mixture-of-Experts) architecture, and @ariG23498 did a great job of making this complex topic easy to understand. A notable example of the performance now possible for browser-based large-model inference.

https://x.com/nic_o_martin/status/2027005638874730872

#gptoss #moe #llm #browser

🤷 Nico Martin (@nic_o_martin) on X

Do you want to know how on earth it is possible to run GPT-OSS (a 21B parameter model) at 40 tokens/second in the browser on a laptop? The key is MoE! And @ariG23498 did a great job of making this complex topic easy to understand. 👇
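The reason MoE makes this feasible is that each token activates only a few experts, so most of the model's parameters sit idle per step. A toy sketch of top-k expert routing (made-up numbers, not the real GPT-OSS router):

```python
# Toy Mixture-of-Experts routing: each token runs through only the top-k
# experts, so the compute per token is a small fraction of total parameters.
# Gate weights and experts here are illustrative, pure stdlib.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(token, experts, gate_weights, k=2):
    """Route a scalar `token` through only the k highest-scoring experts."""
    scores = softmax([w * token for w in gate_weights])
    top = sorted(range(len(experts)), key=lambda i: scores[i], reverse=True)[:k]
    norm = sum(scores[i] for i in top)
    # Only the selected experts do any work; the rest are skipped entirely.
    return sum(scores[i] / norm * experts[i](token) for i in top), top

experts = [lambda x, a=a: a * x for a in (1.0, 2.0, 3.0, 4.0)]
out, active = moe_layer(0.5, experts, gate_weights=[0.1, 0.3, 0.9, 0.2], k=2)
```

Here only 2 of 4 experts run per token; scale that sparsity up and a 21B-parameter model needs far less compute per token than a dense one.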


Naoto Iwase (@naoto_iwase)

A report publishing benchmark results from having several LLMs solve the 120th Japanese National Medical Licensing Examination (2026). Notably, GPT-OSS-Swallow-120B outperformed gpt-oss-120b and, despite lacking vision support, came close to huge models such as GPT-5.2 and Qwen3.5-397B. The dataset, code, and all model outputs are published on GitHub, making the results reproducible.

https://x.com/naoto_iwase/status/2026878501769589016

#llm #benchmark #gptoss #medical #opensource

Naoto Iwase (@naoto_iwase) on X

I had LLMs solve the 120th Japanese National Medical Licensing Examination (2026). GPT-OSS-Swallow-120B was especially impressive: it surpassed gpt-oss-120b and, despite having no vision support, approached huge models like GPT-5.2 and Qwen3.5-397B. The dataset, code, and all model outputs are public. https://t.co/5fodosXmXv


New talk coming tomorrow: Run LLMs locally

I will present it at @phpugmrn in Mannheim.

#llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #ocr #localai #security

Successfully plugged #cerebras #gptoss into the framework from the Recursive Language Models paper @ https://arxiv.org/abs/2512.24601

Getting ready to test it out on some long context problems. I've been thinking for a long time that a better way to handle context would be for the LLM to touch its content directly as little as possible. And that's what the paper authors did, putting everything in a REPL and manipulating context via variables. "Infinite context"!

#AI #ML #paper
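The "everything in a REPL" idea from the post above can be sketched as a toy: the long context lives in a REPL variable, and the model only ever sees the results of small snippets it runs against that variable, never the full text. The model here is faked out entirely; the environment plumbing is the point, and all names are made up for illustration.

```python
# Toy sketch of the Recursive-Language-Models-style setup: the full context
# is a variable (`ctx`) inside a REPL namespace, and the LLM interacts with
# it only through short code snippets whose results (`_`) it gets back.

class ReplContext:
    def __init__(self, context: str):
        # The entire long context is just a variable in the namespace.
        self.ns = {"ctx": context}

    def run(self, snippet: str) -> str:
        """Execute a model-proposed snippet; by convention `_` is the result."""
        exec(compile(snippet, "<snippet>", "exec"), self.ns)
        return str(self.ns.get("_", ""))

# ~600k characters of "context" that never reaches the model directly:
repl = ReplContext("ERROR at line 12\n" + "ok\n" * 100_000 + "ERROR at line 99\n")
hits = repl.run("_ = [l for l in ctx.splitlines() if l.startswith('ERROR')]")
```

Only `hits` (a couple of lines) would go back into the model's prompt, which is what makes the "infinite context" framing plausible: context size is bounded by what the snippets return, not by what `ctx` holds.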