Sudo su (@sudoingX)

Describes a 27B local model writing its own benchmark report. Carnice-v2 27B locates its hardware, its model file, and the llama.cpp commit and runs a self-evaluation, showing the potential of local agentic AI.

https://x.com/sudoingX/status/2052051592770469894

#localmodel #benchmark #agenticai #llamacpp #qwen

Sudo su (@sudoingX) on X

watching a 27b local model write its own benchmark report just now and i'm sitting with this for a sec. gave carnice-v2 27b (kaios SFT on qwen 3.6 dense, trained on hermes agent traces) a self-report card task, find your hardware, find your model file, find the llama.cpp commit

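The "self-report card" task reduces to the agent shelling out for facts and writing them up. A minimal sketch of the kind of commands involved (model path and repo location are hypothetical placeholders, not from the post):

    uname -a                                        # hardware / OS
    ls -lh ~/models/carnice-v2-27b-*.gguf           # find the model file
    git -C ~/src/llama.cpp rev-parse --short HEAD   # find the llama.cpp commit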

Unsloth AI (@UnslothAI)

Published a guide on running open LLMs in Claude Code, Codex, and OpenClaw. Gemma 4 and Qwen3.6 GGUFs can be used for local agentic coding on 24GB of RAM, with self-healing tool calls, code execution, and web search supported via the Unsloth API and llama.cpp.

https://x.com/UnslothAI/status/2051669222011683045

#openllm #claudecode #codex #unsloth #llamacpp

Unsloth AI (@UnslothAI) on X

We made a guide on how to run open LLMs in Claude Code, Codex and OpenClaw. Use Gemma 4 and Qwen3.6 GGUFs for local agentic coding on 24GB RAM. Run with self-healing tool calls, code execution, web search via the Unsloth API endpoint and llama.cpp. Guide: https://t.co/VienFDSwcg

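The moving parts are a llama.cpp HTTP server plus the coding agent pointed at it. A hedged sketch of the server side (the model filename is a placeholder; the flags are standard llama-server options):

    llama-server -m gemma-4-Q4_K_M.gguf -c 16384 -ngl 99 --port 8080

Wiring the agent to that endpoint, e.g. Claude Code via its ANTHROPIC_BASE_URL environment variable plus whatever API-shape translation is needed, is exactly what the guide walks through.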

New week, more slides: Run LLMs Locally

Now with LFM 2 and new slides for using Transformers.js with WebGPU for Privacy Filter, Function Calling and Embeddings, running completely in your browser.

https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf

#ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai #gemma4 #nemotron #webgpu

Ivan Fioravanti ᯅ (@ivanfioravanti)

Reports working with llama.cpp on an M3 Ultra and an M5 Max: the Neural Accelerators help with prompt processing, but the M3 Ultra is still better at text generation. Also notes that the -np setting has a dramatic impact on batch-inference performance and that some strange results were observed.

https://x.com/ivanfioravanti/status/2051399357572759972

#llamacpp #apple #m3ultra #m5max #inference

Ivan Fioravanti ᯅ (@ivanfioravanti) on X

Working more and more on llamacpp on my M3 Ultra and M5 Max. Neural Accelerators help on prompt processing, but on text generation M3 Ultra wins. Still fighting on the batch inference, -np has dramatic impact on it and I get very strange results.

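Context for why -np is so sensitive: in llama-server, -np / --parallel sets the number of decoding slots, and the -c context budget is split across them, so each slot only gets c/np tokens and batching behavior shifts with it. A quick way to poke at it (model path is a placeholder):

    # terminal 1: server with 4 parallel slots, 4096 tokens of context each
    llama-server -m model.gguf -c 16384 -np 4 --port 8080

    # terminal 2: fire four requests at once and time the batch
    time ( for i in 1 2 3 4; do
      curl -s http://127.0.0.1:8080/v1/chat/completions \
        -H 'Content-Type: application/json' \
        -d '{"messages":[{"role":"user","content":"Count to 50."}]}' > /dev/null &
    done; wait )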

Гефестыч: Our Experience Automating Code Review with an LLM. Pitfalls, Solutions, Code

Hi, Habr! My name is Danil Chechkov, and I'm the Team Lead of the High End Meta Backend team at Lesta Games. We handle the entire web side of Mir Korabley (World of Warships). Our arsenal includes a huge number of microservices running on Python and Go. We are responsible for purchases in meta-currency, authorization, the stability of player inventory and profiles, clan services, and much more. Our core product is high-quality web services at the point of integration with the game. And yes, integration is part of our job. We also love new technologies and try to get to know them in order to assess how they can benefit the business and us. One such technology is LLMs.

https://habr.com/ru/companies/lesta/articles/1029670/

#llm #pydanticai #openwebui #llamacpp #ollama #rag #code_review #selfhosted #atlassian

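The article's stack is PydanticAI with a self-hosted model (the tags mention OpenWebUI, llama.cpp and Ollama), but the wire-level pattern is easy to show against the OpenAI-compatible endpoint that both llama.cpp's server and Ollama expose. A hedged sketch (endpoint and model name are placeholders, not the article's config):

    # review the current branch's diff with a local model
    DIFF=$(git diff origin/main...HEAD)
    jq -n --arg diff "$DIFF" '{
        model: "local",
        messages: [
          {role: "system", content: "You are a strict code reviewer. List bugs, risky changes, style issues."},
          {role: "user", content: $diff}
        ]}' \
      | curl -s http://127.0.0.1:8080/v1/chat/completions \
          -H 'Content-Type: application/json' -d @- \
      | jq -r '.choices[0].message.content'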

antirez (@antirez)

After reading a post about DS4, the author notes that his 2-bit quantized GGUF can be run right now and shares a llama.cpp-based repository for DeepSeek V4 Flash. He also says a vertical (purpose-built) DS4 inference engine is coming soon, hinting at further developments in open-source inference tooling.

https://x.com/antirez/status/2050628563380920509

#llamacpp #gguf #quantization #deepseek #opensource

antirez (@antirez) on X

@simonw Hi! Just read your post on DS4, please note that you can run my GGUF 2-bit quantized right now if you wish: https://t.co/etJop0b3VX And a vertical ds4 inference engine is coming soon, I'm on it. https://t.co/G3JkWaoLXk


CVE-2026-34159: the llama.cpp RPC backend has an unauthenticated, no-bounds-check RCE. A zero buffer field in deserialize_tensor() allows arbitrary memory read/write. No auth, low complexity, CVSS 9.8. Patch to b8492 immediately. #infosec #llamacpp #rce

https://www.valtersit.com/cve/2026/04/cve-2026-34159/

CVE-2026-34159 | Valters IT Hub
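Until you can patch, the practical mitigation is making sure the RPC port isn't reachable from untrusted networks. The RPC backend is opt-in (a separately started rpc-server), so checking whether anything is listening is quick; 50052 is the port used in llama.cpp's RPC examples, yours may differ:

    ss -ltnp | grep 50052   # is an rpc-server exposed on this host?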

Today's #anvilope update: you can now use llama.cpp for inference! Sort those emails!

https://git.sr.ht/~dvshkn/anvilope/commit/d5ad9a3

#llamacpp #rustlang

New week, new slides: Run LLMs Locally

Now including Nemotron 3 Nano Omni from Nvidia, Llama.cpp built-in tools and new slides about using Transformers.js with WebGPU for Image Recognition and OCR.

https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2026_ThomasBley.pdf

#ai #llm #llamacpp #stablediffusion #gptoss #qwen3 #glm #localai #gemma4 #nemotron #webgpu

My latest #canitbedone #software #project: a #bash #AI #coding assistant. A very hacky combination of bash, sed, awk, wc, grep, jq and of course #curl, glued together in about 3k lines of bash source code. As the #LLM it uses a local instance of #modelGemma4 running in #llamacpp. Surprisingly, it works better than expected, with fewer missed edits than GitHub Copilot on the same task. Apart from these #linux command-line tools it has zero dependency on any AI frameworks.
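For a sense of how little glue that takes: llama.cpp's server speaks OpenAI-style JSON over HTTP, so curl and jq cover the whole LLM side. A hedged sketch of the kind of helper such a script is built around (not the author's code; endpoint and model name are placeholders):

    ask() {  # send a prompt to a local llama.cpp server, print the reply
      jq -n --arg p "$1" \
        '{model: "gemma-4", messages: [{role: "user", content: $p}]}' \
      | curl -s http://127.0.0.1:8080/v1/chat/completions \
          -H 'Content-Type: application/json' -d @- \
      | jq -r '.choices[0].message.content'
    }

    ask "Suggest a safer version of: sed -i s/foo/bar/ *.txt"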