The Chinese #llm #MiniMax M2.7 far surpasses its predecessor M2.5 and has reached the level of Anthropic's #Opus.
Unfortunately, the release strategy now also seems to be shifting from #OpenWeight to proprietary.
I am curious how the Chinese competitors #Kimi, #GLM and #Qwen will behave in the future.
I consider open weights indispensable - the market may already think otherwise.

Zixuan Li (@ZixuanLi_)

It was mentioned that GLM-5.1 will be released as open source. There are no details on timing or license yet, but as news of a large model going open source, it is worth AI developers' attention.

https://x.com/ZixuanLi_/status/2035039071894933546

#glm #opensource #llm #model #ai

Zixuan Li (@ZixuanLi_) on X

Don't panic. GLM-5.1 will be open source.

X (formerly Twitter)

Kilo (@kilocode)

The Kilo Coders community (or group) is responding enthusiastically to the latest model, GLM-5-Turbo. An announcement-style tweet noting that, fresh from release, it has already established itself as a front-runner within KiloClaw, signaling GLM-5-Turbo's early adoption and an emerging trend.

https://x.com/kilocode/status/2033545223750484299

#glm5turbo #glm #ai #languagemodel

Kilo (@kilocode) on X

Kilo Coders are head over heels for GLM-5-Turbo! Fresh out of the gate, it is already leading the charge in KiloClaw 🦞🏇🦞🏇🦞

X (formerly Twitter)

Charm (@charmcli)

A new model release announcement: GLM-5-Turbo has just been released and is immediately available on the Crush platform. It emphasizes that no separate update is required, signaling quick adoption for developers and operators.

https://x.com/charmcli/status/2033235294246388165

#glm #glm5turbo #ai #release #llm

Charm (@charmcli) on X

GLM-5-Turbo just released. Available in Crush now. No update required.

X (formerly Twitter)

New update for the slides of my talk "Run LLMs Locally":

Now including Reranking, Qwen 3.5 (slower than Qwen 3, but includes Vision) and loading models with Direct I/O.

https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf

#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai #mcp

Keli (@hkdagadu)

The author points out that not many people are paying attention and praises the work of GLM and Zai_org. Emphasizes that GLM (the model) and Zai_org (the organization) are producing good research and development results, and that this model and research group deserve more attention.

https://x.com/hkdagadu/status/2032370481798488315

#glm #zai_org #research #nlp

One more update for the slides of my talk "Run LLMs Locally":

Now including text to speech with Qwen3-TTS and Model Context Protocol.

https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf

#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai #mcp

#statstab #501 Prediction intervals for GLMs

Thoughts: Sometimes the prediction interval for the next data point can be the whole [0,1]. Not very useful.

#prediction #uncertainty #predictionintervals #glm #binomial #probability

https://fromthebottomoftheheap.net/2017/05/01/glm-prediction-intervals-i/

Prediction intervals for GLMs part I

One of my more popular answers on StackOverflow concerns the issue of prediction intervals for a generalized linear model (GLM). My answer really only addresses how to compute confidence intervals for parameters but in the comments I discuss the more substantive points raised by the OP in their question. Lately...

From the Bottom of the Heap

I updated the slides for my talk "Run LLMs Locally":

Now including image generation with Qwen3 and content classification from the Qwen3Guard Technical Report paper.

https://codeberg.org/thbley/talks/raw/branch/main/Run_LLMs_Locally_2025_ThomasBley.pdf

#llm #llamacpp #ollama #stablediffusion #gptoss #qwen3 #glm #opencode #localai

Z.ai (@Zai_org)

The March 1 hackathon in London was a success, with a strong 'agentic' vibe. @louszbd and @DamianZ16026 gave a joint keynote, and GLM will continue to contribute to the OpenClaw ecosystem. An Agentic Engineering Award was also established, with opportunities promised for the winners.

https://x.com/Zai_org/status/2029193593576202437

#hackathon #agentic #openclaw #glm #award

Z.ai (@Zai_org) on X

The March 1 hackathon in London was a huge success. The agentic vibe was everywhere. @louszbd and @DamianZ16026 gave a keynote together. GLM will continue contributing to the OpenClaw🦞 ecosystem. We set up an Agentic Engineering Award🏆. And winners will have the opportunity to

X (formerly Twitter)