Is anyone else's #model in #lmstudio crashing without any further info when you drop a file into the chat and use Vision? Happens to me every single time. It can't be #qwen or the system load.

Already tried it with #ollama. It did take five minutes to describe the cat because it was still doing its thinking, but the result was satisfactory and I got no crash.

"The model has crashed without additional information. (Exit code: null)"

#ai #linux

Finally, an open-source alternative that's easy to use and performs better than #LMStudio on Mac with support for MLX!

A bit shady that it uploads the benchmark result without notifying you or asking tho

https://omlx.ai

#oMLX

#lmstudio's model loading panel has an extremely useful and time-saving interface that shows you an estimate of the effective memory usage based on your model's runtime parameter settings.

Great tool 🙏
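As a rough rule of thumb (this is a back-of-envelope sketch, not LM Studio's actual estimator, which also factors in things like KV cache size from your context-length setting), effective memory is roughly parameter count × bits per weight, plus some runtime overhead:

```python
def estimate_model_memory_gb(n_params: float, bits_per_weight: float,
                             overhead_gb: float = 1.0) -> float:
    """Back-of-envelope memory estimate: quantized weights plus a flat
    allowance for KV cache and runtime buffers (overhead_gb is a guess)."""
    weights_gb = n_params * bits_per_weight / 8 / 1e9
    return weights_gb + overhead_gb

# e.g. a 7B model at ~4.5 bits/weight (roughly a Q4_K_M-class quant):
print(round(estimate_model_memory_gb(7e9, 4.5), 1))  # → 4.9
```

The real number shifts with context length and offload settings, which is exactly why having the panel compute it for you is handy.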

I'm currently experimenting a bit with goose and lmstudio. If I can get the sub-agents working well, that opens up whole new possibilities for me 🤓 I hope it turns out as good as I imagine 😃 #goose #gooseai #lmstudio #llm #localllm
Been playing with local #AI models and lately I have been really impressed with #Qwen open-source #LLM models. Qwen-3.5 and Qwen-Next recently dropped and have been great for assisting on projects! I also recommend #Zed IDE, which pairs great with #Ollama or #LMStudio. No cloud needed, 100% local!

Nobu-Kobayashi : Generative AI Technology (@nyaa_toraneko)

Introducing an open-source project (mrkrsl/web-search-mcp) that adds a web search capability directly to LM Studio. With it, you can search and use current web information without worrying about your local LLM's knowledge cutoff, which makes it very convenient and practically useful for local LLM development and operation.

https://x.com/nyaa_toraneko/status/2030341297240961415

#lmstudio #websearch #localllm #opensource
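For orientation, a hedged sketch of what wiring an MCP server like this into LM Studio's `mcp.json` might look like. The `mcpServers` shape follows the common MCP client config convention; the launch command and path are placeholders, not taken from the project's README:

```json
{
  "mcpServers": {
    "web-search": {
      "command": "node",
      "args": ["/path/to/web-search-mcp/dist/index.js"]
    }
  }
}
```

Check the project's own setup instructions for the actual entry point and any required environment variables.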

Nobu-Kobayashi : Generative AI Technology (@nyaa_toraneko) on X

Giving LM Studio a built-in web search capability is really convenient. You can use a local LLM without worrying about its knowledge cutoff. https://t.co/Ej3gKOa65n

goose and lm-studio

ai, ml, llm, or whatever else it's all called these days. time for a post about it. at first I always thought, what would I even do with that. something like...

wiulinus log

Current coding setup:

- Editor: Zed
- Local Inference Engine: LM Studio
- Models: qwen3.5:35b-a3b and qwen3-coder-next

Using AI for questions and suggestions about code, not for vibe coding.
This setup works pretty well for me, and the new qwen models are definitely quite usable for my use case \o/
And: no worries about code leaving my device :)

#ai #coding #lmstudio #zed #qwen
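A setup like this usually talks to LM Studio through its OpenAI-compatible local server (port 1234 by default). A minimal Python sketch using only the standard library, assuming the server is running and a model is loaded; the model name below is whatever identifier LM Studio shows for your loaded model:

```python
import json
import urllib.request

# LM Studio exposes an OpenAI-compatible API; localhost:1234 is its default port.
BASE_URL = "http://localhost:1234/v1"

def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Build an OpenAI-style chat-completions payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask(model: str, prompt: str) -> str:
    """POST the payload to the local server and return the reply text.
    Requires LM Studio's server to actually be running."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Editors like Zed can point at the same endpoint as a custom OpenAI-compatible provider, so everything stays on-device.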

I'm using LM Studio to run Qwen3.5; it seems to disable thinking by default. You can turn it on manually by editing the Prompt Template, just add this line at the very beginning:

{%- set enable_thinking = true %}

https://www.reddit.com/r/LocalLLaMA/comments/1rhr5ko/is_there_a_way_to_disable_thinking_on_qwen_35_27b/

#Qwen35 #LmStudio

Daniel T. Vela (@danieltvela)

Reports that mlx-community's qwen3.5-35b-a3b runs very fast at 83.87 tok/sec on an M4 Pro (14c), while Qwen's GGUF version reaches only 35.45 tok/sec in LM Studio with the same prompt, about half the speed, and asks what causes the gap (format, optimization, runtime, etc.).

https://x.com/danieltvela/status/2028123896600211792

#qwen #gguf #lmstudio #benchmark #m4pro

Daniel T. Vela (@danieltvela) on X

mlx-community/qwen3.5-35b-a3b runs at 83.87 tok/sec on my M4 Pro (14c). Impressive!!! But GGUF version by Qwen only runs at 35.45 tok/sec with the same prompt. Both using LM Studio. Anyone knows why? @alexocheema @Prince_Canuma @ivanfioravanti

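For scale, the numbers in the tweet work out to roughly a 2.4× advantage for the MLX build:

```python
mlx_tps = 83.87   # mlx-community build, M4 Pro (14c)
gguf_tps = 35.45  # GGUF build, same prompt, same machine, via LM Studio
speedup = mlx_tps / gguf_tps
print(f"MLX is {speedup:.2f}x faster")  # → MLX is 2.37x faster
```

That is in line with MLX being tuned for Apple-silicon unified memory, though the replies in the thread would be the place to confirm the actual cause.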