Artificial Analysis (@ArtificialAnlys)

Mistral has unveiled Mistral Small 4. The model is open weights, supports hybrid reasoning and image input, and scored 27 on the Artificial Analysis Intelligence Index. It features a 119B MoE architecture with 6.5B active parameters per token.

https://x.com/ArtificialAnlys/status/2034960206736892365

#mistral #openweights #multimodal #reasoning #llm

Artificial Analysis (@ArtificialAnlys) on X

Mistral has released Mistral Small 4, an open weights model with hybrid reasoning and image input, scoring 27 on the Artificial Analysis Intelligence Index. @MistralAI's Small 4 is a 119B mixture-of-experts model with 6.5B active parameters per token, supporting both reasoning…

X (formerly Twitter)

Midjourney has released the V8 Alpha but continues to struggle with anatomical rendering errors and illegible text. At the same time, the vendor is raising prices due to high inference costs. The creative community is turning away, switching to Google's Nano Banana 2 or open-weights alternatives such as Flux.2 and Qwen-Image, which allow local fine-tuning. #Midjourney #StableDiffusion #Flux2 #OpenWeights #News
https://www.all-ai.de/news/beitrage2026/midjourney-stablediffusion-crash
What happened with Midjourney v8 and Stable Diffusion?

The once-undisputed market leader is facing massive criticism. Users are switching to fast open-source alternatives.

All-AI.de

Artificial Analysis (@ArtificialAnlys)

NVIDIA has released Nemotron 3 Super, a 120B (12B active) open-weights reasoning model with a hybrid Mamba-Transformer MoE architecture, reportedly scoring 36 on the Artificial Analysis Intelligence Index. The authors note they evaluated the model with pre-launch access, suggesting some independent performance verification took place.

https://x.com/ArtificialAnlys/status/2031765321233908121

#nvidia #nemotron #moe #transformer #openweights

Artificial Analysis (@ArtificialAnlys) on X

NVIDIA has released Nemotron 3 Super, a 120B (12B active) open weights reasoning model that scores 36 on the Artificial Analysis Intelligence Index, with a hybrid Mamba-Transformer MoE architecture. We were given access to this model ahead of launch and evaluated it across…

X (formerly Twitter)
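
The "120B total, 12B active" phrasing refers to mixture-of-experts routing: a router picks a few experts per token, so only a fraction of the weights participate in each forward step. A minimal NumPy sketch of top-k routing, with toy dimensions that are purely illustrative (Nemotron 3 Super's actual configuration is not given in the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, illustrative only.
d_model, n_experts, top_k = 8, 16, 2

x = rng.normal(size=d_model)                       # one token's hidden state
router_w = rng.normal(size=(n_experts, d_model))   # router projection
expert_w = rng.normal(size=(n_experts, d_model, d_model))  # expert FFNs (toy)

# Router scores all experts, keeps the top-k for this token.
logits = router_w @ x
top = np.argsort(logits)[-top_k:]
weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over chosen experts

# Only the chosen experts' weights are touched for this token.
y = sum(w * (expert_w[e] @ x) for w, e in zip(weights, top))

active_fraction = top_k / n_experts
print(active_fraction)  # → 0.125: same spirit as 12B active out of 120B total
```

The ratio is the whole point: inference cost tracks the active parameters, while capacity tracks the total.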

Can anyone recommend a privacy-friendly SaaS #llm #inference provider? It needs to support *function calling* on at least one of the more recent #openweights models:

- gpt-oss
- OLMo 3
- Apertus? (I have not yet succeeded in using it)

There should be some level of cost control, ideally an hourly rate limit. European solutions are preferred.

The use case is a fallback for demos or experiments where local inference is not practical. Monthly costs should approach zero when not in use.

#selfhosting
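
For anyone comparing providers: in practice, "function calling" means the `tools` field of the OpenAI-compatible chat-completions request, which most hosted open-weights endpoints mirror. A minimal payload sketch, where the model id, tool name, and endpoint path are placeholders rather than a provider recommendation:

```python
import json

# Hypothetical example tool; the schema shape follows the
# OpenAI-compatible function-calling convention.
tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = {
    "model": "gpt-oss-120b",   # placeholder model id
    "messages": [{"role": "user", "content": "Weather in Zurich?"}],
    "tools": [tool],
    "tool_choice": "auto",     # let the model decide whether to call the tool
}

# POST this as JSON to the provider's /v1/chat/completions endpoint;
# the reply's message may then contain `tool_calls` to execute locally.
print(payload["tool_choice"])
```

If a candidate provider accepts this shape and returns `tool_calls`, it covers the function-calling requirement regardless of which open-weights model is behind it.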

"I discovered the Open Weights model GLM-5"

https://notes.sklein.xyz/2026-02-27_1746/zen/

#LLM #OpenWeights #TIL #GLM5

tomaarsen (@tomaarsen)

Perplexity AI has released four state-of-the-art open-weights multilingual embedding models designed for retrieval. The flagship models are pplx-embed-v1 and pplx-embed-context-v1, trained specifically for int8 and binary embeddings to suit large-scale search problems.

https://x.com/tomaarsen/status/2027392224879595949

#perplexity #embeddings #openweights #retrieval #pplxembed

tomaarsen (@tomaarsen) on X

🤗 @perplexity_ai has released 4 open-weights state-of-the-art multilingual embedding models designed for retrieval tasks: pplx-embed-v1 and pplx-embed-context-v1. Specifically trained for int8 and binary embeddings, they'll be viable for massive search problems. Details in 🧵

X (formerly Twitter)
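
Binary embeddings of the kind mentioned above keep only the sign bit per dimension, shrinking storage roughly 32x versus float32 and letting search run on cheap Hamming distance. A rough NumPy illustration with made-up data (the actual pplx-embed dimensions and quantization recipe are assumptions here, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy float embeddings standing in for model outputs.
docs = rng.normal(size=(1000, 256)).astype(np.float32)
query = docs[42] + 0.1 * rng.normal(size=256).astype(np.float32)

# Binary quantization: keep only the sign of each dimension,
# then pack 8 dimensions per byte (32x smaller than float32).
docs_bin = np.packbits(docs > 0, axis=1)   # shape (1000, 32), dtype uint8
query_bin = np.packbits(query > 0)         # shape (32,)

# Hamming distance = popcount of the XOR of the packed bit vectors.
xor = np.bitwise_xor(docs_bin, query_bin)
dists = np.unpackbits(xor, axis=1).sum(axis=1)

print(int(dists.argmin()))  # → 42: the perturbed source doc ranks first
```

In practice the binary pass is used as a fast first stage, with int8 or float re-scoring of the top candidates.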

Great resource for open-weight LLM releases, covering 10 architectures from early 2026. A whirlwind tour of how diverse open models are evolving and converging architecturally. 

What stands out:
• hybrid attention & MoE are showing up everywhere
• smaller models are pushing hard on coding/efficiency
• the open-weight ecosystem is very active right now

Fascinating how fast these are evolving 🦥
https://magazine.sebastianraschka.com/p/a-dream-of-spring-for-open-weight

#OpenSource #AI #LLMs #OpenWeights

A Dream of Spring for Open-Weight LLMs: 10 Architectures from Jan-Feb 2026

A Round Up And Comparison of 10 Open-Weight LLM Releases in Spring 2026

Ahead of AI

Artificial Analysis (@ArtificialAnlys)

The model weights for Tri-21B-think Preview have been uploaded to Hugging Face, with a link provided. Through open-weights distribution, developers and researchers can download the model directly and experiment with it.

https://x.com/ArtificialAnlys/status/2024386631596462225

#huggingface #openweights #tri21b #modelhub

Artificial Analysis (@ArtificialAnlys) on X

Link to weights on @huggingface: https://t.co/7Ejhxyabta

X (formerly Twitter)

Artificial Analysis (@ArtificialAnlys)

Korean AI startup Trillion Labs has announced Tri-21B-think Preview, a small open-weights reasoning model. It scored 20 on the Artificial Analysis Intelligence Index, showing high intelligence for its small size, though not class-leading. It is notable as an open-weights reasoning model.

https://x.com/ArtificialAnlys/status/2024381202959118807

#trillionlabs #tri21b #openweights #reasoning #aimodel

Artificial Analysis (@ArtificialAnlys) on X

Trillion Labs, a Korean AI startup, has launched Tri-21B-think Preview, a small open weights reasoning model that scores 20 on the Artificial Analysis Intelligence Index. Key benchmarking takeaways: ➤ High but not leading intelligence for its small size: Tri-21B-think Preview…

X (formerly Twitter)

Qwen (@Alibaba_Qwen)

The Qwen3.5-397B-A17B-FP8 model weights have been released. This technical/open-source update notes that SGLang support has been merged and a PR for vLLM has been submitted, so the model will soon be usable in the major inference frameworks. A model card and example code are also provided.

https://x.com/Alibaba_Qwen/status/2024161147537232110

#qwen3.5 #openweights #vllm #sglang

Qwen (@Alibaba_Qwen) on X

🚀 Qwen3.5-397B-A17B-FP8 weights are now open! It took some time to adapt the inference frameworks, but here we are: ✅ SGLang support is merged 🔄 vLLM PR submitted → https://t.co/rJkuitOBWs Check the model card for example code. vLLM support landing in the next couple of days!

X (formerly Twitter)