Meta Platforms postpones the launch of its new AI model, codenamed 'Avocado', to at least May after it failed to match the performance of rivals Google, OpenAI, and Anthropic in key areas including reasoning, coding, and writing, despite CEO Mark Zuckerberg's billions in AI investment.
#YonhapInfomax #MetaPlatforms #ArtificialIntelligence #MarkZuckerberg #GoogleGemini #ModelPerformance #Economics #FinancialMarkets #Banking #Securities #Bonds #StockMarket
https://en.infomaxai.com/news/articleView.html?idxno=109761
Meta Delays New AI Model Launch Due to Performance Shortfall Against Rivals

Yonhap Infomax

Rohan Paul (@rohanpaul_ai)

Ant Open Source has released LLaDA2.1 Flash, a 100B-parameter language diffusion MoE (mixture-of-experts) model. It reportedly reaches a peak inference speed of 892 tokens/second, 2.5x faster than Qwen3-30B-A3B. The release emphasizes high real-time inference performance.

https://x.com/rohanpaul_ai/status/2021643743313756658

#llm #inferencespeed #mixtureofexperts #antopensource #modelperformance

Rohan Paul (@rohanpaul_ai) on X

Ant Open Source just dropped LLaDA2.1 Flash. Insane inference speed for a 100B param language diffusion MoE model. Achieved a peak speed of 892 tokens per second beating the much smaller Qwen3-30B-A3B by 2.5x. The reason it could achieve this incredible speed is because it

X (formerly Twitter)
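Taken at face value, the 2.5x claim above pins down the comparison baseline. A quick back-of-envelope check (figures from the post; the implied Qwen3 number is derived, not reported):

```python
# Back-of-envelope check of the reported throughput claim.
# Reported: LLaDA2.1 Flash peaks at 892 tokens/s, described as
# 2.5x faster than Qwen3-30B-A3B under the same measurement.

llada_peak_tps = 892      # reported peak tokens/second
speedup_vs_qwen = 2.5     # reported speedup factor

# Implied Qwen3-30B-A3B throughput in that benchmark setup
implied_qwen_tps = llada_peak_tps / speedup_vs_qwen
print(f"Implied Qwen3-30B-A3B throughput: ~{implied_qwen_tps:.0f} tokens/s")
# -> ~357 tokens/s
```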

StepFun (@StepFun_ai)

The Step 3.5 Flash model took first place on MathArena, scoring 96.11% overall and 97% on AIME 2026 I. At $0.40 per run, it is a case of an 11B-active-parameter model delivering high performance and low cost at the same time.

https://x.com/StepFun_ai/status/2021721309567221772

#step3.5 #matharena #llm #modelperformance #cost

StepFun (@StepFun_ai) on X

Step 3.5 Flash is now #1 on MathArena 🏆 96.11% overall. 97% AIME 2026 I. $0.40/run. not bad for an 11B active param model 😤 https://t.co/SaOMQ32hYO

X (formerly Twitter)

Tibo (@thsottiaux)

The author claims the latest release combines SoTA coding performance with token efficiency and inference optimizations, making it faster than last week's version. At high and extra-high (xhigh) reasoning effort, GPT-5.3-Codex is stated to be roughly 60-70% faster than GPT-5.2-Codex.

https://x.com/thsottiaux/status/2019495904395612474

#gpt5.3 #codex #modelperformance #llm

Tibo (@thsottiaux) on X

First time we combine SoTA on coding performance AND it is objectively the fastest thanks to combination of token-efficiency and inference optimizations. At high and xhigh reasoning effort, the two combine to make GPT-5.3-Codex ~60-70% faster than GPT-5.2-Codex from last week.

X (formerly Twitter)
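To put the "~60-70% faster" figure above in wall-clock terms, a small sketch, assuming "faster" refers to throughput (so a 1.6-1.7x speedup factor) and using a hypothetical 100-second baseline task:

```python
# What "~60-70% faster" means for wall-clock time, assuming the claim
# refers to throughput. The 100 s baseline is hypothetical.

baseline_seconds = 100.0            # hypothetical GPT-5.2-Codex task time
for speedup in (1.6, 1.7):          # 60% and 70% faster, respectively
    new_time = baseline_seconds / speedup
    print(f"{speedup:.1f}x throughput -> {new_time:.1f} s")
# -> 1.6x throughput -> 62.5 s
# -> 1.7x throughput -> 58.8 s
```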

Chased/acc (@ChaseWang)

Qwen3 30B reportedly runs even in a home setup at about 20 tokens per second, a result the author credits to @exolabs.

https://x.com/ChaseWang/status/2011713487916187764

#qwen #qwen3 #exolabs #llm #modelperformance

Chase📈d/acc🦇🔊 (@ChaseWang) on X

Qwen3-30B can hit 20 token/s running at home, thanks to @exolabs

X (formerly Twitter)