Mira Murati Wants Her #AI to ‘Keep Humans in the Loop’

The #ThinkingMachinesLab founder and former #CTO of #OpenAI tells WIRED she isn’t interested in #automating people out of jobs. Instead, she’s building AI that can collaborate.
#artificialintelligence

https://www.wired.com/story/mira-murati-humans-in-the-loop-ai-models-thinking-machines/


WIRED

Mira Murati bets against the autonomous agent

Mira Murati, former OpenAI CTO and now founder of Thinking Machines Lab, is betting against autonomous-agent-centric AI development and has unveiled TML-Interaction-Small, a multimodal interaction model built for real-time collaboration with humans. Unlike much existing AI research, which moves toward taking humans out of the loop, Murati argues that bandwidth is the bottleneck and that collaborating with humans is more effective. This view stands in contrast to Anthropic's autonomous-agent strategy and offers AI developers a fresh perspective on designing human-model interfaces.

https://vector.news/mira-murati-bets-against-the-autonomous-agent/

#llm #multimodal #autonomousagent #aiinteraction #thinkingmachineslab


Thinking Machines released its first model on Monday, arguing the AI bottleneck is bandwidth, not autonomy — and timed to a round that would value the lab at $50 billion

Vector

https://winbuzzer.com/2026/05/13/thinking-machines-wants-to-build-an-ai-that-actual-xcxwbn/

Thinking Machines Lab has previewed a research-stage full-duplex AI system built to keep listening while it responds, rather than waiting for turn-based exchanges.

#AI #ThinkingMachinesLab #MiraMurati #VoiceAI #AIModels #ConversationalAI #MultimodalAI #VoiceAssistants

Interaction models by Thinking Machines Lab [video]

Thinking Machines Lab has introduced new AI "interaction models" designed for real-time collaboration. Like a person, the models can listen, speak, see, show, and think simultaneously, with the goal of natural collaboration between AI and humans. The technology is an AI model optimized for real-time interaction; a detailed technical report is available on the company's official blog.

https://www.youtube.com/watch?v=A12AVongNN4

#interactionmodels #realtimeai #aicollaboration #thinkingmachineslab

Introducing interaction models | Thinking Machines Lab

YouTube

Two of three co-founders are leaving Thinking Machines Lab. OpenAI is bringing Barret Zoph and Luke Zettlemoyer back from Mira Murati. Zoph specializes in post-training; Zettlemoyer contributes academic expertise from the University of Washington. For Murati's deep-tech startup, losing part of the founding team so soon after launch is a critical signal to investors. #OpenAI #MiraMurati #ThinkingMachinesLab

https://www.all-ai.de/news/news26/openai-murati-talente

Mira Murati's False Start: OpenAI Poaches Ruthlessly From Its Former Chief

Barely launched and already finished? The departure of the co-founders is a fatal signal for Murati's ambitious AI startup.

All-AI.de

Exciting news in the AI world! A VIT alumnus steps into a leadership role at Thinking Machines Lab as their new CTO. Bringing cutting-edge expertise from top tech circles, this professional is set to drive innovative research and technological advancement in artificial intelligence. What groundbreaking developments will emerge? 🚀 #ThinkingMachinesLab #ArtificialIntelligence #TechLeadership #VITAlumni

🔗 https://aidailypost.com/news/vit-alumnus-joins-thinking-machines-lab-new-chief-technology-officer

Andrew Tulloch, co-founder of Thinking Machines Lab, has left the company to join Meta. Notably, he had previously turned down an offer worth $1.3 billion.
#AndrewTulloch #Meta #ThinkingMachinesLab #AI #TechNews #CôngNghệ #TríTuệNhânTạo #TinTứcCôngNghệ

https://www.reddit.com/r/singularity/comments/1o46zre/thinking_machines_lab_cofounder_andrew/

🚀🤖 "Brilliant" minds at Thinking Machines Lab have unearthed groundbreaking revelations that bigger is, indeed, better when it comes to #AI models. 🙄 Turns out, post-training on smaller datasets is just a total waste, because who needs efficiency when you have infinite computing power and time, right? 😂
https://thinkingmachines.ai/blog/lora/ #Research #BiggerIsBetter #ThinkingMachinesLab #DataEfficiency #InfiniteComputing #HackerNews #ngated
LoRA Without Regret

How LoRA matches full training performance more broadly than expected.

Thinking Machines Lab