NVIDIA Pushes "Physical AI" Onto the Real World, Sidestepping Data Bottlenecks

NVIDIA's GTC 2026 focuses on 'Physical AI' for robots, using new tools like Cosmos and Isaac GR00T to train them on synthetic, computer-generated data rather than real-world data alone.

#PhysicalAI, #NVIDIAGTC, #Robotics, #AI, #EmbodiedAI

https://newsletter.tf/nvidia-physical-ai-robots-gtc-2026/

NVIDIA is making robots smarter by training them with computer-generated data, a new way to help them work in the real world.

NewsletterTF

SAP and ANYbotics drive industrial adoption of physical AI

ANYbotics' four-legged autonomous robots will be connected straight into SAP's backend enterprise resource planning software.

AI News

Microsoft Research (@MSFTResearch)

AsgardBench is a benchmark that evaluates whether embodied agents can revise their plans mid-task based on visual observations. By focusing on perception-grounded planning, it exposes the agents' limitations and points to the improvements needed to make them more reliable.

https://x.com/MSFTResearch/status/2037244033475453210

#ai #benchmark #agents #embodiedai #planning

Microsoft Research (@MSFTResearch) on X

AsgardBench evaluates whether embodied agents can revise their plans based on visual observations as tasks unfold. By focusing on perception-driven planning, it exposes key limitations and guides improvements in agent reliability. https://t.co/6jAXzgCLvH

X (formerly Twitter)

田中義弘 | taziku CEO / AI × Creative (@taziku_co)

Roadrunner is a multimodal locomotion robot that handles side-by-side wheels, inline wheels, and stepping, learning these diverse driving modes with a single policy. It even performs fall recovery and single-wheel balancing zero-shot on real hardware, demonstrating the potential of general-purpose robot locomotion control.

https://x.com/taziku_co/status/2036373975698465266

#robotics #multimodal #reinforcementlearning #zeroshot #embodiedai

田中義弘 | taziku CEO / AI × Creative (@taziku_co) on X

Realizing multimodal locomotion: Roadrunner (@rai_inst) is a wheel-fusion robot that masters side-by-side wheels, inline wheels, and stepping. It learns side-by-side wheeled driving, inline driving, and more with a single policy, and deploys zero-shot on real hardware, down to getting up from a fall and balancing on one wheel.

X (formerly Twitter)

Danfei Xu (@danfei_xu)

EgoVerse, an ecosystem for training robots with first-person human data, has been introduced. Four research labs and three industry partners took part, and it provides a large-scale dataset spanning 1300+ hours, 240 scenes, and 2000+ tasks, together with research findings.

https://x.com/danfei_xu/status/2036108953017368960

#robotics #dataset #embodiedai #machinelearning #research

Danfei Xu (@danfei_xu) on X

Introducing EgoVerse: an ecosystem for robot learning from egocentric human data. Built and tested by 4 research labs + 3 industry partners, EgoVerse enables both science and scaling: 1300+ hrs, 240 scenes, 2000+ tasks, and growing. Dataset design, findings, and ecosystem 🧵

X (formerly Twitter)

fly51fly (@fly51fly)

A study titled 'Resource-Aware Reasoning' uses reinforcement learning to teach a robot when it should think, learning resource-aware deliberation. It is a new robot-AI approach that targets embodied robotic decision-making, aiming for efficient reasoning while conserving compute.

https://x.com/fly51fly/status/2034385797588418905

#robotics #reinforcementlearning #embodiedai #reasoning #arxiv

fly51fly (@fly51fly) on X

[RO] When Should a Robot Think? Resource-Aware Reasoning via Reinforcement Learning for Embodied Robotic Decision-Making J Liu, P Zhao, Z Kong, X Shen… [CMU & Northeastern University & Harvard University] (2026) https://t.co/NdTpWenMq2

X (formerly Twitter)
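As a loose illustration of the idea behind resource-aware reasoning (this is not the paper's method; the states, accuracies, and compute cost below are invented), folding a compute penalty into the reward is enough for a toy agent to learn to deliberate only when its cheap reflex is unreliable:

```python
import random

# Toy sketch of resource-aware reasoning (illustrative; all numbers invented):
# per observed state, the agent learns whether an expensive "think" step is
# worth its compute cost, or whether a cheap reflex action suffices.

THINK_COST = 0.3  # assumed reward penalty per deliberation step

STATES = ("clear", "ambiguous")
ACTIONS = ("reflex", "think")
REFLEX_ACCURACY = {"clear": 0.95, "ambiguous": 0.5}  # thinking always succeeds

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}  # value estimates
n = {(s, a): 0 for s in STATES for a in ACTIONS}    # visit counts
eps = 0.1
random.seed(0)

for _ in range(5000):
    s = random.choice(STATES)
    # epsilon-greedy action selection
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: q[(s, x)])
    # thinking always gets the task right but pays a compute penalty
    correct = True if a == "think" else random.random() < REFLEX_ACCURACY[s]
    r = (1.0 if correct else 0.0) - (THINK_COST if a == "think" else 0.0)
    # incremental sample-average update of the action value
    n[(s, a)] += 1
    q[(s, a)] += (r - q[(s, a)]) / n[(s, a)]

# The learned policy deliberates only when the reflex is unreliable.
policy = {s: max(ACTIONS, key=lambda x: q[(s, x)]) for s in STATES}
print(policy)  # {'clear': 'reflex', 'ambiguous': 'think'}
```

The point of the sketch is that no explicit "when to think" rule is coded; the trade-off emerges from the reward alone, which is the flavor of the RL formulation the paper's title describes.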
How the Eon Team Produced a Virtual Embodied Fly

A technical deep dive into integrating the adult fly connectome, connectome-constrained brain models, and neuromechanical body simulations to build the first virtual embodied fly.