Ilir Aliu (@IlirAliu_)
An experience-based warning that a model that performs well in simulation fails at real-world deployment due to infrastructure problems: a 2 mm gripper error, latency spikes, and the sim-to-real gap. The core claim is that what matters is not just the model but the infrastructure for real-world operation.
Lukas Ziegler (@lukas_m_ziegler)
BREAKING: ABB Robotics and NVIDIA announced a partnership to integrate NVIDIA Omniverse libraries into ABB's RobotStudio. A technology-integration and product-update announcement: the goal is to close the sim-to-real gap from virtual training to real-world deployment by up to 99%, accelerating the adoption of industrial 'physical AI'.

🚨 BREAKING: @ABBRobotics + @nvidia close the sim-to-real gap with 99% accuracy! 👾 ABB Robotics is integrating NVIDIA Omniverse libraries into RobotStudio to deliver physical AI for industry, closing the gap from virtual training to real-world deployment with up to 99%
NATIX Network (@NATIXNetwork)
NVIDIA's Omniverse had the limitations of a simulation-based approach (the sim-to-real gap), while Cosmos is claimed to overcome them by learning physical rules from video. The post frames Cosmos as NVIDIA's shift from simulation to generative world modeling, presented as a turning point in solving the sim-to-real problem.
RS DesignSpark (@RSDesignSpark)
Part 4 of the 'Boots on the Floor' series covers the problems that arise when an NVIDIA-based system moves from the research lab to the factory floor: dust and vibration change the signals, shift patterns create edge cases, and thresholds set in the lab become overly sensitive at production speed, among other considerations for real-world deployment.

Boots on the Floor: When the Model Meets the Mess Part 4 takes the @nvidia‑powered system from lab to factory, and reality hits fast. Dust and vibration change signals. Shift patterns create edge cases. A “perfect” threshold becomes too sensitive at full speed. Find out what
Lukas Ziegler (@lukas_m_ziegler)
ProtoMotions has released v3.1. This major upgrade to the sim-to-real motion-tracking repository focuses on making it easier to train robot motion in simulation and deploy it on real hardware. It already had over 1k stars shortly after release, and its real-world deployment performance will be worth watching.

Sim-to-real motion tracking repo! 👀 ProtoMotions just released v3.1, a HUGE upgrade focused on making it easier to train robot motion in simulation and deploy it on real hardware. It already has +1k ⭐️ - let's see in couple of weeks after this upgrade is released. The new
Evaluating Text-to-Image Diffusion Models for Texturing Synthetic Data
Authors: Thomas Lips, Francis wyffels
pre-print -> https://arxiv.org/abs/2411.10164
Building generic robotic manipulation systems often requires large amounts of real-world data, which can be difficult to collect. Synthetic data generation offers a promising alternative, but limiting the sim-to-real gap requires significant engineering efforts. To reduce this engineering effort, we investigate the use of pretrained text-to-image diffusion models for texturing synthetic images and compare this approach with using random textures, a common domain randomization technique in synthetic data generation. We focus on generating object-centric representations, such as keypoints and segmentation masks, which are important for robotic manipulation and require precise annotations. We evaluate the efficacy of the texturing methods by training models on the synthetic data and measuring their performance on real-world datasets for three object categories: shoes, T-shirts, and mugs. Surprisingly, we find that texturing using a diffusion model performs on par with random textures, despite generating seemingly more realistic images. Our results suggest that, for now, using diffusion models for texturing does not benefit synthetic data generation for robotics. The code, data and trained models are available at https://github.com/tlpss/diffusing-synthetic-data.git.
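The random-texture baseline the paper compares against can be illustrated with a minimal sketch (the function name and the compositing scheme are illustrative assumptions, not the paper's code): pixels outside the object mask are overwritten with random noise so that a downstream model learns to ignore background appearance.

```python
import numpy as np

def randomize_texture(image, object_mask, rng):
    """Toy texture domain randomization: replace the background of a
    synthetic image with a random RGB texture, leaving the object intact."""
    texture = rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    out = image.copy()
    out[~object_mask] = texture[~object_mask]  # only background pixels change
    return out

# Tiny usage example: a 4x4 "image" with a 2x2 object in the centre.
rng = np.random.default_rng(0)
image = np.zeros((4, 4, 3), dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
image[mask] = 200  # object pixels keep their original colour

randomized = randomize_texture(image, mask, rng)
assert (randomized[mask] == 200).all()  # object untouched
```

The diffusion-based alternative studied in the paper would replace the random `texture` array with an image sampled from a pretrained text-to-image model; the paper's finding is that, for keypoint and segmentation targets, this extra realism did not improve real-world performance.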
RoboGSim: A Real2Sim2Real Robotic Gaussian Splatting Simulator
Authors: Xinhai Li, Jialin Li, Ziheng Zhang, Rui Zhang, Fan Jia, Tiancai Wang, Haoqiang Fan, Kuo-Kun Tseng, Ruiping Wang
pre-print -> https://arxiv.org/abs/2411.11839
website -> https://robogsim.github.io
#robotics #manipulation #data_generation #sim2real #real2sim #real2sim2real
Efficient acquisition of real-world embodied data has become increasingly critical. However, large-scale demonstrations captured by teleoperation incur extremely high costs and fail to scale up the data size efficiently. Sampling episodes in a simulated environment is a promising route to large-scale collection, but existing simulators fail to model texture and physics with high fidelity. To address these limitations, we introduce RoboGSim, a real2sim2real robotic simulator powered by 3D Gaussian Splatting and a physics engine. RoboGSim mainly comprises four parts: Gaussian Reconstructor, Digital Twins Builder, Scene Composer, and Interactive Engine. It can synthesize simulated data with novel views, objects, trajectories, and scenes. RoboGSim also provides an online, reproducible, and safe evaluation for different manipulation policies. The real2sim and sim2real transfer experiments show high consistency in texture and physics. Moreover, the effectiveness of the synthetic data is validated on real-world manipulation tasks. We hope RoboGSim serves as a closed-loop simulator for fair comparison in policy learning. More information can be found on our project page: https://robogsim.github.io/ .
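The four named components form a real2sim2real data pipeline, which can be sketched as composed stages (the function signatures and data shapes below are illustrative stubs, not RoboGSim's actual API; see the project page for that):

```python
# Illustrative stubs for the four RoboGSim components named in the abstract.

def gaussian_reconstructor(camera_images):
    """Reconstruct a 3D Gaussian-splat scene from multi-view captures."""
    return {"gaussians": len(camera_images)}  # stand-in for a GS scene

def digital_twins_builder(gs_scene):
    """Attach physical properties (collision geometry, mass) to the scene."""
    return {**gs_scene, "physics": True}

def scene_composer(twin, novel_objects):
    """Recompose the twin with novel views, objects, or trajectories."""
    return {**twin, "objects": novel_objects}

def interactive_engine(scene, policy):
    """Roll out a manipulation policy in the composed scene to get episodes."""
    return [{"action": policy(scene), "scene": scene} for _ in range(3)]

# real2sim2real: captures -> GS scene -> digital twin -> composed scene -> episodes
episodes = interactive_engine(
    scene_composer(
        digital_twins_builder(gaussian_reconstructor(["img0", "img1"])),
        novel_objects=["mug"],
    ),
    policy=lambda scene: "grasp",
)
assert len(episodes) == 3
```

The point of the staged design is that each stage's output is reusable: one reconstruction can feed many composed scenes, and one composed scene can evaluate many policies, which is what enables the reproducible closed-loop evaluation the abstract claims.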
RL-GSBridge: 3D Gaussian Splatting Based Real2Sim2Real Method for Robotic Manipulation Learning
Authors: Yuxuan Wu, Lei Pan, Wenhua Wu, Guangming Wang, Yanzi Miao, Hesheng Wang
pre-print -> https://arxiv.org/abs/2409.20291
#robotics #deformable_manipulation #gaussian_splatting #gaussiansplatting #sim2real #real2sim #real2sim2real
Sim-to-real refers to the process of transferring policies learned in simulation to the real world, which is crucial for practical robotics applications. However, recent sim-to-real methods rely either on large amounts of augmented data or on large learning models, which is inefficient for specific tasks. In recent years, with the emergence of radiance-field reconstruction methods, especially 3D Gaussian Splatting, it has become possible to construct realistic real-world scenes. To this end, we propose RL-GSBridge, a novel real-to-sim-to-real framework that incorporates 3D Gaussian Splatting into the conventional RL simulation pipeline, enabling zero-shot sim-to-real transfer for vision-based deep reinforcement learning. We introduce a mesh-based 3D GS method with soft binding constraints, enhancing the rendering quality of mesh models. By then using a GS editing approach to synchronize the rendering with the physics simulator, RL-GSBridge can accurately reflect the visual interactions of the physical robot. Through a series of sim-to-real experiments, including grasping and pick-and-place tasks, we demonstrate that RL-GSBridge maintains a satisfactory success rate in real-world task completion during sim-to-real transfer. Furthermore, a series of rendering metrics and visualization results indicate that our mesh-based 3D GS reduces artifacts in unstructured objects, demonstrating more realistic rendering performance.
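The synchronization idea at the heart of such frameworks — the physics engine advances the state, a Gaussian-splatting renderer produces the photorealistic observation the policy sees — can be sketched as a closed loop (the toy 1-D dynamics, function names, and trivial policy below are illustrative assumptions, not the RL-GSBridge implementation):

```python
# Conceptual closed loop: physics state -> synchronized GS render -> policy -> action.

def physics_step(state, action):
    """Toy 1-D physics: the action nudges the object position."""
    return state + 0.1 * action

def gs_render(state):
    """Stand-in for the Gaussian-splatting renderer, kept in sync with the
    physics state; a real renderer would rasterize the edited 3D GS scene."""
    return state

def policy(observation, target):
    """Trivial vision-based policy: move toward the target position."""
    return 1.0 if observation < target else -1.0

state, target = 0.0, 0.5
for _ in range(10):
    obs = gs_render(state)               # observation rendered from physics state
    action = policy(obs, target)         # policy acts on the rendered view
    state = physics_step(state, action)  # physics engine advances the scene
```

Because the policy only ever sees renders that stay consistent with the physics state — and those renders are built from a reconstruction of the real scene — a policy trained in this loop can, in principle, transfer zero-shot to the physical robot, which is the claim the paper's grasping and pick-and-place experiments test.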