XGRIDS (@XGRIDS_OFFICIAL)
An announcement tweet saying a Real2Sim pipeline demo will be shown at the AWS booth (#921). It promotes a workflow (Real2Sim) that converts real spaces scanned with a mobile device into high-resolution 3D worlds for robot training, to be demonstrated at GTC2026 from 3–5 PM. Notable as a practical application of spatial AI and of simulation/data generation for robotics.
"Unlike vision and language, data for learning is not available passively[...]. This makes applying the same recipes we did in vision and language challenging"
#real2sim has been a strong emerging trend in robotics this year.
See previously shared articles:
RoboGSim -> https://arxiv.org/abs/2411.11839
RL-GSBridge -> https://arxiv.org/abs/2409.20291
GARField -> https://arxiv.org/abs/2410.05038
RoboGSim: A Real2Sim2Real Robotic Gaussian Splatting Simulator
Authors: Xinhai Li, Jialin Li, Ziheng Zhang, Rui Zhang, Fan Jia, Tiancai Wang, Haoqiang Fan, Kuo-Kun Tseng, Ruiping Wang
pre-print -> https://arxiv.org/abs/2411.11839
website -> https://robogsim.github.io
#robotics #manipulation #data_generation #sim2real #real2sim #real2sim2real
Efficient acquisition of real-world embodied data is increasingly critical. However, large-scale demonstrations collected via teleoperation incur extremely high costs and do not scale up efficiently. Sampling episodes in a simulated environment is a promising route to large-scale collection, but existing simulators fail to model texture and physics with high fidelity. To address these limitations, we introduce RoboGSim, a real2sim2real robotic simulator powered by 3D Gaussian Splatting and a physics engine. RoboGSim comprises four parts: Gaussian Reconstructor, Digital Twins Builder, Scene Composer, and Interactive Engine. It can synthesize simulated data with novel views, objects, trajectories, and scenes. RoboGSim also provides online, reproducible, and safe evaluation of different manipulation policies. Real2sim and sim2real transfer experiments show high consistency in texture and physics. Moreover, the effectiveness of the synthetic data is validated on real-world manipulation tasks. We hope RoboGSim serves as a closed-loop simulator for fair comparison in policy learning. More information can be found on our project page: https://robogsim.github.io/
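The four components named in the abstract form a data-generation pipeline. A minimal illustrative sketch of that flow is below; the component names come from the paper, but every function body here is a hypothetical placeholder (the real system optimizes 3D Gaussians and runs a physics engine):

```python
from dataclasses import dataclass, field


@dataclass
class Scene:
    """Minimal stand-in for a reconstructed 3D scene (placeholder fields)."""
    gaussians: list = field(default_factory=list)   # splat parameters
    twins: dict = field(default_factory=dict)       # digital-twin assets
    episodes: list = field(default_factory=list)    # composed scene layouts


def gaussian_reconstructor(images):
    # Stand-in: the real reconstructor would optimize 3D Gaussians from images.
    return Scene(gaussians=[{"view": i} for i in range(len(images))])


def digital_twins_builder(scene, objects):
    # Stand-in: build manipulable twin assets for each scanned object.
    scene.twins = {name: {"mesh": f"{name}.obj"} for name in objects}
    return scene


def scene_composer(scene, layouts):
    # Re-arrange the twins into novel scene layouts for data diversity.
    scene.episodes = [{"layout": l, "objects": list(scene.twins)} for l in layouts]
    return scene


def interactive_engine(scene, policy):
    # Roll out a policy in each composed scene and record the episodes.
    return [{"layout": ep["layout"], "action": policy(ep)} for ep in scene.episodes]


scene = gaussian_reconstructor(images=["img0.png", "img1.png"])
scene = digital_twins_builder(scene, objects=["cup", "plate"])
scene = scene_composer(scene, layouts=["table_a", "table_b"])
rollouts = interactive_engine(scene, policy=lambda ep: "pick")
```

The point of the sketch is the staged structure: reconstruction and twin-building happen once per scan, while composition and rollout can be repeated cheaply to multiply the data.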
GARField: Addressing the visual Sim-to-Real gap in garment manipulation with mesh-attached radiance fields
Authors: Donatien Delehelle, Darwin G. Caldwell, Fei Chen
pre-print -> https://arxiv.org/abs/2410.05038
website -> https://ddonatien.github.io/garfield-website/
#robotics #deformable_manipulation #garment_manipulation #NeRF #deep_learning #synthetic_data #data_generation #real2sim
While humans intuitively manipulate garments and other textile items swiftly and accurately, this is a significant challenge for robots. A factor crucial to human performance is the ability to imagine, a priori, the intended result of a manipulation and hence develop predictions of the garment pose. That ability allows us to plan from highly obstructed states, adapt our plans as we collect more information, and react swiftly to unforeseen circumstances. Conversely, robots struggle to establish such intuitions and form tight links between plans and observations. We can partly attribute this to the high cost of obtaining densely labelled data for textile manipulation, both in quality and quantity. The problem of data collection is a long-standing issue in data-based approaches to garment manipulation. As of today, generating high-quality labelled garment manipulation data is mainly attempted through advanced data capture procedures that create simplified state estimations from real-world observations. This work instead proposes a novel approach to the problem by generating real-world observations from object states. To achieve this, we present GARField (Garment Attached Radiance Field), the first differentiable rendering architecture, to our knowledge, for data generation from simulated states stored as triangle meshes. Code is available on https://ddonatien.github.io/garfield-website/
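The key idea of a mesh-attached field is that samples are defined in the local frame of triangle faces, so they track the garment as it deforms. A toy sketch of that attachment mechanism, assuming nothing beyond NumPy (the radiance-field part is omitted; only the mesh binding is shown):

```python
import numpy as np

# Toy deformable mesh: four vertices, two triangles.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [1.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2], [1, 3, 2]])

# Attach sample points to faces via fixed barycentric coordinates:
# each sample stores (face id, barycentric weights), not a world position.
rng = np.random.default_rng(0)
face_ids = np.arange(8) % 2                 # half the samples on each face
bary = rng.dirichlet(np.ones(3), size=8)    # (8, 3), rows sum to 1


def attached_points(verts):
    """World positions of the samples; they track any vertex deformation."""
    tri = verts[faces[face_ids]]            # (8, 3, 3): triangle corners
    return np.einsum("nk,nkd->nd", bary, tri)


p0 = attached_points(vertices)
# Deform the mesh (lift one corner); attached samples follow automatically.
deformed = vertices.copy()
deformed[3, 2] = 0.5
p1 = attached_points(deformed)
```

Because the barycentric weights are fixed, the same sample identity persists across deformations, which is what lets a radiance field defined on these samples render the garment in any simulated pose.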
RL-GSBridge: 3D Gaussian Splatting Based Real2Sim2Real Method for Robotic Manipulation Learning
Authors: Yuxuan Wu, Lei Pan, Wenhua Wu, Guangming Wang, Yanzi Miao, Hesheng Wang
pre-print -> https://arxiv.org/abs/2409.20291
#robotics #deformable_manipulation #gaussian_splatting #gaussiansplatting #sim2real #real2sim #real2sim2real
Sim-to-real refers to the process of transferring policies learned in simulation to the real world, which is crucial for achieving practical robotics applications. However, recent sim-to-real methods rely either on large amounts of augmented data or on large learning models, both of which are inefficient for specific tasks. In recent years, with the emergence of radiance field reconstruction methods, especially 3D Gaussian Splatting, it has become possible to construct realistic real-world scenes. To this end, we propose RL-GSBridge, a novel real-to-sim-to-real framework that incorporates 3D Gaussian Splatting into the conventional RL simulation pipeline, enabling zero-shot sim-to-real transfer for vision-based deep reinforcement learning. We introduce a mesh-based 3D GS method with soft binding constraints, enhancing the rendering quality of mesh models. By then using a GS editing approach to synchronize rendering with the physics simulator, RL-GSBridge can accurately reflect the visual interactions of the physical robot. Through a series of sim-to-real experiments, including grasping and pick-and-place tasks, we demonstrate that RL-GSBridge maintains a satisfactory success rate in real-world task completion during sim-to-real transfer. Furthermore, a series of rendering metrics and visualization results indicate that our proposed mesh-based 3D GS reduces artifacts on unstructured objects, demonstrating more realistic rendering performance.
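A "soft" binding constraint, as opposed to rigidly pinning each splat to the mesh, typically penalizes drift only beyond a tolerance band. The sketch below is a hypothetical illustration of that idea (the exact loss in RL-GSBridge is not specified in the abstract; `sigma` and the hinge form are assumptions):

```python
import numpy as np


def soft_binding_loss(gauss_centers, anchor_points, sigma=0.05):
    """Penalize Gaussian centers that drift from their bound mesh anchors.

    A soft constraint: zero penalty within a tolerance band of width sigma,
    quadratic beyond it, so splats may deviate slightly from the mesh
    surface to improve rendering quality while still tracking the mesh.
    """
    d = np.linalg.norm(gauss_centers - anchor_points, axis=-1)
    slack = np.maximum(d - sigma, 0.0)   # free within sigma of the anchor
    return float(np.mean(slack ** 2))


anchors = np.zeros((4, 3))
centers = anchors + np.array([[0.00, 0.0, 0.0],
                              [0.03, 0.0, 0.0],   # inside the band: no penalty
                              [0.10, 0.0, 0.0],   # drifted: penalized
                              [0.20, 0.0, 0.0]])
loss = soft_binding_loss(centers, anchors)
```

Because the anchors move with the mesh in the physics simulator, minimizing a term like this keeps the rendered splats synchronized with simulated contact and motion while leaving slack for photometric fitting.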