Brie Wensleydale (@SlipperyGem)

VNCCS has released 3D Pose Studio, a new tool specialized for 3D pose editing and lighting adjustment; it is described as offering a high degree of control over both pose and lighting.

https://x.com/SlipperyGem/status/2016560249260691839

#vnccs #3dpose #poseestimation #3d


Oh NICE! VNCCS's 3D Pose Studio is out! It offers an unrivaled amount of control for both pose and lighting ~


We launched our project page for 3D-MuPPET https://alexhang212.github.io/3D-MuPPET/.

A framework to estimate and track 3D poses of up to 10 #pigeons at interactive speed. We show that 3D-MuPPET also works in natural environments without model fine-tuning on additional annotations.

#MuPPET
#PoseEstimation
#3dpose
#tracking
#computervision
#collectivebehaviour
#UniKonstanz
#CBehav
#cv4animals

3D-MuPPET: 3D Multi-Pigeon Pose Estimation and Tracking


@lili congrats and good luck! I am being shamelessly self-serving when I say that I hope you have great success with the #3Dpose estimation work. Especially if it can generalize to #monkey!

Our latest work "Neural Puppeteer" is published at https://link.springer.com/chapter/10.1007/978-3-031-26316-3_15.

We estimate 3D keypoints from multi-view silhouettes only, using our inverse neural rendering pipeline. This makes our 3D keypoint estimation robust against transformations that leave silhouettes unchanged, such as changes in texture and lighting.

#NePu #NeuralRendering #PoseEstimation #3dpose #computervision #CBehav #UniKonstanz

Neural Puppeteer: Keypoint-Based Neural Rendering of Dynamic Shapes

We introduce Neural Puppeteer, an efficient neural rendering pipeline for articulated shapes. By inverse rendering, we can predict 3D keypoints from multi-view 2D silhouettes alone, without requiring texture information. Furthermore, we can easily predict 3D...
