Published: 2025/12/25 8:30:04

Driving Robots from Monocular Video!🤖✨ (Super Summary)

1. Highlights✨

● Copies human motion onto a robot using just one camera (monocular)!👀✨
● No high-end equipment needed, you can do it easily!🥳
● It works across many different robots, so the business potential is huge!💰

2. Detailed Explanation

  • Background: Research on making robots imitate human motion (retargeting) already existed, but it was hard💦 It required high-end equipment and specialized expertise… This work develops a much more accessible way to do it!


World-Coordinate Human Motion Retargeting via SAM 3D Body

Zhangzheng Tu / Kailun Su / Shaolong Zhu / Yukun Zheng

Recovering world-coordinate human motion from monocular videos with humanoid robot retargeting is significant for embodied intelligence and robotics. To avoid complex SLAM pipelines or heavy temporal models, we propose a lightweight, engineering-oriented framework that leverages SAM 3D Body (3DB) as a frozen perception backbone and uses the Momentum HumanRig (MHR) representation as a robot-friendly intermediate. Our method (i) locks the identity and skeleton-scale parameters of each tracked subject to enforce temporally consistent bone lengths, (ii) smooths per-frame predictions via efficient sliding-window optimization in the low-dimensional MHR latent space, and (iii) recovers physically plausible global root trajectories with a differentiable soft foot-ground contact model and contact-aware global optimization. Finally, we retarget the reconstructed motion to the Unitree G1 humanoid using a kinematics-aware two-stage inverse kinematics pipeline. Results on real monocular videos show that our method produces stable world trajectories and reliable robot retargeting, indicating that structured human representations with lightweight physical constraints can yield robot-ready motion from monocular input.
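The abstract's point (iii), a differentiable soft foot-ground contact model, is not spelled out in this summary, but the general idea can be sketched: softly gate each frame by how near-ground and slow-moving a foot is, then penalize horizontal sliding and ground penetration under that gate. The function names, thresholds, and up-axis convention below are all illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def soft_contact_weights(foot_height, foot_speed, h_scale=0.03, v_scale=0.05):
    # Hypothetical soft gates: a foot that is near the ground AND barely
    # moving gets a contact weight close to 1; scales are made-up values.
    near_ground = 1.0 / (1.0 + np.exp(foot_height / h_scale - 1.0))
    slow = 1.0 / (1.0 + np.exp(foot_speed / v_scale - 1.0))
    return near_ground * slow

def contact_loss(foot_pos):
    # foot_pos: (T, 3) world-frame foot positions over T frames; z is up.
    vel = np.diff(foot_pos, axis=0)          # per-frame displacement
    speed = np.linalg.norm(vel, axis=1)
    height = foot_pos[:-1, 2]
    w = soft_contact_weights(height, speed)
    slide = np.sum(vel[:, :2] ** 2, axis=1)  # horizontal sliding
    penetrate = np.minimum(height, 0.0) ** 2  # below-ground penalty
    return float(np.sum(w * slide + penetrate))
```

Because every term is smooth in `foot_pos`, the same expression could serve as one objective inside a gradient-based global trajectory optimization; a foot planted on the ground contributes no loss, while a sliding grounded foot does.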

cs / cs.RO