Ultra-short summary: a technique that uses 3D Gaussian Splatting (3DGS) to animate human avatars realistically! 🌟
✨ Sparkly highlights ✨ ● 3DGS is amazing — it can render images that look just like photos! 📸 ● The motion is natural too, so you can make avatars that move like the real thing! 💃 ● Games 🎮, VR, e-commerce... it works across all kinds of fields, so the future looks seriously bright! 💖
Now for the detailed explanation!
Background: 3D avatars show up all the time in games and VR, but wouldn't it be great if they could move more realistically? 🥺 With conventional techniques, the rendering often looked a bit rough and the motion a bit stiff.
We present a novel framework for animating humans in 3D scenes using 3D Gaussian Splatting (3DGS), a neural scene representation that has recently achieved state-of-the-art photorealistic results for novel-view synthesis but remains under-explored for human-scene animation and interaction. Unlike existing animation pipelines that rely on meshes or point clouds as the underlying 3D representation, our approach adopts 3DGS for animating humans in scenes. By representing both humans and scenes as Gaussians, our approach enables geometry-consistent free-viewpoint rendering of humans interacting with 3D scenes. Our key insight is that rendering can be decoupled from motion synthesis, so each sub-problem can be addressed independently without paired human-scene data. Central to our method is a Gaussian-aligned motion module that synthesizes motion without explicit scene geometry, using opacity-based cues and projected Gaussian structures to guide human placement and pose alignment. To ensure natural interactions, we further propose a human-scene Gaussian refinement optimization that enforces realistic contact and navigation. We evaluate our approach on scenes from ScanNet++ and the SuperSplat library, and on avatars reconstructed from sparse and dense multi-view human capture. Finally, we demonstrate that our framework enables novel applications such as geometry-consistent free-viewpoint rendering of edited monocular RGB videos with newly animated humans, showcasing the unique advantages of 3DGS for monocular video-based human animation. To assess the full quality of our results, we encourage readers to view the supplementary material available at https://miraymen.github.io/aha/ .
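To make the "opacity-based cues" idea concrete, here is a minimal toy sketch in Python. It is not the paper's actual method or data format — the field names, radius, and scoring rule are all illustrative assumptions — but it shows the general flavor: treat a scene as a set of Gaussians with opacities, project them to the floor plane, and score candidate human placements by the opacity-weighted occupancy nearby (low score = mostly free space).

```python
import numpy as np

# Toy scene: each Gaussian has a 3D center and an opacity (alpha).
# This loosely mirrors the 3DGS representation; the layout is illustrative.
rng = np.random.default_rng(0)
n = 500
centers = rng.uniform(-5.0, 5.0, size=(n, 3))   # Gaussian means (x, y, z)
opacities = rng.uniform(0.0, 1.0, size=n)       # per-Gaussian alpha

def placement_score(xy, centers, opacities, radius=0.5):
    """Opacity-weighted occupancy around a candidate floor location (x, y).

    Projects Gaussians to the ground plane and sums the opacity of those
    within `radius`. A toy stand-in for an opacity-based placement cue.
    """
    d = np.linalg.norm(centers[:, :2] - xy, axis=1)
    return float(np.sum(opacities[d < radius]))

# Scan a coarse grid over the floor and keep the emptiest cell.
grid = [(x, y) for x in np.linspace(-5, 5, 21) for y in np.linspace(-5, 5, 21)]
best = min(grid, key=lambda xy: placement_score(np.array(xy), centers, opacities))
print("candidate placement:", best)
```

A real system would of course reason over projected Gaussian extents and pose alignment rather than a 2D occupancy grid, but the same principle applies: opacity distinguishes solid geometry from empty space without needing an explicit mesh.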