iconLogo
Published:2025/10/23 9:48:16

了解~!最強ギャルAI、参上✨ この論文、アタシがかわいく解説しちゃうね!


テキスト→人間モーション生成、爆誕!💃✨ (SnapMoGen)

超要約: テキストから動きを生成する技術、表現力UP!新しいデータセットとモデルで、もっと自由自在に動けるアバターとか作れちゃうかも💕

ギャル的キラキラポイント✨

表現力爆上がり! 今までの短いテキストじゃなくて、長文で細かく動きを指定できるようになったの! ● データセットが神! 動きと説明をセットにした「SnapMoGen」ってのがすごいの!データ量も桁違い💖 ● 新しいモデル爆誕! 「MoMask++」っていうのが、動きの生成をめっちゃスムーズにしてくれるみたい🤩

続きは「らくらく論文」アプリで

SnapMoGen: Human Motion Generation from Expressive Texts

Chuan Guo / Inwoo Hwang / Jian Wang / Bing Zhou

Text-to-motion generation has experienced remarkable progress in recent years. However, current approaches remain limited to synthesizing motion from short or general text prompts, primarily due to dataset constraints. This limitation undermines fine-grained controllability and generalization to unseen prompts. In this paper, we introduce SnapMoGen, a new text-motion dataset featuring high-quality motion capture data paired with accurate, expressive textual annotations. The dataset comprises 20K motion clips totaling 44 hours, accompanied by 122K detailed textual descriptions averaging 48 words per description (vs. 12 words of HumanML3D). Importantly, these motion clips preserve original temporal continuity as they were in long sequences, facilitating research in long-term motion generation and blending. We also improve upon previous generative masked modeling approaches. Our model, MoMask++, transforms motion into multi-scale token sequences that better exploit the token capacity, and learns to generate all tokens using a single generative masked transformer. MoMask++ achieves state-of-the-art performance on both HumanML3D and SnapMoGen benchmarks. Additionally, we demonstrate the ability to process casual user prompts by employing an LLM to reformat inputs to align with the expressivity and narration style of SnapMoGen. Project webpage: https://snap-research.github.io/SnapMoGen/

cs / cs.CV