Ultra-short summary: An AI that cleverly understands first-person (egocentric) video by borrowing third-person (exocentric) knowledge!
Gal-style sparkle points ✨
● Egocentric video understanding gets a huge boost! Smart glasses and VR/AR are about to get way more fun ♪
● Putting exocentric information to work on egocentric video is such a fresh, novel idea 💖
● The datasets and the training method are all original too — truly one-of-a-kind tech 😎
Detailed explanation
AI personal assistants, deployed through robots or wearables, require embodied understanding to collaborate effectively with humans. However, current Multimodal Large Language Models (MLLMs) primarily focus on third-person (exocentric) vision, overlooking the unique challenges of first-person (egocentric) videos. Additionally, the high cost of acquiring egocentric data limits dataset size, impairing MLLM performance. To address these challenges, we propose learning the mapping between the exocentric and egocentric domains, leveraging the extensive exocentric knowledge already present in existing MLLMs to enhance egocentric video understanding. To this end, we introduce Ego-ExoClip, a pre-training dataset of 1.1M synchronized ego-exo clip-text pairs derived from Ego-Exo4D, together with EgoIT, an instruction-tuning dataset collected from multiple sources to strengthen the model's instruction-following capabilities. Building on these datasets, we propose a migration strategy and design a progressive mapping-learning pipeline with three stages: Demonstrator Self-Preparation, Demonstrator-Learner Guidance, and Learner Self-Practice. Extensive experiments across diverse egocentric tasks show that existing MLLMs perform inadequately on egocentric video understanding, while our model significantly outperforms these leading models.
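The three-stage pipeline above could be organized as a staged fine-tuning schedule in which different parameter groups are unfrozen at each stage. The sketch below is only an illustration: the stage names come from the abstract, but the parameter groups (`exo_encoder`, `ego_encoder`, `mapper`) and the freezing schedule are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a progressive mapping-learning schedule.
# Stage names follow the abstract; the parameter groups and the
# choice of which groups train in each stage are assumptions.

STAGES = [
    # (stage name, parameter groups assumed trainable in that stage)
    ("Demonstrator Self-Preparation", {"exo_encoder"}),
    ("Demonstrator-Learner Guidance", {"exo_encoder", "ego_encoder", "mapper"}),
    ("Learner Self-Practice", {"ego_encoder", "mapper"}),
]

ALL_GROUPS = {"exo_encoder", "ego_encoder", "mapper"}


def freeze_plan(stage_name: str) -> dict[str, bool]:
    """Map each parameter group to True (trainable) or False (frozen)
    for the given stage of the progressive schedule."""
    for name, trainable in STAGES:
        if name == stage_name:
            return {g: (g in trainable) for g in ALL_GROUPS}
    raise ValueError(f"unknown stage: {stage_name}")


def run_schedule() -> list[str]:
    """Walk the stages in order, returning a log of which groups
    would be updated at each stage."""
    log = []
    for name, _ in STAGES:
        plan = freeze_plan(name)
        active = sorted(g for g, on in plan.items() if on)
        log.append(f"{name}: train {', '.join(active)}")
    return log
```

In an actual framework, `freeze_plan` would translate into setting `requires_grad` (or the equivalent) on the corresponding parameter groups before each stage's optimization loop; the point of the sketch is just the progressive demonstrator-to-learner handoff.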