Super-short summary: They found a way to cut down on video mix-ups (hallucinations)! Aiming to boost video understanding ✨
Gal-style sparkle points ✨
● It cuts out mistakes about "actions" and the "flow of time" in videos! Amazing ✨
● The key seems to be making counterfactual videos, slightly different versions of the original 💡
● AI video understanding levels up, so it'll be useful for all sorts of things 🫶
Detailed explanation
Video-language models (VLMs) achieve strong multimodal understanding but remain prone to hallucinations, especially when reasoning about actions and temporal order. Existing mitigation strategies, such as textual filtering or random video perturbations, often fail to address the root cause: over-reliance on language priors rather than fine-grained visual dynamics. We propose a scalable framework for counterfactual video generation that synthesizes videos differing only in actions or temporal structure while preserving scene context. Our pipeline combines multimodal LLMs for action proposal and editing guidance with diffusion-based image and video models to generate semantic hard negatives at scale. Using this framework, we build CounterVid, a synthetic dataset of ~26k preference pairs targeting action recognition and temporal reasoning. We further introduce MixDPO, a unified Direct Preference Optimization approach that jointly leverages textual and visual preferences. Fine-tuning Qwen2.5-VL with MixDPO yields consistent improvements, notably in temporal ordering, and transfers effectively to standard video hallucination benchmarks. Code and models will be made publicly available.
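The abstract says MixDPO "jointly leverages textual and visual preferences" but does not spell out the objective. Below is a minimal sketch in PyTorch, assuming the standard DPO loss (Rafailov et al., 2023) applied to both preference types and combined with a simple weighted mixture; the names `dpo_loss` and `mixdpo_loss` and the mixing weight `lam` are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Standard DPO objective on one batch of preference pairs.
    # Each argument is a (batch,) tensor of summed token
    # log-probabilities for the chosen / rejected response; `beta`
    # scales the implicit KL penalty against the reference model.
    policy_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (policy_logratios - ref_logratios)).mean()

def mixdpo_loss(text_pairs, video_pairs, beta=0.1, lam=0.5):
    # Assumed MixDPO-style mixture: one DPO term over textual pairs
    # (same video, preferred vs. hallucinated caption) and one over
    # visual pairs (same caption, original vs. counterfactual video).
    # `lam` is a hypothetical mixing weight, not taken from the paper.
    return (lam * dpo_loss(*text_pairs, beta=beta)
            + (1.0 - lam) * dpo_loss(*video_pairs, beta=beta))

# Toy usage with random log-probabilities; in practice these would come
# from scoring (video, caption) pairs under the trainable policy and a
# frozen reference copy of Qwen2.5-VL.
batch = lambda: torch.randn(4, requires_grad=True)
loss = mixdpo_loss((batch(), batch(), batch(), batch()),
                   (batch(), batch(), batch(), batch()))
loss.backward()
```

Under this reading, the counterfactual videos from the CounterVid pipeline supply the rejected side of the visual pairs, so the model is penalized for preferring a caption that matches the edited action or shuffled temporal order over the original footage.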