Published: 2026/1/11 10:12:11

Understanding Video in Real Time! The Future Is Hot ✨

Super Summary: An AI that understands video *while* watching it has burst onto the scene! With ultra-low delay, it can do all kinds of things in real time!

🌟 Gal-Style Sparkle Points ✨
● It processes video *as it streams in*! AI until now had to watch the whole video first, but this one's different!
● Low latency means all sorts of things can happen in real time!
● It's just a small tweak to existing AI! Easy yet amazing, how great is that?

Detailed Explanation
● Background: Until now, AI (MLLMs) couldn't understand a video without watching it all the way through 😭 But not anymore! AI that understands video in real time is what everyone wants, so that's why they did this research!

● Method: They took down the roadblock called "positional encoding" (really, the global continuity constraint it imposes) with a new approach! Three new schemes, OSPE, GDPE, and GIPE (the Overlapped, Group-Decoupled, and Gap-Isolated designs from the paper), let the model split the stream and process it as it goes! See the toy sketch below.
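To make the "decoupling" concrete, here is a tiny Python sketch, assuming (hypothetically) that GDPE's core move is giving perception tokens and generation tokens independent position counters instead of one shared global counter. The function `assign_positions` and the two-group setup are illustrative assumptions, not the authors' implementation.

```python
# A minimal, hypothetical sketch of the Group-Decoupled idea (GDPE):
# rather than one global position counter shared by all tokens, each
# token group (video vs. text) keeps its own counter. New video chunks
# can then be perceived while text generation continues, without
# renumbering anything. Names here are illustrative, not the paper's API.

def assign_positions(events):
    """events: list of (group, num_tokens) chunks in arrival order,
    where group is "video" or "text". Returns (group, position) pairs."""
    counters = {"video": 0, "text": 0}  # independent counter per group
    positions = []
    for group, n in events:
        for _ in range(n):
            positions.append((group, counters[group]))
            counters[group] += 1
    return positions

# Video chunks and generated text interleave freely; positions stay
# contiguous *within* each group, relaxing global positional continuity.
stream = [("video", 4), ("text", 2), ("video", 4), ("text", 3)]
print(assign_positions(stream))
```

Running this shows why interleaving is now harmless: appending another video chunk never shifts the positions already assigned to generated text, which is exactly the coupling the standard global scheme cannot avoid.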

Read the rest in the 「らくらく論文」 app

Speak While Watching: Unleashing TRUE Real-Time Video Understanding Capability of Multimodal Large Language Models

Junyan Lin / Junlong Tong / Hao Wu / Jialiang Zhang / Jinming Liu / Xin Jin / Xiaoyu Shen

Multimodal Large Language Models (MLLMs) have achieved strong performance across many tasks, yet most systems remain limited to offline inference, requiring complete inputs before generating outputs. Recent streaming methods reduce latency by interleaving perception and generation, but still enforce a sequential perception-generation cycle, limiting real-time interaction. In this work, we target a fundamental bottleneck that arises when extending MLLMs to real-time video understanding: the global positional continuity constraint imposed by standard positional encoding schemes. While natural in offline inference, this constraint tightly couples perception and generation, preventing effective input-output parallelism. To address this limitation, we propose a parallel streaming framework that relaxes positional continuity through three designs: Overlapped, Group-Decoupled, and Gap-Isolated. These designs enable simultaneous perception and generation, allowing the model to process incoming inputs while producing responses in real time. Extensive experiments reveal that Group-Decoupled achieves the best efficiency-performance balance, maintaining high fluency and accuracy while significantly reducing latency. We further show that the proposed framework yields up to 2x acceleration under balanced perception-generation workloads, establishing a principled pathway toward speak-while-watching real-time systems. We make all our code publicly available: https://github.com/EIT-NLP/Speak-While-Watching.
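As a rough intuition for the input-output parallelism the abstract describes, here is a minimal Python sketch, assuming a producer-consumer split between perception and generation. The thread/queue structure and the `perceive` / `generate` names are hypothetical scaffolding for illustration, not the released implementation (see the GitHub link above for the authors' code).

```python
# A toy concurrency sketch (not the authors' code) of "speak while
# watching": a perception thread keeps ingesting frames while the
# generation loop emits output conditioned on everything seen so far,
# instead of waiting for the full video before responding.
import threading
import queue
import time

frames = queue.Queue()

def perceive(n_frames=6):
    """Simulate a frame encoder pushing features as they arrive."""
    for i in range(n_frames):
        time.sleep(0.05)          # stand-in for per-frame encoding cost
        frames.put(f"frame-{i}")
    frames.put(None)              # end-of-stream marker

def generate():
    """Emit a 'token' each time new perception arrives."""
    seen = 0
    while frames.get() is not None:
        seen += 1
        print(f"token emitted after {seen} frames")

watcher = threading.Thread(target=perceive)
watcher.start()
generate()  # runs concurrently with ongoing perception
watcher.join()
```

In an offline MLLM, the `generate` step could not start until `perceive` had finished the whole video; overlapping the two is what makes the "speak-while-watching" latency savings possible.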

cs / cs.CV / cs.CL