Ultra-short summary: Autonomous-driving simulation videos look gorgeous, but the motion is off? Let's dig into why, and what to do about it!
✨ Gal-Style Sparkle Points ✨
● Pretty visuals alone won't cut it 🙅♀️ what counts is accurate motion!
● Calls out a surprising pitfall of fine-tuning (small-scale re-training) 👀
● Continual learning (keeping the model updated) might unlock even better simulations ✨
Detailed Explanation
Recent advancements in video generation have substantially improved visual quality and temporal coherence, making these models increasingly appealing for applications such as autonomous driving, particularly in the context of driving simulation and so-called "world models". In this work, we investigate the effects of existing fine-tuning approaches for video generation on structured driving datasets and uncover a potential trade-off: although visual fidelity improves, spatial accuracy in modeling dynamic elements may degrade. We attribute this degradation to a shift in the alignment between the visual-quality and dynamic-understanding objectives. In datasets with diverse scene structure over time, where objects and the camera perspective shift in varied ways, these two objectives tend to be highly correlated. However, the very regular and repetitive nature of driving scenes allows visual quality to improve by modeling dominant scene motion patterns, without necessarily preserving fine-grained dynamic behavior. As a result, fine-tuning encourages the model to prioritize surface-level realism over dynamic accuracy. To examine this phenomenon further, we show that simple continual learning strategies, such as replay from diverse domains, can offer a balanced alternative, preserving spatial accuracy while maintaining strong visual quality.
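The abstract names the strategy ("replay from diverse domains") but not its implementation, so here is a minimal PyTorch sketch of what replay-based fine-tuning could look like: each optimization step mixes clips from the target driving dataset with replayed clips from a diverse source domain. Everything here is an assumption for illustration, not the paper's actual setup: the `model.loss()` API, both dataset objects, and the 25% replay ratio are hypothetical stand-ins.

```python
# Sketch: replay-based continual learning during fine-tuning (assumed setup,
# not the paper's exact recipe). Mixing in diverse-domain clips keeps the
# gradient signal penalizing errors on varied motions, counteracting a
# collapse onto the dominant motion patterns of driving scenes.
import torch
from torch.utils.data import DataLoader, Dataset


def replay_batches(driving_ds: Dataset, diverse_ds: Dataset,
                   batch_size: int = 8, replay_ratio: float = 0.25):
    """Yield mixed batches: driving clips plus a replayed slice of diverse clips."""
    n_replay = max(1, int(batch_size * replay_ratio))
    target_loader = DataLoader(driving_ds, batch_size=batch_size - n_replay, shuffle=True)
    replay_loader = DataLoader(diverse_ds, batch_size=n_replay, shuffle=True)
    replay_iter = iter(replay_loader)
    for target in target_loader:              # one epoch over the driving data
        try:
            replay = next(replay_iter)
        except StopIteration:                 # recycle the diverse domain as needed
            replay_iter = iter(replay_loader)
            replay = next(replay_iter)
        yield torch.cat([target, replay], dim=0)


def finetune_with_replay(model, driving_ds, diverse_ds, lr: float = 1e-5):
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for batch in replay_batches(driving_ds, diverse_ds):
        loss = model.loss(batch)              # hypothetical: returns a scalar loss
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The design intent behind this kind of replay is simple: if every batch were pure driving data, the model could lower its loss by fitting the dataset's repetitive motion statistics; interleaving varied scenes keeps fine-grained dynamic accuracy in the objective.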