Eeek~, this paper is seriously awesome too! ✨
Title & super-short summary (15 chars or less): Cross-modal geo-localization, a big win for business!
Gal-style sparkle points ✨ ×3
● Drones and satellite imagery are kind of romantic, right?
● AI has gotten smart enough to pinpoint a location just from words, isn't that amazing? 😳
● Infrastructure inspection, disaster response... it even contributes to society!
Detailed explanation
Real-world use-case ideas 💡 ×2
Read the rest in the 「らくらく論文」 app
We present a winning solution to RoboSense 2025 Track 4: Cross-Modal Drone Navigation. The task retrieves the most relevant geo-referenced image from a large multi-platform corpus (satellite/drone/ground) given a natural-language query. Two obstacles are severe inter-platform heterogeneity and a domain gap between generic training descriptions and platform-specific test queries. We mitigate these with a domain-aligned preprocessing pipeline and a Mixture-of-Experts (MoE) framework: (i) platform-wise partitioning, satellite augmentation, and removal of orientation words; (ii) an LLM-based caption-refinement pipeline that aligns textual semantics with the distinct visual characteristics of each platform. Using BGE-M3 (text) and EVA-CLIP (image) encoders, we train three platform experts with a progressive two-stage hard-negative mining strategy to enhance discriminative power, and fuse their scores at inference. The system tops the official leaderboard, demonstrating robust cross-modal geo-localization under heterogeneous viewpoints.
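To make the retrieval-and-fusion step concrete, here is a minimal sketch (not the authors' released code) of how three platform experts' similarity scores might be combined at inference. In the real system, text queries would be embedded with BGE-M3 and gallery images with EVA-CLIP, one fine-tuned pair per platform expert; the random placeholder embeddings, the `fuse_scores` helper, and the equal fusion weights below are illustrative assumptions, since the abstract does not specify the fusion rule.

```python
import numpy as np

def cosine_similarity(queries: np.ndarray, gallery: np.ndarray) -> np.ndarray:
    """Cosine similarity between every query and every gallery embedding."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return q @ g.T  # shape: (num_queries, num_gallery)

def fuse_scores(expert_scores: dict[str, np.ndarray],
                weights: dict[str, float]) -> np.ndarray:
    """Weighted sum of per-expert score matrices (hypothetical fusion rule)."""
    return sum(weights[name] * scores for name, scores in expert_scores.items())

# --- Toy stand-ins for the real encoders ------------------------------------
# The actual pipeline would use BGE-M3 for text and EVA-CLIP for images,
# fine-tuned per platform; random vectors keep this sketch self-contained.
rng = np.random.default_rng(0)
num_queries, num_gallery, dim = 4, 1000, 512
platforms = ("satellite", "drone", "ground")
query_emb = {p: rng.standard_normal((num_queries, dim)) for p in platforms}
gallery_emb = {p: rng.standard_normal((num_gallery, dim)) for p in platforms}

# Each platform expert scores every query against the whole gallery.
expert_scores = {p: cosine_similarity(query_emb[p], gallery_emb[p]) for p in platforms}

# Fuse the three experts' scores; equal weights are an assumption.
fused = fuse_scores(expert_scores, {p: 1 / 3 for p in platforms})

# Retrieve the top-5 gallery images for each query.
top5 = np.argsort(-fused, axis=1)[:, :5]
print(top5)
```

In practice the expert weights could be tuned on a validation split, or a platform classifier could route each query to a single expert; the weighted-sum fusion above is simply the most basic option consistent with "fuse their scores at inference".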