Published: 2025/10/23 6:51:52

Overcoming domain shift! Change detection VQA ✨ (ultra-short summary: generalization ability UP ↑)

1. Takes on both a social problem and an academic challenge!
2. Introduces the BrightVQA dataset & develops TCSSM!
3. Contributes to disaster response, urban planning, and environmental monitoring!

● Handles domain shift (bias in the data)!
● Lets you ask questions about changes without expert knowledge!
● Works with a wide range of real-world data!

Detailed explanation

Background: Detecting changes in remote sensing (Earth observation) images is an important technology! But there was a problem: performance drops when the data changes across regions or sensors 😱 Another bottleneck was that you couldn't really use it without expert knowledge.


Text-conditioned State Space Model For Domain-generalized Change Detection Visual Question Answering

Elman Ghazaei / Erchan Aptoula

The Earth's surface is constantly changing, and detecting these changes provides valuable insights that benefit various aspects of human society. While traditional change detection methods have been employed to detect changes from bi-temporal images, these approaches typically require expert knowledge for accurate interpretation. To enable broader and more flexible access to change information by non-expert users, the task of Change Detection Visual Question Answering (CDVQA) has been introduced. However, existing CDVQA methods have been developed under the assumption that training and testing datasets share similar distributions. This assumption does not hold in real-world applications, where domain shifts often occur. In this paper, the CDVQA task is revisited with a focus on addressing domain shift. To this end, a new multi-modal and multi-domain dataset, BrightVQA, is introduced to facilitate domain generalization research in CDVQA. Furthermore, a novel state space model, termed Text-Conditioned State Space Model (TCSSM), is proposed. The TCSSM framework is designed to leverage both bi-temporal imagery and geo-disaster-related textual information in a unified manner to extract domain-invariant features across domains. The input-dependent parameters of TCSSM are dynamically predicted from both the bi-temporal images and the geo-disaster-related description, thereby facilitating the alignment between bi-temporal visual data and the associated textual descriptions. Extensive experiments are conducted to evaluate the proposed method against state-of-the-art models, and superior performance is consistently demonstrated. The code and dataset will be made publicly available upon acceptance at https://github.com/Elman295/TCSSM.
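As a reading aid, here is a minimal PyTorch-style sketch of the mechanism the abstract describes: a selective state space layer whose input-dependent parameters (the step size delta and the projections B, C) are predicted from both the bi-temporal visual tokens and a pooled embedding of the geo-disaster description. The class name, the additive text fusion, and the slow sequential scan are illustrative assumptions, not the authors' TCSSM implementation.

```python
# Minimal sketch (assumption, not the authors' code) of a text-conditioned
# selective state space layer: the input-dependent SSM parameters (delta, B, C)
# are predicted from BOTH the visual tokens and a text embedding.
import torch
import torch.nn as nn


class TextConditionedSSM(nn.Module):
    def __init__(self, d_model: int, d_state: int = 16, d_text: int = 512):
        super().__init__()
        self.d_state = d_state
        # Static state matrix A, stored as a log for numerical stability.
        self.A_log = nn.Parameter(torch.log(torch.rand(d_model, d_state) + 1e-3))
        # Project the text embedding into the visual token space so it can
        # modulate parameter prediction (illustrative fusion choice).
        self.text_proj = nn.Linear(d_text, d_model)
        # Joint predictor for the input- and text-dependent delta, B, C.
        self.param_proj = nn.Linear(d_model, d_model + 2 * d_state)

    def forward(self, x: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) flattened bi-temporal visual tokens
        # text_emb: (batch, d_text) pooled embedding of the textual description
        b, l, d = x.shape
        cond = x + self.text_proj(text_emb).unsqueeze(1)      # fuse text into every token
        delta, B, C = torch.split(
            self.param_proj(cond), [d, self.d_state, self.d_state], dim=-1
        )
        delta = torch.nn.functional.softplus(delta)           # positive step sizes
        A = -torch.exp(self.A_log)                            # (d_model, d_state)
        # Discretize and run a simple sequential scan (slow reference version).
        h = x.new_zeros(b, d, self.d_state)
        ys = []
        for t in range(l):
            dt = delta[:, t].unsqueeze(-1)                    # (b, d_model, 1)
            A_bar = torch.exp(dt * A)                         # discretized state transition
            B_bar = dt * B[:, t].unsqueeze(1)                 # discretized input projection
            h = A_bar * h + B_bar * x[:, t].unsqueeze(-1)     # state update
            y = (h * C[:, t].unsqueeze(1)).sum(-1)            # read-out
            ys.append(y)
        return torch.stack(ys, dim=1)                         # (b, seq_len, d_model)
```

In a full model such a layer would sit inside a visual backbone and the Python loop would be replaced by a parallel scan; the sketch only illustrates how text conditioning can reach the input-dependent SSM parameters.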

cs / cs.CV