BiPrompt is unbeatable! Tackling visual & textual bias at once✨
Title & Super-Short Summary: BiPrompt! The magic that erases both kinds of VLM bias🪄
Gal-Style Sparkle Points✨
● Attacks visual and textual bias at the same time👊
● Just adjusts at test time, no extra training needed, so it's easy🎵
● The AI's trustworthiness shoots way up⤴︎, so you can use it with peace of mind💖
Detailed Explanation
Ideas for Real-World Uses💡
Read the rest in the "らくらく論文" app
Vision-language foundation models such as CLIP exhibit impressive zero-shot generalization yet remain vulnerable to spurious correlations across visual and textual modalities. Existing debiasing approaches often address a single modality, either visual or textual, leading to partial robustness and unstable adaptation under distribution shifts. We propose a bilateral prompt optimization framework (BiPrompt) that simultaneously mitigates non-causal feature reliance in both modalities during test-time adaptation. On the visual side, it employs structured attention-guided erasure to suppress background activations and enforce orthogonal prediction consistency between causal and spurious regions. On the textual side, it introduces balanced prompt normalization, a learnable re-centering mechanism that aligns class embeddings toward an isotropic semantic space. Together, these modules jointly minimize the conditional mutual information between spurious cues and predictions, steering the model toward causal, domain-invariant reasoning without retraining or domain supervision. Extensive evaluations on real-world and synthetic bias benchmarks demonstrate consistent improvements in both average and worst-group accuracies over prior test-time debiasing methods, establishing a lightweight yet effective path toward trustworthy and causally grounded vision-language adaptation.
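Since only the abstract is available here, the following is a minimal, purely illustrative PyTorch sketch of how the two test-time pieces could be wired together, assuming a CLIP-like backbone that exposes per-patch image features, per-patch attention scores, and class text embeddings. Every name and detail in it (`attention_guided_erasure`, `balanced_prompt_normalization`, the top-k "erasure", the KL-to-uniform term, the loss weights) is a placeholder guess, not the paper's actual implementation; the real structured erasure, orthogonal consistency term, and conditional-mutual-information objective are presumably more involved.

```python
import torch
import torch.nn.functional as F

def attention_guided_erasure(patch_feats, attn, keep_ratio=0.5):
    """Split per-patch features into a 'causal' pool (high-attention patches)
    and a 'spurious' pool (low-attention / background patches).
    patch_feats: (B, P, D), attn: (B, P). The hard top-k split is a crude
    stand-in for the paper's structured attention-guided erasure."""
    B, P, D = patch_feats.shape
    k = max(1, min(P - 1, int(P * keep_ratio)))
    order = attn.argsort(dim=1, descending=True)                   # (B, P)
    take = lambda idx: patch_feats.gather(1, idx.unsqueeze(-1).expand(-1, -1, D))
    causal = take(order[:, :k]).mean(dim=1)                        # (B, D)
    spurious = take(order[:, k:]).mean(dim=1)                      # (B, D)
    return causal, spurious

def balanced_prompt_normalization(text_emb, delta):
    """Re-center class text embeddings with a shared learnable offset `delta`
    and renormalize, nudging the class directions toward a more isotropic layout."""
    centered = text_emb - text_emb.mean(dim=0, keepdim=True) + delta
    return F.normalize(centered, dim=-1)

def biprompt_tta_loss(causal, spurious, text_emb, delta,
                      tau=0.01, lam_ent=1.0, lam_orth=1.0):
    """Illustrative test-time objective: minimize prediction entropy on the
    causal pool while pushing spurious-pool predictions toward uniform,
    a rough proxy for decoupling predictions from spurious cues."""
    text = balanced_prompt_normalization(text_emb, delta)
    logits_c = F.normalize(causal, dim=-1) @ text.t() / tau
    logits_s = F.normalize(spurious, dim=-1) @ text.t() / tau
    p_c = logits_c.softmax(dim=-1)
    p_s = logits_s.softmax(dim=-1)
    ent = -(p_c * p_c.clamp_min(1e-8).log()).sum(dim=-1).mean()
    uniform = torch.full_like(p_s, 1.0 / p_s.shape[-1])
    decouple = F.kl_div(p_s.clamp_min(1e-8).log(), uniform, reduction="batchmean")
    return lam_ent * ent + lam_orth * decouple

# Toy run with random tensors standing in for CLIP outputs.
B, P, D, C = 4, 49, 512, 10
patch_feats, attn = torch.randn(B, P, D), torch.rand(B, P)
text_emb = torch.randn(C, D)
delta = torch.zeros(D, requires_grad=True)      # the only parameter updated at test time
optimizer = torch.optim.SGD([delta], lr=1e-2)

causal, spurious = attention_guided_erasure(patch_feats, attn)
loss = biprompt_tta_loss(causal, spurious, text_emb, delta)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"test-time loss: {loss.item():.4f}")
```

The point of the sketch is only the overall shape implied by the abstract: at test time a small text-side correction and an image-side split into high- and low-attention regions are optimized jointly, with no retraining of the backbone and no domain labels.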