Ultra-short summary: SDCD (Structure-Disrupted Contrastive Decoding) curbs AI "hallucinations" — when the AI generates stuff that isn't in the image!
✨ Sparkly Gal Highlights ✨
● Noticed that when the AI recognizes an image, it only looks at local (partial) information and misses the picture as a whole!
● Pinpointed why the AI "hallucinates" weird stuff when the image is changed just a little!
● A new method called SDCD cuts down the AI's "hallucinations" and makes it even smarter!
Detailed Explanation
Background: Recent AIs (LVLMs) have gotten really good at understanding images and text, but sometimes they generate stuff that doesn't match the image! This is called "hallucination," and it hurts the AI's reliability 😭
Read the rest in the 「らくらく論文」 app
Large Vision-Language Models (LVLMs) demonstrate significant progress in multimodal understanding and reasoning, yet object hallucination remains a critical challenge. While existing research focuses on mitigating language priors or high-level statistical biases, it often overlooks the internal complexities of the visual encoding process. We identify that visual statistical bias, arising from the inherent Bag-of-Patches behavior of Vision Encoders under weak structural supervision, acts as a contributing factor to object hallucinations. Under this bias, models prioritize local texture features within individual patches over holistic geometric structures. This tendency may induce spurious visual confidence and result in hallucinations. To address this, we introduce a training-free algorithm called Structure-Disrupted Contrastive Decoding (SDCD), which performs contrastive calibration of the output distribution by introducing a shuffled, structure-disrupted view of the image. By penalizing tokens that maintain high confidence under this structure-less view, SDCD effectively suppresses the texture-driven bias. Experimental results demonstrate that SDCD significantly mitigates hallucinations across multiple benchmarks and enhances the overall multimodal capabilities of LVLMs.
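The mechanism the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the patch size, the weight `alpha`, and the exact combination rule `(1 + alpha) * logits_orig - alpha * logits_shuffled` (the standard contrastive-decoding form) are all assumptions.

```python
import numpy as np


def shuffle_patches(image, patch_size, rng):
    """Build the structure-disrupted view: randomly permute non-overlapping
    patches so local textures survive but global geometry is destroyed.
    Assumes image height/width are multiples of patch_size."""
    h, w = image.shape[:2]
    patches = [image[i:i + patch_size, j:j + patch_size]
               for i in range(0, h, patch_size)
               for j in range(0, w, patch_size)]
    order = rng.permutation(len(patches))  # random patch permutation
    patches = [patches[k] for k in order]
    rows, cols = h // patch_size, w // patch_size
    return np.vstack([np.hstack(patches[r * cols:(r + 1) * cols])
                      for r in range(rows)])


def sdcd_adjust(logits_orig, logits_shuffled, alpha=1.0):
    """Contrastive calibration: tokens that stay confident under the
    structure-less view are penalized; alpha is a hypothetical weight."""
    return (1 + alpha) * logits_orig - alpha * logits_shuffled
```

A token whose logit stays high for both views (i.e., its confidence is texture-driven) loses probability mass relative to a token whose confidence depends on the intact global structure, which is the suppression effect the abstract attributes to SDCD.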