The ultimate gal explainer AI has arrived~! ✨
Super-short summary: Image-understanding AI is bad at recalling facts from what it sees in a picture 💥 This research shows a path toward fixing that!
● Found that AI struggles to answer factual questions from images 👀! ● Figures out what causes the AI's "mix-ups" 🧐! ● Could lead to building way more trustworthy AI 💖!
Background: Those amazing Vision Language Models (VLMs: AI that understands both images and language) keep popping up lately, right? They're great at answering "what is this?" about an image, but recalling facts from a photo seems to be kinda hard for them 😢 Like a gal who trusts whatever she saw on social media but forgets what was in the textbook? lol
Through a controlled study, we identify a systematic deficiency in the multimodal grounding of Vision Language Models (VLMs). While VLMs can recall factual associations when provided a textual reference to an entity, their ability to do so is significantly diminished when the reference is visual instead. Forcing VLMs to rely on image representations of an entity halves their ability to recall factual knowledge, suggesting that VLMs struggle to link their internal knowledge of an entity with its image representation. We show that such linking failures are correlated with the expression of distinct patterns in model internal states, and that probes on these internal states achieve over 92% accuracy at flagging cases where the VLM response is unreliable. These probes can be applied, without retraining, to identify when a VLM will fail to correctly answer a question that requires an understanding of multimodal input. When used to facilitate selective prediction on a visual question answering task, the probes increase coverage by 7.87% (absolute) while also reducing the risk of error by 0.9% (absolute). Addressing this systematic, detectable deficiency is an important avenue in language grounding, and we provide informed recommendations for future directions.
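To make the probe idea a bit more concrete, here is a minimal sketch, assuming a simple logistic-regression probe trained on pre-extracted VLM hidden-state vectors and then used for selective prediction (answer only when the probe says the response looks reliable). The feature extraction, data, and decision threshold below are stand-ins invented for illustration, not the authors' actual setup.

```python
# Hypothetical sketch (not the paper's released code): a linear probe on VLM
# hidden states, used to abstain when the model's answer looks unreliable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder "hidden states": one feature vector per (image, question) pair,
# labeled 1 if the VLM's answer was correct and 0 otherwise. In practice these
# would be extracted from the VLM on a held-out labeled set.
hidden_dim = 64
X_train = rng.normal(size=(500, hidden_dim))
y_train = rng.integers(0, 2, size=500)
X_test = rng.normal(size=(100, hidden_dim))
y_test = rng.integers(0, 2, size=100)

# Train the probe to flag unreliable responses from internal states alone.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Selective prediction: answer only when the probe predicts reliability.
# Coverage = fraction of questions answered; risk = error rate among them.
answered = probe.predict_proba(X_test)[:, 1] >= 0.5
coverage = answered.mean()
risk = 1.0 - y_test[answered].mean() if answered.any() else 0.0
print(f"coverage: {coverage:.2%}, risk: {risk:.2%}")
```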