Published: 2025/10/23 8:20:46

CLIP-IN is here! Image understanding, boosted to the max 🚀💕

Ultra-short summary: This is research that makes CLIP, the AI that understands images in super-fine detail, even smarter!

✨ Gyaru-Style Sparkle Points ✨
● Spots even the tiniest differences 👀✨ thanks to training on "instruction-editing data"!
● Long captions crank the smarts way up 💖!
● Big win for the IT industry ⤴︎! All kinds of services are about to level up!

Now for the detailed breakdown~!

Background: CLIP-chan, a VLM (Vision-Language Model, an AI that understands both images and language), is a star performer, but she used to struggle with telling fine details apart 🥺 Like the difference between "a cat with a red ribbon 🎀" and "a cat with a yellow ribbon 🎀"!


VITRIX-CLIPIN: Enhancing Fine-Grained Visual Understanding in CLIP via Instruction Editing Data and Long Captions

Ziteng Wang / Siqi Yang / Limeng Qiao / Lin Ma

Despite the success of Vision-Language Models (VLMs) like CLIP in aligning vision and language, their proficiency in detailed, fine-grained visual comprehension remains a key challenge. We present CLIP-IN, a novel framework that bolsters CLIP's fine-grained perception through two core innovations. Firstly, we leverage instruction-editing datasets, originally designed for image manipulation, as a unique source of hard negative image-text pairs. Coupled with a symmetric hard negative contrastive loss, this enables the model to effectively distinguish subtle visual-semantic differences. Secondly, CLIP-IN incorporates long descriptive captions, utilizing rotary positional encodings to capture rich semantic context often missed by standard CLIP. Our experiments demonstrate that CLIP-IN achieves substantial gains on the MMVP benchmark and various fine-grained visual recognition tasks, without compromising robust zero-shot performance on broader classification and retrieval tasks. Critically, integrating CLIP-IN's visual representations into Multimodal Large Language Models significantly reduces visual hallucinations and enhances reasoning abilities. This work underscores the considerable potential of synergizing targeted, instruction-based contrastive learning with comprehensive descriptive information to elevate the fine-grained understanding of VLMs.
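
To make the "hard negative" idea concrete, here is a minimal PyTorch sketch of what a symmetric hard-negative contrastive loss along these lines could look like. This is an illustrative assumption, not the paper's actual implementation: the function name, tensor shapes, and the exact way the edited sample enters the loss are all hypothetical. The core idea is that each instruction-editing pair supplies an extra "wrong answer" logit in each direction, on top of the standard in-batch CLIP logits.

```python
import torch
import torch.nn.functional as F

def symmetric_hard_negative_clip_loss(img, txt, img_neg, txt_neg, temperature=0.07):
    """Contrastive loss with one hard negative per sample in both directions.

    img, txt:         (B, D) L2-normalized embeddings of matched image-text pairs.
    img_neg, txt_neg: (B, D) embeddings of the hard negatives -- e.g. the edited
                      image and the edited caption from an instruction-editing pair.
    """
    logit_scale = 1.0 / temperature

    # Standard in-batch CLIP logits: a B x B similarity matrix per direction.
    logits_i2t = logit_scale * img @ txt.t()
    logits_t2i = logit_scale * txt @ img.t()

    # One extra column of hard-negative similarities per sample:
    # image_i vs. its edited caption, and caption_i vs. its edited image.
    hard_i2t = logit_scale * (img * txt_neg).sum(-1, keepdim=True)  # (B, 1)
    hard_t2i = logit_scale * (txt * img_neg).sum(-1, keepdim=True)  # (B, 1)

    logits_i2t = torch.cat([logits_i2t, hard_i2t], dim=1)  # (B, B+1)
    logits_t2i = torch.cat([logits_t2i, hard_t2i], dim=1)  # (B, B+1)

    # Positives sit on the diagonal; the hard negative (column B) is never a target.
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits_i2t, targets) +
                  F.cross_entropy(logits_t2i, targets))
```

Because the edited pair only ever appears as a distractor, never as a target, the encoders are pushed to separate exactly the subtle, "red ribbon vs. yellow ribbon" style differences the edit introduces. On the long-caption side, a natural reading of the abstract is that rotary positional encodings let the text encoder handle captions well beyond CLIP's usual short context window, though the precise architecture is not spelled out here.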

cs / cs.CV