Ultra-quick summary: this is about an AI that makes high-quality captions without any image-text pairs 💖
✨ Gyaru-style sparkle points ✨
● No massive dataset needed! Training on text alone is seriously cost-effective, right? 💸
● Covering the modality gap (the mismatch between images and text) with clever tricks is just amazing 😳
● Image search and all kinds of services getting way more convenient, that future is hype ⤴️
Here comes the detailed explanation~!
Image captioning has drawn considerable attention from the natural language processing and computer vision fields. Aiming to reduce the reliance on curated data, several studies have explored image captioning without any human-annotated image-text pairs for training, although existing methods are still outperformed by fully supervised approaches. This paper proposes TOMCap, an improved text-only training method that performs captioning without the need for aligned image-caption pairs. The method is based on prompting a pre-trained language model decoder with information derived from a CLIP representation, after a process that reduces the modality gap. We specifically tested the combined use of retrieved caption examples and latent vector representations to guide the generation process. Through extensive experiments, we show that TOMCap outperforms other training-free and text-only methods. We also analyze the impact of different configuration choices for the retrieval-augmentation and modality-gap-reduction components.
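The two ingredients named in the abstract, modality gap reduction followed by retrieval of similar captions to prompt a decoder, can be sketched in miniature. This is a toy illustration, not TOMCap's actual implementation: the random vectors stand in for real CLIP embeddings, the constant-offset gap and its subtraction are a deliberately simplified stand-in for whatever reduction the paper uses, and the caption bank and prompt template are invented for the example.

```python
import numpy as np


def normalize(x):
    """L2-normalize along the last axis, as CLIP embeddings usually are."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)


rng = np.random.default_rng(0)

# Hypothetical text-side embeddings for a small bank of training captions
# (assumption: in the real method these come from CLIP's text encoder).
caption_bank = ["a dog running on grass", "a red car on a street", "a bowl of soup"]
text_emb = normalize(rng.normal(size=(3, 8)))

# Simulate an image embedding: close to caption 1 in content, but displaced
# by a constant offset that plays the role of the image-text modality gap.
gap = 0.5 * rng.normal(size=8)
img_emb = text_emb[1] + gap

# Modality gap reduction (toy version): remove the cross-modal offset so the
# image embedding lands back in the text-embedding region of the space.
shifted = normalize(img_emb - gap)

# Retrieval augmentation: pick the nearest bank captions by cosine similarity;
# these retrieved examples would then be placed in the decoder's prompt.
sims = text_emb @ shifted
retrieved = caption_bank[int(np.argmax(sims))]
prompt = f"Similar captions: {retrieved}. A caption for this image:"
print(prompt)
```

The key point the sketch shows is the ordering: the gap is reduced *before* retrieval, so image queries are compared against text embeddings in a shared region of the space rather than across the raw modality offset.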