Got it~! It's Miyu-chan, the ultimate gal AI 💖 I'll break this paper down so it's super easy to understand, so stick with me!
Super summary: This paper makes AI models that fuse images and text way smarter using the Fourier transform! Even with only a little data, they can handle all kinds of images 😉
✨ Gal-Style Sparkle Points ✨
● Decompose images in style! The Fourier transform separates structure from style! ✨
● Little data? No problem! Generalization (the power to handle any kind of image) skyrockets, no matter the domain (field)! 🔥
● Huge for the IT industry! Image recognition levels up, so expect brand-new services to keep popping up! 🤩
Large-scale pre-trained Vision-Language Models (VLMs) have demonstrated strong few-shot learning capabilities. However, these methods typically learn holistic representations where an image's domain-invariant structure is implicitly entangled with its domain-specific style. This presents an opportunity to further enhance generalization by disentangling these visual cues. In this paper, we propose Fourier-Attentive Representation Learning (FARL), a novel framework that addresses this by explicitly disentangling visual representations using Fourier analysis. The core of our method is a dual cross-attention mechanism, where learnable representation tokens separately query an image's structural features (from the phase spectrum) and stylistic features (from the amplitude spectrum). This process yields enriched, disentangled tokens that are then injected deep into the VLM encoders to guide adaptation. Our design, which includes an asymmetric injection strategy, forces the model to learn a more robust vision-language alignment. Extensive experiments on 15 datasets demonstrate the effectiveness of our approach.
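To make the Fourier step concrete, here is a minimal sketch assuming PyTorch's torch.fft module (this is not the authors' code; fourier_disentangle and reconstruct are hypothetical names). The 2-D FFT of an image splits into an amplitude spectrum, which carries the domain-specific style cue, and a phase spectrum, which carries the domain-invariant structure cue, matching the decomposition the abstract describes.

```python
# Minimal sketch of Fourier amplitude/phase disentanglement (assumed
# implementation, not the paper's code). Requires PyTorch >= 1.8.
import torch

def fourier_disentangle(x: torch.Tensor):
    """Split images or feature maps (B, C, H, W) into an amplitude
    spectrum (style cue) and a phase spectrum (structure cue)."""
    freq = torch.fft.fft2(x, norm="ortho")  # 2-D FFT over the last two dims
    return torch.abs(freq), torch.angle(freq)

def reconstruct(amplitude: torch.Tensor, phase: torch.Tensor) -> torch.Tensor:
    """Recombine spectra as amplitude * exp(i * phase), then invert the FFT."""
    freq = torch.polar(amplitude, phase)
    return torch.fft.ifft2(freq, norm="ortho").real

# Usage: swap amplitudes between two images. The result keeps the first
# image's structure (phase) with the second image's style (amplitude),
# the standard sanity check for this decomposition.
a, b = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
amp_a, pha_a = fourier_disentangle(a)
amp_b, pha_b = fourier_disentangle(b)
a_restyled = reconstruct(amp_b, pha_a)
```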
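The dual cross-attention can be sketched the same way. In this hypothetical DualCrossAttention module, a small set of learnable representation tokens queries phase-derived patch features in one branch and amplitude-derived features in the other, yielding the enriched, disentangled tokens the abstract mentions. The embedding size, token count, and the downstream (asymmetric) injection into the VLM encoders are assumptions, since the abstract does not specify them.

```python
# Assumed sketch of the dual cross-attention mechanism; dimensions and
# names are illustrative, not the paper's specification.
import torch
import torch.nn as nn

class DualCrossAttention(nn.Module):
    def __init__(self, dim: int = 512, num_tokens: int = 4, num_heads: int = 8):
        super().__init__()
        # Learnable representation tokens shared across both branches.
        self.tokens = nn.Parameter(torch.randn(1, num_tokens, dim) * 0.02)
        self.attn_structure = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_style = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, phase_feats: torch.Tensor, amp_feats: torch.Tensor):
        # phase_feats, amp_feats: (B, N, dim) patch features computed from
        # the phase and amplitude spectra, respectively.
        q = self.tokens.expand(phase_feats.size(0), -1, -1)
        struct_tokens, _ = self.attn_structure(q, phase_feats, phase_feats)
        style_tokens, _ = self.attn_style(q, amp_feats, amp_feats)
        # Disentangled tokens, ready to be injected into the VLM encoders
        # (the paper does this asymmetrically; that step is omitted here).
        return struct_tokens, style_tokens
```

Keeping the two attention branches separate is what enforces the disentanglement: structure tokens can only gather information from the phase spectrum, and style tokens only from the amplitude spectrum.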