Ultra-short summary: They trained an AI that fuses vision and text (an MLLM) in hyperbolic space (a curved space that's naturally good at hierarchies), and it got suuuper efficient ✨
✨ Gyaru-style sparkle points ✨ ● Saves on GPU costs! Best cost-performance ever 💖 ● God-tier control of granularity (how coarse or fine the alignment is) ✨ ● Boosts existing models' performance too ⤴️
Detailed explanation, here we go~!
Background: MLLMs, the AIs that understand images and text, are amazing, but their training cost is brutally high 💀 You need tons of GPUs! But with HyperET, using hyperbolic space lets them train way more efficiently!
Multi-modal large language models (MLLMs) have emerged as a transformative approach for aligning visual and textual understanding. They typically require extremely high computational resources (e.g., thousands of GPUs) for training to achieve cross-modal alignment at multi-granularity levels. We argue that a key source of this inefficiency lies in the vision encoders they are widely equipped with, e.g., CLIP and SAM, which lack alignment with language at multi-granularity levels. To address this issue, in this paper we leverage hyperbolic space, which inherently models hierarchical levels and thus provides a principled framework for bridging the granularity gap between visual and textual modalities at an arbitrary granularity level. Concretely, we propose an efficient training paradigm for MLLMs, dubbed HyperET, which optimizes visual representations to align with their textual counterparts at an arbitrary granularity level through dynamic hyperbolic radius adjustment in hyperbolic space. HyperET employs learnable matrices with Möbius multiplication operations, implemented via three effective configurations: diagonal scaling matrices, block-diagonal matrices, and banded matrices, providing a flexible yet efficient parametrization strategy. Comprehensive experiments across multiple MLLM benchmarks demonstrate that HyperET consistently improves existing MLLMs, in both pre-training and fine-tuning, with less than 1% additional parameters.
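To make the "learnable matrices with Möbius multiplication" idea concrete, here is a minimal NumPy sketch of standard Möbius matrix-vector multiplication on the Poincaré ball (the usual formulation from hyperbolic neural networks), together with the three matrix parametrizations the abstract names. The function `mobius_matvec`, the block sizes, and the bandwidth are illustrative assumptions, not HyperET's actual implementation:

```python
import numpy as np

def mobius_matvec(M, x, c=1.0, eps=1e-9):
    """Möbius matrix-vector multiplication on the Poincaré ball of curvature -c.

    Standard hyperbolic-network formulation (an assumption here; HyperET's
    exact parametrization may differ):
        M ⊗ x = tanh(||Mx|| / ||x|| * artanh(sqrt(c) ||x||)) * Mx / (sqrt(c) ||Mx||)
    """
    sqrt_c = np.sqrt(c)
    x_norm = np.linalg.norm(x)
    Mx = M @ x
    Mx_norm = np.linalg.norm(Mx)
    if Mx_norm < eps or x_norm < eps:
        return np.zeros_like(x)
    scale = np.tanh(Mx_norm / x_norm * np.arctanh(sqrt_c * x_norm)) / (sqrt_c * Mx_norm)
    return scale * Mx

# The three efficient parametrizations mentioned in the abstract
# (sizes chosen arbitrarily for illustration):
rng = np.random.default_rng(0)
d = 8
diag = np.diag(rng.standard_normal(d))                     # diagonal scaling matrix
block = np.kron(np.eye(d // 2), rng.standard_normal((2, 2)))  # block-diagonal, 2x2 blocks
banded = np.triu(np.tril(rng.standard_normal((d, d)), 1), -1)  # banded, bandwidth 1

x = rng.standard_normal(d)
x = 0.3 * x / np.linalg.norm(x)        # a point inside the unit ball
y = mobius_matvec(diag, x)
assert np.linalg.norm(y) < 1.0         # tanh bounds the radius: output stays in the ball
```

Because the output norm is `tanh(...) / sqrt(c)`, the result always stays inside the ball, which is why the radius can be adjusted dynamically without leaving the hyperbolic model.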