🌟 Sparkle points, gal-style ✨
● They found a way to make AI smarter even in low-resource languages (languages with less training data)!
● Don't translate everything, just translate when the model gets stuck! Smart selection is the key 💖
● AI keeps evolving so it can help people all over the world!
Here comes the detailed explanation!
Background — Recent AI is amazing, right? But apparently it has strong and weak languages: it's great at English, but with other languages it can struggle to understand 🤔 That's called the "multilingual reasoning gap"!
Method — They discovered "understanding failures," where the AI simply can't understand the input! 😳 To fix that, they use a method called "Selective Translation," which translates into English only when the model is stuck! With this, the AI can properly understand inputs even in low-resource languages!
Reasoning language models (RLMs) achieve strong performance on complex reasoning tasks, yet they still exhibit a multilingual reasoning gap, performing better in high-resource languages than in low-resource ones. While recent efforts have been made to address this gap, its underlying causes remain largely unexplored. In this work, we show that this gap primarily stems from failures in language understanding: specifically, the model's inability to translate multilingual inputs into the language dominating its reasoning traces (typically English). As identifying understanding failures can enable targeted mitigation of the gap, we evaluate a range of detection methods and find that understanding failures are detectable to a meaningful extent, with supervised approaches performing best. Building on this, we propose Selective Translation, a strategy that incorporates an English translation into the initial reasoning trace only when an understanding failure is detected. Experimental results using Qwen3-4B show that Selective Translation substantially bridges the multilingual reasoning gap, achieving near full-translation performance while translating only about 20% of inputs. Together, our results show that failures in language understanding are the primary driver of the multilingual reasoning gap and can be detected and selectively mitigated, clarifying its origin and suggesting a path toward more equitable multilingual reasoning. Our code and data are publicly available at https://github.com/deokhk/RLM_analysis.
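The control flow of Selective Translation can be sketched in a few lines: run a failure detector on the input and prepend an English translation only when it fires. The sketch below is purely illustrative; the detector and translator are injected stand-ins (the paper's best detector is a supervised classifier, and the names `SelectiveTranslator`, `detect_failure`, and `translate` are assumptions, not the released code).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SelectiveTranslator:
    """Translate an input into English only when an understanding-failure
    detector flags it; otherwise pass the input through unchanged."""
    detect_failure: Callable[[str], bool]  # True -> model likely misunderstands input
    translate: Callable[[str], str]        # e.g. an MT system or the RLM itself

    def preprocess(self, prompt: str) -> tuple[str, bool]:
        """Return (possibly augmented prompt, whether translation was applied)."""
        if self.detect_failure(prompt):
            # Seed the initial reasoning trace with an English translation.
            return f"{self.translate(prompt)}\n\n{prompt}", True
        return prompt, False

# Toy demo with dummy components (the real detector is learned, not a rule):
st = SelectiveTranslator(
    detect_failure=lambda p: "??" in p,      # stand-in failure signal
    translate=lambda p: "[EN] " + p,         # stand-in translator
)
augmented, was_translated = st.preprocess("2 + 2 = ?? (low-resource input)")
```

Because translation is triggered only for flagged inputs, most prompts (about 80% in the paper's experiments) skip the translation step entirely, which is where the cost saving over full translation comes from.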